LM Studio 0.3.5 build 2: __init__() missing 1 required positional argument: 'vocab_size', then 6-bit quantization not supported
#1 by gue22 - opened
Trying to use paligemma2-10b-ft-docci-448-6bit in LM Studio 0.3.5 build 2 on a 36GB MBP M3 fails with: `__init__() missing 1 required positional argument: 'vocab_size'`.
After renaming `_vocab_size` to `vocab_size` in config.json (i.e. removing the leading underscore), LM Studio instead reports that 6-bit quantization is not supported.
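For reference, the edit amounts to copying the value of `_vocab_size` into an un-prefixed `vocab_size` key. A minimal sketch of that patch (the path is hypothetical; point it at wherever LM Studio stored the download):

```python
import json
from pathlib import Path

# Hypothetical location -- adjust to the actual downloaded model directory.
cfg_path = Path("~/models/paligemma2-10b-ft-docci-448-6bit/config.json").expanduser()
cfg = json.loads(cfg_path.read_text())

def copy_unprefixed(node):
    # The key may sit at the top level or inside a sub-config, so walk the tree.
    if isinstance(node, dict):
        if "_vocab_size" in node and "vocab_size" not in node:
            node["vocab_size"] = node["_vocab_size"]  # keep the original key too
        for value in node.values():
            copy_unprefixed(value)

copy_unprefixed(cfg)
cfg_path.write_text(json.dumps(cfg, indent=2))
```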
The bf16 variant has the same `_vocab_size` quirk (?) and, after the same edit, throws:
```
🥲 Failed to load the model

Error when loading model: ValueError: Received parameters not in model:
language_model.model.layers.11.post_feedforward_layernorm.weight
language_model.model.layers.2.post_feedforward_layernorm.weight
language_model.model.layers.20.post_feedforward_layernorm.weight
...
```
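For what it's worth, `post_feedforward_layernorm` is one of the layers Gemma 2 added over the original Gemma, so my guess is that LM Studio's bundled MLX engine simply predates PaliGemma2. Would loading the MLX weights directly with mlx-vlm sidestep this? An untested sketch of what I'd try (the repo id is assumed, and `generate()`'s exact signature has shifted between mlx-vlm releases, so check your installed version):

```python
# Untested sketch: bypass LM Studio and load the MLX weights with mlx-vlm,
# assuming a recent release that already includes PaliGemma2 support.
from mlx_vlm import load, generate

model_id = "mlx-community/paligemma2-10b-ft-docci-448-bf16"  # assumed repo id
model, processor = load(model_id)

# PaliGemma-style captioning prompt; keyword names may need adjusting to
# match the generate() signature of your installed mlx-vlm version.
output = generate(model, processor, prompt="caption en", image="photo.jpg")
print(output)
```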
Thanks for shedding any light on this. (Should I run it in a different environment? I see gemma2 builds for Ollama, which my 256GB Xeon 3435 with 2 x 20GB RTX 4000 could handle, but no paligemma2.)
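If a different environment is the answer, I assume something like the following would work on the Xeon/RTX box with Hugging Face transformers, which supports PaliGemma2 (a sketch, assuming transformers >= 4.47, access to the gated google/ repo, and the "caption en" prompt the DOCCI fine-tunes use):

```python
import torch
from PIL import Image
from transformers import PaliGemmaForConditionalGeneration, PaliGemmaProcessor

model_id = "google/paligemma2-10b-ft-docci-448"  # original, non-MLX weights
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"  # spread across both GPUs
).eval()
processor = PaliGemmaProcessor.from_pretrained(model_id)

image = Image.open("photo.jpg")
inputs = processor(text="caption en", images=image, return_tensors="pt")
inputs = inputs.to(torch.bfloat16).to(model.device)  # casts only the float tensors

with torch.inference_mode():
    out = model.generate(**inputs, max_new_tokens=128)

# Strip the prompt tokens before decoding the caption.
generated = out[0][inputs["input_ids"].shape[-1]:]
print(processor.decode(generated, skip_special_tokens=True))
```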
G.