Claudio Musso
Kalemnor
AI & ML interests
None yet
Recent Activity
liked a model about 2 months ago: ISTA-DASLab/Meta-Llama-3-8B-AQLM-PV-2Bit-1x16
Organizations
None yet
Kalemnor's activity
It's an excellent version, but...
20
#1 opened 7 months ago by Anderson452
What are the differences between this and Qwen/CodeQwen1.5-7B?
6
#5 opened 8 months ago by Kalemnor
For a context of at least 32K tokens, which version on a 2x16GB GPU config?
1
#3 opened 8 months ago by Kalemnor
Other quants available?
9
#1 opened 8 months ago by veryVANYA
Can it be used with a V100 GPU?
4
#1 opened 10 months ago by jesulo
Nomic-embed-1.5 had a context size of 8192; what's the context size for this model?
1
#6 opened 8 months ago by Kalemnor
Amazing model
4
#1 opened 8 months ago by deleted
Can you make more quantized versions of this?
#1 opened 8 months ago by Kalemnor
Is this a finetune of Mixtral 8x7b base or Mixtral 8x7b Instruct?
2
#1 opened 8 months ago by Kalemnor
How about an instruct version, like DeepSeek or Solar did?
1
#10 opened 8 months ago by Kalemnor
IQ1_S not usable
3
#1 opened 9 months ago by Kalemnor
I've seen you doing this already with Bigstral 12B...
4
#1 opened 9 months ago by Kalemnor
Any particular difference between this model and TheBloke/laser-dolphin-mixtral-2x7b-dpo-GGUF?
4
#3 opened 9 months ago by Kalemnor
Difference between this and macadeliccc/laser-dolphin-mixtral-2x7b-dpo-GGUF?
#2 opened 9 months ago by Kalemnor
Low performance
1
#1 opened 9 months ago by rostialex