New 7B?
#1 opened by IndrasMirror
So should I be using this 7B model in my wrapper instead? Is this the 768 target-size model?
No, these are the weights of the original Chameleon-7B model (from Meta), used for model initialization when reproducing the training of Lumina-mGPT.
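For context, a minimal sketch of how weights like these would typically be loaded as an initialization for continued training, assuming the Hugging Face transformers Chameleon classes; the local path is a placeholder, and the actual Lumina-mGPT training code uses its own loading pipeline:

```python
import torch
from transformers import ChameleonForConditionalGeneration

# Load the original Chameleon-7B weights purely as a starting point
# for further training; the path below is a placeholder for a local
# copy of this repo's files, not an official identifier.
model = ChameleonForConditionalGeneration.from_pretrained(
    "./chameleon-7b",            # placeholder local path
    torch_dtype=torch.bfloat16,  # common dtype choice at 7B scale
)
model.train()  # continued training / fine-tuning starts from these weights
```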