cognitivecomputations/dolphin-2.9-llama3-8b-256k
Tags: Text Generation · Transformers · Safetensors · llama · conversational · text-generation-inference · Inference Endpoints
License: llama3
Community (3 discussions)
#3 — "Was trying to quantize to 8 bits to reduce VRAM footprint. Got the stuff below." — opened 7 months ago by BigDeeper
#2 — "pls help" — opened 7 months ago by UNITYA