This is the final version based on Llama 3.0; the next iteration will start from Llama 3.1.
Special Thanks:
- Lewdiculous's superb GGUF version, thank you for your conscientious and responsible dedication.
- https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-2.3-Uncensored-32K-GGUF-IQ-Imatrix-Request
- mradermacher's superb GGUF version, thank you for your conscientious and responsible dedication.
- https://huggingface.co/mradermacher/llama3-8B-DarkIdol-2.3-Uncensored-32K-i1-GGUF
- https://huggingface.co/mradermacher/llama3-8B-DarkIdol-2.3-Uncensored-32K-GGUF
These are my own quantizations (updated almost daily).
The difference from standard quantizations is that the output and embedding tensors are quantized to f16, while the remaining tensors are quantized to q5_k, q6_k, or q8_0. This produces models with little to no degradation at a smaller size. They run at about 3-6 t/s on CPU only using llama.cpp, and obviously faster on machines with potent GPUs.
- the fast cat at ZeroWw/llama3-8B-DarkIdol-2.3-Uncensored-32K-GGUF
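As a rough sketch, a mixed-precision quantization like the one described above can be produced with llama.cpp's quantize tool. The file names below are placeholders, not the actual release artifacts:

```shell
# Keep output and token-embedding tensors at f16, quantize the rest to q6_k.
# Paths and file names are hypothetical; assumes llama.cpp has been built.
./llama-quantize \
    --output-tensor-type f16 \
    --token-embedding-type f16 \
    llama3-8B-DarkIdol-2.3-f16.gguf \
    llama3-8B-DarkIdol-2.3-q6_k.gguf \
    q6_k
```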
Model Description:
The model combination has been readjusted to better fulfill various roles and has been adapted for mobile phones.
- Saving money (Llama 3)
- Tested in English only.
- Input: text only. Output: text and code only.
- Uncensored
- Quick responses
- The underlying model used is winglian/Llama-3-8b-64k-PoSE (64k context is the theoretical limit, but I have only tested up to 32k. :)
- Scholarly responses akin to a thesis. (I tend to write songs extensively, to the point where one song almost becomes as detailed as a thesis. :)
- DarkIdol: roles that you can imagine, and roles that you cannot.
- Roleplay
- Specialized in various role-playing scenarios
- For more, see the test roles (https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2/tree/main/test)
- For more, see the LM Studio presets (https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2/tree/main/config-presets)
virtual idol Twitter
Questions
- The model's responses are for reference only; please do not fully trust them.
Stop Strings
````
stop = [
    "## Instruction:",
    "### Instruction:",
    "<|end_of_text|>",
    " //:",
    "</s>",
    "<3```",
    "### Note:",
    "### Input:",
    "### Response:",
    "### Emoticons:"
],
````
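These stop strings can also be enforced client-side when a backend does not support them. A minimal sketch (the `apply_stops` helper is hypothetical, not part of any API):

```python
# Stop strings mirroring the list above.
STOP = [
    "## Instruction:", "### Instruction:", "<|end_of_text|>",
    " //:", "</s>", "<3```", "### Note:", "### Input:",
    "### Response:", "### Emoticons:",
]

def apply_stops(text: str, stops=STOP) -> str:
    """Truncate generated text at the earliest occurrence of any stop string."""
    cut = len(text)
    for s in stops:
        i = text.find(s)
        if i != -1:
            cut = min(cut, i)
    return text[:cut]
```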
Model Use
- Koboldcpp https://github.com/LostRuins/koboldcpp
- Since KoboldCpp is taking a while to update with the latest llama.cpp commits, I'd recommend this fork if anyone has issues.
- LM Studio https://lmstudio.ai/
- Please test again using the Default LM Studio Windows preset.
- llama.cpp https://github.com/ggerganov/llama.cpp
- Backyard AI https://backyard.ai/
- Meet Layla: an AI chatbot that runs offline on your device. No internet connection required. No censorship. Complete privacy. Layla Lite: https://www.layla-network.ai/
- Layla Lite: llama3-8B-DarkIdol-2.3-Uncensored-32K-Q4_K_S-imat.gguf https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-2.3-Uncensored-32K/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K-Q4_K_S-imat.gguf?download=true
- more gguf at https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-2.3-Uncensored-32K-GGUF-IQ-Imatrix-Request
Character
- https://character-tavern.com/
- https://characterhub.org/
- https://pygmalion.chat/
- https://aetherroom.club/
- https://backyard.ai/
- Layla AI chatbot
If you want to use vision functionality:
- You must use the latest version of KoboldCpp.
To use the multimodal vision capabilities of this model, you need to load the specified mmproj file, which can be found inside this model repo (Llava MMProj).
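A sketch of launching KoboldCpp with an mmproj file loaded (file names are placeholders; use the actual GGUF and mmproj files from the repo):

```shell
# Hypothetical file names; assumes a local koboldcpp checkout.
python koboldcpp.py \
    --model llama3-8B-DarkIdol-2.3-Uncensored-32K-Q4_K_S-imat.gguf \
    --mmproj llama3-mmproj.gguf \
    --contextsize 32768
```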
Thank you:
To the authors for their hard work, which has given me more options to easily create what I want. Thank you for your efforts. - Hastagaras - Gryphe - cgato - ChaoticNeutrals - mergekit - merge - transformers - llama - Nitral-AI - MLP-KTLim - rinna - hfl - Rupesh2 - stephenlzc - theprint - Sao10K - turboderp - TheBossLevel123 - winglian - .........
llama3-8B-DarkIdol-2.3-Uncensored-32K
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the Model Stock merge method, with ./llama3-8B-DarkIdol-2.3b as the base.
Configuration
The following YAML configurations were used to produce this model (three sequential merges):
```yaml
models:
  - model: Sao10K/L3-8B-Niitama-v1
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
  - model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
  - model: turboderp/llama3-turbcat-instruct-8b
  - model: winglian/Llama-3-8b-64k-PoSE
merge_method: model_stock
base_model: winglian/Llama-3-8b-64k-PoSE
dtype: bfloat16
---
models:
  - model: maldv/badger-writer-llama-3-8b
  - model: underwoods/writer-8b
  - model: Gryphe/Pantheon-RP-1.0-8b-Llama-3
  - model: vicgalle/Roleplay-Llama-3-8B
  - model: cgato/TheSalt-RP-L3-8b-DPO-v0.3.2-e0.15.2
  - model: ./llama3-8B-DarkIdol-2.3a
merge_method: model_stock
base_model: ./llama3-8B-DarkIdol-2.3a
dtype: bfloat16
---
models:
  - model: Rupesh2/Meta-Llama-3-8B-abliterated
  - model: Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1
  - model: Orenguteng/Llama-3-8B-Lexi-Uncensored
  - model: theprint/Llama-3-8B-Lexi-Smaug-Uncensored
  - model: vicgalle/Unsafe-Llama-3-8B
  - model: vicgalle/Configurable-Hermes-2-Pro-Llama-3-8B
  - model: ./llama3-8B-DarkIdol-2.3b
merge_method: model_stock
base_model: ./llama3-8B-DarkIdol-2.3b
dtype: bfloat16
```
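A configuration like the ones above can be run with mergekit's CLI. The config and output paths below are placeholders:

```shell
# Hypothetical paths; assumes mergekit is installed (pip install mergekit).
mergekit-yaml merge-config.yml ./llama3-8B-DarkIdol-2.3-merged
```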