The purpose of this model: to create idols and influencers.
Special Thanks:
- Lewdiculous for the excellent GGUF version; thank you for your conscientious and responsible work.
- https://huggingface.co/Lewdiculous/llama3-8B-aifeifei-1.3-GGUF-IQ-Imatrix
Model Description
- Different roles act as different experts, with the core purpose of solving your actual problems.
- Different roles also create different gaming experiences, providing fun and entertainment.
How to Use
- Create a character card and use it to generate the character you want. The line below is an example card; a minimal usage sketch follows this list.
- You are Aifeifei, [Title]: AI Creator, [Name]: Aifeifei, [Role Name]: AI Creator, [Gender]: Female, [Age]: 32 years old, [Profession]: Senior AI Researcher, [Personality]: Professionally calm, concerned about the social impact of AI, [Interests]: Scientific research, artificial intelligence theory, [Expertise]: AI model design and optimization, artificial intelligence algorithm research, [Special Identity Attribute]: Specializes in creating advanced AI characters for virtual worlds and supervising their development, [Skills]: Advanced AI programming languages, machine learning algorithms, deep learning, [Equipment]: Smart laptop, latest AI model design tools, [Dialogue Style]: A female scientist dedicated to AI research, communicating with professional terminology and technical language.
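This is not part of the original instructions, only a minimal sketch of how such a card could be sent as a system prompt to a local OpenAI-compatible server (for example LM Studio, which the test script below targets). The base_url, api_key, and model name are assumptions and must be adjusted to your setup.

```python
# Minimal sketch: chat with the character card via a local OpenAI-compatible server.
# Assumptions: a server (e.g. LM Studio) is listening at http://localhost:1234/v1
# and serves this model under the name shown below; adjust both to your setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

# Paste the full character card from the section above.
character_card = "You are Aifeifei, [Title]: AI Creator, [Name]: Aifeifei, ..."

response = client.chat.completions.create(
    model="llama3-8B-aifeifei-1.3",  # hypothetical model name, match your server
    messages=[
        {"role": "system", "content": character_card},
        {"role": "user", "content": "Introduce yourself and your current research."},
    ],
    temperature=0.7,
)
print(response.choices[0].message.content)
```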
Model Issues
- All model responses are for reference only.
- The key to this model's viability is that it solves the problems you encounter; it is designed primarily to solve problems and provide references. For a small model, that means solving problems quickly on low-end hardware.
- Why use llama3 as the base model? After testing many options, llama3 performs excellently in terms of low hardware requirements, response speed, and answer accuracy, and it supports integrations such as image recognition (it is widely used, which makes it easier to build the model characteristics you want).
- You can test the entertainment value of the dialogue style yourself; the character card performs well, and you'll see once you use it. Ethical safeguards have not been tested much, so they are limited; in my personal view, a small model can only provide limited ethical constraints.
- Tested in Chinese, English, and Japanese. English performs very well, followed by Chinese; Japanese may occasionally get stuck repeating (encountered once). Other languages have not been tested.
- NSFW disclaimer: this model does not add any NSFW content; if any appears, it comes from the merged source models.
Model Testing
- A text file of questions commonly seen on live streams and Twitter accounts is provided:
- https://huggingface.co/aifeifei798/llama3-8B-aifeifei-1.3/resolve/main/Model_Test_Issues_zh_en_jp.txt
- Test script: https://huggingface.co/aifeifei798/llama3-8B-aifeifei-1.3/resolve/main/test_openai_api_lmstudio.py (a rough sketch of the same idea follows this list)
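The following is not the repository's script, only a hypothetical sketch of the same idea: loop the provided questions through a local OpenAI-compatible endpoint. The server URL, model name, and the assumption of one question per line are mine, not from the source.

```python
# Hypothetical sketch (not the repository's script): run each line of the test file
# against a local OpenAI-compatible endpoint and print the replies.
# Assumptions: the server URL and model name below, and one question per line.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

with open("Model_Test_Issues_zh_en_jp.txt", encoding="utf-8") as f:
    questions = [line.strip() for line in f if line.strip()]

for q in questions:
    reply = client.chat.completions.create(
        model="llama3-8B-aifeifei-1.3",  # hypothetical name, match your server
        messages=[{"role": "user", "content": q}],
    )
    print(f"Q: {q}\nA: {reply.choices[0].message.content}\n")
```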
Test character Twitter (this module is the core module, responsible for creating the latest photos and music, scheduling the virtual idol's activities, etc.)
If you want to use vision functionality:
- You must use the latest version of Koboldcpp.
- To use the multimodal (vision) capabilities of this model, you need to load the specified mmproj file, which can be found inside this model repo (Llava MMProj). A rough request sketch follows below.
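As an illustration only, and assuming your Koboldcpp build exposes an OpenAI-compatible chat endpoint that accepts base64-encoded images (check your version's documentation), an image request could look roughly like this. The port, model name, and image path are assumptions.

```python
# Rough sketch, not official usage: send an image to a local OpenAI-compatible
# endpoint (e.g. Koboldcpp started with the model GGUF plus the Llava mmproj file).
# The URL, port, model name, and image path below are all assumptions.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:5001/v1", api_key="not-needed")

with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="llama3-8B-aifeifei-1.3",  # hypothetical name, match your server
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this photo."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```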
Thank you:
- To the authors for their hard work, which has given me more options to easily create what I want. Thank you for your efforts.
- Hastagaras/Jamet-8B-L3-MK.V-Blackroot
- Nitral-AI/Hathor-L3-8B-v.02
- hfl/llama-3-chinese-8b-instruct-v3
- Sao10K/L3-8B-Stheno-v3.2
- TheBossLevel123/Llama3-Toxic-8B-Float16
- mergekit
- merge
- transformers
- llama
- .........
llama3-8B-aifeifei-1.3
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the Model Stock merge method, with hfl/llama-3-chinese-8b-instruct-v3 as the base.
Models Merged
The following models were included in the merge:
- Hastagaras/Jamet-8B-L3-MK.V-Blackroot
- Nitral-AI/Hathor-L3-8B-v.02
- Sao10K/L3-8B-Stheno-v3.2
- TheBossLevel123/Llama3-Toxic-8B-Float16
Configuration
The following YAML configuration was used to produce this model:
models:
- model: Nitral-AI/Hathor-L3-8B-v.02
- model: Sao10K/L3-8B-Stheno-v3.2
- model: TheBossLevel123/Llama3-Toxic-8B-Float16
- model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
merge_method: model_stock
base_model: hfl/llama-3-chinese-8b-instruct-v3
dtype: bfloat16
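To reproduce the merge, a configuration like this can be passed to mergekit's mergekit-yaml command-line tool together with an output directory, per the mergekit documentation.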