---
language:
- zh
- en
---
# ChatTruth-7B

**ChatTruth-7B** is built on Qwen-VL and further optimized with carefully designed training data. Compared with Qwen-VL, it achieves substantially better performance on high-resolution inputs, and its newly proposed Restore Module greatly reduces the computational cost of high-resolution processing.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/657bef8a5c6f0b1f36fcf28e/kwgU2AxZbJzxmgWULwv6A.png)
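
This card does not document the Restore Module's internals. Purely as a hedged illustration of the general downsample-then-restore idea (shrinking the number of visual tokens the language model must attend over, then recovering resolution), here is a minimal sketch; the class name `RestoreModuleSketch`, the pooling/linear design, and the `ratio` parameter are all assumptions, not the released architecture.

```python
# Hypothetical sketch only -- NOT the actual ChatTruth-7B Restore Module.
import torch
import torch.nn as nn

class RestoreModuleSketch(nn.Module):
    """Pool visual tokens down by `ratio`, then project back up."""

    def __init__(self, dim: int, ratio: int = 2):
        super().__init__()
        # Downsample: merge each group of `ratio` adjacent visual tokens,
        # cutting the sequence length (and attention cost) by `ratio`.
        self.down = nn.AvgPool1d(kernel_size=ratio, stride=ratio)
        # Restore: expand each pooled token back into `ratio` tokens so
        # fine detail (e.g. small text) can still be represented.
        self.up = nn.Linear(dim, dim * ratio)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        b, n, d = tokens.shape                                      # (batch, seq, dim)
        pooled = self.down(tokens.transpose(1, 2)).transpose(1, 2)  # (b, n/ratio, d)
        return self.up(pooled).reshape(b, n, d)                     # (b, n, d)

x = torch.randn(1, 1024, 256)             # 1024 visual tokens
print(RestoreModuleSketch(256)(x).shape)  # torch.Size([1, 1024, 256])
```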
## Requirements

* transformers 4.32.0
* python 3.8 and above
* pytorch 1.13 and above
* CUDA 11.4 and above
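
These can be installed with pip. A minimal sketch; the exact pins depend on your environment, and the extra packages (`accelerate` for `device_map`, `tiktoken` and `einops` for the Qwen-VL-style remote code) are assumptions about what the model's `trust_remote_code` path needs:

```bash
pip install "transformers==4.32.0" "torch>=1.13" accelerate tiktoken einops
```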
<br>

## Quickstart

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
import torch

torch.manual_seed(1234)

model_path = 'ChatTruth-7B'  # path to your downloaded model

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Load the model onto the CUDA device.
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="cuda", trust_remote_code=True).eval()

model.generation_config = GenerationConfig.from_pretrained(model_path, trust_remote_code=True)
model.generation_config.top_p = 0.01  # near-greedy sampling for stable, reproducible output

# Build a multimodal query from an image plus a text prompt.
query = tokenizer.from_list_format([
    {'image': 'demo.jpeg'},
    {'text': '图片中的文字是什么'},  # "What is the text in the image?"
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)

# 昆明太厉害了 ("Kunming is amazing")
```
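
The `chat` interface follows the Qwen-VL convention, so a follow-up turn simply passes the returned `history` back in. A brief sketch (the question text is illustrative):

```python
# Second turn: `history` carries the image and the previous exchange.
response, history = model.chat(tokenizer, '图片拍摄于哪里?', history=history)  # "Where was the picture taken?"
print(response)
```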