📈 CFGPT: Chinese Financial Assistant with Large Language Model (CFGPT1-sft-7B-Full)
Introduction
We introduce CFGPT, an open-source language model for Chinese finance. It is built in two stages: a general LLM is first further pretrained on CFData-pt, a collected and cleaned Chinese financial text corpus combining domain-specific data (announcements, financial articles, finance exams, financial news, and financial research papers) with general data (Wikipedia), and then fine-tuned on CFData-sft, a knowledge-intensive instruction-tuning dataset. For preliminary evaluation we use CFBenchmark-Basic. Compared with several baseline models of similar parameter scale, CFGPT achieves better results on recognition, classification, and generation tasks, covering both objective and subjective evaluations.
In this repository, we share the fully supervised fine-tuned model.
- Supervised Finetuned Model (Full): full-parameter fine-tuned weights built on top of our further-pretrained model.
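If you prefer to fetch the checkpoint ahead of time (for example onto a shared cache), the weights can also be pulled with the huggingface_hub client. This is a minimal sketch assuming huggingface_hub is installed; the local_dir path is only an illustration.

from huggingface_hub import snapshot_download

# Download every file of the Full SFT checkpoint.
# local_dir is an illustrative target; omit it to use the default HF cache.
snapshot_download(
    repo_id='TongjiFinLab/CFGPT1-sft-7B-Full',
    local_dir='./CFGPT1-sft-7B-Full',
)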
How to Use
1. Prepare the code and the environment
Clone the CFGPT repository, create a Python environment, and activate it with the following commands:
git clone https://github.com/TongjiFinLab/CFGPT.git
cd CFGPT
conda create -n env_name python=3.10
source activate env_name
pip install -r requirements.txt
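Before loading the 7B checkpoint, it can help to confirm that PyTorch sees a CUDA device and that the device supports bfloat16 (the dtype used below). This quick check is not part of the original instructions, just a sanity test.

import torch

# Sanity-check the runtime before loading the 7B model in bfloat16.
print(torch.__version__)                    # installed PyTorch version
print(torch.cuda.is_available())            # True if a CUDA device is visible
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))    # name of the first GPU
    print(torch.cuda.is_bf16_supported())   # bfloat16 support for torch.bfloat16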
2. Use CFGPT1-sft-7B-Full
import torch
from transformers import AutoModel, AutoTokenizer
base_model = 'TongjiFinLab/CFGPT1-sft-7B-Full'
device_map = 'cuda:0'
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
# Load the model weights in bfloat16 onto the device specified by device_map
model = AutoModel.from_pretrained(
    base_model,
    trust_remote_code=True,
    device_map=device_map,
    torch_dtype=torch.bfloat16
)
model = model.eval()
# Chinese sentiment-analysis prompt for a financial news item; the expected answer is one of 中性/积极/消极
inputs = tokenizer("""你是一名金融从业者,请对这篇新闻进行情感分析。请从(中性、积极、消极)中选取答案。新闻内容:挖贝快讯:特步国际发布2023年第二季度中国内地业务营运状况,披露截至2023年6月30日止3个月零售销售实现高双位数同比增长(包括线上线下渠道),零售折扣水平约七五折。同时,2022年7月MSCI首次予以特步ESG评级,一年后评级表现即迎来提升。明晟MSCI上调特步ESG评级,由“BB”升至“BBB”。\n回答:""", return_tensors='pt').to(device_map)
pred = model.generate(**inputs, max_new_tokens=64, do_sample=False, repetition_penalty=1.0)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True).split('回答:')[1])
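For repeated queries, the prompt → generate → decode pattern above can be wrapped in a small helper that reuses the tokenizer, model, and device_map defined earlier; the function name ask is our own illustration and not part of the released code.

def ask(prompt: str, max_new_tokens: int = 64) -> str:
    # Tokenize the prompt, run greedy decoding, and return only the answer part.
    inputs = tokenizer(prompt, return_tensors='pt').to(device_map)
    pred = model.generate(**inputs, max_new_tokens=max_new_tokens,
                          do_sample=False, repetition_penalty=1.0)
    text = tokenizer.decode(pred.cpu()[0], skip_special_tokens=True)
    # The prompts in this card end with '回答:'; keep only the model's answer.
    return text.split('回答:')[-1].strip()

print(ask('请对下面这条财经新闻进行情感分析,从(中性、积极、消极)中选取答案。新闻内容:……\n回答:'))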