haijunlv committed on
Commit 508c92b · verified · 1 Parent(s): c4086b6

Upload README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -86,7 +86,7 @@ from transformers import AutoTokenizer, AutoModelForCausalLM
 model_dir = "internlm/internlm3-8b-instruct"
 tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
 # Set `torch_dtype=torch.float16` to load the model in float16; otherwise it will be loaded as float32 and might cause an OOM error.
-# model = AutoModelForCausalLM.from_pretrained(model_dir, trust_remote_code=True, torch_dtype=torch.float16)
+model = AutoModelForCausalLM.from_pretrained(model_dir, trust_remote_code=True, torch_dtype=torch.float16)
 # (Optional) On low-resource devices, you can load the model in 4-bit or 8-bit via bitsandbytes to further save GPU memory.
 # InternLM3 8B in 4-bit costs nearly 8 GB of GPU memory.
 # pip install -U bitsandbytes
@@ -357,7 +357,7 @@ print(outputs)
 
 ## Open Source License
 
-The code is licensed under Apache-2.0, while the model weights are fully open for academic research and also allow **free** commercial usage. To apply for a commercial license, please fill in the [application form (English)](https://wj.qq.com/s2/12727483/5dba/) / [application form (Chinese)](https://wj.qq.com/s2/12725412/f7c1/). For other questions or collaborations, please contact <internlm@pjlab.org.cn>.
+Code and model weights are licensed under Apache-2.0.
 
 ## Citation
 
@@ -435,7 +435,7 @@ from transformers import AutoTokenizer, AutoModelForCausalLM
 model_dir = "internlm/internlm3-8b-instruct"
 tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
 # Set `torch_dtype=torch.float16` to load the model in float16; otherwise it will be loaded as float32 and might cause an OOM error.
-# model = AutoModelForCausalLM.from_pretrained(model_dir, trust_remote_code=True, torch_dtype=torch.float16)
+model = AutoModelForCausalLM.from_pretrained(model_dir, trust_remote_code=True, torch_dtype=torch.float16)
 # (Optional) On low-resource devices, you can load the model in 4-bit or 8-bit via bitsandbytes to further save GPU memory.
 # InternLM3 8B in 4-bit costs nearly 8 GB of GPU memory.
 # pip install -U bitsandbytes
@@ -712,7 +712,7 @@ print(outputs)
 
 ## 开源许可证 (Open Source License)
 
-The code in this repository is open-sourced under the Apache-2.0 license. The model weights are fully open for academic research, and a free commercial-use license can also be applied for ([application form](https://wj.qq.com/s2/12725412/f7c1/)). For other questions or collaborations, please contact <internlm@pjlab.org.cn>.
+The code and weights in this repository are open-sourced under the Apache-2.0 license.
 
 ## 引用 (Citation)
 
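For readers following along, here is a minimal, self-contained sketch of the loading recipe the README describes after this commit: the now-uncommented float16 load, plus one plausible realization of the 4-bit bitsandbytes option the comments hint at. The `BitsAndBytesConfig` settings below are an assumption, not part of the diff.

```python
# Sketch of the loading path after this commit (the 4-bit config is assumed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_dir = "internlm/internlm3-8b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)

# Default path, as uncommented by this commit: float16 avoids the float32 OOM risk.
model = AutoModelForCausalLM.from_pretrained(
    model_dir, trust_remote_code=True, torch_dtype=torch.float16
)

# Optional low-resource path hinted at in the README comments. This particular
# BitsAndBytesConfig is an assumption, not taken from the diff; it requires a
# CUDA GPU and `pip install -U bitsandbytes`.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model_4bit = AutoModelForCausalLM.from_pretrained(
    model_dir, trust_remote_code=True, quantization_config=bnb_config
)
```

In practice you would pick one of the two paths rather than loading both, since holding the float16 and 4-bit copies at once roughly adds their memory footprints together.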