dittops committed · Commit 49b7c6f · 1 Parent(s): 599355b

Update README.md

Files changed (1): README.md (+85 -39)

README.md CHANGED
@@ -1,60 +1,106 @@
 ---
- license: other
- base_model: meta-llama/Meta-Llama-3-8B
 tags:
- - llama-factory
- - full
- - generated_from_trainer
 model-index:
- - name: Meta-Llama-3-8B-pretrain-oss-evol
- results: []
 ---
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
- # Meta-Llama-3-8B-pretrain-oss-evol
- This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the oss-evol dataset.
- ## Model description
- More information needed
- ## Intended uses & limitations
- More information needed
- ## Training and evaluation data
- More information needed
- ## Training procedure
- ### Training hyperparameters
- The following hyperparameters were used during training:
- - learning_rate: 2e-05
- - train_batch_size: 8
- - eval_batch_size: 8
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 8
- - total_train_batch_size: 64
- - total_eval_batch_size: 64
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 3.0
- - mixed_precision_training: Native AMP
- ### Training results
- ### Framework versions
- - Transformers 4.40.0
- - Pytorch 2.2.2+cu121
- - Datasets 2.19.0
- - Tokenizers 0.19.1
 ---
+ license: llama2
+ library_name: transformers
 tags:
+ - code
 model-index:
+ - name: Code Millenials
+   results:
+   - task:
+       type: text-generation
+     dataset:
+       name: HumanEval
+       type: openai_humaneval
+     metrics:
+     - type: pass@1
+       value: 0.671
+       name: pass@1
+       verified: false
 ---
+
+ # Bud Code Millenials 8B
+
+ Welcome to our Code Model repository! Our model is specifically fine-tuned for code generation tasks. Bud Millenial Code Gen open-source models are currently the state of the art (SOTA) for code generation, beating all existing models of all sizes. We have achieved a HumanEval score of 80.48 @ pass@1, beating proprietary models like Gemini Ultra, Claude, and GPT-3.5 by a large margin, and on par with GPT-4 (HumanEval ~82, per WizardCoder). Our proprietary model (Bud Code Jr) beats GPT-4 as well, with a HumanEval score of 88.2 and a context size of 168K. We will be releasing an API for researchers, enterprises, and potential partners by the end of January 2024. If interested, please reach out to jithinvg@bud.studio.
+
+ ### News 🔥🔥🔥
+
+ - [2024/04/21] We released **Code Millenials 8B**, which achieves 67.1 pass@1 on the [HumanEval benchmark](https://github.com/openai/human-eval).
+ - [2024/01/09] We released **Code Millenials 3B**, which achieves 56.09 pass@1 on the [HumanEval benchmark](https://github.com/openai/human-eval).
+ - [2024/01/09] We released **Code Millenials 1B**, which achieves 51.82 pass@1 on the [HumanEval benchmark](https://github.com/openai/human-eval).
+ - [2024/01/03] We released **Code Millenials 34B**, which achieves 80.48 pass@1 on the [HumanEval benchmark](https://github.com/openai/human-eval).
+ - [2024/01/02] We released **Code Millenials 13B**, which achieves 76.21 pass@1 on the [HumanEval benchmark](https://github.com/openai/human-eval).
+
+ ### HumanEval
+
+ <p align="center" width="100%">
+ <a ><img src="https://raw.githubusercontent.com/BudEcosystem/code-millenials/main/assets/result.png" alt="CodeMillenials" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
+ </p>
+
+ For the Millenials models, the evaluation script in the GitHub repo was used to produce the results above.
+
+ Note: The HumanEval values of other models are taken from the official repos of [WizardCoder](https://github.com/nlpxucan/WizardLM), [DeepseekCoder](https://github.com/deepseek-ai/deepseek-coder), [Gemini](https://deepmind.google/technologies/gemini/#capabilities), etc.
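+
+ As a rough, minimal sketch (not the exact evaluation configuration behind the reported numbers), a pass@1 score like the one above can be reproduced with the official `human-eval` harness; the greedy decoding settings and the post-processing below are assumptions.
+
+ ```python
+ # Hedged sketch: generate HumanEval completions and score them with the
+ # openai/human-eval harness. Decoding settings are illustrative assumptions.
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ from human_eval.data import read_problems, write_jsonl
+
+ tokenizer = AutoTokenizer.from_pretrained("budecosystem/code-millenials-8b")
+ model = AutoModelForCausalLM.from_pretrained(
+     "budecosystem/code-millenials-8b", torch_dtype=torch.bfloat16, device_map="auto"
+ )
+
+ def complete(prompt: str) -> str:
+     inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+     out = model.generate(**inputs, max_new_tokens=384, do_sample=False,
+                          pad_token_id=tokenizer.eos_token_id)
+     # Keep only the newly generated tokens (the proposed function body).
+     return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
+
+ problems = read_problems()
+ samples = [{"task_id": task_id, "completion": complete(problem["prompt"])}
+            for task_id, problem in problems.items()]
+ write_jsonl("samples.jsonl", samples)
+ # Then score on the command line:  evaluate_functional_correctness samples.jsonl
+ ```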
+
+ ### Models
+
+ | Model | Checkpoint | HumanEval (+) | MBPP (+) |
+ |---------|-------------|---------------|----------|
+ | Code Millenials 34B | <a href="https://huggingface.co/budecosystem/code-millenials-34b" target="_blank">HF Link</a> | 80.48 (75) | 74.68 (62.9) |
+ | Code Millenials 13B | <a href="https://huggingface.co/budecosystem/code-millenials-13b" target="_blank">HF Link</a> | 76.21 (69.5) | 70.17 (57.6) |
+ | Code Millenials 8B | <a href="https://huggingface.co/budecosystem/code-millenials-8b" target="_blank">HF Link</a> | 67.1 (61.6) | - |
+ | Code Millenials 3B | <a href="https://huggingface.co/budecosystem/code-millenials-3b" target="_blank">HF Link</a> | 56.09 (52.43) | 55.13 (47.11) |
+ | Code Millenials 1B | <a href="https://huggingface.co/budecosystem/code-millenials-1b" target="_blank">HF Link</a> | 51.82 (48.17) | 53.13 (44.61) |
+ ### 🚀 Quick Start
+
+ Inference code using the pre-trained model from the Hugging Face model hub:
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ # Load the tokenizer and model from the Hugging Face Hub
+ tokenizer = AutoTokenizer.from_pretrained("budecosystem/code-millenials-8b")
+ model = AutoModelForCausalLM.from_pretrained("budecosystem/code-millenials-8b")
+
+ # Prompt template recommended by this model card
+ template = """You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.
+
+ ### Instruction: {instruction}
+
+ ### Response:"""
+
+ instruction = "<Your code instruction here>"  # replace with your own coding task
+
+ prompt = template.format(instruction=instruction)
+
+ # Tokenize, generate, and decode the response
+ inputs = tokenizer(prompt, return_tensors="pt")
+ sample = model.generate(**inputs, max_length=128)
+ print(tokenizer.decode(sample[0]))
+ ```
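+
+ The snippet above runs on CPU by default. As an optional variant (an assumption, not a setting prescribed by this card), the model can also be loaded in half precision and placed on available GPUs; `device_map="auto"` additionally requires the `accelerate` package.
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("budecosystem/code-millenials-8b")
+ model = AutoModelForCausalLM.from_pretrained(
+     "budecosystem/code-millenials-8b",
+     torch_dtype=torch.bfloat16,  # assumed half-precision choice; float16 also works
+     device_map="auto",           # place layers on available GPUs (requires accelerate)
+ )
+
+ # Same prompt template as in the Quick Start snippet
+ template = (
+     "You are an exceptionally intelligent coding assistant that consistently delivers "
+     "accurate and reliable responses to user instructions.\n\n"
+     "### Instruction: {instruction}\n\n### Response:"
+ )
+ prompt = template.format(instruction="Write a Python function that checks whether a number is prime.")
+
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ sample = model.generate(**inputs, max_new_tokens=256)
+ print(tokenizer.decode(sample[0], skip_special_tokens=True))
+ ```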
+
+ ## Training details
+
+ The model was trained on 16 A100 80GB GPUs for approximately 50 hours.
+
+ | Hyperparameter | Value |
+ | :--------------------------- | :-----: |
+ | per_device_train_batch_size | 16 |
+ | gradient_accumulation_steps | 1 |
+ | epochs | 3 |
+ | steps | 2157 |
+ | learning_rate | 2e-5 |
+ | lr_scheduler_type | cosine |
+ | warmup_ratio | 0.1 |
+ | optimizer | adamw |
+ | fp16 | True |
+ | GPU | 16 A100 80GB |
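+
+ As a rough illustration only (the actual training script is not published in this card), the table above maps onto a `transformers.TrainingArguments` configuration along these lines; the `output_dir` is a placeholder.
+
+ ```python
+ from transformers import TrainingArguments
+
+ # Hypothetical reconstruction of the reported hyperparameters; dataset handling
+ # and the multi-GPU launch (16x A100 80GB) are outside the scope of this sketch.
+ training_args = TrainingArguments(
+     output_dir="code-millenials-8b-ft",  # placeholder
+     per_device_train_batch_size=16,
+     gradient_accumulation_steps=1,
+     num_train_epochs=3,
+     learning_rate=2e-5,
+     lr_scheduler_type="cosine",
+     warmup_ratio=0.1,
+     optim="adamw_torch",
+     fp16=True,
+ )
+ ```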
+
+ ### Important Note
+
+ - **Bias, Risks, and Limitations:** The model may sometimes make errors, produce misleading content, or struggle with tasks that are not related to coding.