---
library_name: transformers
license: apache-2.0
datasets:
- abideen/Cosmopedia-100k-pretrain
language:
- en
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---
# Llama3-8B-to2B-BitnetDownscaling (from 8B to 2B) Transformation & Training
This project transforms a Llama3 model from 8B parameters to a BitNet architecture with roughly 2B parameters by replacing its linear projection layers with BitLinear layers. The downscaled model is then trained on a predefined dataset and uploaded to Hugging Face for future use.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6419c2f6b4adb0e101b17b6c/X6O_WbSqbdOWjhTm0tWU1.png)
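The continued-pretraining step is not reproduced in this card; the sketch below outlines how it could look with the 🤗 `Trainer` on the Cosmopedia-100k dataset listed in the metadata. The hyperparameters, output path, and the dataset's `text` column name are illustrative assumptions, not the exact training recipe.
```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "ejbejaranos/Llama3-8B-ITCL-Bitnet1.6B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

# Load the model; apply the BitNet conversion first (see convert_to_bitnet in the usage section below)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Dataset from the model-card metadata; the "text" column name is an assumption
dataset = load_dataset("abideen/Cosmopedia-100k-pretrain", split="train")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
    batched=True,
    remove_columns=dataset.column_names,
)

args = TrainingArguments(
    output_dir="llama3-bitnet-2b",      # illustrative output path
    per_device_train_batch_size=4,      # illustrative hyperparameters
    gradient_accumulation_steps=8,
    learning_rate=1e-4,
    num_train_epochs=1,
    bf16=True,
    logging_steps=50,
    report_to="wandb",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```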
## Features
- **Model Size:** ~2B parameters (downscaled from the 8B base model)
- **Architecture:** BitNet
- **BitLinear Layers:** reduce weights to the ternary values 1, 0, and -1 (see the short example below)
- **Optimized for:** fast inference and memory efficiency
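As a quick illustration of the ternary quantization that BitLinear applies, here is a minimal, standalone sketch using the same mean-absmax scaling as the `weight_quant` helper in the usage section (the example matrix is arbitrary):
```python
import torch

def weight_quant(w: torch.Tensor) -> torch.Tensor:
    # Scale by the mean absolute value, round, clamp to {-1, 0, 1}, then rescale
    scale = 1.0 / w.abs().mean().clamp_(min=1e-5)
    return (w * scale).round().clamp_(-1, 1) / scale

w = torch.tensor([[0.40, -0.05, 0.90],
                  [-0.70, 0.10, 0.02]])
print(weight_quant(w))
# Every entry becomes -s, 0, or +s, where s = mean(|w|) ≈ 0.36
```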
## Architecture
```text
LlamaForCausalLM(
  (model): LlamaModel(
    (embed_tokens): Embedding(128256, 4096)
    (layers): ModuleList(
      (0-5): 6 x LlamaDecoderLayer(
        (self_attn): LlamaSdpaAttention(
          (q_proj): BitLinear(in_features=4096, out_features=4096, bias=False)
          (k_proj): BitLinear(in_features=4096, out_features=1024, bias=False)
          (v_proj): BitLinear(in_features=4096, out_features=1024, bias=False)
          (o_proj): BitLinear(in_features=4096, out_features=4096, bias=False)
          (rotary_emb): LlamaRotaryEmbedding()
        )
        (mlp): LlamaMLP(
          (gate_proj): BitLinear(in_features=4096, out_features=14336, bias=False)
          (up_proj): BitLinear(in_features=4096, out_features=14336, bias=False)
          (down_proj): BitLinear(in_features=14336, out_features=4096, bias=False)
          (act_fn): SiLU()
        )
        (input_layernorm): Identity()
        (post_attention_layernorm): LlamaRMSNorm((4096,), eps=1e-05)
      )
    )
    (norm): LlamaRMSNorm((4096,), eps=1e-05)
    (rotary_emb): LlamaRotaryEmbedding()
  )
  (lm_head): Linear(in_features=4096, out_features=128256, bias=False)
)
```
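A back-of-the-envelope count over the shapes printed above confirms the downscaled size (a rough estimate that only covers the modules shown, ignoring the small norm layers):
```python
embed = 128256 * 4096                      # embed_tokens
lm_head = 4096 * 128256                    # output projection
attn = 2 * 4096 * 4096 + 2 * 4096 * 1024   # q/o plus k/v projections
mlp = 3 * 4096 * 14336                     # gate, up and down projections
total = embed + lm_head + 6 * (attn + mlp)
print(f"~{total / 1e9:.2f}B parameters")   # ~2.36B including embeddings and lm_head
```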
---
### Model Description
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** ejbejaranos@gmail.com and lidia.andres@itcl.es
- **Funded by:** ITCL
- **Model type:** Llama3 8B transformed to BitNet using a downscaling technique
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** meta-llama/Llama-3.1-8B-Instruct
## Requirements
Make sure you have the following libraries installed:
```bash
pip install transformers torch huggingface_hub wandb coloredlogs
```
You can install these dependencies using pip!
## Usage
### Loading the Model
To load the model from Hugging Face, run the following code:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.models.llama.modeling_llama import *
import torch
from torch import nn
import torch.nn.functional as F
import coloredlogs
import logging
coloredlogs.install(level='INFO', fmt='%(asctime)s - %(levelname)s - %(message)s', logger=logging.getLogger())
logger = logging.getLogger(__name__)
HF_TOKEN = "your_api_key_here"
model_id = "ejbejaranos/Llama3-8B-ITCL-Bitnet1.6B"

# Load the pretrained BitNet model and its tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    token=HF_TOKEN
)
# Set the pad_token_id so generation and attention masking work correctly
model.config.pad_token_id = tokenizer.eos_token_id
def count_parameters(model):
    # Report the number of trainable parameters in billions
    num_params = sum(p.numel() for p in model.parameters() if p.requires_grad) / 10**9
    print(f"Model size: {num_params:.3f}B parameters")
    return int(num_params)

def activation_quant(x):
    # Per-token 8-bit quantization of activations (absmax scaling into [-128, 127])
    scale = 127.0 / x.abs().max(dim=-1, keepdim=True).values.clamp_(min=1e-5)
    y = (x * scale).round().clamp_(-128, 127)
    y = y / scale
    return y

def weight_quant(w):
    # Ternary quantization of weights to {-1, 0, 1} with mean-absolute-value scaling
    scale = 1.0 / w.abs().mean().clamp_(min=1e-5)
    u = (w * scale).round().clamp_(-1, 1)
    u = u / scale
    return u
class BitLinear(nn.Linear):
    def forward(self, x):
        w = self.weight  # weight tensor with shape [d, k]
        x = x.to(w.device)
        # Normalize activations before quantization, as in the BitNet reference implementation
        RMSNorm = LlamaRMSNorm(x.shape[-1]).to(w.device)
        x_norm = RMSNorm(x)
        # Straight-through estimator: quantized values in the forward pass, full-precision gradients
        x_quant = x_norm + (activation_quant(x_norm) - x_norm).detach()
        w_quant = w + (weight_quant(w) - w).detach()
        y = F.linear(x_quant, w_quant)
        return y
def convert_to_bitnet(model, copy_weights):
    for name, module in model.named_modules():
        # Swap every nn.Linear inside the attention and MLP blocks for a BitLinear layer
        if isinstance(module, LlamaSdpaAttention) or isinstance(module, LlamaMLP):
            for child_name, child_module in module.named_children():
                if isinstance(child_module, nn.Linear):
                    bitlinear = BitLinear(child_module.in_features, child_module.out_features, child_module.bias is not None).to(device="cuda:0")
                    if copy_weights:
                        bitlinear.weight = child_module.weight
                        if child_module.bias is not None:
                            bitlinear.bias = child_module.bias
                    setattr(module, child_name, bitlinear)
        # Drop the input LayerNorm: BitLinear already applies RMSNorm to its input
        elif isinstance(module, LlamaDecoderLayer):
            for child_name, child_module in module.named_children():
                if isinstance(child_module, LlamaRMSNorm) and child_name == "input_layernorm":
                    setattr(module, child_name, nn.Identity().to(device="cuda:0"))
convert_to_bitnet(model, copy_weights=True)
model.to(device="cuda:0")
logger.info(f"Number of parameters in the model after extracting weights: {count_parameters(model)}")
logger.info(f"Reduced model structure:\n{model}")
prompt = "What is the color of sky?"
inputs = tokenizer(prompt, return_tensors="pt", padding=True, truncation=True).to(model.device)
inputs['attention_mask'] = inputs['input_ids'] != model.config.pad_token_id
generate_ids = model.generate(inputs.input_ids, attention_mask=inputs['attention_mask'], max_length=250)
decoded_output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output[0]) # Print the generated response
```
### Performing Inference
Generate text with the converted model to see it in action. The tail of the loading script above already runs a generation pass; a reusable helper and a sample completion are shown below.
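A minimal generation helper (a sketch that reuses the `model` and `tokenizer` loaded above; `max_length` and the prompt are arbitrary choices):
```python
def generate(prompt: str, max_length: int = 250) -> str:
    # Tokenize, build an explicit attention mask, and decode the model's completion
    inputs = tokenizer(prompt, return_tensors="pt", padding=True, truncation=True).to(model.device)
    output_ids = model.generate(
        inputs.input_ids,
        attention_mask=inputs.attention_mask,
        max_length=max_length,
        pad_token_id=model.config.pad_token_id,
    )
    return tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]

print(generate("What role does explainability play in your AI solutions?"))
```
A sample completion: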
```text
- What role does explainability play in your AI solutions?
How can you ensure that your AI system is able to accurately predict and respond to user inputs?
These are some of the questions that AI developers have been asking themselves in the last few years.
In this section, we will explore some of the key concepts and techniques that AI developers have used to develop in their AI systems.
First, let's consider the importance of understanding the role of AI in AI.
AI systems can be incredibly powerful tools for automating tasks, analyzing data, and identifying patterns.
They can analyze large datasets and identify patterns, trends, and anomalies that might be missed by human analysts.
By analyzing large datasets, AI can help identify patterns and trends that might otherwise go unnoticed.
One of the most significant challenges in AI development is the lack of transparency and accountability.
With AI systems becoming increasingly sophisticated, there is a growing need for transparency and accountability in AI development.
This means that there is a growing need for transparency and accountability in AI development.
However, as AI becomes more sophisticated, it can also lead to unintended consequences, such as job loss or reputational damage.
```
## Contact
For questions or suggestions, feel free to reach out to me:
- **Email:** ejbejaranos@gmail.com
- **GitHub:** [ejbejaranos](https://github.com/ejbejaranos)