abideen committed on
Commit 7b9b6b2
1 Parent(s): cc8b563

Update README.md

Files changed (1)
  1. README.md +1 -2
README.md CHANGED
@@ -5,7 +5,6 @@ license: apache-2.0
 
 # Llama-3-8B-NOLA
 
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/64e09e72e43b9464c835735f/72xGCzYfM3fMf-RZy6ueU.png)
 
 Llama-3-8B-NOLA is a fine-tuned variant of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1)
 for 100 steps only. The goal of this experiment was to try out this new technique NOLA and see the number of trainable parameters. Due to limited compute, the results of this experiment
@@ -49,7 +48,7 @@ from transformers import AutoTokenizer
 import transformers
 import torch
 
-model = "Syed-Hasan-8503/Llama-3-8B-NOLA"
+model = "QueryloopAI/Llama-3-8B-NOLA"
 
 tokenizer = AutoTokenizer.from_pretrained(model)
 pipeline = transformers.pipeline(
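
The hunk above only excerpts the README's inference snippet. For context, a self-contained version of that snippet might look roughly like the following; the model id `QueryloopAI/Llama-3-8B-NOLA` comes from the diff, while the generation arguments (`max_new_tokens`, `temperature`, `top_p`) are illustrative assumptions rather than values taken from the README.

```python
# Minimal sketch of the full inference snippet the diff excerpts.
# Uses the standard transformers text-generation pipeline; the prompt
# and sampling settings below are assumptions, not from the README.
import torch
import transformers
from transformers import AutoTokenizer

model = "QueryloopAI/Llama-3-8B-NOLA"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,  # assumed; halves memory vs. fp32 on supported GPUs
    device_map="auto",
)

outputs = pipeline(
    "What does NOLA fine-tuning change compared to LoRA?",
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```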
 
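The README notes that the goal of the experiment was to try out NOLA and check the resulting number of trainable parameters. As a rough, illustrative sketch only (not the repo's training code, and with every class name and hyperparameter below assumed), NOLA can be viewed as re-parameterizing a LoRA-style low-rank update so that the low-rank factors are built from frozen random basis matrices and only small mixture-coefficient vectors are trained:

```python
# Illustrative sketch of the NOLA idea for one linear layer; names and
# hyperparameters are hypothetical, not taken from this repository.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NOLALinear(nn.Module):
    """Wraps a frozen linear layer with a NOLA-style low-rank update.

    The LoRA factors A and B are not trained directly; they are formed as
    linear combinations of frozen random basis matrices, so only the
    coefficient vectors `alpha` and `beta` are trainable.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, num_basis: int = 64, scale: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # pretrained weight stays frozen

        out_f, in_f = base.out_features, base.in_features
        # Frozen random bases (in the real method these are regenerable
        # from a seed, which keeps the saved adapter tiny).
        self.register_buffer("basis_A", torch.randn(num_basis, rank, in_f))
        self.register_buffer("basis_B", torch.randn(num_basis, out_f, rank))
        # Only these coefficients are trained.
        self.alpha = nn.Parameter(torch.zeros(num_basis))
        self.beta = nn.Parameter(torch.zeros(num_basis))
        self.scale = scale

    def forward(self, x):
        A = torch.einsum("k,kri->ri", self.alpha, self.basis_A)  # (rank, in_f)
        B = torch.einsum("k,kor->or", self.beta, self.basis_B)   # (out_f, rank)
        return self.base(x) + self.scale * F.linear(x, B @ A)
```

In this sketch the per-layer trainable parameter count is just `2 * num_basis` scalars regardless of the layer's width, which is the kind of reduction in trainable parameters the experiment set out to observe.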