VGLee committed on
Commit
ea83c72
1 Parent(s): c22d7ca

Delete .ipynb_checkpoints

.ipynb_checkpoints/README-checkpoint.md DELETED
@@ -1,57 +0,0 @@
- ---
- license: other
- base_model: Qwen/Qwen1.5-4B
- tags:
- - llama-factory
- - full
- - generated_from_trainer
- model-index:
- - name: 4b_galore
-   results: []
- ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # 4b_galore
-
- This model is a fine-tuned version of [Qwen/Qwen1.5-4B](https://huggingface.co/Qwen/Qwen1.5-4B) on the universal_ner_all dataset.
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 1e-05
- - train_batch_size: 2
- - eval_batch_size: 1
- - seed: 42
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 16
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_steps: 200
- - num_epochs: 1.0
-
- ### Training results
-
-
-
- ### Framework versions
-
- - Transformers 4.39.2
- - Pytorch 2.2.2+cu121
- - Datasets 2.18.0
- - Tokenizers 0.15.2
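For reference, the hyperparameters listed in the deleted card can be collected into a plain Python dict. This is only an illustrative sketch (the dict and variable names are not part of the original card); the values are copied from the diff above, and it shows that the card's `total_train_batch_size: 16` is a derived quantity, not an independently set one:

```python
# Hyperparameters from the deleted README, as a plain config dict.
# Names mirror the card's bullet list; the dict itself is illustrative.
hparams = {
    "learning_rate": 1e-05,
    "train_batch_size": 2,
    "eval_batch_size": 1,
    "seed": 42,
    "gradient_accumulation_steps": 8,
    "adam_betas": (0.9, 0.999),
    "adam_epsilon": 1e-08,
    "lr_scheduler_type": "cosine",
    "lr_scheduler_warmup_steps": 200,
    "num_epochs": 1.0,
}

# "total_train_batch_size: 16" in the card is the per-device batch size
# multiplied by the gradient accumulation steps: 2 * 8 = 16.
total_train_batch_size = (
    hparams["train_batch_size"] * hparams["gradient_accumulation_steps"]
)
print(total_train_batch_size)
```

With a single device, two samples per step and eight accumulated steps give an effective batch of sixteen, matching the card.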