Dataset schema (column name, dtype, and observed value counts or length ranges):

| Column | Dtype | Values / lengths |
|------------------|------------------|------------------|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1–900k |
| metadata | stringlengths | 2–438k |
| id | stringlengths | 5–122 |
| last_modified | null | n/a |
| tags | sequencelengths | 1–1.84k |
| sha | null | n/a |
| created_at | stringlengths | 25–25 |
| arxiv | sequencelengths | 0–201 |
| languages | sequencelengths | 0–1.83k |
| tags_str | stringlengths | 17–9.34k |
| text_str | stringlengths | 0–389k |
| text_lists | sequencelengths | 0–722 |
| processed_texts | sequencelengths | 1–723 |
| tokens_length | sequencelengths | 1–723 |
| input_texts | sequencelengths | 1–1 |
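The schema above can be checked programmatically with the 🤗 `datasets` library. A minimal sketch, streaming a single row; the repo ID `username/model-cards-dump` is a placeholder, not the actual dataset name:

```python
from datasets import load_dataset

# Placeholder repo ID -- substitute the dataset that actually hosts these rows.
ds = load_dataset("username/model-cards-dump", split="train", streaming=True)

# Stream one row and report each column with its Python type,
# mirroring the schema table above without downloading everything.
row = next(iter(ds))
for column, value in row.items():
    print(f"{column}: {type(value).__name__}")
```

---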
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
# sqlcoder-7b-2 - bnb 4bits
- Model creator: https://huggingface.co/defog/
- Original model: https://huggingface.co/defog/sqlcoder-7b-2/
Original model description:
---
license: cc-by-sa-4.0
library_name: transformers
pipeline_tag: text-generation
---
# Update notice
The model weights were updated at 7 AM UTC on Feb 7, 2024. The new model weights lead to a much more performant model – particularly for joins.
If you downloaded the model before that, please redownload the weights for best performance.
# Model Card for SQLCoder-7B-2
A capable large language model for natural language to SQL generation.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/603bbad3fd770a9997b57cb6/AYUE2y14vy2XkD9MZpScu.png)
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [Defog, Inc](https://defog.ai)
- **Model type:** Text to SQL
- **License:** CC-BY-SA-4.0
- **Finetuned from model:** CodeLlama-7B
### Model Sources
- **HuggingFace:** https://huggingface.co/defog/sqlcoder-70b-alpha
- **GitHub:** https://github.com/defog-ai/sqlcoder
- **Demo:** https://defog.ai/sqlcoder-demo/
## Uses
This model is intended to be used by non-technical users to understand data inside their SQL databases. It is meant as an analytics tool, and not as a database admin tool.
This model has not been trained to reject malicious requests from users with write access to databases, and should only be used by users with read-only access.
## How to Get Started with the Model
Use the code [here](https://github.com/defog-ai/sqlcoder/blob/main/inference.py) to get started with the model.
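For this 4-bit repackaging specifically, a minimal loading sketch with `transformers` (assuming `bitsandbytes` is installed and a GPU is available; the quantization config is stored with the checkpoint, so no extra arguments should be needed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/defog_-_sqlcoder-7b-2-4bits"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The weights are already stored in bitsandbytes 4-bit form, so no
# quantization arguments are needed; device_map="auto" spreads the
# layers across available devices.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```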
## Prompt
Use the following prompt for optimal results, and remember to set `do_sample=False` and `num_beams=4` when generating.
```
### Task
Generate a SQL query to answer [QUESTION]{user_question}[/QUESTION]
### Database Schema
The query will run on a database with the following schema:
{table_metadata_string_DDL_statements}
### Answer
Given the database schema, here is the SQL query that [QUESTION]{user_question}[/QUESTION]
[SQL]
```
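Continuing from the loading sketch above, filling in the template and decoding with the recommended settings might look like this; the example question and DDL string are placeholders:

```python
prompt_template = """### Task
Generate a SQL query to answer [QUESTION]{user_question}[/QUESTION]

### Database Schema
The query will run on a database with the following schema:
{table_metadata_string_DDL_statements}

### Answer
Given the database schema, here is the SQL query that [QUESTION]{user_question}[/QUESTION]
[SQL]
"""

prompt = prompt_template.format(
    user_question="How many orders were placed in 2023?",  # placeholder
    table_metadata_string_DDL_statements=(
        "CREATE TABLE orders (id INT, placed_at DATE);"    # placeholder
    ),
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=False,  # deterministic decoding, as the card recommends
    num_beams=4,      # beam search width recommended above
    max_new_tokens=256,
)
# Print only the newly generated tokens (the SQL query).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```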
## Evaluation
This model was evaluated on [SQL-Eval](https://github.com/defog-ai/sql-eval), a PostgreSQL-based evaluation framework developed by Defog for testing and aligning model capabilities.
You can read more about the methodology behind SQL-Eval [here](https://defog.ai/blog/open-sourcing-sqleval/).
### Results
We classified each generated question into one of six categories. The table shows the percentage of questions each model answered correctly, broken down by category.
| | date | group_by | order_by | ratio | join | where |
| -------------- | ---- | -------- | -------- | ----- | ---- | ----- |
| sqlcoder-70b | 96 | 91.4 | 97.1 | 85.7 | 97.1 | 91.4 |
| sqlcoder-7b-2 | 96 | 91.4 | 94.3 | 91.4 | 94.3 | 77.1 |
| sqlcoder-34b | 80 | 94.3 | 85.7 | 77.1 | 85.7 | 80 |
| gpt-4 | 72 | 94.3 | 97.1 | 80 | 91.4 | 80 |
| gpt-4-turbo | 76 | 91.4 | 91.4 | 62.8 | 88.6 | 77.1 |
| natural-sql-7b | 56 | 88.6 | 85.7 | 60 | 88.6 | 80 |
| sqlcoder-7b | 64 | 82.9 | 74.3 | 54.3 | 74.3 | 74.3 |
| gpt-3.5 | 72 | 77.1 | 82.8 | 34.3 | 65.7 | 71.4 |
| claude-2 | 52 | 71.4 | 74.3 | 57.1 | 65.7 | 62.9 |
## Model Card Contact
Contact us on X at [@defogdata](https://twitter.com/defogdata) or via email at [founders@defog.ai](mailto:founders@defog.ai).
---

- **Repository:** RichardErkhov/defog_-_sqlcoder-7b-2-4bits
- **Tags:** transformers, safetensors, llama, text-generation, autotrain_compatible, endpoints_compatible, text-generation-inference, 4-bit, region:us
- **Created:** 2024-05-03T19:09:04+00:00
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_4-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3704
- F1 Score: 0.8588
- Accuracy: 0.859
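Scores of this form can be reproduced from predictions with scikit-learn. A minimal sketch with stand-in labels; the actual eval split and F1 averaging mode are not recorded in the card:

```python
from sklearn.metrics import accuracy_score, f1_score

# Stand-in binary labels and predictions, for illustration only.
y_true = [0, 1, 1, 0, 1, 0, 0, 1]
y_pred = [0, 1, 0, 0, 1, 0, 1, 1]

print("Accuracy:", accuracy_score(y_true, y_pred))
# F1 tracking accuracy closely, as in the card, is typical of a
# macro/weighted average over fairly balanced classes.
print("F1 Score:", f1_score(y_true, y_pred, average="macro"))
```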
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
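In the 🤗 Trainer API, the settings above correspond roughly to this sketch (model, tokenizer, and dataset wiring omitted; the card's Adam betas and epsilon match the Trainer's default optimizer settings):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_tf_4-seqsight_4096_512_15M-L32_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",  # linear decay over the full run
    max_steps=10_000,            # training_steps: 10000
)
```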
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5525 | 1.34 | 200 | 0.5126 | 0.7380 | 0.738 |
| 0.4762 | 2.68 | 400 | 0.4937 | 0.7496 | 0.75 |
| 0.4601 | 4.03 | 600 | 0.4891 | 0.7548 | 0.756 |
| 0.4466 | 5.37 | 800 | 0.4828 | 0.7580 | 0.758 |
| 0.4325 | 6.71 | 1000 | 0.4821 | 0.7670 | 0.769 |
| 0.4221 | 8.05 | 1200 | 0.4624 | 0.7769 | 0.777 |
| 0.416 | 9.4 | 1400 | 0.4501 | 0.7809 | 0.781 |
| 0.4062 | 10.74 | 1600 | 0.4531 | 0.7800 | 0.78 |
| 0.3994 | 12.08 | 1800 | 0.4526 | 0.7831 | 0.784 |
| 0.3951 | 13.42 | 2000 | 0.4485 | 0.7939 | 0.794 |
| 0.3826 | 14.77 | 2200 | 0.4444 | 0.7958 | 0.796 |
| 0.3825 | 16.11 | 2400 | 0.4407 | 0.7955 | 0.796 |
| 0.3734 | 17.45 | 2600 | 0.4475 | 0.7848 | 0.785 |
| 0.367 | 18.79 | 2800 | 0.4480 | 0.7940 | 0.794 |
| 0.3628 | 20.13 | 3000 | 0.4385 | 0.8019 | 0.802 |
| 0.3505 | 21.48 | 3200 | 0.4360 | 0.8079 | 0.808 |
| 0.3513 | 22.82 | 3400 | 0.4419 | 0.8037 | 0.804 |
| 0.345 | 24.16 | 3600 | 0.4359 | 0.8080 | 0.808 |
| 0.3405 | 25.5 | 3800 | 0.4313 | 0.8097 | 0.81 |
| 0.3327 | 26.85 | 4000 | 0.4307 | 0.8130 | 0.813 |
| 0.3347 | 28.19 | 4200 | 0.4333 | 0.7970 | 0.797 |
| 0.319 | 29.53 | 4400 | 0.4489 | 0.8188 | 0.819 |
| 0.3213 | 30.87 | 4600 | 0.4355 | 0.8050 | 0.805 |
| 0.3171 | 32.21 | 4800 | 0.4279 | 0.8090 | 0.809 |
| 0.3143 | 33.56 | 5000 | 0.4330 | 0.8120 | 0.812 |
| 0.3113 | 34.9 | 5200 | 0.4400 | 0.8070 | 0.807 |
| 0.3048 | 36.24 | 5400 | 0.4414 | 0.798 | 0.798 |
| 0.2986 | 37.58 | 5600 | 0.4316 | 0.8146 | 0.815 |
| 0.295 | 38.93 | 5800 | 0.4465 | 0.8040 | 0.804 |
| 0.295 | 40.27 | 6000 | 0.4404 | 0.8098 | 0.81 |
| 0.2883 | 41.61 | 6200 | 0.4515 | 0.8090 | 0.809 |
| 0.2897 | 42.95 | 6400 | 0.4408 | 0.8110 | 0.811 |
| 0.2857 | 44.3 | 6600 | 0.4365 | 0.8145 | 0.815 |
| 0.2787 | 45.64 | 6800 | 0.4331 | 0.8120 | 0.812 |
| 0.2862 | 46.98 | 7000 | 0.4335 | 0.8189 | 0.819 |
| 0.2767 | 48.32 | 7200 | 0.4339 | 0.8148 | 0.815 |
| 0.2712 | 49.66 | 7400 | 0.4270 | 0.8129 | 0.813 |
| 0.2712 | 51.01 | 7600 | 0.4322 | 0.8170 | 0.817 |
| 0.2708 | 52.35 | 7800 | 0.4382 | 0.8198 | 0.82 |
| 0.2644 | 53.69 | 8000 | 0.4400 | 0.8160 | 0.816 |
| 0.2678 | 55.03 | 8200 | 0.4366 | 0.8230 | 0.823 |
| 0.2635 | 56.38 | 8400 | 0.4318 | 0.8229 | 0.823 |
| 0.261 | 57.72 | 8600 | 0.4403 | 0.8178 | 0.818 |
| 0.262 | 59.06 | 8800 | 0.4338 | 0.8179 | 0.818 |
| 0.2617 | 60.4 | 9000 | 0.4364 | 0.8220 | 0.822 |
| 0.2545 | 61.74 | 9200 | 0.4385 | 0.8219 | 0.822 |
| 0.2568 | 63.09 | 9400 | 0.4400 | 0.8289 | 0.829 |
| 0.257 | 64.43 | 9600 | 0.4372 | 0.8239 | 0.824 |
| 0.2581 | 65.77 | 9800 | 0.4372 | 0.8249 | 0.825 |
| 0.2546 | 67.11 | 10000 | 0.4370 | 0.8259 | 0.826 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2

---

- **Repository:** mahdibaghbanzadeh/GUE_tf_4-seqsight_4096_512_15M-L32_f
- **Library:** peft
- **Tags:** peft, safetensors, generated_from_trainer, base_model:mahdibaghbanzadeh/seqsight_4096_512_15M, region:us
- **Metrics:** accuracy
- **Created:** 2024-05-03T19:09:48+00:00
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_3-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5670
- F1 Score: 0.6927
- Accuracy: 0.696
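Since this repo stores a PEFT adapter rather than full model weights, reproducing these numbers first requires attaching the adapter to the base model. A hedged sketch; the exact auto class and whether `trust_remote_code` is needed depend on how the seqsight backbone is packaged:

```python
from peft import PeftModel
from transformers import AutoModel, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_4096_512_15M"
adapter_id = "mahdibaghbanzadeh/GUE_tf_3-seqsight_4096_512_15M-L1_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModel.from_pretrained(base_id, trust_remote_code=True)
# Load the low-rank adapter weights on top of the frozen backbone.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```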
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6616 | 0.93 | 200 | 0.6000 | 0.6809 | 0.682 |
| 0.618 | 1.87 | 400 | 0.5894 | 0.6799 | 0.68 |
| 0.6049 | 2.8 | 600 | 0.5722 | 0.7073 | 0.711 |
| 0.6017 | 3.74 | 800 | 0.5692 | 0.7085 | 0.71 |
| 0.5991 | 4.67 | 1000 | 0.5638 | 0.7147 | 0.716 |
| 0.5926 | 5.61 | 1200 | 0.5632 | 0.7197 | 0.72 |
| 0.5904 | 6.54 | 1400 | 0.5591 | 0.7143 | 0.716 |
| 0.5889 | 7.48 | 1600 | 0.5586 | 0.7229 | 0.723 |
| 0.5877 | 8.41 | 1800 | 0.5571 | 0.7163 | 0.717 |
| 0.59 | 9.35 | 2000 | 0.5569 | 0.7177 | 0.719 |
| 0.5839 | 10.28 | 2200 | 0.5590 | 0.7100 | 0.71 |
| 0.5842 | 11.21 | 2400 | 0.5519 | 0.7183 | 0.72 |
| 0.5841 | 12.15 | 2600 | 0.5506 | 0.7176 | 0.721 |
| 0.581 | 13.08 | 2800 | 0.5494 | 0.7161 | 0.719 |
| 0.5822 | 14.02 | 3000 | 0.5530 | 0.7166 | 0.717 |
| 0.5808 | 14.95 | 3200 | 0.5503 | 0.7212 | 0.722 |
| 0.5786 | 15.89 | 3400 | 0.5493 | 0.7234 | 0.725 |
| 0.5755 | 16.82 | 3600 | 0.5515 | 0.7176 | 0.718 |
| 0.5761 | 17.76 | 3800 | 0.5495 | 0.7271 | 0.729 |
| 0.5766 | 18.69 | 4000 | 0.5525 | 0.7197 | 0.72 |
| 0.5732 | 19.63 | 4200 | 0.5478 | 0.7169 | 0.721 |
| 0.5766 | 20.56 | 4400 | 0.5462 | 0.7184 | 0.72 |
| 0.5746 | 21.5 | 4600 | 0.5500 | 0.7120 | 0.712 |
| 0.5734 | 22.43 | 4800 | 0.5467 | 0.7263 | 0.728 |
| 0.5739 | 23.36 | 5000 | 0.5478 | 0.7246 | 0.725 |
| 0.5734 | 24.3 | 5200 | 0.5494 | 0.7121 | 0.712 |
| 0.5696 | 25.23 | 5400 | 0.5453 | 0.7188 | 0.722 |
| 0.5745 | 26.17 | 5600 | 0.5448 | 0.7234 | 0.725 |
| 0.568 | 27.1 | 5800 | 0.5439 | 0.7209 | 0.724 |
| 0.5682 | 28.04 | 6000 | 0.5437 | 0.7299 | 0.731 |
| 0.569 | 28.97 | 6200 | 0.5486 | 0.7161 | 0.716 |
| 0.5717 | 29.91 | 6400 | 0.5448 | 0.7316 | 0.733 |
| 0.5681 | 30.84 | 6600 | 0.5447 | 0.7337 | 0.735 |
| 0.5686 | 31.78 | 6800 | 0.5464 | 0.7217 | 0.722 |
| 0.5681 | 32.71 | 7000 | 0.5444 | 0.7319 | 0.733 |
| 0.5714 | 33.64 | 7200 | 0.5447 | 0.7315 | 0.733 |
| 0.5642 | 34.58 | 7400 | 0.5480 | 0.7131 | 0.713 |
| 0.5704 | 35.51 | 7600 | 0.5458 | 0.7226 | 0.723 |
| 0.5689 | 36.45 | 7800 | 0.5453 | 0.7246 | 0.725 |
| 0.5676 | 37.38 | 8000 | 0.5453 | 0.7236 | 0.724 |
| 0.5647 | 38.32 | 8200 | 0.5449 | 0.7317 | 0.733 |
| 0.5652 | 39.25 | 8400 | 0.5451 | 0.7284 | 0.729 |
| 0.5662 | 40.19 | 8600 | 0.5453 | 0.7284 | 0.729 |
| 0.5649 | 41.12 | 8800 | 0.5455 | 0.7275 | 0.728 |
| 0.5682 | 42.06 | 9000 | 0.5454 | 0.7285 | 0.729 |
| 0.5665 | 42.99 | 9200 | 0.5461 | 0.7217 | 0.722 |
| 0.565 | 43.93 | 9400 | 0.5464 | 0.7199 | 0.72 |
| 0.5637 | 44.86 | 9600 | 0.5452 | 0.7266 | 0.727 |
| 0.5659 | 45.79 | 9800 | 0.5451 | 0.7285 | 0.729 |
| 0.562 | 46.73 | 10000 | 0.5452 | 0.7256 | 0.726 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2

---

- **Repository:** mahdibaghbanzadeh/GUE_tf_3-seqsight_4096_512_15M-L1_f
- **Library:** peft
- **Tags:** peft, safetensors, generated_from_trainer, base_model:mahdibaghbanzadeh/seqsight_4096_512_15M, region:us
- **Metrics:** accuracy
- **Created:** 2024-05-03T19:09:48+00:00
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_3-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5516
- F1 Score: 0.7033
- Accuracy: 0.706
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
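Outside the Trainer, the same optimizer and linear schedule can be assembled directly in PyTorch. A sketch with a stand-in module; no warmup steps are shown because the card lists none:

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(512, 2)  # stand-in for the adapted model
optimizer = torch.optim.Adam(
    model.parameters(), lr=5e-4, betas=(0.9, 0.999), eps=1e-8
)
# Linear decay from the initial LR down to zero at step 10,000.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=10_000
)

for step in range(10_000):
    # ... forward pass and loss.backward() on a batch of 128 ...
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```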
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6433 | 0.93 | 200 | 0.5805 | 0.7011 | 0.701 |
| 0.6017 | 1.87 | 400 | 0.5751 | 0.7004 | 0.701 |
| 0.594 | 2.8 | 600 | 0.5608 | 0.7085 | 0.711 |
| 0.5899 | 3.74 | 800 | 0.5582 | 0.7090 | 0.709 |
| 0.5889 | 4.67 | 1000 | 0.5522 | 0.7134 | 0.714 |
| 0.5826 | 5.61 | 1200 | 0.5491 | 0.7141 | 0.715 |
| 0.5801 | 6.54 | 1400 | 0.5494 | 0.7171 | 0.718 |
| 0.5778 | 7.48 | 1600 | 0.5482 | 0.7223 | 0.723 |
| 0.5758 | 8.41 | 1800 | 0.5475 | 0.7218 | 0.722 |
| 0.5787 | 9.35 | 2000 | 0.5472 | 0.7054 | 0.709 |
| 0.5717 | 10.28 | 2200 | 0.5482 | 0.7199 | 0.72 |
| 0.5721 | 11.21 | 2400 | 0.5441 | 0.7227 | 0.724 |
| 0.5709 | 12.15 | 2600 | 0.5453 | 0.7008 | 0.707 |
| 0.5673 | 13.08 | 2800 | 0.5479 | 0.6937 | 0.701 |
| 0.5676 | 14.02 | 3000 | 0.5444 | 0.7196 | 0.721 |
| 0.5661 | 14.95 | 3200 | 0.5459 | 0.7086 | 0.712 |
| 0.5641 | 15.89 | 3400 | 0.5448 | 0.7142 | 0.716 |
| 0.5601 | 16.82 | 3600 | 0.5457 | 0.7172 | 0.719 |
| 0.5597 | 17.76 | 3800 | 0.5455 | 0.7127 | 0.716 |
| 0.5602 | 18.69 | 4000 | 0.5471 | 0.7187 | 0.719 |
| 0.558 | 19.63 | 4200 | 0.5495 | 0.7043 | 0.709 |
| 0.559 | 20.56 | 4400 | 0.5477 | 0.7125 | 0.716 |
| 0.5577 | 21.5 | 4600 | 0.5518 | 0.7161 | 0.716 |
| 0.5555 | 22.43 | 4800 | 0.5469 | 0.7103 | 0.714 |
| 0.5556 | 23.36 | 5000 | 0.5495 | 0.7171 | 0.717 |
| 0.5544 | 24.3 | 5200 | 0.5554 | 0.6955 | 0.696 |
| 0.5502 | 25.23 | 5400 | 0.5482 | 0.7157 | 0.719 |
| 0.5575 | 26.17 | 5600 | 0.5434 | 0.7264 | 0.728 |
| 0.5477 | 27.1 | 5800 | 0.5433 | 0.7174 | 0.719 |
| 0.5481 | 28.04 | 6000 | 0.5441 | 0.7282 | 0.73 |
| 0.5482 | 28.97 | 6200 | 0.5480 | 0.7231 | 0.723 |
| 0.5491 | 29.91 | 6400 | 0.5455 | 0.7245 | 0.727 |
| 0.5473 | 30.84 | 6600 | 0.5441 | 0.7217 | 0.723 |
| 0.5492 | 31.78 | 6800 | 0.5472 | 0.7217 | 0.722 |
| 0.5466 | 32.71 | 7000 | 0.5442 | 0.7272 | 0.728 |
| 0.5503 | 33.64 | 7200 | 0.5444 | 0.7283 | 0.73 |
| 0.542 | 34.58 | 7400 | 0.5502 | 0.7191 | 0.719 |
| 0.5477 | 35.51 | 7600 | 0.5458 | 0.7290 | 0.729 |
| 0.5467 | 36.45 | 7800 | 0.5461 | 0.7257 | 0.726 |
| 0.5466 | 37.38 | 8000 | 0.5456 | 0.7278 | 0.728 |
| 0.5417 | 38.32 | 8200 | 0.5471 | 0.7259 | 0.727 |
| 0.5427 | 39.25 | 8400 | 0.5465 | 0.7237 | 0.724 |
| 0.5423 | 40.19 | 8600 | 0.5461 | 0.7255 | 0.726 |
| 0.5414 | 41.12 | 8800 | 0.5461 | 0.7285 | 0.729 |
| 0.5451 | 42.06 | 9000 | 0.5452 | 0.7277 | 0.728 |
| 0.5428 | 42.99 | 9200 | 0.5468 | 0.7259 | 0.726 |
| 0.541 | 43.93 | 9400 | 0.5469 | 0.7259 | 0.726 |
| 0.538 | 44.86 | 9600 | 0.5463 | 0.7257 | 0.726 |
| 0.5423 | 45.79 | 9800 | 0.5461 | 0.7293 | 0.73 |
| 0.5373 | 46.73 | 10000 | 0.5468 | 0.7248 | 0.725 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2

---

- **Repository:** mahdibaghbanzadeh/GUE_tf_3-seqsight_4096_512_15M-L8_f
- **Library:** peft
- **Tags:** peft, safetensors, generated_from_trainer, base_model:mahdibaghbanzadeh/seqsight_4096_512_15M, region:us
- **Metrics:** accuracy
- **Created:** 2024-05-03T19:10:15+00:00
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_2-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4762
- F1 Score: 0.7710
- Accuracy: 0.771
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
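The adapter configuration itself is not recorded in these cards; a generic PEFT LoRA setup of the kind typically paired with such training runs is sketched below. All values, including `target_modules`, are illustrative and would need to match the backbone's actual layer names:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification

# Illustrative LoRA hyperparameters -- not recorded in this card.
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query", "value"],  # must match the backbone's layer names
    task_type="SEQ_CLS",
)

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_4096_512_15M",
    num_labels=2,
    trust_remote_code=True,
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```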
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6238 | 1.34 | 200 | 0.5599 | 0.7084 | 0.71 |
| 0.5565 | 2.68 | 400 | 0.5340 | 0.7329 | 0.733 |
| 0.538 | 4.03 | 600 | 0.5271 | 0.7310 | 0.731 |
| 0.5338 | 5.37 | 800 | 0.5242 | 0.7350 | 0.735 |
| 0.5298 | 6.71 | 1000 | 0.5203 | 0.7365 | 0.737 |
| 0.5247 | 8.05 | 1200 | 0.5171 | 0.7490 | 0.749 |
| 0.5214 | 9.4 | 1400 | 0.5140 | 0.7415 | 0.742 |
| 0.5209 | 10.74 | 1600 | 0.5141 | 0.7418 | 0.742 |
| 0.5181 | 12.08 | 1800 | 0.5195 | 0.7438 | 0.744 |
| 0.5183 | 13.42 | 2000 | 0.5135 | 0.7440 | 0.744 |
| 0.5182 | 14.77 | 2200 | 0.5122 | 0.7470 | 0.747 |
| 0.5123 | 16.11 | 2400 | 0.5162 | 0.7407 | 0.741 |
| 0.5154 | 17.45 | 2600 | 0.5111 | 0.7399 | 0.74 |
| 0.5098 | 18.79 | 2800 | 0.5099 | 0.7400 | 0.74 |
| 0.5091 | 20.13 | 3000 | 0.5103 | 0.7400 | 0.74 |
| 0.5095 | 21.48 | 3200 | 0.5116 | 0.7359 | 0.736 |
| 0.5106 | 22.82 | 3400 | 0.5074 | 0.7399 | 0.74 |
| 0.5052 | 24.16 | 3600 | 0.5060 | 0.7358 | 0.736 |
| 0.5024 | 25.5 | 3800 | 0.5064 | 0.7342 | 0.735 |
| 0.505 | 26.85 | 4000 | 0.5060 | 0.7375 | 0.738 |
| 0.5014 | 28.19 | 4200 | 0.5058 | 0.7340 | 0.734 |
| 0.5024 | 29.53 | 4400 | 0.5097 | 0.7410 | 0.741 |
| 0.5034 | 30.87 | 4600 | 0.5076 | 0.7380 | 0.738 |
| 0.5015 | 32.21 | 4800 | 0.5058 | 0.7390 | 0.739 |
| 0.5012 | 33.56 | 5000 | 0.5107 | 0.7417 | 0.742 |
| 0.5032 | 34.9 | 5200 | 0.5063 | 0.7389 | 0.739 |
| 0.4975 | 36.24 | 5400 | 0.5017 | 0.7367 | 0.737 |
| 0.4993 | 37.58 | 5600 | 0.5034 | 0.7420 | 0.742 |
| 0.4966 | 38.93 | 5800 | 0.5047 | 0.7370 | 0.737 |
| 0.497 | 40.27 | 6000 | 0.5033 | 0.7360 | 0.736 |
| 0.4973 | 41.61 | 6200 | 0.5028 | 0.7320 | 0.732 |
| 0.4951 | 42.95 | 6400 | 0.5043 | 0.7340 | 0.734 |
| 0.4949 | 44.3 | 6600 | 0.5056 | 0.7370 | 0.737 |
| 0.4977 | 45.64 | 6800 | 0.5057 | 0.7420 | 0.742 |
| 0.4943 | 46.98 | 7000 | 0.5042 | 0.7400 | 0.74 |
| 0.4949 | 48.32 | 7200 | 0.5059 | 0.7380 | 0.738 |
| 0.4923 | 49.66 | 7400 | 0.5017 | 0.7390 | 0.739 |
| 0.4941 | 51.01 | 7600 | 0.5031 | 0.7400 | 0.74 |
| 0.4942 | 52.35 | 7800 | 0.5022 | 0.7390 | 0.739 |
| 0.4957 | 53.69 | 8000 | 0.5019 | 0.7299 | 0.73 |
| 0.492 | 55.03 | 8200 | 0.5023 | 0.7410 | 0.741 |
| 0.4959 | 56.38 | 8400 | 0.5038 | 0.7400 | 0.74 |
| 0.494 | 57.72 | 8600 | 0.5026 | 0.7370 | 0.737 |
| 0.4905 | 59.06 | 8800 | 0.5026 | 0.7340 | 0.734 |
| 0.4909 | 60.4 | 9000 | 0.5039 | 0.7390 | 0.739 |
| 0.4921 | 61.74 | 9200 | 0.5022 | 0.7360 | 0.736 |
| 0.4956 | 63.09 | 9400 | 0.5020 | 0.7360 | 0.736 |
| 0.4896 | 64.43 | 9600 | 0.5025 | 0.7380 | 0.738 |
| 0.4913 | 65.77 | 9800 | 0.5032 | 0.7370 | 0.737 |
| 0.4887 | 67.11 | 10000 | 0.5025 | 0.7370 | 0.737 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2

---

- **Repository:** mahdibaghbanzadeh/GUE_tf_2-seqsight_4096_512_15M-L1_f
- **Library:** peft
- **Tags:** region:us
- **Metrics:** accuracy
- **Base model:** mahdibaghbanzadeh/seqsight_4096_512_15M
- **Created:** 2024-05-03T19:11:10+00:00
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_32_32_0.05_2_5e-05 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T19:11:20+00:00 | [
"1910.09700"
] | [] | TAGS
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Armandodelca/Prototipo_7_EMI | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T19:12:51+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
22,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
sqlcoder-7b-2 - bnb 8bits
- Model creator: https://huggingface.co/defog/
- Original model: https://huggingface.co/defog/sqlcoder-7b-2/
Original model description:
---
license: cc-by-sa-4.0
library_name: transformers
pipeline_tag: text-generation
---
# Update notice
The model weights were updated at 7 AM UTC on Feb 7, 2024. The new model weights lead to a much more performant model – particularly for joins.
If you downloaded the model before that, please redownload the weights for best performance.
# Model Card for SQLCoder-7B-2
A capable large language model for natural language to SQL generation.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/603bbad3fd770a9997b57cb6/AYUE2y14vy2XkD9MZpScu.png)
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [Defog, Inc](https://defog.ai)
- **Model type:** [Text to SQL]
- **License:** [CC-by-SA-4.0]
- **Finetuned from model:** [CodeLlama-7B]
### Model Sources [optional]
- [**HuggingFace:**](https://huggingface.co/defog/sqlcoder-70b-alpha)
- [**GitHub:**](https://github.com/defog-ai/sqlcoder)
- [**Demo:**](https://defog.ai/sqlcoder-demo/)
## Uses
This model is intended to be used by non-technical users to understand data inside their SQL databases. It is meant as an analytics tool, and not as a database admin tool.
This model has not been trained to reject malicious requests from users with write access to databases, and should only be used by users with read-only access.
## How to Get Started with the Model
Use the code [here](https://github.com/defog-ai/sqlcoder/blob/main/inference.py) to get started with the model.
## Prompt
Please use the following prompt, and remember to set `do_sample=False` and `num_beams=4` for optimal results.
```
### Task
Generate a SQL query to answer [QUESTION]{user_question}[/QUESTION]
### Database Schema
The query will run on a database with the following schema:
{table_metadata_string_DDL_statements}
### Answer
Given the database schema, here is the SQL query that [QUESTION]{user_question}[/QUESTION]
[SQL]
```
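As a concrete illustration of the recommended decoding settings, here is a hedged sketch of loading this 8-bit checkpoint and generating a query. The example question and schema are invented, and it assumes `bitsandbytes` is installed and the quantization config is read from the checkpoint (otherwise pass `load_in_8bit=True` via `BitsAndBytesConfig`):

```
# Sketch only: assumes a CUDA GPU and an installed bitsandbytes; the question
# and schema below are invented for illustration.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "RichardErkhov/defog_-_sqlcoder-7b-2-8bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

question = "How many users signed up in the last 7 days?"
prompt = f"""### Task
Generate a SQL query to answer [QUESTION]{question}[/QUESTION]

### Database Schema
The query will run on a database with the following schema:
CREATE TABLE users (id INT, created_at TIMESTAMP);

### Answer
Given the database schema, here is the SQL query that [QUESTION]{question}[/QUESTION]
[SQL]"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, do_sample=False, num_beams=4, max_new_tokens=256)
# Decode only the newly generated tokens (the SQL completion)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```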
## Evaluation
This model was evaluated on [SQL-Eval](https://github.com/defog-ai/sql-eval), a PostgreSQL based evaluation framework developed by Defog for testing and alignment of model capabilities.
You can read more about the methodology behind SQLEval [here](https://defog.ai/blog/open-sourcing-sqleval/).
### Results
We classified each generated question into one of 6 categories. The table displays the percentage of questions answered correctly by each model, broken down by category.
| | date | group_by | order_by | ratio | join | where |
| -------------- | ---- | -------- | -------- | ----- | ---- | ----- |
| sqlcoder-70b | 96 | 91.4 | 97.1 | 85.7 | 97.1 | 91.4 |
| sqlcoder-7b-2 | 96 | 91.4 | 94.3 | 91.4 | 94.3 | 77.1 |
| sqlcoder-34b | 80 | 94.3 | 85.7 | 77.1 | 85.7 | 80 |
| gpt-4 | 72 | 94.3 | 97.1 | 80 | 91.4 | 80 |
| gpt-4-turbo | 76 | 91.4 | 91.4 | 62.8 | 88.6 | 77.1 |
| natural-sql-7b | 56 | 88.6 | 85.7 | 60 | 88.6 | 80 |
| sqlcoder-7b | 64 | 82.9 | 74.3 | 54.3 | 74.3 | 74.3 |
| gpt-3.5 | 72 | 77.1 | 82.8 | 34.3 | 65.7 | 71.4 |
| claude-2 | 52 | 71.4 | 74.3 | 57.1 | 65.7 | 62.9 |
## Model Card Contact
Contact us on X at [@defogdata](https://twitter.com/defogdata), or by email at [founders@defog.ai](mailto:founders@defog.ai)
| {} | RichardErkhov/defog_-_sqlcoder-7b-2-8bits | null | [
"safetensors",
"region:us"
] | null | 2024-05-03T19:13:29+00:00 | [] | [] | TAGS
#safetensors #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
sqlcoder-7b-2 - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
license: cc-by-sa-4.0
library\_name: transformers
pipeline\_tag: text-generation
Update notice
=============
The model weights were updated at 7 AM UTC on Feb 7, 2024. The new model weights lead to a much more performant model – particularly for joins.
If you downloaded the model before that, please redownload the weights for best performance.
Model Card for SQLCoder-7B-2
============================
A capable large language model for natural language to SQL generation.
Model Details
-------------
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
* Developed by: Defog, Inc
* Model type: [Text to SQL]
* License: [CC-by-SA-4.0]
* Finetuned from model: [CodeLlama-7B]
### Model Sources [optional]
* HuggingFace:
* GitHub:
* Demo:
Uses
----
This model is intended to be used by non-technical users to understand data inside their SQL databases. It is meant as an analytics tool, and not as a database admin tool.
This model has not been trained to reject malicious requests from users with write access to databases, and should only be used by users with read-only access.
How to Get Started with the Model
---------------------------------
Use the code here to get started with the model.
Prompt
------
Please use the following prompt, and remember to set 'do\_sample=False' and 'num\_beams=4' for optimal results.
Evaluation
----------
This model was evaluated on SQL-Eval, a PostgreSQL based evaluation framework developed by Defog for testing and alignment of model capabilities.
You can read more about the methodology behind SQLEval here.
### Results
We classified each generated question into one of 6 categories. The table displays the percentage of questions answered correctly by each model, broken down by category.
Model Card Contact
------------------
Contact us on X at @defogdata, or by email at founders@URL
| [
"### Model Description\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n\n* Developed by: Defog, Inc\n* Model type: [Text to SQL]\n* License: [CC-by-SA-4.0]\n* Finetuned from model: [CodeLlama-7B]",
"### Model Sources [optional]\n\n\n* HuggingFace:\n* GitHub:\n* Demo:\n\n\nUses\n----\n\n\nThis model is intended to be used by non-technical users to understand data inside their SQL databases. It is meant as an analytics tool, and not as a database admin tool.\n\n\nThis model has not been trained to reject malicious requests from users with write access to databases, and should only be used by users with read-only access.\n\n\nHow to Get Started with the Model\n---------------------------------\n\n\nUse the code here to get started with the model.\n\n\nPrompt\n------\n\n\nPlease use the following prompt for optimal results. Please remember to use 'do\\_sample=False' and 'num\\_beams=4' for optimal results.\n\n\nEvaluation\n----------\n\n\nThis model was evaluated on SQL-Eval, a PostgreSQL based evaluation framework developed by Defog for testing and alignment of model capabilities.\n\n\nYou can read more about the methodology behind SQLEval here.",
"### Results\n\n\nWe classified each generated question into one of 6 categories. The table displays the percentage of questions answered correctly by each model, broken down by category.\n\n\n\nModel Card Contact\n------------------\n\n\nContact us on X at @defogdata, or on email at founders@URL"
] | [
"TAGS\n#safetensors #region-us \n",
"### Model Description\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n\n* Developed by: Defog, Inc\n* Model type: [Text to SQL]\n* License: [CC-by-SA-4.0]\n* Finetuned from model: [CodeLlama-7B]",
"### Model Sources [optional]\n\n\n* HuggingFace:\n* GitHub:\n* Demo:\n\n\nUses\n----\n\n\nThis model is intended to be used by non-technical users to understand data inside their SQL databases. It is meant as an analytics tool, and not as a database admin tool.\n\n\nThis model has not been trained to reject malicious requests from users with write access to databases, and should only be used by users with read-only access.\n\n\nHow to Get Started with the Model\n---------------------------------\n\n\nUse the code here to get started with the model.\n\n\nPrompt\n------\n\n\nPlease use the following prompt for optimal results. Please remember to use 'do\\_sample=False' and 'num\\_beams=4' for optimal results.\n\n\nEvaluation\n----------\n\n\nThis model was evaluated on SQL-Eval, a PostgreSQL based evaluation framework developed by Defog for testing and alignment of model capabilities.\n\n\nYou can read more about the methodology behind SQLEval here.",
"### Results\n\n\nWe classified each generated question into one of 6 categories. The table displays the percentage of questions answered correctly by each model, broken down by category.\n\n\n\nModel Card Contact\n------------------\n\n\nContact us on X at @defogdata, or on email at founders@URL"
] | [
9,
76,
241,
73
] | [
"TAGS\n#safetensors #region-us \n### Model Description\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n\n* Developed by: Defog, Inc\n* Model type: [Text to SQL]\n* License: [CC-by-SA-4.0]\n* Finetuned from model: [CodeLlama-7B]### Model Sources [optional]\n\n\n* HuggingFace:\n* GitHub:\n* Demo:\n\n\nUses\n----\n\n\nThis model is intended to be used by non-technical users to understand data inside their SQL databases. It is meant as an analytics tool, and not as a database admin tool.\n\n\nThis model has not been trained to reject malicious requests from users with write access to databases, and should only be used by users with read-only access.\n\n\nHow to Get Started with the Model\n---------------------------------\n\n\nUse the code here to get started with the model.\n\n\nPrompt\n------\n\n\nPlease use the following prompt for optimal results. Please remember to use 'do\\_sample=False' and 'num\\_beams=4' for optimal results.\n\n\nEvaluation\n----------\n\n\nThis model was evaluated on SQL-Eval, a PostgreSQL based evaluation framework developed by Defog for testing and alignment of model capabilities.\n\n\nYou can read more about the methodology behind SQLEval here.### Results\n\n\nWe classified each generated question into one of 6 categories. The table displays the percentage of questions answered correctly by each model, broken down by category.\n\n\n\nModel Card Contact\n------------------\n\n\nContact us on X at @defogdata, or on email at founders@URL"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_2-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4762
- F1 Score: 0.7889
- Accuracy: 0.789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
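
For reference, a hedged sketch of how these values might map onto Hugging Face `TrainingArguments`; the dataset loading and PEFT/LoRA configuration are omitted, since the card does not spell them out:

```
# Sketch of the reported hyperparameters as TrainingArguments; everything not
# listed in the card (data collator, LoRA config, ...) is left out on purpose.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```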
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5955 | 1.34 | 200 | 0.5400 | 0.7222 | 0.723 |
| 0.5387 | 2.68 | 400 | 0.5279 | 0.7378 | 0.738 |
| 0.5259 | 4.03 | 600 | 0.5212 | 0.7440 | 0.744 |
| 0.5216 | 5.37 | 800 | 0.5194 | 0.7418 | 0.742 |
| 0.5186 | 6.71 | 1000 | 0.5178 | 0.7380 | 0.738 |
| 0.5121 | 8.05 | 1200 | 0.5113 | 0.7370 | 0.737 |
| 0.5072 | 9.4 | 1400 | 0.5088 | 0.7378 | 0.738 |
| 0.5049 | 10.74 | 1600 | 0.5100 | 0.7390 | 0.739 |
| 0.5029 | 12.08 | 1800 | 0.5164 | 0.7475 | 0.748 |
| 0.4997 | 13.42 | 2000 | 0.5137 | 0.7435 | 0.744 |
| 0.5004 | 14.77 | 2200 | 0.5058 | 0.7422 | 0.743 |
| 0.4932 | 16.11 | 2400 | 0.5088 | 0.7445 | 0.745 |
| 0.4954 | 17.45 | 2600 | 0.5046 | 0.7419 | 0.742 |
| 0.489 | 18.79 | 2800 | 0.4987 | 0.7417 | 0.742 |
| 0.4875 | 20.13 | 3000 | 0.5027 | 0.7400 | 0.74 |
| 0.486 | 21.48 | 3200 | 0.5136 | 0.7389 | 0.74 |
| 0.4861 | 22.82 | 3400 | 0.5056 | 0.7339 | 0.734 |
| 0.4817 | 24.16 | 3600 | 0.4967 | 0.7400 | 0.74 |
| 0.4779 | 25.5 | 3800 | 0.4973 | 0.7370 | 0.737 |
| 0.4792 | 26.85 | 4000 | 0.5002 | 0.7398 | 0.74 |
| 0.4759 | 28.19 | 4200 | 0.5024 | 0.7369 | 0.737 |
| 0.4746 | 29.53 | 4400 | 0.5073 | 0.7470 | 0.747 |
| 0.4749 | 30.87 | 4600 | 0.5034 | 0.7409 | 0.741 |
| 0.4733 | 32.21 | 4800 | 0.4998 | 0.7419 | 0.742 |
| 0.4726 | 33.56 | 5000 | 0.5061 | 0.7393 | 0.74 |
| 0.4737 | 34.9 | 5200 | 0.5063 | 0.7414 | 0.742 |
| 0.4669 | 36.24 | 5400 | 0.4962 | 0.7449 | 0.745 |
| 0.469 | 37.58 | 5600 | 0.5000 | 0.7450 | 0.745 |
| 0.4658 | 38.93 | 5800 | 0.5001 | 0.7380 | 0.738 |
| 0.4631 | 40.27 | 6000 | 0.5003 | 0.7379 | 0.738 |
| 0.464 | 41.61 | 6200 | 0.4970 | 0.7400 | 0.74 |
| 0.4623 | 42.95 | 6400 | 0.5046 | 0.7459 | 0.746 |
| 0.46 | 44.3 | 6600 | 0.5083 | 0.7489 | 0.749 |
| 0.4634 | 45.64 | 6800 | 0.5060 | 0.7437 | 0.744 |
| 0.4588 | 46.98 | 7000 | 0.5045 | 0.7439 | 0.744 |
| 0.4597 | 48.32 | 7200 | 0.5028 | 0.746 | 0.746 |
| 0.4557 | 49.66 | 7400 | 0.5030 | 0.7510 | 0.751 |
| 0.4585 | 51.01 | 7600 | 0.5068 | 0.7386 | 0.739 |
| 0.4579 | 52.35 | 7800 | 0.5012 | 0.7440 | 0.744 |
| 0.4594 | 53.69 | 8000 | 0.5003 | 0.7460 | 0.746 |
| 0.4561 | 55.03 | 8200 | 0.5002 | 0.7450 | 0.745 |
| 0.4584 | 56.38 | 8400 | 0.5024 | 0.7428 | 0.743 |
| 0.4565 | 57.72 | 8600 | 0.5004 | 0.7470 | 0.747 |
| 0.4528 | 59.06 | 8800 | 0.5026 | 0.7459 | 0.746 |
| 0.4547 | 60.4 | 9000 | 0.5034 | 0.7458 | 0.746 |
| 0.4547 | 61.74 | 9200 | 0.5012 | 0.7459 | 0.746 |
| 0.4584 | 63.09 | 9400 | 0.5009 | 0.7459 | 0.746 |
| 0.4507 | 64.43 | 9600 | 0.5012 | 0.7489 | 0.749 |
| 0.4539 | 65.77 | 9800 | 0.5020 | 0.7469 | 0.747 |
| 0.4504 | 67.11 | 10000 | 0.5006 | 0.7470 | 0.747 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_tf_2-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_2-seqsight_4096_512_15M-L8_f | null | [
"region:us"
] | null | 2024-05-03T19:14:29+00:00 | [] | [] | TAGS
#region-us
| GUE\_tf\_2-seqsight\_4096\_512\_15M-L8\_f
=========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4762
* F1 Score: 0.7889
* Accuracy: 0.789
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
5,
100,
5,
52
] | [
"TAGS\n#region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_2-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4798
- F1 Score: 0.7869
- Accuracy: 0.787
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.578 | 1.34 | 200 | 0.5342 | 0.7303 | 0.733 |
| 0.531 | 2.68 | 400 | 0.5274 | 0.7439 | 0.745 |
| 0.5177 | 4.03 | 600 | 0.5157 | 0.7400 | 0.74 |
| 0.5099 | 5.37 | 800 | 0.5128 | 0.7489 | 0.749 |
| 0.5048 | 6.71 | 1000 | 0.5149 | 0.7448 | 0.745 |
| 0.4968 | 8.05 | 1200 | 0.5041 | 0.7375 | 0.738 |
| 0.4897 | 9.4 | 1400 | 0.5042 | 0.7520 | 0.752 |
| 0.486 | 10.74 | 1600 | 0.5024 | 0.7480 | 0.748 |
| 0.4817 | 12.08 | 1800 | 0.5059 | 0.7574 | 0.758 |
| 0.4755 | 13.42 | 2000 | 0.5121 | 0.7437 | 0.744 |
| 0.4763 | 14.77 | 2200 | 0.5078 | 0.7339 | 0.736 |
| 0.4663 | 16.11 | 2400 | 0.5129 | 0.7577 | 0.758 |
| 0.4685 | 17.45 | 2600 | 0.5037 | 0.7478 | 0.748 |
| 0.4603 | 18.79 | 2800 | 0.4975 | 0.7444 | 0.745 |
| 0.4557 | 20.13 | 3000 | 0.5109 | 0.7469 | 0.747 |
| 0.4502 | 21.48 | 3200 | 0.5222 | 0.7300 | 0.731 |
| 0.4525 | 22.82 | 3400 | 0.5181 | 0.7539 | 0.754 |
| 0.4457 | 24.16 | 3600 | 0.5046 | 0.7480 | 0.748 |
| 0.4382 | 25.5 | 3800 | 0.5103 | 0.7479 | 0.748 |
| 0.4378 | 26.85 | 4000 | 0.5076 | 0.7479 | 0.748 |
| 0.4323 | 28.19 | 4200 | 0.5127 | 0.7404 | 0.741 |
| 0.4281 | 29.53 | 4400 | 0.5187 | 0.7369 | 0.737 |
| 0.4288 | 30.87 | 4600 | 0.5104 | 0.7460 | 0.746 |
| 0.4232 | 32.21 | 4800 | 0.5187 | 0.7560 | 0.756 |
| 0.4203 | 33.56 | 5000 | 0.5202 | 0.7537 | 0.754 |
| 0.4205 | 34.9 | 5200 | 0.5271 | 0.7454 | 0.746 |
| 0.409 | 36.24 | 5400 | 0.5216 | 0.7489 | 0.749 |
| 0.4114 | 37.58 | 5600 | 0.5241 | 0.7477 | 0.748 |
| 0.4077 | 38.93 | 5800 | 0.5173 | 0.7479 | 0.748 |
| 0.404 | 40.27 | 6000 | 0.5202 | 0.7560 | 0.756 |
| 0.4026 | 41.61 | 6200 | 0.5207 | 0.7430 | 0.743 |
| 0.3983 | 42.95 | 6400 | 0.5391 | 0.7477 | 0.748 |
| 0.3954 | 44.3 | 6600 | 0.5431 | 0.7377 | 0.738 |
| 0.3973 | 45.64 | 6800 | 0.5416 | 0.7351 | 0.736 |
| 0.3911 | 46.98 | 7000 | 0.5404 | 0.7419 | 0.742 |
| 0.3916 | 48.32 | 7200 | 0.5340 | 0.7429 | 0.743 |
| 0.3874 | 49.66 | 7400 | 0.5330 | 0.7450 | 0.745 |
| 0.3831 | 51.01 | 7600 | 0.5419 | 0.7387 | 0.739 |
| 0.3811 | 52.35 | 7800 | 0.5460 | 0.7430 | 0.743 |
| 0.3823 | 53.69 | 8000 | 0.5400 | 0.7440 | 0.744 |
| 0.3795 | 55.03 | 8200 | 0.5479 | 0.7407 | 0.741 |
| 0.3828 | 56.38 | 8400 | 0.5518 | 0.7407 | 0.741 |
| 0.379 | 57.72 | 8600 | 0.5405 | 0.7458 | 0.746 |
| 0.3751 | 59.06 | 8800 | 0.5438 | 0.7388 | 0.739 |
| 0.3759 | 60.4 | 9000 | 0.5491 | 0.7407 | 0.741 |
| 0.3729 | 61.74 | 9200 | 0.5489 | 0.7458 | 0.746 |
| 0.3759 | 63.09 | 9400 | 0.5501 | 0.7437 | 0.744 |
| 0.3732 | 64.43 | 9600 | 0.5483 | 0.7388 | 0.739 |
| 0.375 | 65.77 | 9800 | 0.5503 | 0.7446 | 0.745 |
| 0.369 | 67.11 | 10000 | 0.5488 | 0.7369 | 0.737 |
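
To try the fine-tuned weights, one might load the adapter on top of the base model roughly as below. This is a sketch: the assumptions (that the repo stores a PEFT adapter, that the task is binary sequence classification, and that no `trust_remote_code` is needed) come from the card's context, not from documentation:

```
# Sketch only: the sequence-classification head and num_labels=2 are
# assumptions -- the card does not document them.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_4096_512_15M"
adapter_id = "mahdibaghbanzadeh/GUE_tf_2-seqsight_4096_512_15M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
print(model(**inputs).logits)
```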
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_tf_2-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_2-seqsight_4096_512_15M-L32_f | null | [
"region:us"
] | null | 2024-05-03T19:15:29+00:00 | [] | [] | TAGS
#region-us
| GUE\_tf\_2-seqsight\_4096\_512\_15M-L32\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4798
* F1 Score: 0.7869
* Accuracy: 0.787
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
5,
100,
5,
52
] | [
"TAGS\n#region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_virus_covid-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9282
- F1 Score: 0.2779
- Accuracy: 0.2832
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 2.1857 | 0.35 | 200 | 2.1860 | 0.0716 | 0.1278 |
| 2.183 | 0.7 | 400 | 2.1840 | 0.0572 | 0.1247 |
| 2.1793 | 1.05 | 600 | 2.1784 | 0.0832 | 0.1352 |
| 2.1765 | 1.4 | 800 | 2.1736 | 0.0908 | 0.1396 |
| 2.1694 | 1.75 | 1000 | 2.1685 | 0.1077 | 0.1492 |
| 2.1674 | 2.09 | 1200 | 2.1739 | 0.0989 | 0.1451 |
| 2.1644 | 2.44 | 1400 | 2.1706 | 0.1217 | 0.1480 |
| 2.1612 | 2.79 | 1600 | 2.1591 | 0.1324 | 0.1615 |
| 2.1568 | 3.14 | 1800 | 2.1538 | 0.1270 | 0.1687 |
| 2.1522 | 3.49 | 2000 | 2.1578 | 0.1333 | 0.1689 |
| 2.1535 | 3.84 | 2200 | 2.1464 | 0.1515 | 0.1814 |
| 2.1446 | 4.19 | 2400 | 2.1428 | 0.1506 | 0.1716 |
| 2.1418 | 4.54 | 2600 | 2.1369 | 0.1560 | 0.1859 |
| 2.138 | 4.89 | 2800 | 2.1304 | 0.1727 | 0.1887 |
| 2.133 | 5.24 | 3000 | 2.1352 | 0.1610 | 0.1908 |
| 2.1303 | 5.58 | 3200 | 2.1227 | 0.1787 | 0.2040 |
| 2.127 | 5.93 | 3400 | 2.1300 | 0.1451 | 0.1809 |
| 2.121 | 6.28 | 3600 | 2.1118 | 0.1827 | 0.2046 |
| 2.1132 | 6.63 | 3800 | 2.0989 | 0.1781 | 0.2027 |
| 2.11 | 6.98 | 4000 | 2.0828 | 0.2078 | 0.2254 |
| 2.0955 | 7.33 | 4200 | 2.0556 | 0.2196 | 0.2338 |
| 2.0834 | 7.68 | 4400 | 2.0488 | 0.2224 | 0.2342 |
| 2.0747 | 8.03 | 4600 | 2.0685 | 0.1803 | 0.2083 |
| 2.0662 | 8.38 | 4800 | 2.0344 | 0.2150 | 0.2323 |
| 2.0627 | 8.73 | 5000 | 2.0267 | 0.2107 | 0.2333 |
| 2.0541 | 9.08 | 5200 | 2.0213 | 0.2244 | 0.2355 |
| 2.0482 | 9.42 | 5400 | 2.0056 | 0.2347 | 0.2490 |
| 2.0413 | 9.77 | 5600 | 2.0041 | 0.2293 | 0.2441 |
| 2.0395 | 10.12 | 5800 | 1.9909 | 0.2505 | 0.2573 |
| 2.0322 | 10.47 | 6000 | 1.9841 | 0.2563 | 0.2616 |
| 2.0275 | 10.82 | 6200 | 1.9875 | 0.2414 | 0.2515 |
| 2.0227 | 11.17 | 6400 | 1.9840 | 0.2401 | 0.2509 |
| 2.0205 | 11.52 | 6600 | 1.9861 | 0.2374 | 0.2514 |
| 2.0191 | 11.87 | 6800 | 1.9717 | 0.2484 | 0.2594 |
| 2.0118 | 12.22 | 7000 | 1.9615 | 0.2657 | 0.2700 |
| 2.008 | 12.57 | 7200 | 1.9528 | 0.2658 | 0.2708 |
| 2.0108 | 12.91 | 7400 | 1.9626 | 0.2555 | 0.2638 |
| 2.0043 | 13.26 | 7600 | 1.9508 | 0.2567 | 0.2681 |
| 1.9972 | 13.61 | 7800 | 1.9566 | 0.2538 | 0.2635 |
| 1.9999 | 13.96 | 8000 | 1.9473 | 0.2719 | 0.2755 |
| 1.9947 | 14.31 | 8200 | 1.9432 | 0.2678 | 0.2758 |
| 1.9987 | 14.66 | 8400 | 1.9337 | 0.2747 | 0.2785 |
| 1.9902 | 15.01 | 8600 | 1.9422 | 0.2650 | 0.2717 |
| 1.9921 | 15.36 | 8800 | 1.9332 | 0.2762 | 0.2783 |
| 1.9841 | 15.71 | 9000 | 1.9405 | 0.2699 | 0.2780 |
| 1.9876 | 16.06 | 9200 | 1.9298 | 0.2772 | 0.2806 |
| 1.9878 | 16.4 | 9400 | 1.9299 | 0.2749 | 0.2798 |
| 1.9869 | 16.75 | 9600 | 1.9348 | 0.2755 | 0.2804 |
| 1.9865 | 17.1 | 9800 | 1.9314 | 0.2739 | 0.2793 |
| 1.9921 | 17.45 | 10000 | 1.9304 | 0.2764 | 0.2804 |
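
The two reported columns correspond to standard classification metrics; below is a small sketch of how they are typically computed. Whether the F1 here is macro- or weighted-averaged is an assumption, as the card does not say:

```
# Toy illustration of the reported metrics; the macro averaging is an assumption.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 2, 2, 1, 0]  # invented labels
y_pred = [0, 1, 2, 1, 1, 0]

print("Accuracy:", accuracy_score(y_true, y_pred))
print("F1 Score:", f1_score(y_true, y_pred, average="macro"))
```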
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_virus_covid-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_virus_covid-seqsight_4096_512_15M-L1_f | null | [
"region:us"
] | null | 2024-05-03T19:16:42+00:00 | [] | [] | TAGS
#region-us
| GUE\_virus\_covid-seqsight\_4096\_512\_15M-L1\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_virus\_covid dataset.
It achieves the following results on the evaluation set:
* Loss: 1.9282
* F1 Score: 0.2779
* Accuracy: 0.2832
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
5,
100,
5,
52
] | [
"TAGS\n#region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | null |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/ofn2ele | null | [
"region:us"
] | null | 2024-05-03T19:16:49+00:00 | [] | [] | TAGS
#region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
5,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_virus_covid-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5376
- F1 Score: 0.4255
- Accuracy: 0.4204
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 2.1853 | 0.35 | 200 | 2.1837 | 0.0874 | 0.1342 |
| 2.1805 | 0.7 | 400 | 2.1771 | 0.0925 | 0.1295 |
| 2.1715 | 1.05 | 600 | 2.1691 | 0.1098 | 0.1369 |
| 2.1617 | 1.4 | 800 | 2.1614 | 0.1110 | 0.1502 |
| 2.1494 | 1.75 | 1000 | 2.1453 | 0.1504 | 0.1804 |
| 2.1399 | 2.09 | 1200 | 2.1424 | 0.1226 | 0.1747 |
| 2.11 | 2.44 | 1400 | 2.0699 | 0.1890 | 0.2135 |
| 2.0694 | 2.79 | 1600 | 2.0231 | 0.2130 | 0.2406 |
| 2.0288 | 3.14 | 1800 | 2.0134 | 0.2062 | 0.2318 |
| 1.9962 | 3.49 | 2000 | 1.9502 | 0.2475 | 0.2598 |
| 1.9712 | 3.84 | 2200 | 1.8961 | 0.2710 | 0.2816 |
| 1.9382 | 4.19 | 2400 | 1.8577 | 0.2936 | 0.2901 |
| 1.9121 | 4.54 | 2600 | 1.8328 | 0.3132 | 0.3178 |
| 1.8976 | 4.89 | 2800 | 1.8175 | 0.3134 | 0.3129 |
| 1.875 | 5.24 | 3000 | 1.7826 | 0.3280 | 0.3340 |
| 1.8617 | 5.58 | 3200 | 1.7518 | 0.3499 | 0.3488 |
| 1.8365 | 5.93 | 3400 | 1.7553 | 0.3296 | 0.3388 |
| 1.8209 | 6.28 | 3600 | 1.7260 | 0.3515 | 0.3516 |
| 1.8059 | 6.63 | 3800 | 1.7081 | 0.3620 | 0.3599 |
| 1.8003 | 6.98 | 4000 | 1.7012 | 0.3732 | 0.3702 |
| 1.7834 | 7.33 | 4200 | 1.6943 | 0.3664 | 0.3658 |
| 1.7706 | 7.68 | 4400 | 1.6790 | 0.3783 | 0.3660 |
| 1.767 | 8.03 | 4600 | 1.6793 | 0.3684 | 0.3688 |
| 1.7547 | 8.38 | 4800 | 1.6680 | 0.3748 | 0.3752 |
| 1.7509 | 8.73 | 5000 | 1.6592 | 0.3763 | 0.3802 |
| 1.7496 | 9.08 | 5200 | 1.6561 | 0.3869 | 0.3803 |
| 1.7273 | 9.42 | 5400 | 1.6421 | 0.3869 | 0.3880 |
| 1.7283 | 9.77 | 5600 | 1.6331 | 0.3979 | 0.3955 |
| 1.725 | 10.12 | 5800 | 1.6186 | 0.4024 | 0.3932 |
| 1.7221 | 10.47 | 6000 | 1.6145 | 0.3986 | 0.3946 |
| 1.7101 | 10.82 | 6200 | 1.6078 | 0.4082 | 0.4012 |
| 1.6922 | 11.17 | 6400 | 1.6023 | 0.4073 | 0.4024 |
| 1.6973 | 11.52 | 6600 | 1.5917 | 0.4116 | 0.4045 |
| 1.6989 | 11.87 | 6800 | 1.5862 | 0.4106 | 0.4053 |
| 1.684 | 12.22 | 7000 | 1.5780 | 0.4176 | 0.4108 |
| 1.674 | 12.57 | 7200 | 1.5750 | 0.4172 | 0.4123 |
| 1.6799 | 12.91 | 7400 | 1.5693 | 0.4194 | 0.4140 |
| 1.6687 | 13.26 | 7600 | 1.5574 | 0.4183 | 0.4153 |
| 1.6716 | 13.61 | 7800 | 1.5663 | 0.4222 | 0.4162 |
| 1.6615 | 13.96 | 8000 | 1.5567 | 0.4226 | 0.4177 |
| 1.6562 | 14.31 | 8200 | 1.5533 | 0.4217 | 0.4166 |
| 1.6584 | 14.66 | 8400 | 1.5481 | 0.4290 | 0.4196 |
| 1.656 | 15.01 | 8600 | 1.5455 | 0.4272 | 0.4237 |
| 1.6563 | 15.36 | 8800 | 1.5480 | 0.4297 | 0.4204 |
| 1.639 | 15.71 | 9000 | 1.5463 | 0.4260 | 0.4224 |
| 1.6507 | 16.06 | 9200 | 1.5438 | 0.4242 | 0.4192 |
| 1.6477 | 16.4 | 9400 | 1.5385 | 0.4275 | 0.4226 |
| 1.6475 | 16.75 | 9600 | 1.5404 | 0.4289 | 0.4243 |
| 1.6414 | 17.1 | 9800 | 1.5406 | 0.4294 | 0.4249 |
| 1.6511 | 17.45 | 10000 | 1.5388 | 0.4300 | 0.4249 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_virus_covid-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_virus_covid-seqsight_4096_512_15M-L8_f | null | [
"region:us"
] | null | 2024-05-03T19:16:59+00:00 | [] | [] | TAGS
#region-us
| GUE\_virus\_covid-seqsight\_4096\_512\_15M-L8\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_virus\_covid dataset.
It achieves the following results on the evaluation set:
* Loss: 1.5376
* F1 Score: 0.4255
* Accuracy: 0.4204
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
5,
100,
5,
52
] | [
"TAGS\n#region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |