modelId (string, 5–122 chars) | author (string, 2–42 chars) | last_modified (unknown) | downloads (int64, 0–738M) | likes (int64, 0–11k) | library_name (245 classes) | tags (sequence, 1–4.05k items) | pipeline_tag (48 classes) | createdAt (unknown) | card (string, 1–901k chars)
---|---|---|---|---|---|---|---|---|---|
apwic/summarization-lora-3 | apwic | "2024-07-02T21:52:30Z" | 0 | 0 | null | [
"generated_from_trainer",
"id",
"base_model:LazarusNLP/IndoNanoT5-base",
"license:apache-2.0",
"region:us"
] | null | "2024-07-02T18:35:08Z" | ---
language:
- id
license: apache-2.0
base_model: LazarusNLP/IndoNanoT5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: summarization-lora-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarization-lora-3
This model is a fine-tuned version of [LazarusNLP/IndoNanoT5-base](https://huggingface.co/LazarusNLP/IndoNanoT5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5917
- Rouge1: 0.3832
- Rouge2: 0.0
- Rougel: 0.3816
- Rougelsum: 0.3824
- Gen Len: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
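Since the card does not yet include usage code, here is a minimal, hypothetical inference sketch. It assumes the repository hosts weights loadable as a standard seq2seq checkpoint; if it stores only LoRA adapters, load them over the base model with PEFT instead. The example text is illustrative.
```python
# Hypothetical sketch: treat the repo as a standard seq2seq checkpoint.
# If only LoRA adapters are stored, wrap the base model with peft.PeftModel instead.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "apwic/summarization-lora-3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Contoh teks berita berbahasa Indonesia yang ingin diringkas."  # illustrative input
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, num_beams=4, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```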
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.2253 | 1.0 | 1784 | 0.6577 | 0.4006 | 0.0 | 0.3969 | 0.4007 | 1.0 |
| 0.7893 | 2.0 | 3568 | 0.6133 | 0.4188 | 0.0 | 0.4167 | 0.4194 | 1.0 |
| 0.7351 | 3.0 | 5352 | 0.6062 | 0.3925 | 0.0 | 0.3898 | 0.3908 | 1.0 |
| 0.7092 | 4.0 | 7136 | 0.5990 | 0.3834 | 0.0 | 0.381 | 0.3826 | 1.0 |
| 0.6978 | 5.0 | 8920 | 0.5917 | 0.3832 | 0.0 | 0.3816 | 0.3824 | 1.0 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Litzy0619/acronym_identification_0.003_32_5_6 | Litzy0619 | "2024-07-02T18:38:58Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2024-07-02T18:35:20Z" | Entry not found |
fifala/16-fifa-07-02-01 | fifala | "2024-07-02T18:38:40Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:35:55Z" | Entry not found |
vishnuhaasan/xlnet_base_all | vishnuhaasan | "2024-07-02T22:34:34Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlnet",
"text-classification",
"generated_from_trainer",
"base_model:xlnet/xlnet-base-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-02T18:36:19Z" | ---
license: mit
base_model: xlnet/xlnet-base-cased
tags:
- generated_from_trainer
model-index:
- name: xlnet_base_all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet_base_all
This model is a fine-tuned version of [xlnet/xlnet-base-cased](https://huggingface.co/xlnet/xlnet-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4611
## Model description
More information needed
## Intended uses & limitations
More information needed
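As a placeholder until the card is completed, a minimal inference sketch is shown below; it assumes the checkpoint loads with the standard `text-classification` pipeline, and the example sentence is illustrative.
```python
# Minimal sketch, assuming a standard sequence-classification head on XLNet.
from transformers import pipeline

classifier = pipeline("text-classification", model="vishnuhaasan/xlnet_base_all")
print(classifier("An example sentence to classify."))  # illustrative input
```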
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.3162 | 1.0 | 1949 | 0.4187 |
| 0.1553 | 2.0 | 3898 | 0.3679 |
| 0.1649 | 3.0 | 5847 | 0.3724 |
| 0.2501 | 4.0 | 7796 | 0.3303 |
| 0.2847 | 5.0 | 9745 | 0.3959 |
| 0.2092 | 6.0 | 11694 | 0.4093 |
| 0.0744 | 7.0 | 13643 | 0.4144 |
| 0.0403 | 8.0 | 15592 | 0.4852 |
| 0.0003 | 9.0 | 17541 | 0.4603 |
| 0.0002 | 10.0 | 19490 | 0.4611 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
clefab/affiche | clefab | "2024-07-02T18:37:48Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T18:36:35Z" | A SHOE IN A PUDDLE |
shilpap/ssearch | shilpap | "2024-07-02T18:36:48Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T18:36:48Z" | Entry not found |
healtori/14-heal-07-02-01 | healtori | "2024-07-02T18:40:47Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:37:47Z" | Entry not found |
starnet/03-star21-07-02 | starnet | "2024-07-02T18:45:28Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | null | "2024-07-02T18:38:00Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
crumb/13f189-augmented-mappings-medium-control | crumb | "2024-07-03T01:27:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T18:38:32Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------|------:|------|-----:|--------|-----:|---|-----:|
|arc_challenge| 1|none | 25|acc |0.2108|± |0.0119|
| | |none | 25|acc_norm|0.2423|± |0.0125|
|truthfulqa_mc2| 2|none | 0|acc |0.4356|± |0.0151|
|winogrande| 1|none | 5|acc |0.5138|± | 0.014|
|hellaswag| 1|none | 10|acc |0.2938|± |0.0045|
| | |none | 10|acc_norm|0.3242|± |0.0047|
|gsm8k| 3|strict-match | 5|exact_match|0.0129|± |0.0031|
| | |flexible-extract| 5|exact_match|0.0197|± |0.0038|
### MMLU *(overall: acc 0.2650, stderr 0.0045)*
| Tasks |Version|Filter|n-shot|Metric|Value | |Stderr|
|-----------------------------------|------:|------|-----:|------|-----:|---|-----:|
|world_religions | 0|none | 5|acc |0.2281|± |0.0322|
|virology | 0|none | 5|acc |0.1747|± |0.0296|
|us_foreign_policy | 0|none | 5|acc |0.2600|± |0.0441|
|sociology | 0|none | 5|acc |0.2736|± |0.0315|
|security_studies | 0|none | 5|acc |0.4000|± |0.0314|
|public_relations | 0|none | 5|acc |0.2273|± |0.0401|
|professional_psychology | 0|none | 5|acc |0.2467|± |0.0174|
|professional_medicine | 0|none | 5|acc |0.4485|± |0.0302|
|professional_law | 0|none | 5|acc |0.2490|± |0.0110|
|professional_accounting | 0|none | 5|acc |0.2340|± |0.0253|
|prehistory | 0|none | 5|acc |0.2315|± |0.0235|
|philosophy | 0|none | 5|acc |0.2154|± |0.0234|
|nutrition | 0|none | 5|acc |0.2516|± |0.0248|
|moral_scenarios | 0|none | 5|acc |0.2536|± |0.0146|
|moral_disputes | 0|none | 5|acc |0.1879|± |0.0210|
|miscellaneous | 0|none | 5|acc |0.2197|± |0.0148|
|medical_genetics | 0|none | 5|acc |0.1900|± |0.0394|
|marketing | 0|none | 5|acc |0.1923|± |0.0258|
|management | 0|none | 5|acc |0.3301|± |0.0466|
|machine_learning | 0|none | 5|acc |0.1875|± |0.0370|
|logical_fallacies | 0|none | 5|acc |0.2577|± |0.0344|
|jurisprudence | 0|none | 5|acc |0.2222|± |0.0402|
|international_law | 0|none | 5|acc |0.3802|± |0.0443|
|human_sexuality | 0|none | 5|acc |0.2137|± |0.0360|
|human_aging | 0|none | 5|acc |0.1121|± |0.0212|
|high_school_world_history | 0|none | 5|acc |0.2743|± |0.0290|
|high_school_us_history | 0|none | 5|acc |0.2353|± |0.0298|
|high_school_statistics | 0|none | 5|acc |0.4722|± |0.0340|
|high_school_psychology | 0|none | 5|acc |0.3358|± |0.0202|
|high_school_physics | 0|none | 5|acc |0.3245|± |0.0382|
|high_school_microeconomics | 0|none | 5|acc |0.2605|± |0.0285|
|high_school_mathematics | 0|none | 5|acc |0.2741|± |0.0272|
|high_school_macroeconomics | 0|none | 5|acc |0.3615|± |0.0244|
|high_school_government_and_politics| 0|none | 5|acc |0.3679|± |0.0348|
|high_school_geography | 0|none | 5|acc |0.3535|± |0.0341|
|high_school_european_history | 0|none | 5|acc |0.2485|± |0.0337|
|high_school_computer_science | 0|none | 5|acc |0.1600|± |0.0368|
|high_school_chemistry | 0|none | 5|acc |0.2709|± |0.0313|
|high_school_biology | 0|none | 5|acc |0.3032|± |0.0261|
|global_facts | 0|none | 5|acc |0.2500|± |0.0435|
|formal_logic | 0|none | 5|acc |0.1587|± |0.0327|
|elementary_mathematics | 0|none | 5|acc |0.2857|± |0.0233|
|electrical_engineering | 0|none | 5|acc |0.2483|± |0.0360|
|econometrics | 0|none | 5|acc |0.2895|± |0.0427|
|conceptual_physics | 0|none | 5|acc |0.2894|± |0.0296|
|computer_security | 0|none | 5|acc |0.1900|± |0.0394|
|college_physics | 0|none | 5|acc |0.2451|± |0.0428|
|college_medicine | 0|none | 5|acc |0.2775|± |0.0341|
|college_mathematics | 0|none | 5|acc |0.2800|± |0.0451|
|college_computer_science | 0|none | 5|acc |0.2400|± |0.0429|
|college_chemistry | 0|none | 5|acc |0.3300|± |0.0473|
|college_biology | 0|none | 5|acc |0.2639|± |0.0369|
|clinical_knowledge | 0|none | 5|acc |0.3094|± |0.0285|
|business_ethics | 0|none | 5|acc |0.1900|± |0.0394|
|astronomy | 0|none | 5|acc |0.2303|± |0.0343|
|anatomy | 0|none | 5|acc |0.3259|± |0.0405|
|abstract_algebra | 0|none | 5|acc |0.2700|± |0.0446|
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
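In the absence of official instructions, the following is a hedged sketch based only on the repo tags (`mistral`, `text-generation`); the prompt is illustrative.
```python
# Hedged sketch: the tags suggest a Mistral-architecture causal LM,
# so standard AutoModelForCausalLM loading should apply (assumption).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "crumb/13f189-augmented-mappings-medium-control"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```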
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
starnet/11-star-07-02-01 | starnet | "2024-07-02T18:41:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:38:39Z" | Entry not found |
sara-m98/ECO_GPT_FINAL_1 | sara-m98 | "2024-07-02T20:43:10Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | token-classification | "2024-07-02T18:39:18Z" | The base checkpoint is `openai-community/gpt2`, fine-tuned with the following `TrainingArguments`:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ECO_GPT",
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=32,
    weight_decay=0.01,
    save_strategy="epoch",
    load_best_model_at_end=True,
    push_to_hub=True,
)
```
| Epoch | Training Loss | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-----:|:-------------:|:---------------:|:---------:|:------:|:--:|:--------:|
| 1 | No log | 0.197675 | 0.285714 | 0.212743 | 0.243887 | 0.959920 |
| 2 | 0.368900 | 0.159557 | 0.311457 | 0.302376 | 0.306849 | 0.961646 |
| 3 | 0.368900 | 0.146986 | 0.324180 | 0.344222 | 0.333901 | 0.961224 |
| 4 | 0.129100 | 0.145931 | 0.340494 | 0.372300 | 0.355687 | 0.960939 |
| 5 | 0.129100 | 0.144521 | 0.334234 | 0.367441 | 0.350051 | 0.961286 |
| 6 | 0.098700 | 0.141342 | 0.323340 | 0.393089 | 0.354819 | 0.962803 |
| 7 | 0.098700 | 0.144871 | 0.319322 | 0.381479 | 0.347644 | 0.962950 |
| 8 | 0.079700 | 0.147587 | 0.283885 | 0.377160 | 0.323942 | 0.961504 |
| 9 | 0.079700 | 0.152369 | 0.294302 | 0.355562 | 0.322044 | 0.961081 |
| 10 | 0.066600 | 0.154832 | 0.304663 | 0.373920 | 0.335758 | 0.962580 |
| 11 | 0.066600 | 0.160076 | 0.295521 | 0.381210 | 0.332940 | 0.960837 |
| 12 | 0.056600 | 0.164543 | 0.306661 | 0.370410 | 0.335534 | 0.962100 |
| 13 | 0.056600 | 0.166050 | 0.331369 | 0.379590 | 0.353844 | 0.963194 |
| 14 | 0.048700 | 0.169381 | 0.307859 | 0.383909 | 0.341704 | 0.961958 |
| 15 | 0.048700 | 0.179932 | 0.306340 | 0.380940 | 0.339591 | 0.960970 |
| 16 | 0.042500 | 0.180197 | 0.313375 | 0.368790 | 0.338832 | 0.962193 |
| 17 | 0.042500 | 0.186068 | 0.297326 | 0.384179 | 0.335218 | 0.960623 |
| 18 | 0.037500 | 0.195882 | 0.313199 | 0.377970 | 0.342550 | 0.961268 |
| 19 | 0.037500 | 0.197563 | 0.304962 | 0.384989 | 0.340334 | 0.960783 |
| 20 | 0.034100 | 0.201612 | 0.294698 | 0.370680 | 0.328351 | 0.960765 |
| 21 | 0.034100 | 0.207106 | 0.296033 | 0.382829 | 0.333883 | 0.959849 |
| 22 | 0.030900 | 0.205857 | 0.293838 | 0.383639 | 0.332787 | 0.960080 |
| 23 | 0.030900 | 0.211530 | 0.309654 | 0.383639 | 0.342699 | 0.961041 |
| 24 | 0.028700 | 0.215475 | 0.312196 | 0.381479 | 0.343378 | 0.961108 |
| 25 | 0.028700 | 0.213058 | 0.311651 | 0.377700 | 0.341511 | 0.961700 |
| 26 | 0.026700 | 0.216595 | 0.312333 | 0.378780 | 0.342362 | 0.961508 |
| 27 | 0.026700 | 0.223365 | 0.318739 | 0.379320 | 0.346400 | 0.961722 |
| 28 | 0.025400 | 0.224544 | 0.312472 | 0.373380 | 0.340221 | 0.961517 |
| 29 | 0.025400 | 0.225070 | 0.319539 | 0.381479 | 0.347773 | 0.961878 |
| 30 | 0.024300 | 0.223071 | 0.308169 | 0.375810 | 0.338645 | 0.961455 |
| 31 | 0.024300 | 0.225455 | 0.306649 | 0.372300 | 0.336300 | 0.961268 |
| 32 | 0.023900 | 0.224405 | 0.308620 | 0.373110 | 0.337815 | 0.961384 |
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6634a6ddbcf56d1302dc1e82/LwJz_e7_EKlh4_c-RwS3f.png)
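The card does not include inference code; a minimal, hypothetical sketch follows, assuming the checkpoint loads with the standard `token-classification` pipeline (the input text is illustrative).
```python
# Hypothetical sketch: the repo is tagged token-classification on a GPT-2 backbone.
from transformers import pipeline

tagger = pipeline("token-classification", model="sara-m98/ECO_GPT_FINAL_1")
print(tagger("Example text to tag."))  # illustrative input
```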
|
fifala/17-fifa-07-02-01 | fifala | "2024-07-02T18:42:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:39:31Z" | Entry not found |
Litzy0619/ade_corpus_v2-dosage_0.003_32_5_6 | Litzy0619 | "2024-07-02T18:44:55Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2024-07-02T18:41:19Z" | Entry not found |
ajayarora1235/lyric_model_v0 | ajayarora1235 | "2024-07-02T18:41:55Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T18:41:34Z" | ---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** ajayarora1235
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
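A minimal loading sketch (not from the authors) is shown below; it assumes this checkpoint loads the same way as other Unsloth 4-bit repos.
```python
# Minimal sketch, assuming standard Unsloth loading for a 4-bit Mistral checkpoint.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ajayarora1235/lyric_model_v0",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to inference mode
```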
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
healtori/15-heal-07-02-01 | healtori | "2024-07-02T18:44:38Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:41:47Z" | Entry not found |
Ahmad0067/llama-3-8b-Instruct-Blood_Refer_Prescription_Synth_data_Phase_1_and_2_merged_adopter | Ahmad0067 | "2024-07-02T18:44:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-07-02T18:42:02Z" | Entry not found |
juanpablomesa/bge-small-bioasq-1epoch-batch32-step50 | juanpablomesa | "2024-07-02T18:42:16Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:4012",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-small-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | "2024-07-02T18:42:12Z" | ---
base_model: BAAI/bge-small-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4012
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 'Extensive messenger RNA editing generates transcript and protein
diversity in genes involved in neural excitability, as previously described, as
well as in genes participating in a broad range of other cellular functions. '
sentences:
- Do cephalopods use RNA editing less frequently than other species?
- GV1001 vaccine targets which enzyme?
- Which event results in the acetylation of S6K1?
- source_sentence: Yes, exposure to household furry pets influences the gut microbiota
of infants.
sentences:
- Can pets affect infant microbiomed?
- What is the mode of action of Thiazovivin?
- What are the effects of CAMK4 inhibition?
- source_sentence: "In children with heart failure evidence of the effect of enalapril\
\ is empirical. Enalapril was clinically safe and effective in 50% to 80% of for\
\ children with cardiac failure secondary to congenital heart malformations before\
\ and after cardiac surgery, impaired ventricular function , valvar regurgitation,\
\ congestive cardiomyopathy, , arterial hypertension, life-threatening arrhythmias\
\ coexisting with circulatory insufficiency. \nACE inhibitors have shown a transient\
\ beneficial effect on heart failure due to anticancer drugs and possibly a beneficial\
\ effect in muscular dystrophy-associated cardiomyopathy, which deserves further\
\ studies."
sentences:
- Which receptors can be evaluated with the [18F]altanserin?
- In what proportion of children with heart failure has Enalapril been shown to
be safe and effective?
- Which major signaling pathways are regulated by RIP1?
- source_sentence: Cellular senescence-associated heterochromatic foci (SAHFS) are
a novel type of chromatin condensation involving alterations of linker histone
H1 and linker DNA-binding proteins. SAHFS can be formed by a variety of cell types,
but their mechanism of action remains unclear.
sentences:
- What is the relationship between the X chromosome and a neutrophil drumstick?
- Which microRNAs are involved in exercise adaptation?
- How are SAHFS created?
- source_sentence: Multicluster Pcdh diversity is required for mouse olfactory neural
circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins
are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although
deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss
of all three clusters (tricluster deletion) led to a severe axonal arborization
defect and loss of self-avoidance.
sentences:
- What are the effects of the deletion of all three Pcdh clusters (tricluster deletion)
in mice?
- what is the role of MEF-2 in cardiomyocyte differentiation?
- How many periods of regulatory innovation led to the evolution of vertebrates?
model-index:
- name: BGE small finetuned BIOASQ
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: BAAI/bge small en v1.5
type: BAAI/bge-small-en-v1.5
metrics:
- type: cosine_accuracy@1
value: 0.8345120226308345
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9207920792079208
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.942008486562942
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9547383309759547
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8345120226308345
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3069306930693069
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18840169731258838
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09547383309759547
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8345120226308345
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9207920792079208
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.942008486562942
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9547383309759547
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9001912196285257
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8821973013627894
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8832710505733916
name: Cosine Map@100
---
# BGE small finetuned BIOASQ
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) <!-- at revision 5c38ec7c405ec4b44b94cc5a9bb96e735b38267a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("juanpablomesa/bge-small-bioasq-1epoch-batch32-step50")
# Run inference
sentences = [
'Multicluster Pcdh diversity is required for mouse olfactory neural circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss of all three clusters (tricluster deletion) led to a severe axonal arborization defect and loss of self-avoidance.',
'What are the effects of the deletion of all three Pcdh clusters (tricluster deletion) in mice?',
'How many periods of regulatory innovation led to the evolution of vertebrates?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `BAAI/bge-small-en-v1.5`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.8345 |
| cosine_accuracy@3 | 0.9208 |
| cosine_accuracy@5 | 0.942 |
| cosine_accuracy@10 | 0.9547 |
| cosine_precision@1 | 0.8345 |
| cosine_precision@3 | 0.3069 |
| cosine_precision@5 | 0.1884 |
| cosine_precision@10 | 0.0955 |
| cosine_recall@1 | 0.8345 |
| cosine_recall@3 | 0.9208 |
| cosine_recall@5 | 0.942 |
| cosine_recall@10 | 0.9547 |
| cosine_ndcg@10 | 0.9002 |
| cosine_mrr@10 | 0.8822 |
| **cosine_map@100** | **0.8833** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 4,012 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 63.38 tokens</li><li>max: 485 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 16.13 tokens</li><li>max: 49 tokens</li></ul> |
* Samples:
| positive | anchor |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
| <code>Aberrant patterns of H3K4, H3K9, and H3K27 histone lysine methylation were shown to result in histone code alterations, which induce changes in gene expression, and affect the proliferation rate of cells in medulloblastoma.</code> | <code>What is the implication of histone lysine methylation in medulloblastoma?</code> |
| <code>STAG1/STAG2 proteins are tumour suppressor proteins that suppress cell proliferation and are essential for differentiation.</code> | <code>What is the role of STAG1/STAG2 proteins in differentiation?</code> |
| <code>The association between cell phone use and incident glioblastoma remains unclear. Some studies have reported that cell phone use was associated with incident glioblastoma, and with reduced survival of patients diagnosed with glioblastoma. However, other studies have repeatedly replicated to find an association between cell phone use and glioblastoma.</code> | <code>What is the association between cell phone use and glioblastoma?</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | BAAI/bge-small-en-v1.5_cosine_map@100 |
|:------:|:----:|:-------------:|:-------------------------------------:|
| 0.3968 | 50 | 0.2673 | 0.8807 |
| 0.7937 | 100 | 0.1576 | 0.8833 |
### Framework Versions
- Python: 3.11.5
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
starnet/12-star-07-02-01 | starnet | "2024-07-02T18:46:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:42:53Z" | Entry not found |
shriyakan/auto-gptq-repo | shriyakan | "2024-07-02T18:43:01Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T18:43:01Z" | Invalid username or password. |
RichardErkhov/NickyNicky_-_Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1-gguf | RichardErkhov | "2024-07-02T19:00:04Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T18:43:04Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1 - GGUF
- Model creator: https://huggingface.co/NickyNicky/
- Original model: https://huggingface.co/NickyNicky/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q2_K.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1-gguf/blob/main/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q2_K.gguf) | Q2_K | 0.92GB |
| [Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1-gguf/blob/main/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.IQ3_XS.gguf) | IQ3_XS | 1.02GB |
| [Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1-gguf/blob/main/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.IQ3_S.gguf) | IQ3_S | 1.08GB |
| [Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1-gguf/blob/main/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q3_K_S.gguf) | Q3_K_S | 1.07GB |
| [Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1-gguf/blob/main/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.IQ3_M.gguf) | IQ3_M | 1.1GB |
| [Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q3_K.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1-gguf/blob/main/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q3_K.gguf) | Q3_K | 1.19GB |
| [Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1-gguf/blob/main/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q3_K_M.gguf) | Q3_K_M | 1.19GB |
| [Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1-gguf/blob/main/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q3_K_L.gguf) | Q3_K_L | 1.28GB |
| [Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1-gguf/blob/main/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.IQ4_XS.gguf) | IQ4_XS | 1.33GB |
| [Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q4_0.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1-gguf/blob/main/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q4_0.gguf) | Q4_0 | 1.39GB |
| [Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1-gguf/blob/main/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.IQ4_NL.gguf) | IQ4_NL | 1.4GB |
| [Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1-gguf/blob/main/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q4_K_S.gguf) | Q4_K_S | 1.4GB |
| [Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q4_K.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1-gguf/blob/main/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q4_K.gguf) | Q4_K | 1.48GB |
| [Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1-gguf/blob/main/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q4_K_M.gguf) | Q4_K_M | 1.48GB |
| [Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q4_1.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1-gguf/blob/main/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q4_1.gguf) | Q4_1 | 1.54GB |
| [Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q5_0.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1-gguf/blob/main/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q5_0.gguf) | Q5_0 | 1.69GB |
| [Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1-gguf/blob/main/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q5_K_S.gguf) | Q5_K_S | 1.69GB |
| [Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q5_K.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1-gguf/blob/main/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q5_K.gguf) | Q5_K | 1.73GB |
| [Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1-gguf/blob/main/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q5_K_M.gguf) | Q5_K_M | 1.73GB |
| [Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q5_1.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1-gguf/blob/main/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q5_1.gguf) | Q5_1 | 1.84GB |
| [Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q6_K.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1-gguf/blob/main/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q6_K.gguf) | Q6_K | 2.01GB |
| [Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q8_0.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1-gguf/blob/main/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q8_0.gguf) | Q8_0 | 2.6GB |
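The files in the table above can be run with any GGUF-compatible runtime; a minimal llama-cpp-python sketch is given below. This is an assumption rather than the quantizer's documented workflow, and the Q4_K_M file and prompt are illustrative choices.
```python
# Hedged sketch using llama-cpp-python; any of the GGUF files above should work.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="RichardErkhov/NickyNicky_-_Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1-gguf",
    filename="Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1.Q4_K_M.gguf",  # illustrative pick
)
out = llm("The meaning of life is", max_tokens=32)
print(out["choices"][0]["text"])
```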
Original model description:
---
library_name: transformers
tags:
- merge
language:
- en
- es
- ru
- zh
- de
- fr
- th
- ca
- it
- ja
- pl
- eo
- eu
- vi
- fi
- hu
- ar
- nl
- da
- tr
- ko
- he
- id
- cs
- bn
- sv
widget:
- text: |
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
podrias escribir un codigo de ejemplo en Python<|im_end|>
<|im_start|>assistant
license: apache-2.0
---
# Model Card for Model MixLlama
<!-- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/d4yUGFC5XZz41aA3_-kGC.png) -->
<!-- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/mZx6OGCHfm92udQfNFcGD.png) -->
![image/png](https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/CW8JrvB58GSt_6B5XPcGZ.png)
<!-- Provide a quick summary of what the model is/does. -->
```yaml
experts:
- source_model: NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_1_V1
positive_prompts:
- ""
- source_model: NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_2_V1
positive_prompts:
- ""
- source_model: NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_3_V1
positive_prompts:
- ""
base_model: NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_1_V1
gate_mode: random # one of "hidden", "cheap_embed", or "random"
dtype: bfloat16 # output dtype (float32, float16, or bfloat16)
```
```python
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
pipeline,
logging,
GenerationConfig,
TextIteratorStreamer,
)
import torch
new_model= "NickyNicky/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1"
model = AutoModelForCausalLM.from_pretrained(#f'NickyNicky/{new_model}',
new_model,
device_map="auto",
trust_remote_code=True,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage= True,
# use_flash_attention_2=False,
)
tokenizer = AutoTokenizer.from_pretrained(new_model,
max_length=2048,
trust_remote_code=True,
use_fast = True,
)
tokenizer.pad_token = tokenizer.eos_token
# tokenizer.padding_side = 'left'
tokenizer.padding_side = 'right'
prompt= """<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
escribe una historia de amor.<|im_end|>
<|im_start|>assistant
"""
inputs = tokenizer.encode(prompt,
                          return_tensors="pt",
                          add_special_tokens=False).cuda()
generation_config = GenerationConfig(
max_new_tokens=700,
temperature=0.5,
top_p=0.9,
top_k=40,
    repetition_penalty=1.1,  # 1.0 means no penalty; > 1.0 penalizes repetition (1.2 in the CTRL paper)
do_sample=True,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
)
outputs = model.generate(
generation_config=generation_config,
input_ids=inputs,)
# tokenizer.decode(outputs[0], skip_special_tokens=False) #True
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```
|
fifala/18-fifa-07-02-01 | fifala | "2024-07-02T18:45:59Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:43:19Z" | Entry not found |
TatvaJoshi-AHS/peft-InstructionTuning-training-1719944109 | TatvaJoshi-AHS | "2024-07-02T19:14:22Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"license:apache-2.0",
"region:us"
] | null | "2024-07-02T18:43:33Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: google/flan-t5-base
model-index:
- name: peft-InstructionTuning-training-1719944109
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# peft-InstructionTuning-training-1719944109
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
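Pending details from the author, a minimal PEFT loading sketch is given below; it assumes the repo stores adapter weights over `google/flan-t5-base`, and the instruction prompt is illustrative.
```python
# Minimal PEFT sketch: load the adapter on top of the flan-t5-base backbone (assumption).
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
model = PeftModel.from_pretrained(base, "TatvaJoshi-AHS/peft-InstructionTuning-training-1719944109")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")

inputs = tokenizer("Answer the question: what is instruction tuning?", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```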
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1 |
abhayesian/LLama3_HarmBench_LAT_11 | abhayesian | "2024-07-03T00:45:39Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T18:44:39Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
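Until the authors fill this in, here is a hedged sketch; it assumes, from the repo name alone, a Llama-3-style causal LM loadable with standard `transformers` classes.
```python
# Hedged sketch (assumption: a Llama-3-style causal language model).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abhayesian/LLama3_HarmBench_LAT_11"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, world.", return_tensors="pt")  # illustrative prompt
generated = model.generate(**inputs, max_new_tokens=24)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```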
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ironlanderl/Phi-3-mini-4k-instruct-Q5_K_M-GGUF | ironlanderl | "2024-07-02T18:45:18Z" | 0 | 0 | null | [
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | text-generation | "2024-07-02T18:45:06Z" | ---
base_model: microsoft/Phi-3-mini-4k-instruct
language:
- en
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
inference:
parameters:
temperature: 0.0
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# ironlanderl/Phi-3-mini-4k-instruct-Q5_K_M-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3-mini-4k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ironlanderl/Phi-3-mini-4k-instruct-Q5_K_M-GGUF --hf-file phi-3-mini-4k-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ironlanderl/Phi-3-mini-4k-instruct-Q5_K_M-GGUF --hf-file phi-3-mini-4k-instruct-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ironlanderl/Phi-3-mini-4k-instruct-Q5_K_M-GGUF --hf-file phi-3-mini-4k-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ironlanderl/Phi-3-mini-4k-instruct-Q5_K_M-GGUF --hf-file phi-3-mini-4k-instruct-q5_k_m.gguf -c 2048
```
|
tctrautman/202407-kibbe-prod-baseline | tctrautman | "2024-07-02T18:45:13Z" | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:HuggingFaceM4/idefics2-8b",
"license:apache-2.0",
"region:us"
] | null | "2024-07-02T18:45:10Z" | ---
license: apache-2.0
base_model: HuggingFaceM4/idefics2-8b
tags:
- generated_from_trainer
model-index:
- name: 202407-kibbe-prod-baseline
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/dubs/Kibbe-Prod/runs/qddmxycz)
# 202407-kibbe-prod-baseline
This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0375
## Model description
More information needed
## Intended uses & limitations
More information needed
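No usage snippet is provided; below is a minimal, unofficial sketch that assumes the repo holds a full Idefics2 checkpoint (if it contains only an adapter, load it with `peft` on top of the base model instead):
```python
from transformers import AutoProcessor, AutoModelForVision2Seq

# Processor comes from the base model; weights from this fine-tune.
processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
model = AutoModelForVision2Seq.from_pretrained(
    "tctrautman/202407-kibbe-prod-baseline", device_map="auto"
)
```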
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2606 | 0.5005 | 515 | 0.0458 |
| 0.5362 | 1.0010 | 1030 | 0.0361 |
| 0.5273 | 1.5015 | 1545 | 0.0375 |
### Framework versions
- Transformers 4.43.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
slm-research-vn/Qwen2-7B-Merged-SPPO-Online-RLHF | slm-research-vn | "2024-07-02T18:49:58Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T18:45:16Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
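Until the authors provide an official snippet, here is a minimal, unofficial sketch with 🤗 Transformers, assuming this is a standard Qwen2 chat model (as the tags suggest):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "slm-research-vn/Qwen2-7B-Merged-SPPO-Online-RLHF"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Build a chat prompt with the model's own chat template.
messages = [{"role": "user", "content": "Give me a one-sentence summary of SPPO."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```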
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
healtori/16-heal-07-02-01 | healtori | "2024-07-02T18:48:26Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:45:36Z" | Entry not found |
Baidicoot/gemma-2b-jailbreak-RM | Baidicoot | "2024-07-02T18:47:47Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"reward-trainer",
"generated_from_trainer",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
] | null | "2024-07-02T18:46:13Z" | ---
base_model: google/gemma-2b
library_name: peft
license: gemma
metrics:
- accuracy
tags:
- trl
- reward-trainer
- generated_from_trainer
model-index:
- name: reward_modeling
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/quirky_lats_at_mats/huggingface/runs/k92pr3b1)
# reward_modeling
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4036
- Accuracy: 0.8058
## Model description
More information needed
## Intended uses & limitations
More information needed
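No official usage snippet is given; the sketch below is an unofficial guess that follows the `trl` `RewardTrainer` convention (a single-logit sequence-classification head on `google/gemma-2b`, with this repo loaded as a PEFT adapter):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = AutoModelForSequenceClassification.from_pretrained("google/gemma-2b", num_labels=1)
model = PeftModel.from_pretrained(base, "Baidicoot/gemma-2b-jailbreak-RM")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

inputs = tokenizer("Example prompt and candidate response to score.", return_tensors="pt")
with torch.no_grad():
    reward = model(**inputs).logits[0].item()  # higher = preferred under the RewardTrainer convention
print(reward)
```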
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.9241 | 0.0787 | 5 | 0.6996 | 0.5678 |
| 0.7708 | 0.1575 | 10 | 0.6284 | 0.6660 |
| 0.7875 | 0.2362 | 15 | 0.5749 | 0.7244 |
| 0.6575 | 0.3150 | 20 | 0.5360 | 0.7390 |
| 0.6802 | 0.3937 | 25 | 0.5087 | 0.7432 |
| 0.3982 | 0.4724 | 30 | 0.4890 | 0.7578 |
| 0.4555 | 0.5512 | 35 | 0.4775 | 0.7599 |
| 0.8838 | 0.6299 | 40 | 0.4683 | 0.7662 |
| 0.4692 | 0.7087 | 45 | 0.4611 | 0.7662 |
| 0.5455 | 0.7874 | 50 | 0.4531 | 0.7620 |
| 0.5696 | 0.8661 | 55 | 0.4459 | 0.7662 |
| 0.7453 | 0.9449 | 60 | 0.4414 | 0.7766 |
| 0.5369 | 1.0236 | 65 | 0.4371 | 0.7829 |
| 0.3994 | 1.1024 | 70 | 0.4334 | 0.7850 |
| 0.4235 | 1.1811 | 75 | 0.4298 | 0.7912 |
| 0.4811 | 1.2598 | 80 | 0.4266 | 0.7912 |
| 0.5072 | 1.3386 | 85 | 0.4253 | 0.7912 |
| 0.4405 | 1.4173 | 90 | 0.4228 | 0.7850 |
| 0.5349 | 1.4961 | 95 | 0.4196 | 0.7871 |
| 0.3342 | 1.5748 | 100 | 0.4170 | 0.7829 |
| 0.5271 | 1.6535 | 105 | 0.4149 | 0.7933 |
| 0.3463 | 1.7323 | 110 | 0.4136 | 0.7975 |
| 0.4867 | 1.8110 | 115 | 0.4128 | 0.7996 |
| 0.3221 | 1.8898 | 120 | 0.4125 | 0.7996 |
| 0.3542 | 1.9685 | 125 | 0.4116 | 0.7996 |
| 0.5465 | 2.0472 | 130 | 0.4107 | 0.7996 |
| 0.3427 | 2.1260 | 135 | 0.4101 | 0.7996 |
| 0.4787 | 2.2047 | 140 | 0.4087 | 0.8038 |
| 0.4229 | 2.2835 | 145 | 0.4073 | 0.8017 |
| 0.4514 | 2.3622 | 150 | 0.4063 | 0.8038 |
| 0.5116 | 2.4409 | 155 | 0.4051 | 0.8038 |
| 0.3234 | 2.5197 | 160 | 0.4045 | 0.8058 |
| 0.3993 | 2.5984 | 165 | 0.4040 | 0.8058 |
| 0.3264 | 2.6772 | 170 | 0.4037 | 0.8058 |
| 0.3316 | 2.7559 | 175 | 0.4035 | 0.8038 |
| 0.4855 | 2.8346 | 180 | 0.4035 | 0.8038 |
| 0.536 | 2.9134 | 185 | 0.4036 | 0.8058 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.3
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1 |
starnet/04-star21-07-02 | starnet | "2024-07-02T18:54:06Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | null | "2024-07-02T18:46:24Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
fifala/19-fifa-07-02-01 | fifala | "2024-07-02T18:49:35Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:46:55Z" | Entry not found |
RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch2-gguf | RichardErkhov | "2024-07-02T18:58:41Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T18:46:55Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Mixsmol-4x400M-v0.1-epoch2 - GGUF
- Model creator: https://huggingface.co/vilm/
- Original model: https://huggingface.co/vilm/Mixsmol-4x400M-v0.1-epoch2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Mixsmol-4x400M-v0.1-epoch2.Q2_K.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch2-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch2.Q2_K.gguf) | Q2_K | 0.62GB |
| [Mixsmol-4x400M-v0.1-epoch2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch2-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch2.IQ3_XS.gguf) | IQ3_XS | 0.7GB |
| [Mixsmol-4x400M-v0.1-epoch2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch2-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch2.IQ3_S.gguf) | IQ3_S | 0.73GB |
| [Mixsmol-4x400M-v0.1-epoch2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch2-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch2.Q3_K_S.gguf) | Q3_K_S | 0.73GB |
| [Mixsmol-4x400M-v0.1-epoch2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch2-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch2.IQ3_M.gguf) | IQ3_M | 0.74GB |
| [Mixsmol-4x400M-v0.1-epoch2.Q3_K.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch2-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch2.Q3_K.gguf) | Q3_K | 0.8GB |
| [Mixsmol-4x400M-v0.1-epoch2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch2-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch2.Q3_K_M.gguf) | Q3_K_M | 0.8GB |
| [Mixsmol-4x400M-v0.1-epoch2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch2-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch2.Q3_K_L.gguf) | Q3_K_L | 0.87GB |
| [Mixsmol-4x400M-v0.1-epoch2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch2-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch2.IQ4_XS.gguf) | IQ4_XS | 0.9GB |
| [Mixsmol-4x400M-v0.1-epoch2.Q4_0.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch2-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch2.Q4_0.gguf) | Q4_0 | 0.94GB |
| [Mixsmol-4x400M-v0.1-epoch2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch2-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch2.IQ4_NL.gguf) | IQ4_NL | 0.95GB |
| [Mixsmol-4x400M-v0.1-epoch2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch2-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch2.Q4_K_S.gguf) | Q4_K_S | 0.95GB |
| [Mixsmol-4x400M-v0.1-epoch2.Q4_K.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch2-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch2.Q4_K.gguf) | Q4_K | 1.01GB |
| [Mixsmol-4x400M-v0.1-epoch2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch2-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch2.Q4_K_M.gguf) | Q4_K_M | 1.01GB |
| [Mixsmol-4x400M-v0.1-epoch2.Q4_1.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch2-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch2.Q4_1.gguf) | Q4_1 | 1.04GB |
| [Mixsmol-4x400M-v0.1-epoch2.Q5_0.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch2-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch2.Q5_0.gguf) | Q5_0 | 1.14GB |
| [Mixsmol-4x400M-v0.1-epoch2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch2-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch2.Q5_K_S.gguf) | Q5_K_S | 1.14GB |
| [Mixsmol-4x400M-v0.1-epoch2.Q5_K.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch2-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch2.Q5_K.gguf) | Q5_K | 1.18GB |
| [Mixsmol-4x400M-v0.1-epoch2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch2-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch2.Q5_K_M.gguf) | Q5_K_M | 1.18GB |
| [Mixsmol-4x400M-v0.1-epoch2.Q5_1.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch2-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch2.Q5_1.gguf) | Q5_1 | 1.24GB |
| [Mixsmol-4x400M-v0.1-epoch2.Q6_K.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch2-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch2.Q6_K.gguf) | Q6_K | 1.36GB |
| [Mixsmol-4x400M-v0.1-epoch2.Q8_0.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch2-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch2.Q8_0.gguf) | Q8_0 | 1.76GB |
Original model description:
---
license: apache-2.0
widget:
- text: My name is El Microondas the Wise, and
example_title: El Microondas
- text: Kennesaw State University is a public
example_title: Kennesaw State University
- text: Bungie Studios is an American video game developer. They are most famous for
developing the award winning Halo series of video games. They also made Destiny.
The studio was founded
example_title: Bungie
- text: The Mona Lisa is a world-renowned painting created by
example_title: Mona Lisa
- text: The Harry Potter series, written by J.K. Rowling, begins with the book titled
example_title: Harry Potter Series
- text: 'Question: I have cities, but no houses. I have mountains, but no trees. I
have water, but no fish. What am I?
Answer:'
example_title: Riddle
- text: The process of photosynthesis involves the conversion of
example_title: Photosynthesis
- text: Jane went to the store to buy some groceries. She picked up apples, oranges,
and a loaf of bread. When she got home, she realized she forgot
example_title: Story Continuation
- text: 'Problem 2: If a train leaves Station A at 9:00 AM and travels at 60 mph,
and another train leaves Station B at 10:00 AM and travels at 80 mph, when will
they meet if the distance between the stations is 300 miles?
To determine'
example_title: Math Problem
- text: In the context of computer programming, an algorithm is
example_title: Algorithm Definition
---
# Mixsmol-4x400M-v0.1 by Ontocord
This is the first checkpoint (Epoch 1) of Mixsmol-4x400M-v0.1
Note that this is an experiment in data mixing. Therefore, we only trained the model on 50B tokens (95% English and 5% Vietnamese) to test the following:
- Reasoning capabilities through high-quality synthetic-textbook data pretraining
- Cross-lingual understanding through machine translation and multilingual, multi-task pretraining
After verifying our hypothesis with this run, we will schedule a second run on bigger data and compute so the model can reach its maximum capability.
## Data
- Synthetic Textbooks: 8M samples
- RefinedWeb: 1M samples
- RedPajama-v2: 500K samples
- MathPile: Everything
- ThePile: MiniPile Subset
- GoodWiki
- The Stack Smol XL
- The Vault: train_small split
- Instruction Pretraining: 250k samples
|
starnet/13-star-07-02-01 | starnet | "2024-07-02T18:50:18Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:47:04Z" | Entry not found |
minionai/llama3-70b-wh_cove_thght_062024_halluc_rem_refusal_runpod_rename | minionai | "2024-07-02T18:51:23Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-70B",
"license:llama3",
"region:us"
] | null | "2024-07-02T18:47:05Z" | ---
base_model: meta-llama/Meta-Llama-3-70B
library_name: peft
license: llama3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: llama3-70b-wh_cove_thght_062024_halluc_rem_refusal_runpod
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: meta-llama/Meta-Llama-3-70B
bf16: true
dataset_prepared_path: last_run_prepared
debug: null
deepspeed: null
early_stopping_patience: null
eval_table_size: null
evals_per_epoch: 0
flash_attention: true
fp16: null
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
group_by_length: false
hub_model_id: minionai/llama3-70b-wh_cove_thght_062024_halluc_rem_refusal_runpod
hub_strategy: all_checkpoints
learning_rate: 1e-4
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lora_target_modules: null
lr_scheduler: cosine
micro_batch_size: 1
model_type: LlamaForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: ./lora-out
pad_to_sequence_len: true
resume_from_checkpoint: null
sample_packing: true
wandb_entity: minionai
wandb_name: wh_cove_thght_062024_ift
wandb_project: llama3-70b
saves_per_epoch: 1
sequence_len: 8192
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
val_set_size: 0
warmup_steps: 250
weight_decay: 0.0
datasets:
- path: minionai/wh_w_cv_thoughts_062024_halluc_filt_refusal_add_ift
type:
system_prompt: ""
system_format: "{system}"
field_system: system
field_instruction: instruction
field_input: input
field_output: output
format: |-
User: {instruction} {input}
Assistant:
# 'no_input_format' cannot include {input}
no_input_format: "### System:\nBelow is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:\nverify(\""
```
</details><br>
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/minionai/llama3-70b/runs/czcejakf)
# llama3-70b-wh_cove_thght_062024_halluc_rem_refusal_runpod
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
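Given the `peft` library tag and the Llama-3-70B base model, loading presumably follows the standard adapter pattern; a minimal sketch (requires access to the gated base model and hardware that fits a 70B model):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B", device_map="auto", torch_dtype="auto"
)
model = PeftModel.from_pretrained(
    base, "minionai/llama3-70b-wh_cove_thght_062024_halluc_rem_refusal_runpod_rename"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-70B")
```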
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 250
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.3
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1 |
Litzy0619/ade_corpus_v2-effect_0.003_32_5_6 | Litzy0619 | "2024-07-02T18:51:12Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2024-07-02T18:47:21Z" | Entry not found |
TheFinAI/finllm-8B-sft-small | TheFinAI | "2024-07-02T19:15:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T18:47:54Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
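Until the authors fill this in, here is a minimal, unofficial sketch with 🤗 Transformers, assuming a standard Llama-architecture causal LM (as the tags suggest):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheFinAI/finllm-8B-sft-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

inputs = tokenizer("What does EBITDA stand for?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```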
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
minionai/final_checkpoint_7224 | minionai | "2024-07-02T18:50:31Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-70B",
"region:us"
] | null | "2024-07-02T18:49:03Z" | Invalid username or password. |
CassioBN/XLNet-base_LeNER-Br | CassioBN | "2024-07-02T20:33:07Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlnet",
"token-classification",
"generated_from_trainer",
"dataset:lener_br",
"base_model:xlnet/xlnet-base-cased",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-07-02T18:49:21Z" | ---
license: mit
base_model: xlnet/xlnet-base-cased
tags:
- generated_from_trainer
datasets:
- lener_br
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: XLNet-base_LeNER-Br
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: lener_br
type: lener_br
config: lener_br
split: validation
args: lener_br
metrics:
- name: Precision
type: precision
value: 0.8062054933875891
- name: Recall
type: recall
value: 0.872317006053935
- name: F1
type: f1
value: 0.8379592915675389
- name: Accuracy
type: accuracy
value: 0.9783680282796544
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLNet-base_LeNER-Br
This model is a fine-tuned version of [xlnet/xlnet-base-cased](https://huggingface.co/xlnet/xlnet-base-cased) on the lener_br dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Precision: 0.8062
- Recall: 0.8723
- F1: 0.8380
- Accuracy: 0.9784
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2531 | 1.0 | 979 | nan | 0.6037 | 0.7788 | 0.6801 | 0.9602 |
| 0.0531 | 2.0 | 1958 | nan | 0.6865 | 0.8184 | 0.7467 | 0.9657 |
| 0.0344 | 3.0 | 2937 | nan | 0.7079 | 0.8321 | 0.7650 | 0.9697 |
| 0.0214 | 4.0 | 3916 | nan | 0.7739 | 0.8514 | 0.8108 | 0.9765 |
| 0.0176 | 5.0 | 4895 | nan | 0.7407 | 0.8520 | 0.7924 | 0.9712 |
| 0.0109 | 6.0 | 5874 | nan | 0.7984 | 0.8696 | 0.8325 | 0.9773 |
| 0.0093 | 7.0 | 6853 | nan | 0.7944 | 0.8657 | 0.8285 | 0.9778 |
| 0.0056 | 8.0 | 7832 | nan | 0.8130 | 0.8756 | 0.8431 | 0.9779 |
| 0.0041 | 9.0 | 8811 | nan | 0.8171 | 0.8751 | 0.8451 | 0.9781 |
| 0.0034 | 10.0 | 9790 | nan | 0.8062 | 0.8723 | 0.8380 | 0.9784 |
#### Testing results

| Metric | Value |
|:------------------------|--------:|
| test_loss | 0.1068 |
| test_precision | 0.8133 |
| test_recall | 0.8671 |
| test_f1 | 0.8393 |
| test_accuracy | 0.9863 |
| test_runtime (s) | 42.9969 |
| test_samples_per_second | 32.328 |
| test_steps_per_second | 4.047 |
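For reference, a minimal inference sketch with the 🤗 `pipeline` API (the sentence below is illustrative; the model targets Portuguese legal text, per the LeNER-Br dataset):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="CassioBN/XLNet-base_LeNER-Br",
    aggregation_strategy="simple",
)
print(ner("Trata-se de recurso interposto perante o Supremo Tribunal Federal."))
```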
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
healtori/17-heal-07-02-01 | healtori | "2024-07-02T18:52:16Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:49:25Z" | Entry not found |
fifala/20-fifa-07-02-01 | fifala | "2024-07-02T18:53:21Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:50:31Z" | Entry not found |
J-LAB/bisteka-GGUF | J-LAB | "2024-07-02T23:18:19Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T18:50:39Z" | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** J-LAB
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
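Since the repo ships GGUF weights, they can presumably be loaded with `llama-cpp-python`; a minimal sketch (the `filename` pattern is an assumption — substitute the concrete `.gguf` file listed in this repo):
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="J-LAB/bisteka-GGUF",
    filename="*.gguf",  # assumed pattern; replace with the exact quant file in the repo
)
print(llm("Olá! Quem é você?", max_tokens=64)["choices"][0]["text"])
```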
|
lagy/carballo-1.3b-finetuned | lagy | "2024-07-02T19:48:56Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"galician",
"FLOR",
"gl",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T18:50:51Z" | ---
language:
- gl
tags:
- galician
- FLOR
- bloom
license: mit
inference:
parameters:
top_k: 10
do_sample: true
temperature: 0.4
widget:
- text: "Abaixo encontrase unha instrución que describe unha tarefa, xunto cunha entrada que proporciona máis contexto. \nEscribe unha resposta que responda adecuadamente a entrada.\n### Instrución:\nXera unha descrición dun castelo mediaval\n### Entrada:\n\n### Resposta:"
example_title: Castelo
- text: "Abaixo encontrase unha instrución que describe unha tarefa, xunto cunha entrada que proporciona máis contexto. \nEscribe unha resposta que responda adecuadamente a entrada.\n### Instrución:\nResume o seguinte texto explicando os puntos claves\n### Entrada:A pasada sexta feira, día 15, a maioría do persoal de Urxencias do Complexo Hospitalario Universitario de Compostela (CHUS deu o visto bo á última proposta realizada por xerencia sanitaria de Santiago. Foi66% a favor e 32% en contrada proposta. O resultado da votación apuntaba a que poderíarematar a folga que os traballadores e traballadorasdeste servizo protagonizaban desde hai14 semanas.'Vistos os resultados os traballadores ratifican que aceptan a proposta de xerencia, para o cal dende o Comité de folga solicitamos unha reunión con Xerencia o luns 18 de cara a asinar a acta final de negociación', indica o comité de centro no comunicado no que informa das votacións. Porén, desde o sindicato médico O'Mega decidiuse esta segunda feira que ían continuar coa greve, en demanda de máis persoa e espazos. A proposta da xerencia contemplaba 11 enfermeiras e 11 auxiliares e 2 médicos máis para reforzar o servizo de Urxencias. O`Mega reivindica catro persoas interinas máis para alén dun cumprimento estrito da normativa que impide que os pacientes pendentes de ser ingresados poidan 'acumularse' en corredores ou salas de observación. urxencias chus urxencias chus\n\n### Resposta:"
example_title: Resumo
- text: "Abaixo encontrase unha instrución que describe unha tarefa, xunto cunha entrada que proporciona máis contexto. \nEscribe unha resposta que responda adecuadamente a entrada.\n### Instrución:\nQuen é Rosalía de Castro?\n### Entrada:\n\n### Resposta:"
example_title: Rosalia
---
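The card body is otherwise empty, so as an unofficial sketch, the instruction format and sampling parameters from the widget metadata above can be reproduced like this:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="lagy/carballo-1.3b-finetuned")

# Prompt format taken from the widget examples in this card's metadata.
prompt = (
    "Abaixo encontrase unha instrución que describe unha tarefa, xunto cunha "
    "entrada que proporciona máis contexto. \n"
    "Escribe unha resposta que responda adecuadamente a entrada.\n"
    "### Instrución:\nQuen é Rosalía de Castro?\n### Entrada:\n\n### Resposta:"
)
out = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.4, top_k=10)
print(out[0]["generated_text"])
```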
|
starnet/14-star-07-02-01 | starnet | "2024-07-02T18:54:27Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:51:18Z" | Entry not found |
ferrazzipietro/Meta-Llama-3-8B-Instruct_en.layer1_NoQuant_64_16_0.02_8 | ferrazzipietro | "2024-07-02T18:51:58Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T18:51:35Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
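As a placeholder, a generic loading sketch; judging only by the repository name, this appears to be a fine-tune of Llama-3-8B-Instruct, so check the repository files (full weights vs. adapter) before relying on this:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ferrazzipietro/Meta-Llama-3-8B-Instruct_en.layer1_NoQuant_64_16_0.02_8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")
```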
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ProfEngel/mistral-7b-v0.3 | ProfEngel | "2024-07-02T18:55:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-07-02T18:52:08Z" | ---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** ProfEngel
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
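A minimal inference sketch, assuming the repo holds weights loadable with 🤗 Transformers (the `4-bit`/`bitsandbytes` tags suggest 4-bit loading; Unsloth's `FastLanguageModel.from_pretrained` is an alternative if Unsloth is installed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ProfEngel/mistral-7b-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_4bit=True)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```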
|
realsaturnapp/plannermodel2 | realsaturnapp | "2024-07-02T23:25:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T18:52:20Z" | Entry not found |
Jbbok/Taxi-v3 | Jbbok | "2024-07-02T18:53:08Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-07-02T18:52:50Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Jbbok/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
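`load_from_hub` above is the helper from the Hugging Face Deep RL course, not a packaged import; a self-contained equivalent looks roughly like this (assuming the pickle stores the course's model dict, including an `env_id` key):
```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-table dict from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="Jbbok/Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])
```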
|
healtori/18-heal-07-02-01 | healtori | "2024-07-02T18:55:59Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:53:11Z" | Entry not found |
Litzy0619/anli_0.003_32_5_6 | Litzy0619 | "2024-07-02T19:06:10Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2024-07-02T18:53:21Z" | Entry not found |
Soorya1998/dqn-SpaceInvadersNoFrameskip-v4 | Soorya1998 | "2024-07-02T18:54:05Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-07-02T18:53:32Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 602.00 +/- 396.33
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Soorya1998 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Soorya1998 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Soorya1998
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
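To load the checkpoint directly in Python, without the RL Zoo CLI, here is a minimal sketch with `huggingface_sb3` (the `.zip` filename follows the usual RL Zoo naming and is an assumption — check the repo files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="Soorya1998/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed RL Zoo naming
)
model = DQN.load(checkpoint)
```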
|
vishnuhaasan/distilbert_nbx_all_l | vishnuhaasan | "2024-07-02T21:03:17Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-02T18:54:05Z" | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert_nbx_all_l
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_nbx_all_l
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4794
## Model description
More information needed
## Intended uses & limitations
More information needed
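No usage example is given; below is a minimal sketch with the `pipeline` API — the task domain and label names are not documented (the dataset above is listed as `None`), so expect raw `LABEL_i` outputs:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="vishnuhaasan/distilbert_nbx_all_l")
print(classifier("Example input text"))  # e.g. [{'label': 'LABEL_0', 'score': 0.98}]
```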
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.27 | 1.0 | 1949 | 0.3553 |
| 0.4705 | 2.0 | 3898 | 0.3125 |
| 0.2474 | 3.0 | 5847 | 0.3118 |
| 0.2003 | 4.0 | 7796 | 0.3311 |
| 0.0115 | 5.0 | 9745 | 0.3643 |
| 0.1731 | 6.0 | 11694 | 0.3928 |
| 0.0423 | 7.0 | 13643 | 0.4096 |
| 0.0052 | 8.0 | 15592 | 0.4430 |
| 0.0001 | 9.0 | 17541 | 0.4746 |
| 0.0001 | 10.0 | 19490 | 0.4794 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.0
- Datasets 2.19.2
- Tokenizers 0.15.0
|
starnet/05-star21-07-02 | starnet | "2024-07-02T19:02:26Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | null | "2024-07-02T18:55:00Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
sk2003/stableoutput | sk2003 | "2024-07-02T18:55:22Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T18:55:22Z" | Entry not found |
starnet/15-star-07-02-01 | starnet | "2024-07-02T18:58:38Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:55:32Z" | Entry not found |
juanpablomesa/bge-small-bioasq-1epoch-batch32-100steps | juanpablomesa | "2024-07-02T18:56:09Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:4012",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-small-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | "2024-07-02T18:56:05Z" | ---
base_model: BAAI/bge-small-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4012
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 'Extensive messenger RNA editing generates transcript and protein
diversity in genes involved in neural excitability, as previously described, as
well as in genes participating in a broad range of other cellular functions. '
sentences:
- Do cephalopods use RNA editing less frequently than other species?
- GV1001 vaccine targets which enzyme?
- Which event results in the acetylation of S6K1?
- source_sentence: Yes, exposure to household furry pets influences the gut microbiota
of infants.
sentences:
- Can pets affect infant microbiomed?
- What is the mode of action of Thiazovivin?
- What are the effects of CAMK4 inhibition?
- source_sentence: "In children with heart failure evidence of the effect of enalapril\
\ is empirical. Enalapril was clinically safe and effective in 50% to 80% of for\
\ children with cardiac failure secondary to congenital heart malformations before\
\ and after cardiac surgery, impaired ventricular function , valvar regurgitation,\
\ congestive cardiomyopathy, , arterial hypertension, life-threatening arrhythmias\
\ coexisting with circulatory insufficiency. \nACE inhibitors have shown a transient\
\ beneficial effect on heart failure due to anticancer drugs and possibly a beneficial\
\ effect in muscular dystrophy-associated cardiomyopathy, which deserves further\
\ studies."
sentences:
- Which receptors can be evaluated with the [18F]altanserin?
- In what proportion of children with heart failure has Enalapril been shown to
be safe and effective?
- Which major signaling pathways are regulated by RIP1?
- source_sentence: Cellular senescence-associated heterochromatic foci (SAHFS) are
a novel type of chromatin condensation involving alterations of linker histone
H1 and linker DNA-binding proteins. SAHFS can be formed by a variety of cell types,
but their mechanism of action remains unclear.
sentences:
- What is the relationship between the X chromosome and a neutrophil drumstick?
- Which microRNAs are involved in exercise adaptation?
- How are SAHFS created?
- source_sentence: Multicluster Pcdh diversity is required for mouse olfactory neural
circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins
are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although
deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss
of all three clusters (tricluster deletion) led to a severe axonal arborization
defect and loss of self-avoidance.
sentences:
- What are the effects of the deletion of all three Pcdh clusters (tricluster deletion)
in mice?
- what is the role of MEF-2 in cardiomyocyte differentiation?
- How many periods of regulatory innovation led to the evolution of vertebrates?
model-index:
- name: BGE small finetuned BIOASQ
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: BAAI/bge small en v1.5
type: BAAI/bge-small-en-v1.5
metrics:
- type: cosine_accuracy@1
value: 0.8345120226308345
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9207920792079208
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.942008486562942
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9547383309759547
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8345120226308345
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3069306930693069
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18840169731258838
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09547383309759547
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8345120226308345
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9207920792079208
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.942008486562942
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9547383309759547
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9001912196285257
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8821973013627894
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8832658504735496
name: Cosine Map@100
---
# BGE small finetuned BIOASQ
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) <!-- at revision 5c38ec7c405ec4b44b94cc5a9bb96e735b38267a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("juanpablomesa/bge-small-bioasq-1epoch-batch32-100steps")
# Run inference
sentences = [
'Multicluster Pcdh diversity is required for mouse olfactory neural circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss of all three clusters (tricluster deletion) led to a severe axonal arborization defect and loss of self-avoidance.',
'What are the effects of the deletion of all three Pcdh clusters (tricluster deletion) in mice?',
'How many periods of regulatory innovation led to the evolution of vertebrates?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
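For retrieval-style use, queries and passages can be encoded separately and ranked by cosine similarity. A minimal sketch (the passages below are illustrative, not drawn verbatim from the training set):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("juanpablomesa/bge-small-bioasq-1epoch-batch32-100steps")

query = "What is the role of STAG1/STAG2 proteins in differentiation?"
corpus = [
    "STAG1/STAG2 proteins are tumour suppressor proteins that suppress cell proliferation.",
    "The vertebrate clustered protocadherins are encoded by three closely linked gene clusters.",
]

# Encode query and passages, then rank passages by cosine similarity
query_emb = model.encode(query, convert_to_tensor=True)
corpus_emb = model.encode(corpus, convert_to_tensor=True)
for hit in util.semantic_search(query_emb, corpus_emb, top_k=2)[0]:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")
```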
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `BAAI/bge-small-en-v1.5`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.8345 |
| cosine_accuracy@3 | 0.9208 |
| cosine_accuracy@5 | 0.942 |
| cosine_accuracy@10 | 0.9547 |
| cosine_precision@1 | 0.8345 |
| cosine_precision@3 | 0.3069 |
| cosine_precision@5 | 0.1884 |
| cosine_precision@10 | 0.0955 |
| cosine_recall@1 | 0.8345 |
| cosine_recall@3 | 0.9208 |
| cosine_recall@5 | 0.942 |
| cosine_recall@10 | 0.9547 |
| cosine_ndcg@10 | 0.9002 |
| cosine_mrr@10 | 0.8822 |
| **cosine_map@100** | **0.8833** |
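The table above was produced with `InformationRetrievalEvaluator`. A sketch of how such an evaluation can be wired up — the query/corpus/relevance mappings below are hypothetical placeholders, since the evaluation split itself is not shipped with this card:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("juanpablomesa/bge-small-bioasq-1epoch-batch32-100steps")

# Placeholder data: query id -> text, doc id -> text, query id -> relevant doc ids
queries = {"q1": "What is the role of STAG1/STAG2 proteins in differentiation?"}
corpus = {"d1": "STAG1/STAG2 proteins are tumour suppressor proteins essential for differentiation."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="bioasq-dev")
print(evaluator(model))  # accuracy@k, precision@k, recall@k, NDCG@10, MRR@10, MAP@100
```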
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 4,012 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 63.38 tokens</li><li>max: 485 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 16.13 tokens</li><li>max: 49 tokens</li></ul> |
* Samples:
| positive | anchor |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
| <code>Aberrant patterns of H3K4, H3K9, and H3K27 histone lysine methylation were shown to result in histone code alterations, which induce changes in gene expression, and affect the proliferation rate of cells in medulloblastoma.</code> | <code>What is the implication of histone lysine methylation in medulloblastoma?</code> |
| <code>STAG1/STAG2 proteins are tumour suppressor proteins that suppress cell proliferation and are essential for differentiation.</code> | <code>What is the role of STAG1/STAG2 proteins in differentiation?</code> |
| <code>The association between cell phone use and incident glioblastoma remains unclear. Some studies have reported that cell phone use was associated with incident glioblastoma, and with reduced survival of patients diagnosed with glioblastoma. However, other studies have repeatedly replicated to find an association between cell phone use and glioblastoma.</code> | <code>What is the association between cell phone use and glioblastoma?</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
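For reference, training on (anchor, positive) pairs like those above with this loss could look as follows — a sketch using the classic `fit` API, not the exact trainer configuration listed below:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("BAAI/bge-small-en-v1.5")

# Each example pairs a question (anchor) with its supporting passage (positive);
# passages from the other pairs in a batch act as in-batch negatives.
train_examples = [
    InputExample(texts=[
        "What is the role of STAG1/STAG2 proteins in differentiation?",
        "STAG1/STAG2 proteins are tumour suppressor proteins essential for differentiation.",
    ]),
]
loader = DataLoader(train_examples, shuffle=True, batch_size=32)
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=100)
```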
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | BAAI/bge-small-en-v1.5_cosine_map@100 |
|:------:|:----:|:-------------:|:-------------------------------------:|
| 0.7937 | 100 | 0.2124 | 0.8833 |
### Framework Versions
- Python: 3.11.5
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
harshtih/llama3-8b-instruct-FT_small | harshtih | "2024-07-02T18:57:26Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T18:57:00Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** harshtih
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
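No usage snippet is provided; a minimal loading sketch, assuming the checkpoint can be pulled by this repo id with Unsloth installed:
```python
from unsloth import FastLanguageModel

# Repo id taken from this card's name (assumption)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="harshtih/llama3-8b-instruct-FT_small",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer("Explain LoRA in one sentence.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```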
|
healtori/19-heal-07-02-01 | healtori | "2024-07-02T18:59:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:57:02Z" | Entry not found |
KasuleTrevor/wav2vec2-large-xls-r-300m-lg-cv-100hr-v3 | KasuleTrevor | "2024-07-03T01:29:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-07-02T18:59:34Z" | ---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-lg-cv-100hr-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/asr-africa-research-team/ASR%20Africa/runs/te0ou310)
# wav2vec2-large-xls-r-300m-lg-cv-100hr-v3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4427
- Wer: 0.2163
- Cer: 0.0477
## Model description
More information needed
## Intended uses & limitations
More information needed
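In the absence of documented usage, transcription would presumably follow the standard `transformers` ASR pipeline — a sketch, assuming the repo id matches this card's name and 16 kHz mono input:
```python
from transformers import pipeline

# Repo id inferred from the card name; XLS-R expects 16 kHz mono audio
asr = pipeline(
    "automatic-speech-recognition",
    model="KasuleTrevor/wav2vec2-large-xls-r-300m-lg-cv-100hr-v3",
)
print(asr("luganda_sample.wav")["text"])
```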
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|
| 0.9427 | 1.0 | 3745 | 0.2542 | 0.3486 | 0.0687 |
| 0.2163 | 2.0 | 7490 | 0.2224 | 0.3099 | 0.0591 |
| 0.1757 | 3.0 | 11235 | 0.2090 | 0.2790 | 0.0543 |
| 0.1531 | 4.0 | 14980 | 0.2047 | 0.2648 | 0.0514 |
| 0.137 | 5.0 | 18725 | 0.2042 | 0.2639 | 0.0517 |
| 0.1251 | 6.0 | 22470 | 0.2106 | 0.2544 | 0.0496 |
| 0.1129 | 7.0 | 26215 | 0.2102 | 0.2470 | 0.0482 |
| 0.1041 | 8.0 | 29960 | 0.2164 | 0.2444 | 0.0478 |
| 0.0948 | 9.0 | 33705 | 0.2164 | 0.2493 | 0.0484 |
| 0.0851 | 10.0 | 37450 | 0.2240 | 0.2455 | 0.0473 |
| 0.0757 | 11.0 | 41195 | 0.2351 | 0.2421 | 0.0469 |
| 0.0675 | 12.0 | 44940 | 0.2441 | 0.2410 | 0.0468 |
| 0.0605 | 13.0 | 48685 | 0.2507 | 0.2419 | 0.0470 |
| 0.0554 | 14.0 | 52430 | 0.2522 | 0.2413 | 0.0464 |
| 0.0506 | 15.0 | 56175 | 0.2662 | 0.2414 | 0.0466 |
| 0.0464 | 16.0 | 59920 | 0.2717 | 0.2357 | 0.0455 |
| 0.0427 | 17.0 | 63665 | 0.2865 | 0.2371 | 0.0460 |
| 0.0398 | 18.0 | 67410 | 0.2835 | 0.2313 | 0.0442 |
| 0.0368 | 19.0 | 71155 | 0.2930 | 0.2330 | 0.0444 |
| 0.0345 | 20.0 | 74900 | 0.3051 | 0.2287 | 0.0439 |
| 0.0323 | 21.0 | 78645 | 0.3049 | 0.2253 | 0.0433 |
| 0.0296 | 22.0 | 82390 | 0.3063 | 0.2222 | 0.0426 |
| 0.0285 | 23.0 | 86135 | 0.3097 | 0.2248 | 0.0426 |
| 0.0259 | 24.0 | 89880 | 0.3214 | 0.2202 | 0.0421 |
| 0.0248 | 25.0 | 93625 | 0.3117 | 0.2185 | 0.0414 |
| 0.0226 | 26.0 | 97370 | 0.3199 | 0.2163 | 0.0412 |
| 0.0218 | 27.0 | 101115 | 0.3305 | 0.2132 | 0.0406 |
| 0.0202 | 28.0 | 104860 | 0.3272 | 0.2117 | 0.0407 |
| 0.0191 | 29.0 | 108605 | 0.3534 | 0.2088 | 0.0398 |
| 0.0186 | 30.0 | 112350 | 0.3436 | 0.2094 | 0.0396 |
| 0.0172 | 31.0 | 116095 | 0.3453 | 0.2075 | 0.0391 |
| 0.016 | 32.0 | 119840 | 0.3509 | 0.2046 | 0.0389 |
| 0.0154 | 33.0 | 123585 | 0.3441 | 0.2037 | 0.0386 |
| 0.0149 | 34.0 | 127330 | 0.3453 | 0.2043 | 0.0385 |
| 0.0148 | 35.0 | 131075 | 0.3514 | 0.2027 | 0.0382 |
| 0.0143 | 36.0 | 134820 | 0.3569 | 0.2019 | 0.0380 |
| 0.0137 | 37.0 | 138565 | 0.3544 | 0.2016 | 0.0378 |
| 0.0138 | 38.0 | 142310 | 0.3553 | 0.2012 | 0.0377 |
| 0.0135 | 39.0 | 146055 | 0.3549 | 0.2016 | 0.0378 |
| 0.0136 | 40.0 | 149800 | 0.3548 | 0.2017 | 0.0378 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.2.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
starnet/16-star-07-02-01 | starnet | "2024-07-02T19:02:55Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:59:43Z" | Entry not found |
RichardErkhov/NickyNicky_-_TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1-gguf | RichardErkhov | "2024-07-02T19:12:20Z" | 0 | 0 | null | [
"gguf",
"arxiv:2311.03099",
"arxiv:2306.01708",
"region:us"
] | null | "2024-07-02T19:00:02Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1 - GGUF
- Model creator: https://huggingface.co/NickyNicky/
- Original model: https://huggingface.co/NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q2_K.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1-gguf/blob/main/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q2_K.gguf) | Q2_K | 0.4GB |
| [TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1-gguf/blob/main/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1-gguf/blob/main/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1-gguf/blob/main/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1-gguf/blob/main/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q3_K.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1-gguf/blob/main/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q3_K.gguf) | Q3_K | 0.51GB |
| [TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1-gguf/blob/main/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1-gguf/blob/main/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1-gguf/blob/main/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q4_0.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1-gguf/blob/main/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q4_0.gguf) | Q4_0 | 0.59GB |
| [TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1-gguf/blob/main/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1-gguf/blob/main/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q4_K.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1-gguf/blob/main/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q4_K.gguf) | Q4_K | 0.62GB |
| [TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1-gguf/blob/main/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q4_1.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1-gguf/blob/main/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q4_1.gguf) | Q4_1 | 0.65GB |
| [TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q5_0.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1-gguf/blob/main/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q5_0.gguf) | Q5_0 | 0.71GB |
| [TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1-gguf/blob/main/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q5_K.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1-gguf/blob/main/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q5_K.gguf) | Q5_K | 0.73GB |
| [TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1-gguf/blob/main/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q5_1.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1-gguf/blob/main/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q5_1.gguf) | Q5_1 | 0.77GB |
| [TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q6_K.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1-gguf/blob/main/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q6_K.gguf) | Q6_K | 0.84GB |
| [TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q8_0.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1-gguf/blob/main/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q8_0.gguf) | Q8_0 | 1.09GB |
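To try one of the quants above, `llama-cpp-python` can pull a GGUF file straight from this repo — a sketch, assuming `llama-cpp-python` and `huggingface_hub` are installed:
```python
from llama_cpp import Llama

# Fetch and load the Q4_K_M quant from the table above
llm = Llama.from_pretrained(
    repo_id="RichardErkhov/NickyNicky_-_TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1-gguf",
    filename="TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v1.Q4_K_M.gguf",
    n_ctx=2048,
)
prompt = "<|im_start|>user\nWrite a haiku about dolphins.<|im_end|>\n<|im_start|>assistant\n"
out = llm(prompt, max_tokens=64, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```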
Original model description:
---
language:
- en
- es
- ru
- zh
- de
- fr
- th
- ca
- it
- ja
- pl
- eo
- eu
- vi
- fi
- hu
- ar
- nl
- da
- tr
- ko
- he
- id
- cs
- bn
- sv
base_model:
- NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_1_V1
- NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_3_V1
- NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_2_V1
tags:
- mergekit
- merge
widget:
- text: "<|im_start|>system\nYou are a helpful AI assistant.<|im_end|>\n<|im_start|>user\npodrias escribir un codigo de ejemplo en Python<|im_end|>\n<|im_start|>assistant\n"
license: apache-2.0
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_1_V1](https://huggingface.co/NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_1_V1) as a base.
### Models Merged
The following models were included in the merge:
* [NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_3_V1](https://huggingface.co/NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_3_V1)
* [NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_2_V1](https://huggingface.co/NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_2_V1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
model:
path: NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_1_V1
dtype: bfloat16
merge_method: dare_ties
slices:
- sources:
- layer_range: [0, 22]
model:
model:
path: NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_1_V1
- layer_range: [0, 22]
model:
model:
path: NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_1_V1
parameters:
density: 0.55
weight: 0.55
- layer_range: [0, 22]
model:
model:
path: NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_2_V1
parameters:
density: 0.55
weight: 0.56
- layer_range: [0, 22]
model:
model:
path: NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_3_V1
parameters:
density: 0.55
weight: 0.56
```
```Python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    GenerationConfig,
)
import torch

new_model = "NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_merge_v1"

# Load the merged model in bfloat16, sharded across available devices
model = AutoModelForCausalLM.from_pretrained(
    new_model,
    device_map="auto",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
)

tokenizer = AutoTokenizer.from_pretrained(
    new_model,
    max_length=2048,
    trust_remote_code=True,
    use_fast=True,
)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = 'right'
prompt= """<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
escribe una historia de amor.<|im_end|>
<|im_start|>assistant
"""
inputs = tokenizer.encode(prompt,
                          return_tensors="pt",
                          add_special_tokens=False).cuda()

generation_config = GenerationConfig(
    max_new_tokens=700,
    temperature=0.5,
    top_p=0.9,
    top_k=40,
    repetition_penalty=1.1,  # 1.0 means no penalty; >1.0 penalizes repetition (1.2 suggested in the CTRL paper)
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id,
)

outputs = model.generate(
    generation_config=generation_config,
    input_ids=inputs,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```
|
healtori/20-heal-07-02-01 | healtori | "2024-07-02T19:03:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T19:00:40Z" | Entry not found |
rg1683/fine_tuned_mBERT_SentimentAnalysis | rg1683 | "2024-07-02T19:02:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-02T19:02:08Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
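Lacking official usage code, a plausible starting point is the standard text-classification pipeline — a sketch only, since the label set of this sentiment model is not documented here:
```python
from transformers import pipeline

# Hypothetical usage; the id2label mapping is not described in this card
classifier = pipeline(
    "text-classification",
    model="rg1683/fine_tuned_mBERT_SentimentAnalysis",
)
print(classifier("This movie was surprisingly good!"))
```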
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/NickyNicky_-_Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster-gguf | RichardErkhov | "2024-07-02T19:27:04Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T19:02:23Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster - GGUF
- Model creator: https://huggingface.co/NickyNicky/
- Original model: https://huggingface.co/NickyNicky/Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q2_K.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster-gguf/blob/main/Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q2_K.gguf) | Q2_K | 1.17GB |
| [Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster-gguf/blob/main/Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.IQ3_XS.gguf) | IQ3_XS | 1.31GB |
| [Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.IQ3_S.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster-gguf/blob/main/Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.IQ3_S.gguf) | IQ3_S | 1.38GB |
| [Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster-gguf/blob/main/Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q3_K_S.gguf) | Q3_K_S | 1.38GB |
| [Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.IQ3_M.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster-gguf/blob/main/Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.IQ3_M.gguf) | IQ3_M | 1.4GB |
| [Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q3_K.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster-gguf/blob/main/Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q3_K.gguf) | Q3_K | 1.52GB |
| [Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster-gguf/blob/main/Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q3_K_M.gguf) | Q3_K_M | 1.52GB |
| [Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster-gguf/blob/main/Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q3_K_L.gguf) | Q3_K_L | 1.65GB |
| [Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster-gguf/blob/main/Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q4_0.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster-gguf/blob/main/Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q4_0.gguf) | Q4_0 | 1.79GB |
| [Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster-gguf/blob/main/Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.IQ4_NL.gguf) | IQ4_NL | 1.8GB |
| [Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster-gguf/blob/main/Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q4_K.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster-gguf/blob/main/Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q4_K.gguf) | Q4_K | 1.9GB |
| [Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster-gguf/blob/main/Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q4_K_M.gguf) | Q4_K_M | 1.9GB |
| [Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q4_1.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster-gguf/blob/main/Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q4_1.gguf) | Q4_1 | 1.98GB |
| [Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q5_0.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster-gguf/blob/main/Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q5_0.gguf) | Q5_0 | 2.18GB |
| [Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster-gguf/blob/main/Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q5_K_S.gguf) | Q5_K_S | 2.18GB |
| [Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q5_K.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster-gguf/blob/main/Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q5_K.gguf) | Q5_K | 2.23GB |
| [Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster-gguf/blob/main/Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q5_K_M.gguf) | Q5_K_M | 2.23GB |
| [Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q5_1.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster-gguf/blob/main/Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q5_1.gguf) | Q5_1 | 2.37GB |
| [Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q6_K.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster-gguf/blob/main/Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q6_K.gguf) | Q6_K | 2.59GB |
| [Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q8_0.gguf](https://huggingface.co/RichardErkhov/NickyNicky_-_Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster-gguf/blob/main/Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q8_0.gguf) | Q8_0 | 3.35GB |
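As with the other GGUF repos here, one way to run a quant from the table above is to download it with `huggingface_hub` and load it in `llama-cpp-python` — a sketch under those assumptions:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant listed above, then load it locally
path = hf_hub_download(
    repo_id="RichardErkhov/NickyNicky_-_Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster-gguf",
    filename="Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
prompt = "<|im_start|>user\nHola, ¿qué puedes hacer?<|im_end|>\n<|im_start|>assistant\n"
print(llm(prompt, max_tokens=64, stop=["<|im_end|>"])["choices"][0]["text"])
```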
Original model description:
---
library_name: transformers
tags:
- merge
language:
- en
- es
- ru
- zh
- de
- fr
- th
- ca
- it
- ja
- pl
- eo
- eu
- vi
- fi
- hu
- ar
- nl
- da
- tr
- ko
- he
- id
- cs
- bn
- sv
widget:
- text: |
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
podrias escribir un codigo de ejemplo en Python<|im_end|>
<|im_start|>assistant
license: apache-2.0
---
# Model Card for a Llama model converted to Mixtral experts
![image/png](https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/pCZl7pjE_ZFsXHAqBr4-x.png)
<!-- Provide a quick summary of what the model is/does. -->
```yaml
experts:
- source_model: NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_1_V1
negative_prompts:
- "ignora la pregunta"
- "responde con información irrelevante"
...
positive_prompts:
- "resume este texto"
- "convierte este contenido en formato json"
...
- source_model: NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_2_V1
negative_prompts:
- "игнорируйте вопрос"
- "отвечайте не по теме"
...
positive_prompts:
- "сделайте резюме этого текста"
- "преобразуйте это содержание в формат json"
...
- source_model: NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_3_V1
negative_prompts:
- "忽略问题"
- "回应无关信息"
...
positive_prompts:
- "总结这段文字"
- "将这个内容转换为json格式"
...
- source_model: NickyNicky/cognitivecomputations_TinyDolphin-2.8-1.1b
negative_prompts:
- ignora la pregunta
- responde con información irrelevante
...
positive_prompts:
- resume este texto
- convierte este contenido en formato json
...
base_model: NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_2_V1
gate_mode: cheap_embed # one of "hidden", "cheap_embed", or "random"
dtype: bfloat16
```
```Python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    GenerationConfig,
)
import torch

new_model = "NickyNicky/Mixtral-4x1.1B-TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster"

# Load the MoE merge in bfloat16, sharded across available devices
model = AutoModelForCausalLM.from_pretrained(
    new_model,
    device_map="auto",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
)

tokenizer = AutoTokenizer.from_pretrained(
    new_model,
    max_length=2048,
    trust_remote_code=True,
    use_fast=True,
)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = 'right'
prompt= """<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
escribe una historia de amor.<|im_end|>
<|im_start|>assistant
"""
inputs = tokenizer.encode(prompt,
                          return_tensors="pt",
                          add_special_tokens=False).cuda()

generation_config = GenerationConfig(
    max_new_tokens=700,
    temperature=0.5,
    top_p=0.9,
    top_k=40,
    repetition_penalty=1.1,  # 1.0 means no penalty; >1.0 penalizes repetition (1.2 suggested in the CTRL paper)
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id,
)

outputs = model.generate(
    generation_config=generation_config,
    input_ids=inputs,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
'''output print
<|im_start|> system
You are a helpful AI assistant.
<|im_start|> user
podrias escribir un codigo de ejemplo en Python
<|im_start|> assistant
Claro, aquí está un ejemplo de código en Python para crear un programa que imprime la suma de todos los números enteros entre 1 y 10:
python
# Código para imprimir la suma de todos los números enteros entre 1 y 10
suma = 0
for i in range(1, 11):
suma += i
print("La suma de todos los números enteros entre 1 y 10 es:", suma)
Este código utiliza la función `range()` para crear una lista de los números enteros entre 1 y 10, y la función `sum()` para sumar todos los elementos de la lista. La variable `suma` se asigna a 0 durante el ciclo for, y se incrementa cada vez que se realiza una iteración del ciclo.
El resultado de ejecutar este código será:
La suma de todos los números enteros entre 1 y 10 es: 55
Este ejemplo muestra cómo crear programas en Python que usan las instrucciones básicas de programación, como las listas, las variables y las funciones. También muestra cómo usar la función '''
```
|
starnet/06-star21-07-02 | starnet | "2024-07-02T19:10:57Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | null | "2024-07-02T19:03:23Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
starnet/17-star-07-02-01 | starnet | "2024-07-02T19:07:48Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T19:03:52Z" | Entry not found |
InfiniteEcho/CartPole-v1 | InfiniteEcho | "2024-07-02T19:04:30Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2024-07-02T19:04:19Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
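For orientation, the course's agent is essentially a small softmax policy trained with the REINFORCE update; a minimal sketch of such a policy (not necessarily the exact architecture of this checkpoint):
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    """Tiny stochastic policy for CartPole-v1: 4 observations -> 2 action probabilities."""
    def __init__(self, obs_dim=4, act_dim=2, hidden=16):
        super().__init__()
        self.fc1 = nn.Linear(obs_dim, hidden)
        self.fc2 = nn.Linear(hidden, act_dim)

    def forward(self, x):
        return F.softmax(self.fc2(F.relu(self.fc1(x))), dim=-1)

    def act(self, state):
        probs = self.forward(torch.as_tensor(state, dtype=torch.float32))
        dist = torch.distributions.Categorical(probs)
        action = dist.sample()
        return action.item(), dist.log_prob(action)  # the log-prob feeds the REINFORCE loss
```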
|
QuocAnh123/QuocAnh | QuocAnh123 | "2024-07-02T19:05:15Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T19:05:12Z" | Entry not found |
danielkosyra/polynomial_2000_1e-3_16b_w0.075 | danielkosyra | "2024-07-02T19:05:35Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T19:05:14Z" | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: polynomial_2000_1e-3_16b_w0.075
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# polynomial_2000_1e-3_16b_w0.075
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8163
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 250
- training_steps: 2000
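The polynomial schedule above corresponds to `transformers`' polynomial decay with warmup; a sketch of how it can be constructed (the linear layer is a stand-in for the GPT-2 parameters):
```python
import torch
from transformers import get_polynomial_decay_schedule_with_warmup

params = torch.nn.Linear(8, 8).parameters()  # stand-in for the GPT-2 parameters
optimizer = torch.optim.Adam(params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_polynomial_decay_schedule_with_warmup(
    optimizer,
    num_warmup_steps=250,
    num_training_steps=2000,
)
# scheduler.step() is called once after each optimizer.step()
```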
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.2211 | 0.9116 | 250 | 4.7231 |
| 4.0225 | 1.8232 | 500 | 3.4634 |
| 3.2171 | 2.7347 | 750 | 3.1355 |
| 2.8923 | 3.6463 | 1000 | 2.9862 |
| 2.6796 | 4.5579 | 1250 | 2.9034 |
| 2.4887 | 5.4695 | 1500 | 2.8586 |
| 2.3317 | 6.3810 | 1750 | 2.8339 |
| 2.193 | 7.2926 | 2000 | 2.8163 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
BTX24/vit-base-patch16-224-in21k-finetuned-birads-23_down | BTX24 | "2024-07-02T22:05:51Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-07-02T19:06:01Z" | Entry not found |
rahuketu86/LabellingTracker | rahuketu86 | "2024-07-02T19:15:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T19:06:09Z" | Entry not found |
starnet/18-star-07-02-01 | starnet | "2024-07-02T19:12:41Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T19:09:06Z" | Entry not found |
MP42/multi-viewDiffusion | MP42 | "2024-07-02T19:10:15Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T19:10:14Z" | Entry not found |
mradermacher/HornyEchidna-13b-v0.1-GGUF | mradermacher | "2024-07-02T20:00:28Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:NeverSleep/HornyEchidna-13b-v0.1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T19:10:23Z" | ---
base_model: NeverSleep/HornyEchidna-13b-v0.1
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NeverSleep/HornyEchidna-13b-v0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/HornyEchidna-13b-v0.1-GGUF/resolve/main/HornyEchidna-13b-v0.1.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/HornyEchidna-13b-v0.1-GGUF/resolve/main/HornyEchidna-13b-v0.1.IQ3_XS.gguf) | IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/HornyEchidna-13b-v0.1-GGUF/resolve/main/HornyEchidna-13b-v0.1.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/HornyEchidna-13b-v0.1-GGUF/resolve/main/HornyEchidna-13b-v0.1.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/HornyEchidna-13b-v0.1-GGUF/resolve/main/HornyEchidna-13b-v0.1.IQ3_M.gguf) | IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/HornyEchidna-13b-v0.1-GGUF/resolve/main/HornyEchidna-13b-v0.1.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/HornyEchidna-13b-v0.1-GGUF/resolve/main/HornyEchidna-13b-v0.1.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/HornyEchidna-13b-v0.1-GGUF/resolve/main/HornyEchidna-13b-v0.1.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/HornyEchidna-13b-v0.1-GGUF/resolve/main/HornyEchidna-13b-v0.1.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HornyEchidna-13b-v0.1-GGUF/resolve/main/HornyEchidna-13b-v0.1.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HornyEchidna-13b-v0.1-GGUF/resolve/main/HornyEchidna-13b-v0.1.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/HornyEchidna-13b-v0.1-GGUF/resolve/main/HornyEchidna-13b-v0.1.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/HornyEchidna-13b-v0.1-GGUF/resolve/main/HornyEchidna-13b-v0.1.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/HornyEchidna-13b-v0.1-GGUF/resolve/main/HornyEchidna-13b-v0.1.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
NikolayKozloff/L3-8B-Everything-COT-Q8_0-GGUF | NikolayKozloff | "2024-07-02T19:11:36Z" | 0 | 1 | null | [
"gguf",
"llm",
"llama",
"llama3",
"llama-cpp",
"gguf-my-repo",
"base_model:FPHam/L3-8B-Everything-COT",
"region:us"
] | null | "2024-07-02T19:10:38Z" | ---
base_model: FPHam/L3-8B-Everything-COT
tags:
- llm
- llama
- llama3
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/L3-8B-Everything-COT-Q8_0-GGUF
This model was converted to GGUF format from [`FPHam/L3-8B-Everything-COT`](https://huggingface.co/FPHam/L3-8B-Everything-COT) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/FPHam/L3-8B-Everything-COT) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/L3-8B-Everything-COT-Q8_0-GGUF --hf-file l3-8b-everything-cot-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/L3-8B-Everything-COT-Q8_0-GGUF --hf-file l3-8b-everything-cot-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/L3-8B-Everything-COT-Q8_0-GGUF --hf-file l3-8b-everything-cot-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/L3-8B-Everything-COT-Q8_0-GGUF --hf-file l3-8b-everything-cot-q8_0.gguf -c 2048
```
|
whizzzzkid/whizzzzkid_431_5 | whizzzzkid | "2024-07-02T19:11:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T19:11:23Z" | Entry not found |
starnet/07-star21-07-02 | starnet | "2024-07-02T19:20:28Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | null | "2024-07-02T19:11:55Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
whizzzzkid/whizzzzkid_432_3 | whizzzzkid | "2024-07-02T19:12:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T19:12:27Z" | Entry not found |
benbekir/ppo-Huggy | benbekir | "2024-07-02T19:12:42Z" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2024-07-02T19:12:31Z" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to help you train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: benbekir/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
AgentMarlboro/bigASPv1 | AgentMarlboro | "2024-07-02T19:25:26Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T19:13:24Z" | Entry not found |
whizzzzkid/whizzzzkid_433_4 | whizzzzkid | "2024-07-02T19:14:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T19:13:35Z" | Entry not found |
RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf | RichardErkhov | "2024-07-02T19:22:36Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T19:13:41Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TinyDolphin-2.8.2-1.1b-laser - GGUF
- Model creator: https://huggingface.co/cognitivecomputations/
- Original model: https://huggingface.co/cognitivecomputations/TinyDolphin-2.8.2-1.1b-laser/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TinyDolphin-2.8.2-1.1b-laser.Q2_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q2_K.gguf) | Q2_K | 0.4GB |
| [TinyDolphin-2.8.2-1.1b-laser.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [TinyDolphin-2.8.2-1.1b-laser.IQ3_S.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [TinyDolphin-2.8.2-1.1b-laser.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [TinyDolphin-2.8.2-1.1b-laser.IQ3_M.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [TinyDolphin-2.8.2-1.1b-laser.Q3_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q3_K.gguf) | Q3_K | 0.51GB |
| [TinyDolphin-2.8.2-1.1b-laser.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [TinyDolphin-2.8.2-1.1b-laser.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [TinyDolphin-2.8.2-1.1b-laser.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [TinyDolphin-2.8.2-1.1b-laser.Q4_0.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q4_0.gguf) | Q4_0 | 0.59GB |
| [TinyDolphin-2.8.2-1.1b-laser.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [TinyDolphin-2.8.2-1.1b-laser.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [TinyDolphin-2.8.2-1.1b-laser.Q4_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q4_K.gguf) | Q4_K | 0.62GB |
| [TinyDolphin-2.8.2-1.1b-laser.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [TinyDolphin-2.8.2-1.1b-laser.Q4_1.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q4_1.gguf) | Q4_1 | 0.65GB |
| [TinyDolphin-2.8.2-1.1b-laser.Q5_0.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q5_0.gguf) | Q5_0 | 0.71GB |
| [TinyDolphin-2.8.2-1.1b-laser.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [TinyDolphin-2.8.2-1.1b-laser.Q5_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q5_K.gguf) | Q5_K | 0.73GB |
| [TinyDolphin-2.8.2-1.1b-laser.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [TinyDolphin-2.8.2-1.1b-laser.Q5_1.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q5_1.gguf) | Q5_1 | 0.77GB |
| [TinyDolphin-2.8.2-1.1b-laser.Q6_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q6_K.gguf) | Q6_K | 0.84GB |
| [TinyDolphin-2.8.2-1.1b-laser.Q8_0.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q8_0.gguf) | Q8_0 | 1.09GB |
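As a quick, hedged sketch (not part of the original card), one of the quants above can be fetched and run locally with `huggingface_hub` and `llama-cpp-python`; the Q4_K_M file is picked here only as a common speed/quality trade-off:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files listed in the table above
model_path = hf_hub_download(
    repo_id="RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf",
    filename="TinyDolphin-2.8.2-1.1b-laser.Q4_K_M.gguf",
)

# Load the GGUF file with a 2048-token context window
llm = Llama(model_path=model_path, n_ctx=2048)

# Run a short completion to verify the download works
output = llm("Once upon a time,", max_tokens=64)
print(output["choices"][0]["text"])
```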
Original model description:
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- teknium/openhermes
language:
- en
---
# TinyDolphin-2.8.2-1.1b-laser
![image/webp](https://cdn-uploads.huggingface.co/production/uploads/655dc641accde1bbc8b41aec/x8c5Ue58EAHRl1cp2Wwk1.webp)
Join Our Discord! https://discord.gg/cognitivecomputations
This is version 3 of a model trained on three RTX 3090s by Kearm on the new Dolphin 2.8 dataset by Eric Hartford https://erichartford.com/dolphin 🐬
We used our laser technique from https://github.com/cognitivecomputations/laserRMT to denoise this model!
For this version we increased the epochs as well as refined the datasets used.
## Example Outputs
TBD
Support my efforts! https://ko-fi.com/kearm
# Original Model Card Below
# TinyLlama-1.1B
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Collection
This collection contains all checkpoints after the 1T fix. Branch name indicates the step and number of tokens seen.
#### Eval
| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
|-------------------------------------------|-----------------|-----------|------|------------|-------|-------|-------|------|-----|
| Pythia-1.0B | 300B | 47.16 | 31.40| 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
| TinyLlama-1.1B-intermediate-step-50K-104b | 103B | 43.50 | 29.80| 53.28 | 24.32 | 44.91 | 59.66 | 67.30 | 46.11|
| TinyLlama-1.1B-intermediate-step-240k-503b| 503B | 49.56 |31.40 |55.80 |26.54 |48.32 |56.91 |69.42 | 48.28 |
| TinyLlama-1.1B-intermediate-step-480k-1007B | 1007B | 52.54 | 33.40 | 55.96 | 27.82 | 52.36 | 59.54 | 69.91 | 50.22 |
| TinyLlama-1.1B-intermediate-step-715k-1.5T | 1.5T | 53.68 | 35.20 | 58.33 | 29.18 | 51.89 | 59.08 | 71.65 | 51.29 |
| TinyLlama-1.1B-intermediate-step-955k-2T | 2T | 54.63 | 33.40 | 56.83 | 28.07 | 54.67 | 63.21 | 70.67 | 51.64 |
| TinyLlama-1.1B-intermediate-step-1195k-2.5T | 2.5T | 58.96 | 34.40 | 58.72 | 31.91 | 56.78 | 63.21 | 73.07 | 53.86|
| TinyLlama-1.1B-intermediate-step-1431k-3T | 3T | 59.20 | 36.00 | 59.12 | 30.12 | 55.25 | 57.83 | 73.29 | 52.99|
|
starnet/19-star-07-02-01 | starnet | "2024-07-02T19:17:04Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T19:13:44Z" | Entry not found |
Litzy0619/aqua_rat_0.003_32_5_6 | Litzy0619 | "2024-07-02T19:18:04Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2024-07-02T19:14:03Z" | Entry not found |
LeonardPuettmann/MiniMistral-8M | LeonardPuettmann | "2024-07-02T19:40:03Z" | 0 | 1 | null | [
"region:us"
] | null | "2024-07-02T19:14:09Z" | This is my glorious attempt to understand the Mistral 7B model. Because the people from Mistral AI have open-sourced their model code, I tried to replicate a small version of the model. Like... really small. A whopping a million parameters. Needless to say, the model is useless for anything.
The model was trained on a handful examples from the Cosmopedia dataset, which is an open-source version of the high quality textbook dataset in a similar style to the Phi dataset.
Check out my GitHub to see the code used: https://github.com/LeonardPuettmann/understanding-mistral
### How to use
Please don't. You should probably use Mistral 7B instead: [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3)
Or if you are (very) GPU rich, you can try to train their model yourself: https://github.com/mistralai/mistral-inference
In the `inference` folder you will actually find a small script that lets you chat with the 7B-parameter model. All you need is a free Hugging Face API token.
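The script itself isn't reproduced here, but a minimal sketch of the same idea (chatting with the hosted 7B instruct model through the free Inference API) might look like the following; the model id and token placeholder are assumptions, not the script's actual contents:

```python
from huggingface_hub import InferenceClient

# Assumed setup: a free Hugging Face API token and the hosted Mistral instruct model
client = InferenceClient(model="mistralai/Mistral-7B-Instruct-v0.3", token="hf_...")

# Send a single chat turn and print the reply
response = client.chat_completion(
    messages=[{"role": "user", "content": "Explain attention in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```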
---
license: apache-2.0
---
|
whizzzzkid/whizzzzkid_434_1 | whizzzzkid | "2024-07-02T19:15:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T19:14:46Z" | Entry not found |
talhaturab/lora-dreambooth-model | talhaturab | "2024-07-02T19:25:33Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"lora",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2024-07-02T19:15:03Z" | ---
base_model: CompVis/stable-diffusion-v1-4
library_name: diffusers
license: creativeml-openrail-m
tags:
- text-to-image
- diffusers
- lora
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
inference: true
instance_prompt: a photo of a maxttcat cat
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA DreamBooth - talhaturab/lora-dreambooth-model
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of a maxttcat cat using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
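The TODO above is kept as the author left it. As a hedged, unofficial sketch (assuming the adapter was saved in the standard diffusers LoRA format), the weights could be loaded on top of the base pipeline like this:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model these LoRA weights were trained against
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA adaptation weights from this repository
pipe.load_lora_weights("talhaturab/lora-dreambooth-model")

# Use the instance prompt the adapter was trained on
image = pipe("a photo of a maxttcat cat", num_inference_steps=30).images[0]
image.save("maxttcat.png")
```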
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
whizzzzkid/whizzzzkid_435_7 | whizzzzkid | "2024-07-02T19:16:15Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T19:15:48Z" | Entry not found |
braindao/iq-code-evmind-v3.1-granite-8b-instruct-average | braindao | "2024-07-02T19:52:12Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"solidity",
"conversational",
"en",
"dataset:braindao/Solidity-Dataset",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T19:16:27Z" | ---
license: apache-2.0
datasets:
- braindao/Solidity-Dataset
language:
- en
tags:
- solidity
---
**Overview:**
The braindao/iq-code-evmind-v3.1-granite-8b-instruct-average is a highly specialized language model designed to generate Solidity code efficiently and accurately. This model leverages the robust capabilities of the ibm-granite/granite-8b-code-instruct architecture, fine-tuned specifically on the braindao/Solidity-Dataset.
**Dataset:**
The model training utilized the braindao/Solidity-Dataset, particularly focusing on the "average" column, which provides a well-balanced perspective on Solidity code examples. This dataset was meticulously curated to enhance the model's understanding and generation of high-quality Solidity code.
**Capabilities:**
- **Code Generation:** Expertly generates Solidity code snippets, functions, and smart contracts (see the sketch after this list).
- **Syntax Understanding:** Displays an in-depth understanding of Solidity syntax and best practices.
- **Adaptability:** Can adapt to various coding styles and requirements based on the provided prompts.
- **Efficiency:** Optimized to deliver accurate and contextually relevant code outputs quickly.
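As a hedged illustration of the code-generation capability (assuming the repository ships a standard chat template; the prompt and generation settings are illustrative, not the authors' recommended defaults):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "braindao/iq-code-evmind-v3.1-granite-8b-instruct-average"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Ask the model for a small Solidity contract
messages = [{"role": "user", "content": "Write a minimal ERC-20 token contract in Solidity."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```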
**Applications:**
- **Smart Contract Development:** Ideal for developers looking to create, modify, or optimize smart contracts on the Ethereum blockchain.
- **Educational Use:** A valuable tool for learners and educators in blockchain development and Solidity programming.
- **Automation:** Can be integrated into development workflows to automate routine coding tasks, enhancing productivity. |
shriyakan/auto-gptq-128-true | shriyakan | "2024-07-02T19:16:34Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T19:16:34Z" | Invalid username or password. |
ferrazzipietro/Meta-Llama-3-8B-Instruct_en.layer1_NoQuant_64_32_0.02_8 | ferrazzipietro | "2024-07-02T19:17:34Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T19:17:10Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tapan247/fine-tuned-llama-2-7b-chat-2 | tapan247 | "2024-07-02T19:25:24Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T19:17:28Z" | Entry not found |
santos-marco/Llama3_8b_com_Unsloth_q4_k_m_GGUF_QeA-MMGD_ORPO | santos-marco | "2024-07-02T21:26:34Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T19:17:32Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** santos-marco
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
shriyakan/auto-gptq-64-true | shriyakan | "2024-07-02T19:17:48Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T19:17:48Z" | Invalid username or password. |
starnet/20-star-07-02-01 | starnet | "2024-07-02T19:21:36Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T19:18:08Z" | Invalid username or password. |
shriyakan/auto-gptq-32-true | shriyakan | "2024-07-02T19:18:35Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T19:18:35Z" | Invalid username or password. |
RobertML/sn3-cuprum | RobertML | "2024-07-02T19:18:50Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T19:18:39Z" | Entry not found |
medieval-data/gliner_multi-v2.1-medieval-latin | medieval-data | "2024-07-02T19:53:51Z" | 0 | 1 | gliner | [
"gliner",
"pytorch",
"token-classification",
"multilingual",
"dataset:medieval-data/medieval-latin-ner-HOME-Alcar-sents",
"arxiv:2311.08526",
"license:apache-2.0",
"region:us"
] | token-classification | "2024-07-02T19:18:47Z" | ---
license: apache-2.0
language:
- multilingual
library_name: gliner
datasets:
- medieval-data/medieval-latin-ner-HOME-Alcar-sents
pipeline_tag: token-classification
---
# About
This is a GLiNER model finetuned on medieval Latin. It was trained to improve the identification of PERSON and LOC. It was finetuned from [urchade/gliner_multi-v2.1](https://huggingface.co/urchade/gliner_multi-v2.1). The model was finetuned on 1,500 annotations from the [Home Alcar sentences](https://huggingface.co/datasets/medieval-data/medieval-latin-ner-HOME-Alcar-sents). Only 1,500 were selected to prevent catastrophic forgetting.
GLiNER is a Named Entity Recognition (NER) model capable of identifying any entity type using a bidirectional transformer encoder (BERT-like). It provides a practical alternative to traditional NER models, which are limited to predefined entities, and Large Language Models (LLMs) that, despite their flexibility, are costly and large for resource-constrained scenarios.
## Installation
To use this model, you must install the GLiNER Python library:
```
!pip install gliner
```
## Usage
Once you've installed the GLiNER library, you can import the GLiNER class. You can then load this model using `GLiNER.from_pretrained` and predict entities with `predict_entities`.
```python
from gliner import GLiNER
model = GLiNER.from_pretrained("medieval-data/gliner_multi-v2.1-medieval-latin")
text = """
Testes : magister Stephanus cantor Autissiodorensis , Petrus capellanus comitis , Gaufridus clericus , Hugo de Argenteolo , Milo Filluns , Johannes Maleherbe , Nivardus de Argenteolo , Columbus tunc prepositus Tornodorensis , Johannes prepositus Autissiodorensis , Johannes Brisebarra .
"""
labels = ["PERSON", "LOC"]
entities = model.predict_entities(text, labels)
for entity in entities:
print(entity["text"], "=>", entity["label"])
```
```
Stephanus => PERSON
Autissiodorensis => LOC
Petrus => PERSON
Gaufridus => PERSON
Hugo de Argenteolo => PERSON
Milo Filluns => PERSON
Johannes Maleherbe => PERSON
Nivardus de Argenteolo => PERSON
Columbus => PERSON
Tornodorensis => LOC
Johannes => PERSON
Autissiodorensis => LOC
Johannes Brisebarra => PERSON
```
## Citation to Original GLiNER Model
```bibtex
@misc{zaratiana2023gliner,
title={GLiNER: Generalist Model for Named Entity Recognition using Bidirectional Transformer},
author={Urchade Zaratiana and Nadi Tomeh and Pierre Holat and Thierry Charnois},
year={2023},
eprint={2311.08526},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
shriyakan/auto-gptq-32-false | shriyakan | "2024-07-02T19:19:30Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T19:19:30Z" | Invalid username or password. |
mradermacher/Echidna-13b-v0.2-GGUF | mradermacher | "2024-07-02T20:40:50Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:NeverSleep/Echidna-13b-v0.2",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T19:20:01Z" | ---
base_model: NeverSleep/Echidna-13b-v0.2
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NeverSleep/Echidna-13b-v0.2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
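As a hedged starting point (not part of the original card), a single quant from the table below can also be fetched programmatically:

```python
from huggingface_hub import hf_hub_download

# Download the Q4_K_S quant recommended in the table below
path = hf_hub_download(
    repo_id="mradermacher/Echidna-13b-v0.2-GGUF",
    filename="Echidna-13b-v0.2.Q4_K_S.gguf",
)
print(path)  # local path to the GGUF file, ready for llama.cpp
```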
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.2-GGUF/resolve/main/Echidna-13b-v0.2.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.2-GGUF/resolve/main/Echidna-13b-v0.2.IQ3_XS.gguf) | IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.2-GGUF/resolve/main/Echidna-13b-v0.2.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.2-GGUF/resolve/main/Echidna-13b-v0.2.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.2-GGUF/resolve/main/Echidna-13b-v0.2.IQ3_M.gguf) | IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.2-GGUF/resolve/main/Echidna-13b-v0.2.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.2-GGUF/resolve/main/Echidna-13b-v0.2.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.2-GGUF/resolve/main/Echidna-13b-v0.2.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.2-GGUF/resolve/main/Echidna-13b-v0.2.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.2-GGUF/resolve/main/Echidna-13b-v0.2.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.2-GGUF/resolve/main/Echidna-13b-v0.2.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.2-GGUF/resolve/main/Echidna-13b-v0.2.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.2-GGUF/resolve/main/Echidna-13b-v0.2.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.2-GGUF/resolve/main/Echidna-13b-v0.2.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Litzy0619/aslg_pc12_0.003_32_5_6 | Litzy0619 | "2024-07-02T19:23:43Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2024-07-02T19:20:04Z" | Entry not found |
juanpablomesa/sentence-t5-base-bioasq-1epoch-batch32-100steps | juanpablomesa | "2024-07-02T19:20:25Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"t5",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:4012",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/sentence-t5-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-07-02T19:20:08Z" | ---
base_model: sentence-transformers/sentence-t5-base
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4012
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 'Extensive messenger RNA editing generates transcript and protein
diversity in genes involved in neural excitability, as previously described, as
well as in genes participating in a broad range of other cellular functions. '
sentences:
- Do cephalopods use RNA editing less frequently than other species?
- GV1001 vaccine targets which enzyme?
- Which event results in the acetylation of S6K1?
- source_sentence: Yes, exposure to household furry pets influences the gut microbiota
of infants.
sentences:
- Can pets affect infant microbiomed?
- What is the mode of action of Thiazovivin?
- What are the effects of CAMK4 inhibition?
- source_sentence: "In children with heart failure evidence of the effect of enalapril\
\ is empirical. Enalapril was clinically safe and effective in 50% to 80% of for\
\ children with cardiac failure secondary to congenital heart malformations before\
\ and after cardiac surgery, impaired ventricular function , valvar regurgitation,\
\ congestive cardiomyopathy, , arterial hypertension, life-threatening arrhythmias\
\ coexisting with circulatory insufficiency. \nACE inhibitors have shown a transient\
\ beneficial effect on heart failure due to anticancer drugs and possibly a beneficial\
\ effect in muscular dystrophy-associated cardiomyopathy, which deserves further\
\ studies."
sentences:
- Which receptors can be evaluated with the [18F]altanserin?
- In what proportion of children with heart failure has Enalapril been shown to
be safe and effective?
- Which major signaling pathways are regulated by RIP1?
- source_sentence: Cellular senescence-associated heterochromatic foci (SAHFS) are
a novel type of chromatin condensation involving alterations of linker histone
H1 and linker DNA-binding proteins. SAHFS can be formed by a variety of cell types,
but their mechanism of action remains unclear.
sentences:
- What is the relationship between the X chromosome and a neutrophil drumstick?
- Which microRNAs are involved in exercise adaptation?
- How are SAHFS created?
- source_sentence: Multicluster Pcdh diversity is required for mouse olfactory neural
circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins
are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although
deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss
of all three clusters (tricluster deletion) led to a severe axonal arborization
defect and loss of self-avoidance.
sentences:
- What are the effects of the deletion of all three Pcdh clusters (tricluster deletion)
in mice?
- what is the role of MEF-2 in cardiomyocyte differentiation?
- How many periods of regulatory innovation led to the evolution of vertebrates?
model-index:
- name: BGE small finetuned BIOASQ
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: sentence transformers/sentence t5 base
type: sentence-transformers/sentence-t5-base
metrics:
- type: cosine_accuracy@1
value: 0.0
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.0
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.0
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.0
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.0
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.0
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.0
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.0
name: Cosine Map@100
---
# BGE small finetuned BIOASQ
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/sentence-t5-base](https://huggingface.co/sentence-transformers/sentence-t5-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/sentence-t5-base](https://huggingface.co/sentence-transformers/sentence-t5-base) <!-- at revision 50c53e206f8b01c9621484a3c0aafce4e55efebf -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(3): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("juanpablomesa/sentence-t5-base-bioasq-1epoch-batch32-100steps")
# Run inference
sentences = [
'Multicluster Pcdh diversity is required for mouse olfactory neural circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss of all three clusters (tricluster deletion) led to a severe axonal arborization defect and loss of self-avoidance.',
'What are the effects of the deletion of all three Pcdh clusters (tricluster deletion) in mice?',
'How many periods of regulatory innovation led to the evolution of vertebrates?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `sentence-transformers/sentence-t5-base`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:--------|
| cosine_accuracy@1 | 0.0 |
| cosine_accuracy@3 | 0.0 |
| cosine_accuracy@5 | 0.0 |
| cosine_accuracy@10 | 0.0 |
| cosine_precision@1 | 0.0 |
| cosine_precision@3 | 0.0 |
| cosine_precision@5 | 0.0 |
| cosine_precision@10 | 0.0 |
| cosine_recall@1 | 0.0 |
| cosine_recall@3 | 0.0 |
| cosine_recall@5 | 0.0 |
| cosine_recall@10 | 0.0 |
| cosine_ndcg@10 | 0.0 |
| cosine_mrr@10 | 0.0 |
| **cosine_map@100** | **0.0** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 4,012 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 2 tokens</li><li>mean: 66.95 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 16.85 tokens</li><li>max: 53 tokens</li></ul> |
* Samples:
| positive | anchor |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
| <code>Aberrant patterns of H3K4, H3K9, and H3K27 histone lysine methylation were shown to result in histone code alterations, which induce changes in gene expression, and affect the proliferation rate of cells in medulloblastoma.</code> | <code>What is the implication of histone lysine methylation in medulloblastoma?</code> |
| <code>STAG1/STAG2 proteins are tumour suppressor proteins that suppress cell proliferation and are essential for differentiation.</code> | <code>What is the role of STAG1/STAG2 proteins in differentiation?</code> |
| <code>The association between cell phone use and incident glioblastoma remains unclear. Some studies have reported that cell phone use was associated with incident glioblastoma, and with reduced survival of patients diagnosed with glioblastoma. However, other studies have repeatedly replicated to find an association between cell phone use and glioblastoma.</code> | <code>What is the association between cell phone use and glioblastoma?</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | sentence-transformers/sentence-t5-base_cosine_map@100 |
|:------:|:----:|:-------------:|:-----------------------------------------------------:|
| 0.7937 | 100 | 0.0 | 0.0 |
### Framework Versions
- Python: 3.11.5
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |