| modelId (string, 5–122 chars) | author (string, 2–42 chars) | last_modified (unknown) | downloads (int64, 0–738M) | likes (int64, 0–11k) | library_name (245 classes) | tags (sequence, 1–4.05k items) | pipeline_tag (48 classes) | createdAt (unknown) | card (string, 1–901k chars) |
|---|---|---|---|---|---|---|---|---|---|
cswilkin/example-model | cswilkin | "2024-07-02T15:38:15Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-07-02T15:38:15Z" | ---
license: mit
---
|
CodeHima/Tos-Roberta | CodeHima | "2024-07-02T15:55:15Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-02T15:38:40Z" | ---
license: mit
language:
- en
widget:
- text: "You have the right to use CommunityConnect for its intended purpose of connecting with others, sharing content responsibly, and engaging in constructive dialogue. You are responsible for the content you post and must respect the rights and privacy of others."
example_title: "Fair Clause"
- text: " We reserve the right to suspend, terminate, or restrict your access to the platform at any time and for any reason, without prior notice or explanation. This includes but is not limited to violations of our community guidelines or terms of service, as determined solely by ConnectWorld."
example_title: "Unfair Clause"
metrics:
- accuracy
- precision
- f1
- recall
library_name: transformers
pipeline_tag: text-classification
---
# Tos-Roberta: Terms of Service Fairness Classifier
## Model Description
Tos-Roberta is a fine-tuned RoBERTa-large model designed to classify clauses in Terms of Service (ToS) documents based on their fairness level. The model categorizes clauses into three classes: clearly fair, potentially unfair, and clearly unfair.
### Task
The model performs multi-class classification on individual sentences or clauses, categorizing them into three fairness levels:
0. Clearly Fair
1. Potentially Unfair
2. Clearly Unfair
## Key Features
- Based on the RoBERTa-large architecture
- Fine-tuned on a specialized dataset of ToS clauses
- Achieves high accuracy in distinguishing between fair and unfair clauses
- Suitable for legal text analysis and consumer rights applications
## Performance
The model demonstrates strong performance on the task of ToS clause classification:
- Validation Accuracy: 89.64%
- Test Accuracy: 85.84%
Detailed performance metrics per epoch:
| Epoch | Training Loss | Validation Loss | Accuracy | F1 Score | Precision | Recall |
|-------|---------------|-----------------|----------|----------|-----------|--------|
| 1 | 0.443500 | 0.398950 | 0.874699 | 0.858838 | 0.862516 | 0.874699 |
| 2 | 0.416400 | 0.438409 | 0.853012 | 0.847317 | 0.849916 | 0.853012 |
| 3 | 0.227700 | 0.505879 | 0.896386 | 0.893325 | 0.891521 | 0.896386 |
| 4 | 0.052600 | 0.667532 | 0.891566 | 0.893167 | 0.895115 | 0.891566 |
| 5 | 0.124200 | 0.747090 | 0.884337 | 0.887412 | 0.891807 | 0.884337 |
## Training Details
- **Base Model**: RoBERTa-large
- **Dataset**: CodeHima/TOS_DatasetV2
- **Training Time**: 3310.09 seconds
- **Epochs**: 5
- **Batch Size**: 8
- **Learning Rate**: Started at 2e-5 with a warmup period and decay
- **Optimizer**: AdamW
- **Loss Function**: Cross-Entropy
- **Training Strategy**: Mixed precision training (FP16)
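For reference, a minimal fine-tuning sketch consistent with the hyperparameters above — the dataset column names, split names, and exact warmup ratio are assumptions, not details taken from the original training script:
```python
from datasets import load_dataset
from transformers import (RobertaForSequenceClassification, RobertaTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("CodeHima/TOS_DatasetV2")
tokenizer = RobertaTokenizer.from_pretrained("roberta-large")

def tokenize(batch):
    # "text" is an assumed column name; adjust to the dataset's actual schema.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=512)

tokenized = dataset.map(tokenize, batched=True)
model = RobertaForSequenceClassification.from_pretrained("roberta-large", num_labels=3)

args = TrainingArguments(
    output_dir="tos-roberta",
    learning_rate=2e-5,              # as reported above
    per_device_train_batch_size=8,   # batch size 8
    num_train_epochs=5,              # 5 epochs
    warmup_ratio=0.1,                # "warmup period" — exact value assumed
    fp16=True,                       # mixed precision (FP16)
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],        # split names assumed
    eval_dataset=tokenized["validation"],
)
trainer.train()
```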
## Usage
To use this model for inference:
```python
from transformers import RobertaTokenizer, RobertaForSequenceClassification
import torch

# Load model and tokenizer
model = RobertaForSequenceClassification.from_pretrained("CodeHima/Tos-Roberta")
tokenizer = RobertaTokenizer.from_pretrained("CodeHima/Tos-Roberta")

# Prepare input text
text = "Your Terms of Service clause here"
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=512)

# Make prediction
with torch.no_grad():
    outputs = model(**inputs)

probabilities = torch.softmax(outputs.logits, dim=-1)
predicted_class = torch.argmax(probabilities, dim=-1).item()

# Map prediction to label
label_map = {0: "clearly_fair", 1: "potentially_unfair", 2: "clearly_unfair"}
predicted_label = label_map[predicted_class]
print(f"Predicted class: {predicted_label}")
print(f"Probabilities: {probabilities[0].tolist()}")
```
## Limitations and Bias
- The model's performance may vary depending on the legal jurisdiction and specific domain of the ToS.
- It may not capture nuanced legal interpretations that require human expertise.
- The training data may contain biases present in existing ToS documents.
## Ethical Considerations
While this model can assist in identifying potentially unfair clauses in ToS documents, it should not be used as a substitute for professional legal advice. The model's predictions should be reviewed by qualified legal professionals before making any decisions based on its output.
## Citation
If you use this model in your research or application, please cite it as:
```
@misc{Tos-Roberta,
author = {Himanshu Mohanty},
title = {Tos-Roberta: RoBERTa-large model for Terms of Service Fairness Classification},
year = {2024},
publisher = {HuggingFace},
journal = {CodeHima/Tos-Roberta},
howpublished = {\url{https://huggingface.co/CodeHima/Tos-Roberta}}
}
```
## Contact
For questions, feedback, or collaborations, please open an issue on the model's Hugging Face repository or contact [Your Contact Information]. |
abbasmahmudiai/text_classification_bert | abbasmahmudiai | "2024-07-02T15:46:06Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"fa",
"dataset:SeyedAli/Persian-Text-Emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-02T15:38:49Z" | ---
library_name: transformers
license: apache-2.0
datasets:
- SeyedAli/Persian-Text-Emotion
language:
- fa
---
# Model Card for Model ID
- Base model: HooshvareLab/bert-base-parsbert-uncased
- Dataset: SeyedAli/Persian-Text-Emotion
- Epochs: 6
- Final training loss: 0.000300
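A minimal inference sketch for this Persian emotion classifier, assuming the fine-tuned weights are hosted in this repository and that the saved config carries the emotion label names:
```python
from transformers import pipeline

# Repo id taken from the listing above; label names come from the model's config.
classifier = pipeline("text-classification", model="abbasmahmudiai/text_classification_bert")

# Persian for "I am very happy today".
print(classifier("امروز خیلی خوشحالم"))
```
|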
ferrazzipietro/Llama-2-7b-chat-hfspecialTkn_en.layer1_NoQuant_64_32_0.02_8 | ferrazzipietro | "2024-07-02T15:39:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T15:39:22Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
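The card leaves this section empty; a minimal loading sketch, assuming (from the repo name) that this is a full Llama-2-style causal language model rather than an adapter:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ferrazzipietro/Llama-2-7b-chat-hfspecialTkn_en.layer1_NoQuant_64_32_0.02_8"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Hello!", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```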
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
movenb3at/PJS | movenb3at | "2024-07-02T15:41:20Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T15:40:30Z" | Entry not found |
Lam-Hung/controlnet_depth_interior | Lam-Hung | "2024-07-02T15:40:31Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T15:40:31Z" | Entry not found |
HachiML/Mists-7B-v01-single-turn | HachiML | "2024-07-02T15:53:47Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mists",
"feature-extraction",
"trl",
"sft",
"generated_from_trainer",
"custom_code",
"base_model:HachiML/Mists-7B-v01-projector-trained",
"license:apache-2.0",
"region:us"
] | feature-extraction | "2024-07-02T15:41:37Z" | ---
base_model: HachiML/Mists-7B-v01-projector-trained
license: apache-2.0
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mists-7B-v01-single-turn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/siseikatu8/huggingface/runs/aun0jon1)
# Mists-7B-v01-single-turn
This model is a fine-tuned version of [HachiML/Mists-7B-v01-projector-trained](https://huggingface.co/HachiML/Mists-7B-v01-projector-trained) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4228
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
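A configuration sketch mirroring the hyperparameters listed above — only the values reported in this card; everything else, including the SFT wiring implied by the `trl`/`sft` tags, is assumed:
```python
from transformers import TrainingArguments

# Values copied from the hyperparameter list above; output_dir is an assumption.
args = TrainingArguments(
    output_dir="Mists-7B-v01-single-turn",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=1,
)
```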
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6859 | 0.0420 | 400 | 1.1048 |
| 0.7572 | 0.0841 | 800 | 0.8318 |
| 0.664 | 0.1261 | 1200 | 0.7295 |
| 0.6135 | 0.1682 | 1600 | 0.6526 |
| 0.5707 | 0.2102 | 2000 | 0.6007 |
| 0.5506 | 0.2523 | 2400 | 0.5653 |
| 0.5255 | 0.2943 | 2800 | 0.5434 |
| 0.5106 | 0.3363 | 3200 | 0.5219 |
| 0.4909 | 0.3784 | 3600 | 0.5045 |
| 0.4773 | 0.4204 | 4000 | 0.4874 |
| 0.4664 | 0.4625 | 4400 | 0.4762 |
| 0.4555 | 0.5045 | 4800 | 0.4663 |
| 0.4516 | 0.5466 | 5200 | 0.4560 |
| 0.4466 | 0.5886 | 5600 | 0.4490 |
| 0.4403 | 0.6306 | 6000 | 0.4433 |
| 0.4323 | 0.6727 | 6400 | 0.4383 |
| 0.4337 | 0.7147 | 6800 | 0.4324 |
| 0.4214 | 0.7568 | 7200 | 0.4297 |
| 0.4153 | 0.7988 | 7600 | 0.4269 |
| 0.414 | 0.8409 | 8000 | 0.4250 |
| 0.4187 | 0.8829 | 8400 | 0.4238 |
| 0.418 | 0.9250 | 8800 | 0.4230 |
| 0.4126 | 0.9670 | 9200 | 0.4228 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.0.1
- Datasets 2.20.0
- Tokenizers 0.19.1
|
styalai/XT-unknowM-v0.1 | styalai | "2024-07-02T15:45:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T15:42:07Z" | ---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed]
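A minimal loading sketch of the mixin pattern — the class name, architecture, and config below are placeholders, since the card does not document them:
```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Placeholder architecture: the real model class is not documented in this card.
class XTUnknowM(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 256):
        super().__init__()
        self.layer = nn.Linear(hidden_size, hidden_size)

# The mixin adds from_pretrained/push_to_hub to any nn.Module subclass.
model = XTUnknowM.from_pretrained("styalai/XT-unknowM-v0.1")
```
|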
NikolayKozloff/Llama-3-Instruct-Neurona-8b-v2-Q4_0-GGUF | NikolayKozloff | "2024-07-02T15:42:47Z" | 0 | 1 | transformers | [
"transformers",
"gguf",
"synthetic",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"es",
"en",
"dataset:Danielbrdz/Barcenas-Economia",
"dataset:HiTZ/casimedicos-exp",
"dataset:somosnlp/coser_resumenes",
"dataset:csebuetnlp/CrossSum",
"dataset:Iker/Document-Translation-en-es",
"dataset:somosnlp/es-inclusive-language-it",
"dataset:glaiveai/glaive-code-assistant-v3",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:Iker/InstructTranslation-EN-ES",
"dataset:somosnlp/lenguaje-claro-dataset",
"dataset:somosnlp/LingComp_QA",
"dataset:Iker/NoticIA",
"dataset:teknium/OpenHermes-2.5",
"dataset:Iker/OpenHermes-2.5-Spanish",
"dataset:Helsinki-NLP/opus-100",
"dataset:projecte-aina/RAG_Multilingual",
"dataset:HiTZ/This-is-not-a-dataset",
"dataset:Iker/Reddit-Post-Translation",
"dataset:wikipedia",
"base_model:Iker/Llama-3-Instruct-Neurona-8b-v2",
"license:llama3",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T15:42:26Z" | ---
base_model: Iker/Llama-3-Instruct-Neurona-8b-v2
datasets:
- Danielbrdz/Barcenas-Economia
- HiTZ/casimedicos-exp
- somosnlp/coser_resumenes
- csebuetnlp/CrossSum
- Iker/Document-Translation-en-es
- somosnlp/es-inclusive-language-it
- glaiveai/glaive-code-assistant-v3
- glaiveai/glaive-function-calling-v2
- Iker/InstructTranslation-EN-ES
- somosnlp/lenguaje-claro-dataset
- somosnlp/LingComp_QA
- Iker/NoticIA
- teknium/OpenHermes-2.5
- Iker/OpenHermes-2.5-Spanish
- Helsinki-NLP/opus-100
- projecte-aina/RAG_Multilingual
- HiTZ/This-is-not-a-dataset
- Iker/Reddit-Post-Translation
- wikipedia
language:
- es
- en
library_name: transformers
license: llama3
pipeline_tag: text-generation
tags:
- synthetic
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Llama-3-Instruct-Neurona-8b-v2-Q4_0-GGUF
This model was converted to GGUF format from [`Iker/Llama-3-Instruct-Neurona-8b-v2`](https://huggingface.co/Iker/Llama-3-Instruct-Neurona-8b-v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Iker/Llama-3-Instruct-Neurona-8b-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Llama-3-Instruct-Neurona-8b-v2-Q4_0-GGUF --hf-file llama-3-instruct-neurona-8b-v2-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Llama-3-Instruct-Neurona-8b-v2-Q4_0-GGUF --hf-file llama-3-instruct-neurona-8b-v2-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Llama-3-Instruct-Neurona-8b-v2-Q4_0-GGUF --hf-file llama-3-instruct-neurona-8b-v2-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Llama-3-Instruct-Neurona-8b-v2-Q4_0-GGUF --hf-file llama-3-instruct-neurona-8b-v2-q4_0.gguf -c 2048
```
|
Kimata/FinanceLlama | Kimata | "2024-07-02T15:48:23Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T15:42:29Z" | Entry not found |
DanielTB/priestmodelv1 | DanielTB | "2024-07-02T15:43:21Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T15:43:12Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** DanielTB
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sharathprasaath/PY007-TinyLlama-1.1B-Chat-v0.3 | sharathprasaath | "2024-07-02T15:49:33Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T15:43:22Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
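The card leaves this section empty; a minimal sketch using the high-level pipeline, assuming the repo hosts a complete text-generation model (per the `text-generation` tag above):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="sharathprasaath/PY007-TinyLlama-1.1B-Chat-v0.3")
print(generator("Tell me something about TinyLlama.", max_new_tokens=50)[0]["generated_text"])
```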
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Bonnie422/mistral-7b-mj-finetune | Bonnie422 | "2024-07-02T15:47:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"base_model:filipealmeida/Mistral-7B-Instruct-v0.1-sharded",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T15:44:07Z" | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: filipealmeida/Mistral-7B-Instruct-v0.1-sharded
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "Bonnie422/mistral-7b-mj-finetune"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
NikolayKozloff/Carrot-Ko-2.1B-Instruct-Q8_0-GGUF | NikolayKozloff | "2024-07-02T15:45:27Z" | 0 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"ko",
"dataset:CarrotAI/ko-instruction-dataset",
"base_model:CarrotAI/Carrot-Ko-2.1B-Instruct",
"license:mit",
"region:us"
] | text-generation | "2024-07-02T15:45:15Z" | ---
base_model: CarrotAI/Carrot-Ko-2.1B-Instruct
datasets:
- CarrotAI/ko-instruction-dataset
language:
- ko
license: mit
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Carrot-Ko-2.1B-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`CarrotAI/Carrot-Ko-2.1B-Instruct`](https://huggingface.co/CarrotAI/Carrot-Ko-2.1B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/CarrotAI/Carrot-Ko-2.1B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Carrot-Ko-2.1B-Instruct-Q8_0-GGUF --hf-file carrot-ko-2.1b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Carrot-Ko-2.1B-Instruct-Q8_0-GGUF --hf-file carrot-ko-2.1b-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Carrot-Ko-2.1B-Instruct-Q8_0-GGUF --hf-file carrot-ko-2.1b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Carrot-Ko-2.1B-Instruct-Q8_0-GGUF --hf-file carrot-ko-2.1b-instruct-q8_0.gguf -c 2048
```
|
debiao29/Qwen-Qwen1.5-0.5B-1719935143 | debiao29 | "2024-07-02T15:45:48Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | "2024-07-02T15:45:43Z" | ---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
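The card leaves this section empty; a minimal loading sketch, assuming this repository hosts a PEFT adapter for the base model named in the metadata:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen1.5-0.5B"                        # base model from the metadata above
adapter_id = "debiao29/Qwen-Qwen1.5-0.5B-1719935143"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)
```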
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
alixnaveed/WADU_v1 | alixnaveed | "2024-07-02T15:48:37Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:514",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:intfloat/multilingual-e5-base",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | "2024-07-02T15:46:34Z" | ---
base_model: intfloat/multilingual-e5-base
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:514
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: ' لوگوں نے ان سے پوچھا اور میں اس وقت سہل کے اتنا قریب تھا کہ میرے
اور ان کے درمیان کوئی دوسرا حائل نہ تھا کہ رسول اللہ کے احد کے زخم کا علاج کس
دوا سے کیا گیا تھا انہوں نے کہا کہ اس بات کا جاننے والا اب مجھ سے زیادہ کوئی نہیں
رہا علی رضی اللہ عنہ اپنی ڈھال میں پانی لاتے اور حضرت فاطمہ رضی اللہ عنہا آپ کے
منہ سے خون دھوتیں پھر ایک بوریا کا ٹکڑا جلایا گیا اور آپ کے زخم میں بھر دیا گیا '
sentences:
- کیا غسل کے دوران وضو کے ارکان کو ترجیح دینی چاہیے
- رسول اللہ ﷺ کے احد کے زخم کا علاج کس طرح کیا گیا تھا
- جب سپاہی نے غسل کی ضرورت کے بارے میں پوچھا تو رسول اللہ ﷺ کا رد عمل کیا تھا
- source_sentence: ' رسول کریم نے حجۃ الوداع میں جب سر کے بال منڈوائے تو سب سے پہلے
ابوطلحہ رضی اللہ عنہ نے آپ کے بال لیے تھے '
sentences:
- حجۃ الوداع میں رسول کریم ﷺ کے بال کون سب سے پہلے لیے تھے
- تیمم میں کن اعضا کو مسح کرنا کافی ہے
- طبیعی حالات میں مسجد کے بجائے دوسرے مقامات پر نماز پڑھنے کا کیا حکم ہے
- source_sentence: 'رسول اللہ نے فرمایا كہ جب تم میں سے كوئی وضو كرے تو اسے چاہیے
كہ اپنی ناک میں پانی دے پھر اسے صاف كرے اور جو شخص پتھروں سے استنجاء كرے اسے
چاہیے كہ بے جوڑ عدد یعنی ایک یا تین سے استنجاء كرے اور جب تم میں سے كوئی سو
كر اٹھے تو وضو كے پانی میں ہاتھ ڈالنے سے پہلے اسے دھو لے كیونكہ تم میں سے كوئی
نہیں جانتا كہ رات كو اس كا ہاتھ كہاں رہا ہے '
sentences:
- نبی اکرم ﷺ غسل فتح مکہ کے دن کیوں فرما رہے تھے
- استنجاء کے لیے پتھر استعمال کرتے وقت کس تعداد میں استعمال کرنے کا حکم ہے
- رسول اللہ ﷺ کی بیویاں رات میں کہاں قضاء حاجت کے لیے جاتی تھیں
- source_sentence: ' گویا کہ میں آنحضرت کی مانگ میں خوشبو کی چمک دیکھ رہی ہوں اس حال
میں کہ آپ احرام باندھے ہوئے ہیں '
sentences:
- کیا جنابت کی حالت میں وضو کر کے سو سکتے ہیں
- کیا احرام باندھتے وقت خوشبو لگانے کی ممانعت ہے
- کیا کپڑوں میں دھبے ہونے کے باوجود نماز ادا کرنا جائز ہے
- source_sentence: ' نبی کریم جب جنابت کی حالت میں ہوتے اور سونے کا ارادہ کرتے تو
شرمگاہ کو دھو لیتے اور نماز کی طرح وضو کرتے '
sentences:
- کیا احرام کی حالت میں خوشبو لگانا جائز ہے
- کیا رفع حاجت کے بعد صفائی کے لیے پتھروں کا استعمال جائز ہے
- کیا جنابت کی حالت میں شرمگاہ کو دھونا ضروری ہے
---
# SentenceTransformer based on intfloat/multilingual-e5-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) <!-- at revision d13f1b27baf31030b7fd040960d60d909913633f -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("alixnaveed/WADU_v1")
# Run inference
sentences = [
' نبی کریم جب جنابت کی حالت میں ہوتے اور سونے کا ارادہ کرتے تو شرمگاہ کو دھو لیتے اور نماز کی طرح وضو کرتے ',
'کیا جنابت کی حالت میں شرمگاہ کو دھونا ضروری ہے',
'کیا رفع حاجت کے بعد صفائی کے لیے پتھروں کا استعمال جائز ہے',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 514 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 94.86 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 16.88 tokens</li><li>max: 31 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
| <code> ایک دن میں اپنے گھر کی چھت پر چڑھا تو مجھے رسول اللہ دو اینٹوں پر قضاء حاجت کے وقت بیت المقدس کی طرف منہ کیے ہوئے نظر آئے </code> | <code>کیا قضاء حاجت کے وقت بیت المقدس کی طرف منہ کرنا جائز ہے</code> |
| <code> رسول کریم میری مزاج پرسی کے لیے تشریف لائے میں بیمار تھا ایسا کہ مجھے ہوش تک نہیں تھا آپ نے وضو کیا اور اپنے وضو کا پانی مجھ پر چھڑکا تو مجھے ہوش آ گیا میں نے عرض کیا یا رسول اللہ میرا وارث کون ہو گا میرا تو صرف ایک کلالہ وارث ہے اس پر آیت میراث نازل ہوئی </code> | <code>رسول اللہ ﷺ نے بیمار صحابی پر کیا چھڑکا</code> |
| <code> نبی کریم اور آپ کی کوئی زوجہ مطہرہ ایک برتن میں غسل کرتے تھے اس حدیث میں مسلم بن ابراہیم اور وہب بن جریر کی روایت میں شعبہ سے من الجنابة کا لفظ زیادہ ہے یعنی یہ جنابت کا غسل ہوتا تھا </code> | <code>کیا نبی کریم ﷺ اور آپ کی زوجہ مطہرہ ایک برتن میں جنابت کا غسل کر سکتے تھے</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
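For reference, constructing this loss with the parameters above in sentence-transformers (a sketch; the full training script is not included in the card):
```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("intfloat/multilingual-e5-base")
# scale=20.0 and cosine similarity match the parameters above (cos_sim is the default).
loss = losses.MultipleNegativesRankingLoss(model=model, scale=20.0)
```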
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 6
- `per_device_eval_batch_size`: 6
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 6
- `per_device_eval_batch_size`: 6
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 5.8140 | 500 | 0.0563 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.42.3
- PyTorch: 2.3.0+cu121
- Accelerate: 0.31.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
mutisya/whisper-large-v3-luo-drL-24_5-v24_23_3 | mutisya | "2024-07-03T01:02:00Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | "2024-07-02T15:46:45Z" | Entry not found |
ANDER0312/test | ANDER0312 | "2024-07-02T15:48:11Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T15:48:11Z" | Entry not found |
NikolayKozloff/Llama-3-Instruct-Neurona-8b-v2-Q5_0-GGUF | NikolayKozloff | "2024-07-02T15:49:24Z" | 0 | 1 | transformers | [
"transformers",
"gguf",
"synthetic",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"es",
"en",
"dataset:Danielbrdz/Barcenas-Economia",
"dataset:HiTZ/casimedicos-exp",
"dataset:somosnlp/coser_resumenes",
"dataset:csebuetnlp/CrossSum",
"dataset:Iker/Document-Translation-en-es",
"dataset:somosnlp/es-inclusive-language-it",
"dataset:glaiveai/glaive-code-assistant-v3",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:Iker/InstructTranslation-EN-ES",
"dataset:somosnlp/lenguaje-claro-dataset",
"dataset:somosnlp/LingComp_QA",
"dataset:Iker/NoticIA",
"dataset:teknium/OpenHermes-2.5",
"dataset:Iker/OpenHermes-2.5-Spanish",
"dataset:Helsinki-NLP/opus-100",
"dataset:projecte-aina/RAG_Multilingual",
"dataset:HiTZ/This-is-not-a-dataset",
"dataset:Iker/Reddit-Post-Translation",
"dataset:wikipedia",
"base_model:Iker/Llama-3-Instruct-Neurona-8b-v2",
"license:llama3",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T15:49:01Z" | ---
base_model: Iker/Llama-3-Instruct-Neurona-8b-v2
datasets:
- Danielbrdz/Barcenas-Economia
- HiTZ/casimedicos-exp
- somosnlp/coser_resumenes
- csebuetnlp/CrossSum
- Iker/Document-Translation-en-es
- somosnlp/es-inclusive-language-it
- glaiveai/glaive-code-assistant-v3
- glaiveai/glaive-function-calling-v2
- Iker/InstructTranslation-EN-ES
- somosnlp/lenguaje-claro-dataset
- somosnlp/LingComp_QA
- Iker/NoticIA
- teknium/OpenHermes-2.5
- Iker/OpenHermes-2.5-Spanish
- Helsinki-NLP/opus-100
- projecte-aina/RAG_Multilingual
- HiTZ/This-is-not-a-dataset
- Iker/Reddit-Post-Translation
- wikipedia
language:
- es
- en
library_name: transformers
license: llama3
pipeline_tag: text-generation
tags:
- synthetic
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Llama-3-Instruct-Neurona-8b-v2-Q5_0-GGUF
This model was converted to GGUF format from [`Iker/Llama-3-Instruct-Neurona-8b-v2`](https://huggingface.co/Iker/Llama-3-Instruct-Neurona-8b-v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Iker/Llama-3-Instruct-Neurona-8b-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Llama-3-Instruct-Neurona-8b-v2-Q5_0-GGUF --hf-file llama-3-instruct-neurona-8b-v2-q5_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Llama-3-Instruct-Neurona-8b-v2-Q5_0-GGUF --hf-file llama-3-instruct-neurona-8b-v2-q5_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Llama-3-Instruct-Neurona-8b-v2-Q5_0-GGUF --hf-file llama-3-instruct-neurona-8b-v2-q5_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Llama-3-Instruct-Neurona-8b-v2-Q5_0-GGUF --hf-file llama-3-instruct-neurona-8b-v2-q5_0.gguf -c 2048
```
|
AmberYifan/sft-safe-spin-v | AmberYifan | "2024-07-02T19:56:37Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:AmberYifan/zephyr-7b-sft-safe",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T15:49:46Z" | ---
license: apache-2.0
base_model: AmberYifan/zephyr-7b-sft-safe
tags:
- generated_from_trainer
model-index:
- name: sft-safe-spin-v
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-safe-spin-v
This model is a fine-tuned version of [AmberYifan/zephyr-7b-sft-safe](https://huggingface.co/AmberYifan/zephyr-7b-sft-safe) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1338
- Rewards/real: -3.4589
- Rewards/generated: -13.0205
- Rewards/accuracies: 0.9522
- Rewards/margins: 9.5616
- Logps/generated: -186.5033
- Logps/real: -173.6707
- Logits/generated: -3.1233
- Logits/real: -3.5104
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/real | Rewards/generated | Rewards/accuracies | Rewards/margins | Logps/generated | Logps/real | Logits/generated | Logits/real |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:-----------------:|:------------------:|:---------------:|:---------------:|:----------:|:----------------:|:-----------:|
| 0.4603 | 0.06 | 100 | 0.2849 | 0.3170 | -4.8525 | 0.9697 | 5.1696 | -104.8239 | -135.9118 | -4.1358 | -4.3181 |
| 0.2425 | 0.12 | 200 | 0.2396 | -1.7168 | -15.7686 | 0.8997 | 14.0518 | -213.9848 | -156.2504 | -3.7662 | -3.9307 |
| 0.1942 | 0.17 | 300 | 0.1536 | -2.2725 | -23.7294 | 0.9395 | 21.4568 | -293.5923 | -161.8076 | -3.6024 | -3.7832 |
| 0.1872 | 0.23 | 400 | 0.1591 | -1.9811 | -26.8335 | 0.9443 | 24.8523 | -324.6334 | -158.8936 | -3.7387 | -3.9242 |
| 0.5386 | 0.29 | 500 | 0.2124 | -2.1951 | -12.1230 | 0.9260 | 9.9280 | -177.5292 | -161.0329 | -3.5142 | -3.8361 |
| 0.1318 | 0.35 | 600 | 0.1397 | -2.3554 | -19.3040 | 0.9570 | 16.9486 | -249.3387 | -162.6357 | -3.3916 | -3.8319 |
| 0.1311 | 0.41 | 700 | 0.1589 | -2.6398 | -20.7908 | 0.9363 | 18.1510 | -264.2064 | -165.4799 | -3.2845 | -3.7517 |
| 0.121 | 0.47 | 800 | 0.1282 | -2.8500 | -22.0061 | 0.9546 | 19.1561 | -276.3602 | -167.5825 | -3.1277 | -3.6228 |
| 0.1115 | 0.52 | 900 | 0.1392 | -3.3374 | -22.9391 | 0.9419 | 19.6017 | -285.6900 | -172.4560 | -3.0962 | -3.6427 |
| 2.648 | 0.58 | 1000 | 0.1711 | -3.2299 | -10.5445 | 0.9411 | 7.3146 | -161.7435 | -171.3813 | -3.4082 | -3.6724 |
| 0.1078 | 0.64 | 1100 | 0.1340 | -2.8961 | -11.3713 | 0.9498 | 8.4752 | -170.0120 | -168.0435 | -3.3383 | -3.6687 |
| 0.0751 | 0.7 | 1200 | 0.1293 | -2.8024 | -11.6866 | 0.9522 | 8.8842 | -173.1649 | -167.1059 | -3.2816 | -3.6419 |
| 0.0927 | 0.76 | 1300 | 0.1276 | -3.0019 | -12.3015 | 0.9514 | 9.2996 | -179.3133 | -169.1012 | -3.2296 | -3.6103 |
| 0.0963 | 0.81 | 1400 | 0.1256 | -2.9332 | -12.3140 | 0.9546 | 9.3809 | -179.4392 | -168.4139 | -3.2433 | -3.6265 |
| 0.1122 | 0.87 | 1500 | 0.1280 | -3.3660 | -12.7449 | 0.9546 | 9.3789 | -183.7474 | -172.7419 | -3.1923 | -3.5761 |
| 0.092 | 0.93 | 1600 | 0.1407 | -3.5290 | -13.0250 | 0.9459 | 9.4960 | -186.5488 | -174.3717 | -3.1057 | -3.4905 |
| 0.0876 | 0.99 | 1700 | 0.1338 | -3.4589 | -13.0205 | 0.9522 | 9.5616 | -186.5033 | -173.6707 | -3.1233 | -3.5104 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
Blackmoonbear/DRL-HuggingFace-Unit1 | Blackmoonbear | "2024-07-02T15:54:37Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-07-02T15:54:20Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.44 +/- 15.76
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the course convention and is an assumption — check the repo's files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub, then load it with SB3.
# The filename is assumed; verify it against the files in this repo.
checkpoint = load_from_hub("Blackmoonbear/DRL-HuggingFace-Unit1", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Jayantez/q-FrozenLake-v1-4x4-noSlippery | Jayantez | "2024-07-02T15:54:26Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-07-02T15:54:23Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="Jayantez/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
model = pickle.load(open(path, "rb"))  # pickled dict; the "env_id" key is used below
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"], is_slippery=False)
```
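A hedged rollout sketch that acts greedily with the loaded Q-table (the `"qtable"` key name is an assumption):
```python
import numpy as np

state, _ = env.reset(seed=42)
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action for the current state
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```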
|
NikolayKozloff/Llama-3-Instruct-Neurona-8b-v2-IQ4_NL-GGUF | NikolayKozloff | "2024-07-02T15:57:31Z" | 0 | 1 | transformers | [
"transformers",
"gguf",
"synthetic",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"es",
"en",
"dataset:Danielbrdz/Barcenas-Economia",
"dataset:HiTZ/casimedicos-exp",
"dataset:somosnlp/coser_resumenes",
"dataset:csebuetnlp/CrossSum",
"dataset:Iker/Document-Translation-en-es",
"dataset:somosnlp/es-inclusive-language-it",
"dataset:glaiveai/glaive-code-assistant-v3",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:Iker/InstructTranslation-EN-ES",
"dataset:somosnlp/lenguaje-claro-dataset",
"dataset:somosnlp/LingComp_QA",
"dataset:Iker/NoticIA",
"dataset:teknium/OpenHermes-2.5",
"dataset:Iker/OpenHermes-2.5-Spanish",
"dataset:Helsinki-NLP/opus-100",
"dataset:projecte-aina/RAG_Multilingual",
"dataset:HiTZ/This-is-not-a-dataset",
"dataset:Iker/Reddit-Post-Translation",
"dataset:wikipedia",
"base_model:Iker/Llama-3-Instruct-Neurona-8b-v2",
"license:llama3",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T15:57:09Z" | ---
base_model: Iker/Llama-3-Instruct-Neurona-8b-v2
datasets:
- Danielbrdz/Barcenas-Economia
- HiTZ/casimedicos-exp
- somosnlp/coser_resumenes
- csebuetnlp/CrossSum
- Iker/Document-Translation-en-es
- somosnlp/es-inclusive-language-it
- glaiveai/glaive-code-assistant-v3
- glaiveai/glaive-function-calling-v2
- Iker/InstructTranslation-EN-ES
- somosnlp/lenguaje-claro-dataset
- somosnlp/LingComp_QA
- Iker/NoticIA
- teknium/OpenHermes-2.5
- Iker/OpenHermes-2.5-Spanish
- Helsinki-NLP/opus-100
- projecte-aina/RAG_Multilingual
- HiTZ/This-is-not-a-dataset
- Iker/Reddit-Post-Translation
- wikipedia
language:
- es
- en
library_name: transformers
license: llama3
pipeline_tag: text-generation
tags:
- synthetic
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Llama-3-Instruct-Neurona-8b-v2-IQ4_NL-GGUF
This model was converted to GGUF format from [`Iker/Llama-3-Instruct-Neurona-8b-v2`](https://huggingface.co/Iker/Llama-3-Instruct-Neurona-8b-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Iker/Llama-3-Instruct-Neurona-8b-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Llama-3-Instruct-Neurona-8b-v2-IQ4_NL-GGUF --hf-file llama-3-instruct-neurona-8b-v2-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Llama-3-Instruct-Neurona-8b-v2-IQ4_NL-GGUF --hf-file llama-3-instruct-neurona-8b-v2-iq4_nl-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Llama-3-Instruct-Neurona-8b-v2-IQ4_NL-GGUF --hf-file llama-3-instruct-neurona-8b-v2-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Llama-3-Instruct-Neurona-8b-v2-IQ4_NL-GGUF --hf-file llama-3-instruct-neurona-8b-v2-iq4_nl-imat.gguf -c 2048
```
|
debiao29/Qwen-Qwen1.5-1.8B-1719935843 | debiao29 | "2024-07-02T15:57:28Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-1.8B",
"region:us"
] | null | "2024-07-02T15:57:23Z" | ---
base_model: Qwen/Qwen1.5-1.8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
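Absent official instructions, a minimal loading sketch (adapter usage is inferred from the PEFT library tag and the `base_model` metadata; device settings are assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-1.8B", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-1.8B")
model = PeftModel.from_pretrained(base, "debiao29/Qwen-Qwen1.5-1.8B-1719935843")
```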
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
tanoManzo/DNABERT-2-117M_ft_Hepg2_1kbpHG19_DHSs_H3K27AC_10xControl | tanoManzo | "2024-07-02T23:05:06Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-02T15:58:58Z" | Entry not found |
Lam-Hung/controlnet_lora_interior | Lam-Hung | "2024-07-02T15:59:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T15:59:42Z" | Entry not found |
kayynaik/fine-tuned-medical-model | kayynaik | "2024-07-02T15:59:45Z" | 0 | 0 | null | [
"license:llama2",
"region:us"
] | null | "2024-07-02T15:59:45Z" | ---
license: llama2
---
|
mradermacher/llama3-psychiatrist-v1.3B-fp16-GGUF | mradermacher | "2024-07-02T18:01:49Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"base_model:wiweka24/llama3-psychiatrist-v1.3B-fp16",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T15:59:59Z" | ---
base_model: wiweka24/llama3-psychiatrist-v1.3B-fp16
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/wiweka24/llama3-psychiatrist-v1.3B-fp16
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama3-psychiatrist-v1.3B-fp16-GGUF/resolve/main/llama3-psychiatrist-v1.3B-fp16.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-psychiatrist-v1.3B-fp16-GGUF/resolve/main/llama3-psychiatrist-v1.3B-fp16.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-psychiatrist-v1.3B-fp16-GGUF/resolve/main/llama3-psychiatrist-v1.3B-fp16.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-psychiatrist-v1.3B-fp16-GGUF/resolve/main/llama3-psychiatrist-v1.3B-fp16.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama3-psychiatrist-v1.3B-fp16-GGUF/resolve/main/llama3-psychiatrist-v1.3B-fp16.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-psychiatrist-v1.3B-fp16-GGUF/resolve/main/llama3-psychiatrist-v1.3B-fp16.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama3-psychiatrist-v1.3B-fp16-GGUF/resolve/main/llama3-psychiatrist-v1.3B-fp16.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-psychiatrist-v1.3B-fp16-GGUF/resolve/main/llama3-psychiatrist-v1.3B-fp16.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-psychiatrist-v1.3B-fp16-GGUF/resolve/main/llama3-psychiatrist-v1.3B-fp16.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama3-psychiatrist-v1.3B-fp16-GGUF/resolve/main/llama3-psychiatrist-v1.3B-fp16.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama3-psychiatrist-v1.3B-fp16-GGUF/resolve/main/llama3-psychiatrist-v1.3B-fp16.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-psychiatrist-v1.3B-fp16-GGUF/resolve/main/llama3-psychiatrist-v1.3B-fp16.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-psychiatrist-v1.3B-fp16-GGUF/resolve/main/llama3-psychiatrist-v1.3B-fp16.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama3-psychiatrist-v1.3B-fp16-GGUF/resolve/main/llama3-psychiatrist-v1.3B-fp16.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llama3-psychiatrist-v1.3B-fp16-GGUF/resolve/main/llama3-psychiatrist-v1.3B-fp16.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
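To fetch one of the quants above programmatically, a short sketch (the filename is copied from the Q4_K_M row of the table):
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/llama3-psychiatrist-v1.3B-fp16-GGUF",
    filename="llama3-psychiatrist-v1.3B-fp16.Q4_K_M.gguf",
)
print(path)  # local path to the GGUF file, ready to pass to llama.cpp
```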
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
nalf3in/Gemma-2-9B-It-SPPO-Iter3-Q4_K_M-GGUF | nalf3in | "2024-07-02T16:01:02Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:openbmb/UltraFeedback",
"base_model:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3",
"license:gemma",
"region:us"
] | text-generation | "2024-07-02T16:00:34Z" | ---
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
datasets:
- openbmb/UltraFeedback
language:
- en
license: gemma
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# nalf3in/Gemma-2-9B-It-SPPO-Iter3-Q4_K_M-GGUF
This model was converted to GGUF format from [`UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3`](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo nalf3in/Gemma-2-9B-It-SPPO-Iter3-Q4_K_M-GGUF --hf-file gemma-2-9b-it-sppo-iter3-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo nalf3in/Gemma-2-9B-It-SPPO-Iter3-Q4_K_M-GGUF --hf-file gemma-2-9b-it-sppo-iter3-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo nalf3in/Gemma-2-9B-It-SPPO-Iter3-Q4_K_M-GGUF --hf-file gemma-2-9b-it-sppo-iter3-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo nalf3in/Gemma-2-9B-It-SPPO-Iter3-Q4_K_M-GGUF --hf-file gemma-2-9b-it-sppo-iter3-q4_k_m.gguf -c 2048
```
|
ZarahShibli/tmp_trainer | ZarahShibli | "2024-07-02T17:45:28Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T16:01:15Z" | ---
tags:
- generated_from_trainer
model-index:
- name: tmp_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp_trainer
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
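For reference, the hyperparameters above map onto `TrainingArguments` roughly as follows (a sketch; `output_dir` is a placeholder, not taken from the card):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="tmp_trainer",  # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```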
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
NikolayKozloff/Qwen2-1.5B-ITA-Instruct-Q8_0-GGUF | NikolayKozloff | "2024-07-02T16:01:32Z" | 0 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"it",
"en",
"dataset:gsarti/clean_mc4_it",
"dataset:FreedomIntelligence/alpaca-gpt4-italian",
"base_model:e-palmisano/Qwen2-1.5B-ITA-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2024-07-02T16:01:22Z" | ---
base_model: e-palmisano/Qwen2-1.5B-ITA-Instruct
datasets:
- gsarti/clean_mc4_it
- FreedomIntelligence/alpaca-gpt4-italian
language:
- it
- en
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Qwen2-1.5B-ITA-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`e-palmisano/Qwen2-1.5B-ITA-Instruct`](https://huggingface.co/e-palmisano/Qwen2-1.5B-ITA-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/e-palmisano/Qwen2-1.5B-ITA-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Qwen2-1.5B-ITA-Instruct-Q8_0-GGUF --hf-file qwen2-1.5b-ita-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Qwen2-1.5B-ITA-Instruct-Q8_0-GGUF --hf-file qwen2-1.5b-ita-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Qwen2-1.5B-ITA-Instruct-Q8_0-GGUF --hf-file qwen2-1.5b-ita-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Qwen2-1.5B-ITA-Instruct-Q8_0-GGUF --hf-file qwen2-1.5b-ita-instruct-q8_0.gguf -c 2048
```
|
GeorgeImmanuel/a2c_PickAndPlaceRobot-v2 | GeorgeImmanuel | "2024-07-02T16:06:40Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-07-02T16:01:50Z" | ---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -50.00 +/- 0.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of an **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# NOTE: the filename below is an assumption, not taken from this card.
checkpoint = load_from_hub(repo_id="GeorgeImmanuel/a2c_PickAndPlaceRobot-v2", filename="a2c-PandaPickAndPlace-v3.zip")
model = A2C.load(checkpoint)
```
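To interact with the environment, a hedged sketch (assumes the `panda-gym` package is installed; importing it registers the Panda environments):
```python
import gymnasium as gym
import panda_gym  # noqa: F401, registers PandaPickAndPlace-v3

env = gym.make("PandaPickAndPlace-v3")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
obs, reward, terminated, truncated, info = env.step(action)
```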
|
juanpablomesa/bge-small-bioasq | juanpablomesa | "2024-07-02T16:02:09Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:4012",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-small-en-v1.5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | "2024-07-02T16:02:05Z" | ---
base_model: BAAI/bge-small-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4012
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 'Extensive messenger RNA editing generates transcript and protein
diversity in genes involved in neural excitability, as previously described, as
well as in genes participating in a broad range of other cellular functions. '
sentences:
- Do cephalopods use RNA editing less frequently than other species?
- GV1001 vaccine targets which enzyme?
- Which event results in the acetylation of S6K1?
- source_sentence: Yes, exposure to household furry pets influences the gut microbiota
of infants.
sentences:
- Can pets affect infant microbiomed?
- What is the mode of action of Thiazovivin?
- What are the effects of CAMK4 inhibition?
- source_sentence: "In children with heart failure evidence of the effect of enalapril\
\ is empirical. Enalapril was clinically safe and effective in 50% to 80% of for\
\ children with cardiac failure secondary to congenital heart malformations before\
\ and after cardiac surgery, impaired ventricular function , valvar regurgitation,\
\ congestive cardiomyopathy, , arterial hypertension, life-threatening arrhythmias\
\ coexisting with circulatory insufficiency. \nACE inhibitors have shown a transient\
\ beneficial effect on heart failure due to anticancer drugs and possibly a beneficial\
\ effect in muscular dystrophy-associated cardiomyopathy, which deserves further\
\ studies."
sentences:
- Which receptors can be evaluated with the [18F]altanserin?
- In what proportion of children with heart failure has Enalapril been shown to
be safe and effective?
- Which major signaling pathways are regulated by RIP1?
- source_sentence: Cellular senescence-associated heterochromatic foci (SAHFS) are
a novel type of chromatin condensation involving alterations of linker histone
H1 and linker DNA-binding proteins. SAHFS can be formed by a variety of cell types,
but their mechanism of action remains unclear.
sentences:
- What is the relationship between the X chromosome and a neutrophil drumstick?
- Which microRNAs are involved in exercise adaptation?
- How are SAHFS created?
- source_sentence: Multicluster Pcdh diversity is required for mouse olfactory neural
circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins
are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although
deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss
of all three clusters (tricluster deletion) led to a severe axonal arborization
defect and loss of self-avoidance.
sentences:
- What are the effects of the deletion of all three Pcdh clusters (tricluster deletion)
in mice?
- what is the role of MEF-2 in cardiomyocyte differentiation?
- How many periods of regulatory innovation led to the evolution of vertebrates?
---
# BGE small finetuned BIOASQ
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) <!-- at revision 5c38ec7c405ec4b44b94cc5a9bb96e735b38267a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("juanpablomesa/bge-small-bioasq")
# Run inference
sentences = [
'Multicluster Pcdh diversity is required for mouse olfactory neural circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss of all three clusters (tricluster deletion) led to a severe axonal arborization defect and loss of self-avoidance.',
'What are the effects of the deletion of all three Pcdh clusters (tricluster deletion) in mice?',
'How many periods of regulatory innovation led to the evolution of vertebrates?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 4,012 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 63.38 tokens</li><li>max: 485 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 16.13 tokens</li><li>max: 49 tokens</li></ul> |
* Samples:
| positive | anchor |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
| <code>Aberrant patterns of H3K4, H3K9, and H3K27 histone lysine methylation were shown to result in histone code alterations, which induce changes in gene expression, and affect the proliferation rate of cells in medulloblastoma.</code> | <code>What is the implication of histone lysine methylation in medulloblastoma?</code> |
| <code>STAG1/STAG2 proteins are tumour suppressor proteins that suppress cell proliferation and are essential for differentiation.</code> | <code>What is the role of STAG1/STAG2 proteins in differentiation?</code> |
| <code>The association between cell phone use and incident glioblastoma remains unclear. Some studies have reported that cell phone use was associated with incident glioblastoma, and with reduced survival of patients diagnosed with glioblastoma. However, other studies have repeatedly replicated to find an association between cell phone use and glioblastoma.</code> | <code>What is the association between cell phone use and glioblastoma?</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
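As a reference point, a minimal fine-tuning sketch with this loss (a toy one-pair dataset stands in for the 4,012 training pairs; column names follow the table above):
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-small-en-v1.5")
train_dataset = Dataset.from_dict({
    "anchor": ["Can pets affect infant microbiome?"],
    "positive": ["Exposure to household furry pets influences the gut microbiota of infants."],
})
loss = MultipleNegativesRankingLoss(model)
trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```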
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Framework Versions
- Python: 3.11.5
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.31.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
styalai/tokenizer-XTmath-8_000 | styalai | "2024-07-02T16:03:13Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T16:03:13Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
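The repo name suggests a standalone tokenizer; a hedged loading sketch (assumes the files are compatible with `AutoTokenizer`):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("styalai/tokenizer-XTmath-8_000")
print(tokenizer.tokenize("2 + 2 = 4"))
```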
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Shani123/nllb-200-distilled-600M_heb_eng_v3_sci_articles | Shani123 | "2024-07-02T16:06:22Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-07-02T16:03:50Z" | Entry not found |
DeepDream2045/Daredevil-7B-Quant | DeepDream2045 | "2024-07-02T17:03:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-07-02T16:04:32Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
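Absent official instructions, a hedged loading sketch (the repo tags indicate a 4-bit bitsandbytes Mistral checkpoint; generation settings are assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("DeepDream2045/Daredevil-7B-Quant", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("DeepDream2045/Daredevil-7B-Quant")
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```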
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gisang-lee/mistral-7b-qlora-arc-wandb-test-arc-challenge-all | gisang-lee | "2024-07-02T16:15:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-07-02T16:04:43Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
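Absent official instructions, a hedged loading sketch (the trl/sft and 4-bit tags suggest a quantized causal-LM fine-tune; device settings are assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "gisang-lee/mistral-7b-qlora-arc-wandb-test-arc-challenge-all"
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)
```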
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
stephen-osullivan/my_awesome_wnut_model | stephen-osullivan | "2024-07-02T16:14:37Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-07-02T16:05:20Z" | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_model
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.5680628272251309
- name: Recall
type: recall
value: 0.40222428174235403
- name: F1
type: f1
value: 0.4709712425393381
- name: Accuracy
type: accuracy
value: 0.9480141934932239
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2966
- Precision: 0.5681
- Recall: 0.4022
- F1: 0.4710
- Accuracy: 0.9480
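For a quick smoke test, a hedged inference sketch (the `aggregation_strategy` choice is an assumption):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="stephen-osullivan/my_awesome_wnut_model",
    aggregation_strategy="simple",
)
print(ner("The Golden State Warriors are an American professional basketball team based in San Francisco."))
```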
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 107 | 0.2496 | 0.5131 | 0.3624 | 0.4248 | 0.9450 |
| No log | 2.0 | 214 | 0.2794 | 0.5829 | 0.3485 | 0.4362 | 0.9456 |
| No log | 3.0 | 321 | 0.2808 | 0.5755 | 0.3781 | 0.4564 | 0.9465 |
| No log | 4.0 | 428 | 0.2935 | 0.5569 | 0.3902 | 0.4589 | 0.9476 |
| 0.059 | 5.0 | 535 | 0.2966 | 0.5681 | 0.4022 | 0.4710 | 0.9480 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.1+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
crocutacrocuto/convnext-base-224-ECCV_backEqSCheck-15 | crocutacrocuto | "2024-07-02T16:06:09Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"convnext",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-07-02T16:05:23Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
brandonyfeng/echo_SVD | brandonyfeng | "2024-07-02T16:40:24Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"diffusers:StableVideoDiffusionPipeline",
"region:us"
] | null | "2024-07-02T16:05:23Z" | Entry not found |
not1010011010/model-AIvaras | not1010011010 | "2024-07-02T16:07:14Z" | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T16:07:13Z" | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** not1010011010
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
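A minimal inference sketch with Unsloth, assuming a CUDA GPU (the sequence length, prompt, and generation settings are illustrative):
```python
from unsloth import FastLanguageModel

# repo id taken from this record
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="not1010011010/model-AIvaras",
    max_seq_length=2048,
    load_in_4bit=True,  # matches the 4-bit base model
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast generation path

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```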
|
CassioBN/xlm-roberta-base_LeNER-Br | CassioBN | "2024-07-02T17:14:43Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:lener_br",
"base_model:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-07-02T16:07:50Z" | ---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- lener_br
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlm-roberta-base_LeNER-Br
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: lener_br
type: lener_br
config: lener_br
split: validation
args: lener_br
metrics:
- name: Precision
type: precision
value: 0.8295165394402035
- name: Recall
type: recall
value: 0.8965896589658966
- name: F1
type: f1
value: 0.8617499339148824
- name: Accuracy
type: accuracy
value: 0.9714166181062949
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base_LeNER-Br
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the lener_br dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Precision: 0.8295
- Recall: 0.8966
- F1: 0.8617
- Accuracy: 0.9714
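As a usage sketch (the repo id comes from this record; the Portuguese sentence is illustrative, and the labels follow the lener_br tagging scheme):
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "CassioBN/xlm-roberta-base_LeNER-Br"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

text = "Trata-se de agravo de instrumento interposto contra decisão do Tribunal de Justiça."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
labels = [model.config.id2label[i.item()] for i in logits.argmax(dim=-1)[0]]
print(list(zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), labels)))
```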
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2394 | 1.0 | 979 | nan | 0.7134 | 0.8614 | 0.7805 | 0.9638 |
| 0.0375 | 2.0 | 1958 | nan | 0.8035 | 0.9043 | 0.8509 | 0.9670 |
| 0.0256 | 3.0 | 2937 | nan | 0.8026 | 0.8878 | 0.8430 | 0.9761 |
| 0.0194 | 4.0 | 3916 | nan | 0.7836 | 0.8861 | 0.8317 | 0.9670 |
| 0.015 | 5.0 | 4895 | nan | 0.8061 | 0.8988 | 0.8499 | 0.9691 |
| 0.0098 | 6.0 | 5874 | nan | 0.8279 | 0.9076 | 0.8659 | 0.9715 |
| 0.0082 | 7.0 | 6853 | nan | 0.8067 | 0.8905 | 0.8465 | 0.9681 |
| 0.0042 | 8.0 | 7832 | nan | 0.8233 | 0.9021 | 0.8609 | 0.9737 |
| 0.0037 | 9.0 | 8811 | nan | 0.8281 | 0.9010 | 0.8630 | 0.9712 |
| 0.0031 | 10.0 | 9790 | nan | 0.8295 | 0.8966 | 0.8617 | 0.9714 |
### Testing Results
- Loss: 0.0746
- Precision: 0.8852
- Recall: 0.9138
- F1: 0.8993
- Accuracy: 0.9883
- Runtime: 13.08s (106.297 samples/s, 13.306 steps/s)
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
debiao29/google-gemma-2b-1719936475 | debiao29 | "2024-07-02T16:08:02Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"region:us"
] | null | "2024-07-02T16:07:55Z" | ---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
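In lieu of card-specific instructions, a minimal sketch for loading the adapter with PEFT (the base and adapter repo ids come from the card metadata; the prompt and generation settings are illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
model = PeftModel.from_pretrained(base, "debiao29/google-gemma-2b-1719936475")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

inputs = tokenizer("Hello, world", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```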
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
NikolayKozloff/dictalm2-it-qa-fine-tune-Q8_0-GGUF | NikolayKozloff | "2024-07-02T16:09:38Z" | 0 | 1 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"he",
"base_model:618AI/dictalm2-it-qa-fine-tune",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T16:09:01Z" | ---
base_model: 618AI/dictalm2-it-qa-fine-tune
language:
- he
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/dictalm2-it-qa-fine-tune-Q8_0-GGUF
This model was converted to GGUF format from [`618AI/dictalm2-it-qa-fine-tune`](https://huggingface.co/618AI/dictalm2-it-qa-fine-tune) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/618AI/dictalm2-it-qa-fine-tune) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/dictalm2-it-qa-fine-tune-Q8_0-GGUF --hf-file dictalm2-it-qa-fine-tune-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/dictalm2-it-qa-fine-tune-Q8_0-GGUF --hf-file dictalm2-it-qa-fine-tune-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/dictalm2-it-qa-fine-tune-Q8_0-GGUF --hf-file dictalm2-it-qa-fine-tune-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/dictalm2-it-qa-fine-tune-Q8_0-GGUF --hf-file dictalm2-it-qa-fine-tune-q8_0.gguf -c 2048
```
|
anhphuong/STT_medium | anhphuong | "2024-07-03T01:08:39Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-07-02T16:09:51Z" | ---
language:
- pa
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Medium Japanese - Anh Phuong
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: ja
split: None
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 68.60652436568667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Japanese - Anh Phuong
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4725
- Wer: 68.6065
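A minimal transcription sketch with the `transformers` pipeline (the repo id comes from this record; the audio file name and generation settings are illustrative):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="anhphuong/STT_medium",
    generate_kwargs={"language": "japanese", "task": "transcribe"},
)
print(asr("sample.wav")["text"])  # sample.wav is a placeholder audio file
```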
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.0776 | 2.6596 | 1000 | 0.3001 | 72.3520 |
| 0.0076 | 5.3191 | 2000 | 0.3476 | 71.0632 |
| 0.0013 | 7.9787 | 3000 | 0.4063 | 68.8079 |
| 0.0001 | 10.6383 | 4000 | 0.4725 | 68.6065 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
debiao29/Qwen-Qwen1.5-0.5B-1719936629 | debiao29 | "2024-07-02T16:10:33Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | "2024-07-02T16:10:29Z" | ---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
Icchan/Aikatsu | Icchan | "2024-07-02T16:21:11Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-07-02T16:12:12Z" | ---
license: openrail
---
|
1231czx/7b_dpo_iter3_4e7_step50_nll | 1231czx | "2024-07-02T16:15:37Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T16:12:20Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
stablediffusionapi/mymix-g-jem | stablediffusionapi | "2024-07-02T16:15:04Z" | 0 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-07-02T16:12:37Z" | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# MyMIX-G Jem API Inference
![generated from modelslab.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/20952276681719936639.png)
## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below, and change **model_id** to "mymix-g-jem".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://docs.modelslab.com)
Try the model for free: [Generate Images](https://modelslab.com/models/mymix-g-jem)
Model link: [View model](https://modelslab.com/models/mymix-g-jem)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "mymix-g-jem",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
srirama/whisper-small-hi | srirama | "2024-07-02T17:33:18Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-07-02T16:12:38Z" | Entry not found |
lfnothing/opt-125m-gptq | lfnothing | "2024-07-02T16:13:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-07-02T16:13:22Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
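In lieu of card-specific instructions, a minimal sketch for loading this 4-bit GPTQ checkpoint, assuming a GPTQ backend such as `auto-gptq`/`optimum` is installed (the prompt and generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lfnothing/opt-125m-gptq"  # repo id from this record
tokenizer = AutoTokenizer.from_pretrained(model_id)
# transformers picks up the GPTQ quantization config stored with the checkpoint
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```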
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KUD-genai/TAIDE_healthedu_v6 | KUD-genai | "2024-07-02T16:21:04Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:taide/Llama3-TAIDE-LX-8B-Chat-Alpha1",
"region:us"
] | null | "2024-07-02T16:15:34Z" | ---
base_model: taide/Llama3-TAIDE-LX-8B-Chat-Alpha1
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
efeno/RAFT_biomedical_100_PEFT | efeno | "2024-07-02T16:17:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T16:15:38Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf | RichardErkhov | "2024-07-02T23:43:16Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T16:15:55Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Narumashi-RT-11B - GGUF
- Model creator: https://huggingface.co/Alsebay/
- Original model: https://huggingface.co/Alsebay/Narumashi-RT-11B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Narumashi-RT-11B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q2_K.gguf) | Q2_K | 3.73GB |
| [Narumashi-RT-11B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.IQ3_XS.gguf) | IQ3_XS | 4.14GB |
| [Narumashi-RT-11B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.IQ3_S.gguf) | IQ3_S | 4.37GB |
| [Narumashi-RT-11B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q3_K_S.gguf) | Q3_K_S | 4.34GB |
| [Narumashi-RT-11B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.IQ3_M.gguf) | IQ3_M | 4.51GB |
| [Narumashi-RT-11B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q3_K.gguf) | Q3_K | 4.84GB |
| [Narumashi-RT-11B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q3_K_M.gguf) | Q3_K_M | 4.84GB |
| [Narumashi-RT-11B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q3_K_L.gguf) | Q3_K_L | 5.26GB |
| [Narumashi-RT-11B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.IQ4_XS.gguf) | IQ4_XS | 5.43GB |
| [Narumashi-RT-11B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q4_0.gguf) | Q4_0 | 5.66GB |
| [Narumashi-RT-11B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.IQ4_NL.gguf) | IQ4_NL | 5.72GB |
| [Narumashi-RT-11B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q4_K_S.gguf) | Q4_K_S | 5.7GB |
| [Narumashi-RT-11B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q4_K.gguf) | Q4_K | 6.02GB |
| [Narumashi-RT-11B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q4_K_M.gguf) | Q4_K_M | 6.02GB |
| [Narumashi-RT-11B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q4_1.gguf) | Q4_1 | 6.27GB |
| [Narumashi-RT-11B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q5_0.gguf) | Q5_0 | 6.89GB |
| [Narumashi-RT-11B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q5_K_S.gguf) | Q5_K_S | 6.89GB |
| [Narumashi-RT-11B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q5_K.gguf) | Q5_K | 7.08GB |
| [Narumashi-RT-11B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q5_K_M.gguf) | Q5_K_M | 7.08GB |
| [Narumashi-RT-11B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q5_1.gguf) | Q5_1 | 7.51GB |
| [Narumashi-RT-11B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q6_K.gguf) | Q6_K | 8.2GB |
| [Narumashi-RT-11B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q8_0.gguf) | Q8_0 | 10.62GB |
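Any of these files can be run directly with llama.cpp; for example, using the Q4_K_M quant (the prompt is illustrative):
```
llama-cli --hf-repo RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf --hf-file Narumashi-RT-11B.Q4_K_M.gguf -p "Once upon a time"
```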
Original model description:
---
language:
- en
license: cc-by-nc-4.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- Roleplay
- roleplay
base_model: Sao10K/Fimbulvetr-11B-v2
---
> [!Important]
> Still experimental
# About this model
This model can now handle (limited) TSF content. If your character card has a complex plot, you may want to try another model (maybe one with more parameters?).
**Update: I think it is worse than the original model, Sao10K/Fimbulvetr-11B-v2. This model was trained on a roughly translated dataset, so the responses are short, the logic gets worse, and it will sometimes produce wrong names or nonsensical sentences...**
Anyway, if you find it good, please let me know. There will be another update later.
Do you know TSF, TS, TG? A lot of models don't really know about these themes, so I ran some experiments fine-tuning on a TSF dataset.
- **Fine-tuned on a roughly translated dataset to increase accuracy on the TSF theme, which is not very popular. (lewd dataset)**
- **Finetuned from model:** Sao10K/Fimbulvetr-11B-v2. Thanks a lot to Sao10K :)
## Still testing, but it seems good enough at handling information. The logic drops a bit because of the roughly translated dataset.
## GGUF version? [here is it](https://huggingface.co/Alsebay/Narumashi-RT-11B-GGUF).
## Dataset
A roughly translated dataset; you could say it is a low-quality dataset.
```
Dataset(all are novels):
30% skinsuit
30% possession
35% transform(shapeshift)
5% other
```
# Thanks to Unsloth for a good fine-tuning tool. This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Jiban4/lora_model | Jiban4 | "2024-07-02T16:53:01Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T16:16:28Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Jiban4
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
debiao29/Qwen-Qwen1.5-1.8B-1719937023 | debiao29 | "2024-07-02T16:17:07Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-1.8B",
"region:us"
] | null | "2024-07-02T16:17:03Z" | ---
base_model: Qwen/Qwen1.5-1.8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
LevonHakobyan/adapter_base_const_lr_overfitcheck | LevonHakobyan | "2024-07-03T01:29:33Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-07-02T16:17:14Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
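The card leaves this section empty; since the repository is tagged `wav2vec2-bert` / `automatic-speech-recognition`, a minimal sketch is given below (it assumes the checkpoint bundles a processor, and the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="LevonHakobyan/adapter_base_const_lr_overfitcheck",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder audio file
```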
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NightBitch24-1/modeloBitch | NightBitch24-1 | "2024-07-02T16:46:15Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T16:17:27Z" | Entry not found |
debiao29/google-gemma-2b-1719937221 | debiao29 | "2024-07-02T16:20:28Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"region:us"
] | null | "2024-07-02T16:20:21Z" | ---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
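The card leaves this section empty; a minimal sketch follows, assuming the adapter targets causal language modeling on the `google/gemma-2b` base (the card does not state the task):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the adapter weights from this repository
base_model = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = PeftModel.from_pretrained(base_model, "debiao29/google-gemma-2b-1719937221")

inputs = tokenizer("The capital of France is", return_tensors="pt")  # illustrative prompt
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```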
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
vgangal101/distilbert-base-uncased-finetuned-imdb-accelerate | vgangal101 | "2024-07-02T16:21:08Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T16:21:08Z" | Entry not found |
styalai/XTmath-unknowM | styalai | "2024-07-02T16:31:36Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T16:21:14Z" | ---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration (a usage sketch follows the links below):
- Library: [More Information Needed]
- Docs: [More Information Needed] |
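The card does not document the architecture, so the class in the sketch below is purely illustrative: with `PyTorchModelHubMixin`, `from_pretrained` only works when the Python class matches the one used at push time.
```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin


class XTModel(nn.Module, PyTorchModelHubMixin):
    """Hypothetical stand-in; the real architecture is required for the weights to load."""

    def __init__(self, hidden_size: int = 128):
        super().__init__()
        self.layer = nn.Linear(hidden_size, hidden_size)

    def forward(self, x):
        return self.layer(x)


# Downloads the config and weights, then reinstantiates the class
model = XTModel.from_pretrained("styalai/XTmath-unknowM")
```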
Trelis/multi-qa-MiniLM-L6-dot-v1-ft-triplets-2-cst-epoch-overlap | Trelis | "2024-07-02T16:21:38Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:9729",
"loss:TripletLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:sentence-transformers/multi-qa-MiniLM-L6-dot-v1",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | "2024-07-02T16:21:33Z" | ---
base_model: sentence-transformers/multi-qa-MiniLM-L6-dot-v1
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:9729
- loss:TripletLoss
widget:
- source_sentence: What is the penalty awarded to the attacking team when a defending
player fails to retire the required seven metres or to the defending try line
after effecting a touch?
sentences:
- '13. 3 a player must not perform a voluntary rollball. ruling = a penalty to the
defending team at the point of the infringement. 13. 4 a player must not delay
in performing the rollball. ruling = a penalty to the defending team at the point
of the infringement. 13. 5 a player may only perform a rollball at the mark under
the following circumstances : 13. 5. 1 when a touch has been made ; or 13. 5.
2 when possession changes following the sixth touch ; or 13. 5. 3 when possession
changes due to the ball being dropped or passed and goes to the ground ; or 13.
5. 4 when possession changes due to an infringement by an attacking player at
a penalty, a tap or a rollball ; or fit playing rules - 5th edition copyright
© touch football australia 2020 11 13. 5. 5 when possession changes after the
half is touched or when the half places the ball on or over the try line ; or
13. 5. 6 in replacement of a penalty tap ; or 13. 5. 7 when so directed by the
referee.'
- 5th edition rules touch football tion rules touch football touch football australia
( tfa ) undertook an extensive internal review of their domestic playing rules
throughout 2018 and 2019. the review was led by an vastly experienced group of
current and past players, coaches, referees and administrators of the sport from
community competitions to the elite international game. this group consulted broadly
within the australian community to develop a set of playing rules that could be
applied across all levels of the sport. the result was the tfa 8th edition playing
rules. at the federation of international touch paris convention held in october
2019 touch football australia presented the tfa 8th edition playing rules and
subsequently offered fit and all national touch associations ( ntas ) royalty
free rights to use the newly developed rules. consequently, the fit board resolved
to adopt the tfa 8th edition playing rules as the 5th edition fit playing rules
to be used across all levels of the game internationally. fit and its members
acknowledge and thank touch football australia for the rights to use these rules.
whilst consistency in the application of the rules of the game is important, fit
encourages its members to offer features in local competition rules to ensure
that all participants enjoy a high quality experience.
- ruling = a penalty to the attacking team at the point of the infringement or on
the seven ( 7 ) metre line directly forward of the infringement. 15. 4 when a
rollball occurs within defending team ’ s seven metre zone or a penalty tap within
ten ( 10 ) metres of the defending team ’ s try line, all players from the defending
team must have both feet on or behind their try line and no other part of the
body in contact with the ground forward of their try line. ruling = a penalty
to the attacking team at the seven ( 7 ) metre line directly forward of the point
of the infringement. 15. 5 after effecting the touch, the defending player must
retire the required seven ( 7 ) metres or to the defending try line as indicated
by the referee without interfering with the attacking team. ruling = a penalty
to the attacking team ten ( 10 ) metres forward of the infringement or if on the
defensive try line, on the seven ( 7 ) metre line. fit playing rules - 5th edition
copyright © touch football australia 2020 13 16 obstruction 16. 1 a player in
possession must not run or otherwise move behind other attacking players or the
referee in an attempt to avoid an imminent touch.
- source_sentence: What is the definition of 'infringement' in Touch Rugby?
sentences:
- 2. 2 teams playing unregistered players are liable to forfeit any match in which
unregistered players have competed. fit playing rules - 5th edition copyright
© touch football australia 2020 5 3 the ball 3. 1 the game is played with an oval,
inflated ball of a shape, colour and size approved by fit or the nta. 3. 2 the
ball shall be inflated to the manufacturers ’ recommended air pressure. 3. 3 the
referee shall immediately pause the match if the size and shape of the ball no
longer complies with clauses 3. 1 or 3. 2 to allow for the ball to replaced or
the issue rectified. 3. 4 the ball must not be hidden under player attire. 4 playing
uniform 4. 1 participating players are to be correctly attired in matching team
uniforms 4. 2 playing uniforms consist of shirt, singlet or other item as approved
by the nta or nta competition provider, shorts and / or tights and socks. 4. 3
all players are to wear a unique identifying number not less than 16cm in height,
clearly displayed on the rear of the playing top. 4. 3. 1 identifying numbers
must feature no more than two ( 2 ) digits.
- end of play when the referee indicates completion of the match. exclusion when
a player is sent to the nearest sin bin area following three ( 3 ) penalties by
the defending team upon entering their seven metre zone. the player is counted
as a player on the field of play and cannot be replaced or interchanged. fit playing
rules - 5th edition copyright © touch football australia 2020 1 fit federation
of international touch field of play the playing area bounded by the sidelines
and dead ball lines, both of which are out of bounds. see appendix 1. forced interchange
when a player is required to undertake a compulsory interchange for an infringement
ruled more serious than a penalty but less serious than a permanent interchange,
sin bin or dismissal. forward a position or direction towards the dead ball line
beyond the team ’ s attacking try line. full time the expiration of the second
period of time allowed for play. half the player who takes possession following
a rollball. half time the break in play between the two halves of a match. imminent
about to occur, it is almost certain to occur. infringement the action of a player
contrary to the rules of the game.
- 2. 2 teams playing unregistered players are liable to forfeit any match in which
unregistered players have competed. fit playing rules - 5th edition copyright
© touch football australia 2020 5 3 the ball 3. 1 the game is played with an oval,
inflated ball of a shape, colour and size approved by fit or the nta. 3. 2 the
ball shall be inflated to the manufacturers ’ recommended air pressure. 3. 3 the
referee shall immediately pause the match if the size and shape of the ball no
longer complies with clauses 3. 1 or 3. 2 to allow for the ball to replaced or
the issue rectified. 3. 4 the ball must not be hidden under player attire. 4 playing
uniform 4. 1 participating players are to be correctly attired in matching team
uniforms 4. 2 playing uniforms consist of shirt, singlet or other item as approved
by the nta or nta competition provider, shorts and / or tights and socks. 4. 3
all players are to wear a unique identifying number not less than 16cm in height,
clearly displayed on the rear of the playing top. 4. 3. 1 identifying numbers
must feature no more than two ( 2 ) digits.
- source_sentence: What is the penalty awarded to the non-offending team when an offence
is identified?
sentences:
- ruling = a penalty to the attacking team at a point ten ( 10 ) metres directly
forward of the infringement. 13. 12 players of the defending team must not move
forward of the onside position until the half has made contact with the ball,
unless directed to do so by the referee or in accordance with 13. 12. 1. 13. 12.
1 when the half is not within one ( 1 ) metre of the rollball, onside players
of the defending team may move forward as soon as the player performing the rollball
releases the ball. if the half is not in position and a defending player moves
forward and makes contact with the ball, a change of possession results. 13. 13
if in the act of performing the rollball, the attacking player makes contact with
the sideline or any ground outside the field of play a change of possession will
occur with the rollball to be taken seven ( 7 ) metres in field. 13. 14 after
a touch is made between the dead ball line and the seven ( 7 ) metre line, an
attacking team is permitted to rollball on the seven ( 7 ) metre line at a point
directly in line with where the touch was made.
- 10. 4 if the ball is accidentally knocked from the hands of a player in possession
during a touch, the touch counts and the attacking team retains possession. 10.
5 the defending player must not deliberately knock the ball from the hands of
a player in possession during a touch. ruling = a penalty to the attacking team
at the point of the infringement. 10. 6 a player must not pass or otherwise deliver
the ball after a touch has been made. ruling = a penalty to the defending team
at the point of the infringement, or if in - goal the nearest point on the seven
( 7 ) metre line. 10. 7 the half may pass or run with the ball but cannot get
touched while in possession of the ball. ruling = a change of possession occurs
at the point of the touch, or if in - goal the nearest point on the seven ( 7
) metre line. 10. 8 if a touch is made in the in - goal area before the ball is
grounded, the player in possession is to perform a rollball seven ( 7 ) metres
from the team ’ s attacking try line, provided it is not the sixth touch and the
player is not half.
- 4. 10 referees and players may wear sport monitoring equipment and medical supports
such as knee or ankle braces provided, at the sole discretion of competition ’
s controlling body, the items are not dangerous. 5 team composition 5. 1 a team
consists of a maximum of 14 players, no more than six ( 6 ) of whom are allowed
on the field at any time. fit playing rules - 5th edition 6 copyright © touch
football australia 2020 ruling = a penalty awarded to the non - offending team
at the time the offence is identified seven ( 7 ) metres infield on the halfway
line or the position of the ball, whichever is the greater advantage. 5. 2 a team
must have a minimum of four ( 4 ) players on the field for a match to commence
or continue, except during a drop - off. 5. 3 where the number of players on the
field from one team falls below four ( 4 ), the match is to be abandoned and the
non - offending team is to be declared the winner. 5. 3. 1 this does not apply
for players sent to the sin bin area.
- source_sentence: What is the requirement for adapting or altering rules for local
competitions?
sentences:
- 'whilst consistency in the application of the rules of the game is important,
fit encourages its members to offer features in local competition rules to ensure
that all participants enjoy a high quality experience. these rules in no way restrict
any nta or their authorised competition providers from having different match
conditions to these rules. any adaptation of or alterations to the rules for local
competitions should be clearly articulated in relevant competition guidelines
and be readily available for players, coaches and referees alike preamble copyright
© touch football australia 2020 all rights reserved. these touch football rules
are protected by copyright laws. except as permitted under the copyright act,
these rules must not be reproduced by any process, electronic or otherwise, without
the written permission of touch football australia. fit playing rules - 5th edition
copyright © touch football australia 2020 appendix 1 – field of play contents
01 i the field of play 5 02 i player registration 5 03 i the ball 6 04 i playing
uniform 6 05 i team composition 6 06 i team coach and team officials 7 07 i commencement
and recommencement of play 7 08 i match duration 8 09 i possession 8 10 i the
touch 9 11 i passing 10 12 i ball touched in flight 10 13 i the rollball 11 14
i scoring 13 15 i offside 13 16 i obstruction 14 17 i interchange 14 18 i penalty
15 19 i advantage 16 20 i misconduct 16 21 i forced interchange 16 22 i sin bin
16 23 i dismissal 17 24 i drop - off 17 25 i match officials 18 fit playing rules
- 5th edition copyright © touch football australia 2020 fit playing rules - 5th
edition copyright © touch football australia 2020 definitions and terminology
unless the contrary intention appears, the following definitions and terminology
apply to the game of touch : term / phrase definition / description advantage
the period of time after an infringement in which the non - offending side has
the opportunity to gain advantage either territorial, tactical or in the form
of a try.'
- 5th edition rules touch football tion rules touch football touch football australia
( tfa ) undertook an extensive internal review of their domestic playing rules
throughout 2018 and 2019. the review was led by an vastly experienced group of
current and past players, coaches, referees and administrators of the sport from
community competitions to the elite international game. this group consulted broadly
within the australian community to develop a set of playing rules that could be
applied across all levels of the sport. the result was the tfa 8th edition playing
rules. at the federation of international touch paris convention held in october
2019 touch football australia presented the tfa 8th edition playing rules and
subsequently offered fit and all national touch associations ( ntas ) royalty
free rights to use the newly developed rules. consequently, the fit board resolved
to adopt the tfa 8th edition playing rules as the 5th edition fit playing rules
to be used across all levels of the game internationally. fit and its members
acknowledge and thank touch football australia for the rights to use these rules.
whilst consistency in the application of the rules of the game is important, fit
encourages its members to offer features in local competition rules to ensure
that all participants enjoy a high quality experience.
- 5. 3. 1 this does not apply for players sent to the sin bin area. 5. 4 in mixed
gender competitions, the maximum number of males allowed on the field of play
is three ( 3 ), the minimum male requirement is one ( 1 ) and the minimum female
requirement is one ( 1 ). 6 team coach and team officials 6. 1 the team coach
( s ) and team officials may be permitted inside the perimeter but shall be required
to be positioned either in the interchange area or at the end of the field of
play for the duration of the match. 6. 2 the team coach ( s ) and team officials
may move from one position to the other but shall do so without delay. while in
a position at the end of the field of play, the team coach ( s ) or team official
must remain no closer than five ( 5 ) metres from the dead ball line and must
not coach or communicate ( verbal or non - verbal ) with either team or the referees.
- source_sentence: What is the minimum number of males and females required on the
field of play in mixed gender competitions?
sentences:
- ruling = a penalty to the attacking team at the seven ( 7 ) metre line in line
with the point of the infringement. 11 passing 11. 1 a player in possession may
not kick, pass, flick, knock, throw, hand - off or otherwise propel the ball in
a forward direction, either intentionally or otherwise to another player. ruling
= a penalty will be awarded to the defending team at the mark where the ball was
propelled forward unless advantage is applied. 11. 2 a player in possession may
not intentionally kick, pass, flick, knock, throw, hand - off or otherwise propel
the ball in a forward direction over an opposition player and regain possession.
ruling = a penalty will be awarded to the defending team at the mark where the
ball was propelled forward. 12 ball touched in flight 12. 1 if a player from the
defending team deliberately makes contact with the ball in flight and the ball
goes to ground, the attacking team retains the ball and the touch count restarts
as zero ( 0 ) touch. 12. 2 if a player from the defending team deliberately makes
contact with the ball in flight and the ball is retrieved by an attacking player,
without touching the ground, play continues and the next touch is zero ( 0 ) touch.
- 5. 3. 1 this does not apply for players sent to the sin bin area. 5. 4 in mixed
gender competitions, the maximum number of males allowed on the field of play
is three ( 3 ), the minimum male requirement is one ( 1 ) and the minimum female
requirement is one ( 1 ). 6 team coach and team officials 6. 1 the team coach
( s ) and team officials may be permitted inside the perimeter but shall be required
to be positioned either in the interchange area or at the end of the field of
play for the duration of the match. 6. 2 the team coach ( s ) and team officials
may move from one position to the other but shall do so without delay. while in
a position at the end of the field of play, the team coach ( s ) or team official
must remain no closer than five ( 5 ) metres from the dead ball line and must
not coach or communicate ( verbal or non - verbal ) with either team or the referees.
- tap and tap penalty the method of commencing the match, recommencing the match
after half time and after a try has been scored. the tap is also the method of
recommencing play when a penalty is awarded. the tap is taken by placing the ball
on the ground at or behind the mark, releasing both hands from the ball, tapping
the ball gently with either foot or touching the foot on the ball. the ball must
not roll or move more than one ( 1 ) metre in any direction and must be retrieved
cleanly, without touching the ground again. the player may face any direction
and use either foot. provided it is at the mark, the ball does not have to be
lifted from the ground prior to a tap being taken. team a group of players constituting
one ( 1 ) side in a competition match. tfa touch football australia limited touch
any contact between the player in possession and a defending player. a touch includes
contact on the ball, hair or clothing and may be made by a defending player or
by the player in possession. touch count the progressive number of touches that
each team has before a change of possession, from zero ( 0 ) to six ( 6 ).
---
# SentenceTransformer based on sentence-transformers/multi-qa-MiniLM-L6-dot-v1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/multi-qa-MiniLM-L6-dot-v1](https://huggingface.co/sentence-transformers/multi-qa-MiniLM-L6-dot-v1). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/multi-qa-MiniLM-L6-dot-v1](https://huggingface.co/sentence-transformers/multi-qa-MiniLM-L6-dot-v1) <!-- at revision c3bdeb02464bc83f9b85156a3386a50bfbf3e6a8 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Dot Product
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Trelis/multi-qa-MiniLM-L6-dot-v1-ft-triplets-2-cst-epoch-overlap")
# Run inference
sentences = [
'What is the minimum number of males and females required on the field of play in mixed gender competitions?',
'5. 3. 1 this does not apply for players sent to the sin bin area. 5. 4 in mixed gender competitions, the maximum number of males allowed on the field of play is three ( 3 ), the minimum male requirement is one ( 1 ) and the minimum female requirement is one ( 1 ). 6 team coach and team officials 6. 1 the team coach ( s ) and team officials may be permitted inside the perimeter but shall be required to be positioned either in the interchange area or at the end of the field of play for the duration of the match. 6. 2 the team coach ( s ) and team officials may move from one position to the other but shall do so without delay. while in a position at the end of the field of play, the team coach ( s ) or team official must remain no closer than five ( 5 ) metres from the dead ball line and must not coach or communicate ( verbal or non - verbal ) with either team or the referees.',
'tap and tap penalty the method of commencing the match, recommencing the match after half time and after a try has been scored. the tap is also the method of recommencing play when a penalty is awarded. the tap is taken by placing the ball on the ground at or behind the mark, releasing both hands from the ball, tapping the ball gently with either foot or touching the foot on the ball. the ball must not roll or move more than one ( 1 ) metre in any direction and must be retrieved cleanly, without touching the ground again. the player may face any direction and use either foot. provided it is at the mark, the ball does not have to be lifted from the ground prior to a tap being taken. team a group of players constituting one ( 1 ) side in a competition match. tfa touch football australia limited touch any contact between the player in possession and a defending player. a touch includes contact on the ball, hair or clothing and may be made by a defending player or by the player in possession. touch count the progressive number of touches that each team has before a change of possession, from zero ( 0 ) to six ( 6 ).',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `learning_rate`: 2e-05
- `num_train_epochs`: 2
- `lr_scheduler_type`: constant
- `warmup_ratio`: 0.3
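These settings map directly onto the sentence-transformers v3 trainer. A minimal reconstruction sketch is shown below; the triplet column names (`anchor`/`positive`/`negative`), the single example row, and the `output_dir` are illustrative assumptions, not the actual training script:
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import TripletLoss

model = SentenceTransformer("sentence-transformers/multi-qa-MiniLM-L6-dot-v1")

# Illustrative triplet row: (question, relevant passage, distractor passage)
train_dataset = Dataset.from_dict({
    "anchor": ["What is the penalty for a voluntary rollball?"],
    "positive": ["13. 3 a player must not perform a voluntary rollball. ..."],
    "negative": ["tap and tap penalty the method of commencing the match. ..."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="multi-qa-MiniLM-L6-dot-v1-ft-triplets",  # placeholder path
    num_train_epochs=2,
    per_device_train_batch_size=32,
    learning_rate=2e-5,
    lr_scheduler_type="constant",
    warmup_ratio=0.3,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=TripletLoss(model),
)
trainer.train()
```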
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: constant
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.3
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | loss |
|:------:|:----:|:-------------:|:------:|
| 0.0066 | 2 | 4.2577 | - |
| 0.0131 | 4 | 4.0287 | - |
| 0.0197 | 6 | 4.1022 | - |
| 0.0262 | 8 | 3.8676 | - |
| 0.0328 | 10 | 3.836 | - |
| 0.0393 | 12 | 3.5016 | - |
| 0.0459 | 14 | 3.5338 | - |
| 0.0525 | 16 | 3.2488 | - |
| 0.0590 | 18 | 3.1999 | - |
| 0.0656 | 20 | 3.1266 | - |
| 0.0721 | 22 | 2.7272 | - |
| 0.0787 | 24 | 2.9005 | - |
| 0.0852 | 26 | 2.4328 | - |
| 0.0918 | 28 | 2.0929 | - |
| 0.0984 | 30 | 1.9004 | - |
| 0.1049 | 32 | 2.0227 | - |
| 0.1115 | 34 | 1.8577 | - |
| 0.1180 | 36 | 1.6767 | - |
| 0.1246 | 38 | 1.6381 | - |
| 0.1311 | 40 | 1.391 | - |
| 0.1377 | 42 | 1.6477 | - |
| 0.1443 | 44 | 1.7922 | - |
| 0.1508 | 46 | 1.3904 | - |
| 0.1574 | 48 | 1.7869 | - |
| 0.1639 | 50 | 1.6701 | - |
| 0.1705 | 52 | 1.1823 | - |
| 0.1770 | 54 | 0.9928 | - |
| 0.1836 | 56 | 1.4254 | - |
| 0.1902 | 58 | 1.4535 | - |
| 0.1967 | 60 | 1.3876 | - |
| 0.2033 | 62 | 1.411 | - |
| 0.2098 | 64 | 0.9778 | - |
| 0.2164 | 66 | 0.9914 | - |
| 0.2230 | 68 | 0.9946 | - |
| 0.2295 | 70 | 0.9828 | - |
| 0.2361 | 72 | 0.7432 | - |
| 0.2426 | 74 | 1.315 | - |
| 0.2492 | 76 | 0.6955 | - |
| 0.2525 | 77 | - | 0.5755 |
| 0.2557 | 78 | 0.7919 | - |
| 0.2623 | 80 | 1.001 | - |
| 0.2689 | 82 | 1.086 | - |
| 0.2754 | 84 | 0.8213 | - |
| 0.2820 | 86 | 0.9834 | - |
| 0.2885 | 88 | 1.0083 | - |
| 0.2951 | 90 | 0.6879 | - |
| 0.3016 | 92 | 0.672 | - |
| 0.3082 | 94 | 0.663 | - |
| 0.3148 | 96 | 1.0285 | - |
| 0.3213 | 98 | 1.0634 | - |
| 0.3279 | 100 | 0.9316 | - |
| 0.3344 | 102 | 0.88 | - |
| 0.3410 | 104 | 0.9057 | - |
| 0.3475 | 106 | 0.7303 | - |
| 0.3541 | 108 | 0.8927 | - |
| 0.3607 | 110 | 0.6541 | - |
| 0.3672 | 112 | 0.6616 | - |
| 0.3738 | 114 | 0.9198 | - |
| 0.3803 | 116 | 0.4953 | - |
| 0.3869 | 118 | 0.7159 | - |
| 0.3934 | 120 | 0.6596 | - |
| 0.4 | 122 | 0.8359 | - |
| 0.4066 | 124 | 0.7748 | - |
| 0.4131 | 126 | 0.5949 | - |
| 0.4197 | 128 | 0.4204 | - |
| 0.4262 | 130 | 1.0151 | - |
| 0.4328 | 132 | 0.4979 | - |
| 0.4393 | 134 | 0.6496 | - |
| 0.4459 | 136 | 0.6039 | - |
| 0.4525 | 138 | 0.6333 | - |
| 0.4590 | 140 | 0.5494 | - |
| 0.4656 | 142 | 0.5599 | - |
| 0.4721 | 144 | 0.353 | - |
| 0.4787 | 146 | 0.6529 | - |
| 0.4852 | 148 | 0.5215 | - |
| 0.4918 | 150 | 0.6408 | - |
| 0.4984 | 152 | 0.6084 | - |
| 0.5049 | 154 | 0.8947 | 0.2713 |
| 0.5115 | 156 | 0.515 | - |
| 0.5180 | 158 | 0.4075 | - |
| 0.5246 | 160 | 0.7453 | - |
| 0.5311 | 162 | 0.4385 | - |
| 0.5377 | 164 | 0.5747 | - |
| 0.5443 | 166 | 0.725 | - |
| 0.5508 | 168 | 0.6338 | - |
| 0.5574 | 170 | 0.2453 | - |
| 0.5639 | 172 | 0.4578 | - |
| 0.5705 | 174 | 0.3541 | - |
| 0.5770 | 176 | 0.5326 | - |
| 0.5836 | 178 | 0.3699 | - |
| 0.5902 | 180 | 0.1714 | - |
| 0.5967 | 182 | 0.3149 | - |
| 0.6033 | 184 | 0.561 | - |
| 0.6098 | 186 | 0.2852 | - |
| 0.6164 | 188 | 0.2715 | - |
| 0.6230 | 190 | 0.5521 | - |
| 0.6295 | 192 | 0.2852 | - |
| 0.6361 | 194 | 0.5103 | - |
| 0.6426 | 196 | 0.3866 | - |
| 0.6492 | 198 | 0.4369 | - |
| 0.6557 | 200 | 0.3936 | - |
| 0.6623 | 202 | 0.5084 | - |
| 0.6689 | 204 | 0.4912 | - |
| 0.6754 | 206 | 0.2659 | - |
| 0.6820 | 208 | 0.6209 | - |
| 0.6885 | 210 | 0.3022 | - |
| 0.6951 | 212 | 0.2738 | - |
| 0.7016 | 214 | 0.5555 | - |
| 0.7082 | 216 | 0.3672 | - |
| 0.7148 | 218 | 0.3489 | - |
| 0.7213 | 220 | 0.6139 | - |
| 0.7279 | 222 | 0.4402 | - |
| 0.7344 | 224 | 0.2829 | - |
| 0.7410 | 226 | 0.3669 | - |
| 0.7475 | 228 | 0.729 | - |
| 0.7541 | 230 | 0.2565 | - |
| 0.7574 | 231 | - | 0.1849 |
| 0.7607 | 232 | 0.2596 | - |
| 0.7672 | 234 | 0.2359 | - |
| 0.7738 | 236 | 0.4406 | - |
| 0.7803 | 238 | 0.2629 | - |
| 0.7869 | 240 | 0.3583 | - |
| 0.7934 | 242 | 0.5298 | - |
| 0.8 | 244 | 0.6225 | - |
| 0.8066 | 246 | 0.3853 | - |
| 0.8131 | 248 | 0.4741 | - |
| 0.8197 | 250 | 0.3991 | - |
| 0.8262 | 252 | 0.5629 | - |
| 0.8328 | 254 | 0.2935 | - |
| 0.8393 | 256 | 0.3563 | - |
| 0.8459 | 258 | 0.3628 | - |
| 0.8525 | 260 | 0.2416 | - |
| 0.8590 | 262 | 0.1493 | - |
| 0.8656 | 264 | 0.2488 | - |
| 0.8721 | 266 | 0.4055 | - |
| 0.8787 | 268 | 0.1286 | - |
| 0.8852 | 270 | 0.4217 | - |
| 0.8918 | 272 | 0.3529 | - |
| 0.8984 | 274 | 0.1921 | - |
| 0.9049 | 276 | 0.1736 | - |
| 0.9115 | 278 | 0.4308 | - |
| 0.9180 | 280 | 0.0992 | - |
| 0.9246 | 282 | 0.3927 | - |
| 0.9311 | 284 | 0.3451 | - |
| 0.9377 | 286 | 0.4504 | - |
| 0.9443 | 288 | 0.3065 | - |
| 0.9508 | 290 | 0.2844 | - |
| 0.9574 | 292 | 0.4308 | - |
| 0.9639 | 294 | 0.1754 | - |
| 0.9705 | 296 | 0.2608 | - |
| 0.9770 | 298 | 0.4232 | - |
| 0.9836 | 300 | 0.3234 | - |
| 0.9902 | 302 | 0.24 | - |
| 0.9967 | 304 | 0.2112 | - |
| 1.0033 | 306 | 0.6322 | - |
| 1.0098 | 308 | 0.2987 | 0.1357 |
| 1.0164 | 310 | 0.4052 | - |
| 1.0230 | 312 | 0.1458 | - |
| 1.0295 | 314 | 0.2593 | - |
| 1.0361 | 316 | 0.193 | - |
| 1.0426 | 318 | 0.29 | - |
| 1.0492 | 320 | 0.299 | - |
| 1.0557 | 322 | 0.0841 | - |
| 1.0623 | 324 | 0.0534 | - |
| 1.0689 | 326 | 0.2166 | - |
| 1.0754 | 328 | 0.2431 | - |
| 1.0820 | 330 | 0.2621 | - |
| 1.0885 | 332 | 0.0986 | - |
| 1.0951 | 334 | 0.4274 | - |
| 1.1016 | 336 | 0.2388 | - |
| 1.1082 | 338 | 0.0899 | - |
| 1.1148 | 340 | 0.158 | - |
| 1.1213 | 342 | 0.1748 | - |
| 1.1279 | 344 | 0.1226 | - |
| 1.1344 | 346 | 0.1815 | - |
| 1.1410 | 348 | 0.2312 | - |
| 1.1475 | 350 | 0.4114 | - |
| 1.1541 | 352 | 0.2258 | - |
| 1.1607 | 354 | 0.1519 | - |
| 1.1672 | 356 | 0.1536 | - |
| 1.1738 | 358 | 0.103 | - |
| 1.1803 | 360 | 0.2901 | - |
| 1.1869 | 362 | 0.1629 | - |
| 1.1934 | 364 | 0.1541 | - |
| 1.2 | 366 | 0.1986 | - |
| 1.2066 | 368 | 0.2492 | - |
| 1.2131 | 370 | 0.2137 | - |
| 1.2197 | 372 | 0.1954 | - |
| 1.2262 | 374 | 0.1947 | - |
| 1.2328 | 376 | 0.2114 | - |
| 1.2393 | 378 | 0.4277 | - |
| 1.2459 | 380 | 0.3636 | - |
| 1.2525 | 382 | 0.4151 | - |
| 1.2590 | 384 | 0.2258 | - |
| 1.2623 | 385 | - | 0.1095 |
| 1.2656 | 386 | 0.2794 | - |
| 1.2721 | 388 | 0.2504 | - |
| 1.2787 | 390 | 0.3785 | - |
| 1.2852 | 392 | 0.2448 | - |
| 1.2918 | 394 | 0.3936 | - |
| 1.2984 | 396 | 0.1686 | - |
| 1.3049 | 398 | 0.2301 | - |
| 1.3115 | 400 | 0.1533 | - |
| 1.3180 | 402 | 0.2516 | - |
| 1.3246 | 404 | 0.1238 | - |
| 1.3311 | 406 | 0.1629 | - |
| 1.3377 | 408 | 0.1395 | - |
| 1.3443 | 410 | 0.1093 | - |
| 1.3508 | 412 | 0.0899 | - |
| 1.3574 | 414 | 0.1793 | - |
| 1.3639 | 416 | 0.0648 | - |
| 1.3705 | 418 | 0.2402 | - |
| 1.3770 | 420 | 0.2711 | - |
| 1.3836 | 422 | 0.1457 | - |
| 1.3902 | 424 | 0.1338 | - |
| 1.3967 | 426 | 0.3074 | - |
| 1.4033 | 428 | 0.0738 | - |
| 1.4098 | 430 | 0.1702 | - |
| 1.4164 | 432 | 0.111 | - |
| 1.4230 | 434 | 0.249 | - |
| 1.4295 | 436 | 0.1143 | - |
| 1.4361 | 438 | 0.2255 | - |
| 1.4426 | 440 | 0.3167 | - |
| 1.4492 | 442 | 0.0751 | - |
| 1.4557 | 444 | 0.1101 | - |
| 1.4623 | 446 | 0.2098 | - |
| 1.4689 | 448 | 0.2086 | - |
| 1.4754 | 450 | 0.0978 | - |
| 1.4820 | 452 | 0.3184 | - |
| 1.4885 | 454 | 0.1347 | - |
| 1.4951 | 456 | 0.2259 | - |
| 1.5016 | 458 | 0.1651 | - |
| 1.5082 | 460 | 0.2183 | - |
| 1.5148 | 462 | 0.1315 | 0.0771 |
| 1.5213 | 464 | 0.2672 | - |
| 1.5279 | 466 | 0.1783 | - |
| 1.5344 | 468 | 0.0408 | - |
| 1.5410 | 470 | 0.4634 | - |
| 1.5475 | 472 | 0.1762 | - |
| 1.5541 | 474 | 0.0553 | - |
| 1.5607 | 476 | 0.2445 | - |
| 1.5672 | 478 | 0.1988 | - |
| 1.5738 | 480 | 0.1985 | - |
| 1.5803 | 482 | 0.1484 | - |
| 1.5869 | 484 | 0.1403 | - |
| 1.5934 | 486 | 0.1993 | - |
| 1.6 | 488 | 0.1486 | - |
| 1.6066 | 490 | 0.2899 | - |
| 1.6131 | 492 | 0.2464 | - |
| 1.6197 | 494 | 0.1352 | - |
| 1.6262 | 496 | 0.1233 | - |
| 1.6328 | 498 | 0.0413 | - |
| 1.6393 | 500 | 0.091 | - |
| 1.6459 | 502 | 0.0828 | - |
| 1.6525 | 504 | 0.1488 | - |
| 1.6590 | 506 | 0.1246 | - |
| 1.6656 | 508 | 0.2795 | - |
| 1.6721 | 510 | 0.067 | - |
| 1.6787 | 512 | 0.168 | - |
| 1.6852 | 514 | 0.2215 | - |
| 1.6918 | 516 | 0.0854 | - |
| 1.6984 | 518 | 0.2192 | - |
| 1.7049 | 520 | 0.1479 | - |
| 1.7115 | 522 | 0.1924 | - |
| 1.7180 | 524 | 0.2075 | - |
| 1.7246 | 526 | 0.208 | - |
| 1.7311 | 528 | 0.1743 | - |
| 1.7377 | 530 | 0.0817 | - |
| 1.7443 | 532 | 0.1513 | - |
| 1.7508 | 534 | 0.3422 | - |
| 1.7574 | 536 | 0.1101 | - |
| 1.7639 | 538 | 0.2815 | - |
| 1.7672 | 539 | - | 0.0693 |
| 1.7705 | 540 | 0.1837 | - |
| 1.7770 | 542 | 0.0879 | - |
| 1.7836 | 544 | 0.0746 | - |
| 1.7902 | 546 | 0.2052 | - |
| 1.7967 | 548 | 0.1416 | - |
| 1.8033 | 550 | 0.1141 | - |
| 1.8098 | 552 | 0.0312 | - |
| 1.8164 | 554 | 0.139 | - |
| 1.8230 | 556 | 0.1078 | - |
| 1.8295 | 558 | 0.1302 | - |
| 1.8361 | 560 | 0.0124 | - |
| 1.8426 | 562 | 0.2641 | - |
| 1.8492 | 564 | 0.1625 | - |
| 1.8557 | 566 | 0.1907 | - |
| 1.8623 | 568 | 0.0 | - |
| 1.8689 | 570 | 0.1721 | - |
| 1.8754 | 572 | 0.1178 | - |
| 1.8820 | 574 | 0.0345 | - |
| 1.8885 | 576 | 0.0924 | - |
| 1.8951 | 578 | 0.0513 | - |
| 1.9016 | 580 | 0.0929 | - |
| 1.9082 | 582 | 0.1502 | - |
| 1.9148 | 584 | 0.0338 | - |
| 1.9213 | 586 | 0.1348 | - |
| 1.9279 | 588 | 0.0297 | - |
| 1.9344 | 590 | 0.0306 | - |
| 1.9410 | 592 | 0.1416 | - |
| 1.9475 | 594 | 0.0427 | - |
| 1.9541 | 596 | 0.1916 | - |
| 1.9607 | 598 | 0.1969 | - |
| 1.9672 | 600 | 0.0765 | - |
| 1.9738 | 602 | 0.1035 | - |
| 1.9803 | 604 | 0.261 | - |
| 1.9869 | 606 | 0.0845 | - |
| 1.9934 | 608 | 0.0566 | - |
| 2.0 | 610 | 0.704 | - |
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.42.3
- PyTorch: 2.1.1+cu121
- Accelerate: 0.31.0
- Datasets: 2.17.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
jw-hf-test/jw4 | jw-hf-test | "2024-07-02T17:52:28Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T16:21:38Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
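The card leaves this section empty; given the `llama` / `text-generation` tags, a minimal sketch (the prompt is illustrative) is:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="jw-hf-test/jw4")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```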
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ammartatox/sentientscribeGGUF | Ammartatox | "2024-07-02T16:32:59Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T16:21:59Z" | ---
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** Ammartatox
- **License:** apache-2.0
- **Finetuned from model:** NousResearch/Nous-Hermes-2-Mistral-7B-DPO
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hasininawoda/output1 | hasininawoda | "2024-07-02T16:27:09Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2024-07-02T16:22:21Z" | ---
base_model: CompVis/stable-diffusion-v1-4
library_name: diffusers
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - hasininawoda/output1
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4, fine-tuned on an unspecified dataset. You can find some example images below.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
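As a stopgap, here is a minimal sketch of running these LoRA weights with 🤗 Diffusers, assuming the standard `load_lora_weights` workflow; the prompt is illustrative, since the training data is not documented:

```python
# Minimal sketch (assumed usage): load the base model, then attach the LoRA weights.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("hasininawoda/output1")

image = pipe("a sample prompt in the fine-tuned style").images[0]  # illustrative prompt
image.save("sample.png")
```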
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
efeno/RAFT_biomedical_60_PEFT | efeno | "2024-07-02T16:23:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T16:22:33Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
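Pending an official snippet, a hedged sketch based on the repo's `llama`/`text-generation` tags; the prompt is illustrative (the repo name suggests a biomedical RAFT fine-tune, but that is an assumption):

```python
# Hedged sketch: standard transformers loading, inferred from the repo tags.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "efeno/RAFT_biomedical_60_PEFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Question: What is the role of p53 in tumor suppression?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```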
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
debiao29/Qwen-Qwen1.5-0.5B-1719937379 | debiao29 | "2024-07-02T16:23:03Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | "2024-07-02T16:22:59Z" | ---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
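Pending an official snippet, a hedged sketch for loading this PEFT adapter on its `Qwen/Qwen1.5-0.5B` base (assumed usage; the adapter's training task is not documented):

```python
# Hedged sketch: load the adapter together with its base model via PEFT.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("debiao29/Qwen-Qwen1.5-0.5B-1719937379")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```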
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-gguf | RichardErkhov | "2024-07-02T16:28:52Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T16:23:49Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TeenyTinyLlama-460m - GGUF
- Model creator: https://huggingface.co/nicholasKluge/
- Original model: https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TeenyTinyLlama-460m.Q2_K.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-gguf/blob/main/TeenyTinyLlama-460m.Q2_K.gguf) | Q2_K | 0.17GB |
| [TeenyTinyLlama-460m.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-gguf/blob/main/TeenyTinyLlama-460m.IQ3_XS.gguf) | IQ3_XS | 0.19GB |
| [TeenyTinyLlama-460m.IQ3_S.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-gguf/blob/main/TeenyTinyLlama-460m.IQ3_S.gguf) | IQ3_S | 0.2GB |
| [TeenyTinyLlama-460m.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-gguf/blob/main/TeenyTinyLlama-460m.Q3_K_S.gguf) | Q3_K_S | 0.2GB |
| [TeenyTinyLlama-460m.IQ3_M.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-gguf/blob/main/TeenyTinyLlama-460m.IQ3_M.gguf) | IQ3_M | 0.21GB |
| [TeenyTinyLlama-460m.Q3_K.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-gguf/blob/main/TeenyTinyLlama-460m.Q3_K.gguf) | Q3_K | 0.22GB |
| [TeenyTinyLlama-460m.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-gguf/blob/main/TeenyTinyLlama-460m.Q3_K_M.gguf) | Q3_K_M | 0.22GB |
| [TeenyTinyLlama-460m.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-gguf/blob/main/TeenyTinyLlama-460m.Q3_K_L.gguf) | Q3_K_L | 0.24GB |
| [TeenyTinyLlama-460m.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-gguf/blob/main/TeenyTinyLlama-460m.IQ4_XS.gguf) | IQ4_XS | 0.24GB |
| [TeenyTinyLlama-460m.Q4_0.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-gguf/blob/main/TeenyTinyLlama-460m.Q4_0.gguf) | Q4_0 | 0.25GB |
| [TeenyTinyLlama-460m.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-gguf/blob/main/TeenyTinyLlama-460m.IQ4_NL.gguf) | IQ4_NL | 0.26GB |
| [TeenyTinyLlama-460m.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-gguf/blob/main/TeenyTinyLlama-460m.Q4_K_S.gguf) | Q4_K_S | 0.26GB |
| [TeenyTinyLlama-460m.Q4_K.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-gguf/blob/main/TeenyTinyLlama-460m.Q4_K.gguf) | Q4_K | 0.27GB |
| [TeenyTinyLlama-460m.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-gguf/blob/main/TeenyTinyLlama-460m.Q4_K_M.gguf) | Q4_K_M | 0.27GB |
| [TeenyTinyLlama-460m.Q4_1.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-gguf/blob/main/TeenyTinyLlama-460m.Q4_1.gguf) | Q4_1 | 0.28GB |
| [TeenyTinyLlama-460m.Q5_0.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-gguf/blob/main/TeenyTinyLlama-460m.Q5_0.gguf) | Q5_0 | 0.3GB |
| [TeenyTinyLlama-460m.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-gguf/blob/main/TeenyTinyLlama-460m.Q5_K_S.gguf) | Q5_K_S | 0.3GB |
| [TeenyTinyLlama-460m.Q5_K.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-gguf/blob/main/TeenyTinyLlama-460m.Q5_K.gguf) | Q5_K | 0.31GB |
| [TeenyTinyLlama-460m.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-gguf/blob/main/TeenyTinyLlama-460m.Q5_K_M.gguf) | Q5_K_M | 0.31GB |
| [TeenyTinyLlama-460m.Q5_1.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-gguf/blob/main/TeenyTinyLlama-460m.Q5_1.gguf) | Q5_1 | 0.33GB |
| [TeenyTinyLlama-460m.Q6_K.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-gguf/blob/main/TeenyTinyLlama-460m.Q6_K.gguf) | Q6_K | 0.36GB |
| [TeenyTinyLlama-460m.Q8_0.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-gguf/blob/main/TeenyTinyLlama-460m.Q8_0.gguf) | Q8_0 | 0.46GB |
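These files can be used directly with llama.cpp or its Python bindings. A minimal sketch, assuming `llama-cpp-python` is installed (the chosen quant and prompt are illustrative):

```python
# Hedged sketch: download one quant from this repo and run it locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-gguf",
    filename="TeenyTinyLlama-460m.Q4_K_M.gguf",
)
llm = Llama(model_path=path)
print(llm("Astronomia é a ciência", max_tokens=100)["choices"][0]["text"])
```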
Original model description:
---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- text-generation-inference
datasets:
- nicholasKluge/Pt-Corpus-Instruct
metrics:
- perplexity
pipeline_tag: text-generation
widget:
- text: 'A PUCRS é uma universidade '
example_title: Exemplo
- text: A muitos anos atrás, em uma galáxia muito distante, vivia uma raça de
example_title: Exemplo
- text: Em meio a um escândalo, a frente parlamentar pediu ao Senador Silva para
example_title: Exemplo
inference:
parameters:
repetition_penalty: 1.2
temperature: 0.2
top_k: 20
top_p: 0.2
max_new_tokens: 150
co2_eq_emissions:
emissions: 41100
source: CodeCarbon
training_type: pre-training
geographical_location: Germany
hardware_used: NVIDIA A100-SXM4-40GB
model-index:
- name: TeenyTinyLlama-460m
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: ENEM Challenge (No Images)
type: eduagarcia/enem_challenge
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 20.15
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BLUEX (No Images)
type: eduagarcia-temp/BLUEX_without_images
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 25.73
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: OAB Exams
type: eduagarcia/oab_exams
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 27.02
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 RTE
type: assin2
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 53.61
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 STS
type: eduagarcia/portuguese_benchmark
split: test
args:
num_few_shot: 15
metrics:
- type: pearson
value: 13.0
name: pearson
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: FaQuAD NLI
type: ruanchaves/faquad-nli
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 46.41
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HateBR Binary
type: ruanchaves/hatebr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 33.59
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: PT Hate Speech Binary
type: hate_speech_portuguese
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 22.99
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: tweetSentBR
type: eduagarcia-temp/tweetsentbr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 17.28
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
---
# TeenyTinyLlama-460m
<img src="./logo.png" alt="A curious llama exploring a mushroom forest." height="200">
## Model Summary
Large language models (LLMs) have significantly advanced natural language processing, but this progress has not been equal across languages. While most LLMs are trained in high-resource languages like English, multilingual models generally underperform monolingual ones. Additionally, their multilingual foundations can constrain the models they yield, for example through higher computational demands and restrictive licensing regimes. Hence, we developed the _TeenyTinyLlama_ pair: two compact models for Brazilian Portuguese text generation.
Read the full article [here](https://www.sciencedirect.com/science/article/pii/S2666827024000343).
## Details
- **Architecture:** a Transformer-based model pre-trained via causal language modeling
- **Size:** 468,239,360 parameters
- **Context length:** 2048 tokens
- **Dataset:** [Pt-Corpus Instruct](https://huggingface.co/datasets/nicholasKluge/Pt-Corpus-Instruct) (6.2B tokens)
- **Language:** Portuguese
- **Number of steps:** 1,200,000
- **GPU:** 1 NVIDIA A100-SXM4-40GB
- **Training time**: ~ 280 hours
- **Emissions:** 41.1 KgCO2 (Germany)
- **Total energy consumption:** 115.69 kWh
This repository has the [source code](https://github.com/Nkluge-correa/TeenyTinyLlama) used to train this model. The main libraries used are:
- [Transformers](https://github.com/huggingface/transformers)
- [PyTorch](https://github.com/pytorch/pytorch)
- [Datasets](https://github.com/huggingface/datasets)
- [Tokenizers](https://github.com/huggingface/tokenizers)
- [Sentencepiece](https://github.com/google/sentencepiece)
- [Accelerate](https://github.com/huggingface/accelerate)
- [FlashAttention](https://github.com/Dao-AILab/flash-attention)
- [Codecarbon](https://github.com/mlco2/codecarbon)
## Intended Uses
The primary intended use of TeenyTinyLlama is to research the challenges related to developing language models for low-resource languages. Checkpoints saved during training are intended to provide a controlled setting for performing scientific experiments. You may also further fine-tune and adapt TeenyTinyLlama for deployment, as long as your use follows the Apache 2.0 license. If you decide to use pre-trained TeenyTinyLlama as a basis for your fine-tuned model, please conduct your own risk and bias assessment.
## Out-of-scope Use
TeenyTinyLlama is not intended for deployment. It is not a product and should not be used for human-facing interactions.
TeenyTinyLlama models are Brazilian Portuguese-only and are not suitable for translation or for generating text in other languages.
TeenyTinyLlama has not been fine-tuned for downstream contexts in which language models are commonly deployed.
## Basic usage
Using the `pipeline`:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="nicholasKluge/TeenyTinyLlama-460m")
completions = generator("Astronomia é a ciência", num_return_sequences=2, max_new_tokens=100)
for comp in completions:
print(f"🤖 {comp['generated_text']}")
```
Using the `AutoTokenizer` and `AutoModelForCausalLM`:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Load model and the tokenizer
tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/TeenyTinyLlama-460m", revision='main')
model = AutoModelForCausalLM.from_pretrained("nicholasKluge/TeenyTinyLlama-460m", revision='main')
# Pass the model to your device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.eval()
model.to(device)
# Tokenize the inputs and pass them to the device
inputs = tokenizer("Astronomia é a ciência", return_tensors="pt").to(device)
# Generate some text
completions = model.generate(**inputs, num_return_sequences=2, max_new_tokens=100)
# Print the generated text
for i, completion in enumerate(completions):
print(f'🤖 {tokenizer.decode(completion)}')
```
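The widget metadata of this card suggests decoding settings for this model; continuing the snippet above, they can be passed to `generate` (sampling must be enabled for them to take effect):

```python
# Decoding settings taken from this card's widget metadata.
completions = model.generate(
    **inputs,
    do_sample=True,
    repetition_penalty=1.2,
    temperature=0.2,
    top_k=20,
    top_p=0.2,
    max_new_tokens=150,
)
print(tokenizer.decode(completions[0], skip_special_tokens=True))
```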
## Limitations
Like almost all other language models trained on large text datasets scraped from the web, the TTL pair exhibits behaviors that make it unsuitable as an out-of-the-box solution for many real-world applications, especially those requiring factual, reliable, and nontoxic text generation. Our models are all subject to the following:
- **Hallucinations:** This model can produce content that can be mistaken for truth but is, in fact, misleading or entirely false, i.e., hallucination.
- **Biases and Toxicity:** This model inherits the social and historical stereotypes from the data used to train it. Given these biases, the model can produce toxic content, i.e., harmful, offensive, or detrimental to individuals, groups, or communities.
- **Unreliable Code:** The model may produce incorrect code snippets and statements. These code generations should not be treated as suggestions or accurate solutions.
- **Language Limitations:** The model is primarily designed to understand standard Brazilian Portuguese. Other languages might challenge its comprehension, leading to potential misinterpretations or errors in response.
- **Repetition and Verbosity:** The model may get stuck on repetition loops (especially if the repetition penalty during generations is set to a meager value) or produce verbose responses unrelated to the prompt it was given.
Hence, even though our models are released with a permissive license, we urge users to perform their risk analysis on these models if intending to use them for real-world applications and also have humans moderating the outputs of these models in applications where they will interact with an audience, guaranteeing users are always aware they are interacting with a language model.
## Evaluations
During our training runs, both models showed consistent convergence. At no point did our evaluation curves show signs of overfitting or saturation. In the case of our 460m parameter model, we intentionally trained past the optimal point by approximately 75,000 steps to assess whether there were any signs of saturation, but our evaluations consistently improved. We hypothesize that our models are under-trained and could improve if trained beyond the Chinchilla-optimal range.
| Processed Tokens | Perplexity | Energy Consumption (kWh) | Emissions (KgCO2eq) |
|------------------|------------|---------------------------|----------------------|
| 8.1M | 20.49 | 9.40 | 3.34 |
| 1.6B | 16.90 | 18.82 | 6.70 |
| 2.4B | 15.43 | 28.59 | 10.16 |
| 3.2B | 14.64 | 38.20 | 13.57 |
| 4.0B | 14.08 | 48.04 | 17.07 |
| 4.9B | 13.61 | 57.74 | 20.52 |
| 5.7B | 13.25 | 67.32 | 23.92 |
| 6.5B | 12.87 | 76.84 | 27.30 |
| 7.3B | 12.57 | 86.40 | 30.70 |
| 8.1B | 12.27 | 96.19 | 34.18 |
| 9.0B | 11.96 | 106.06 | 37.70 |
| 9.8B | 11.77 | 115.69 | 41.31 |
## Benchmarks
Evaluations on benchmarks were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)). We used the task translations provided by [Laiviet](https://github.com/laiviet/lm-evaluation-harness). The results of models marked with an "*" were extracted from the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| | **ARC** | **HellaSwag** | **MMLU** | **TruthfulQA** | **Average** |
|------------------|-----------|---------------|-----------|----------------|-------------|
| Pythia-410m | 24.83* | 41.29* | 25.99* | 40.95* | 33.26 |
| **TTL-460m** | 29.40 | 33.00 | 28.55 | 41.10 | 33.01 |
| Bloom-560m | 24.74* | 37.15* | 24.22* | 42.44* | 32.13 |
| Xglm-564M | 25.56 | 34.64* | 25.18* | 42.53 | 31.97 |
| OPT-350m | 23.55* | 36.73* | 26.02* | 40.83* | 31.78 |
| **TTL-160m** | 26.15 | 29.29 | 28.11 | 41.12 | 31.16 |
| Pythia-160m | 24.06* | 31.39* | 24.86* | 44.34* | 31.16 |
| OPT-125m | 22.87* | 31.47* | 26.02* | 42.87* | 30.80 |
| GPorTuguese-2 | 22.48 | 29.62 | 27.36 | 41.44 | 30.22 |
| Gpt2-small | 21.48* | 31.60* | 25.79* | 40.65* | 29.97 |
| Multilingual GPT | 23.81 | 26.37* | 25.17* | 39.62 | 28.73 |
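A hedged sketch of reproducing such a run with the harness's Python API (assuming the `lm-eval` >= 0.4 interface; task names may differ in the translated and Portuguese forks mentioned above):

```python
# Hedged sketch: evaluate the model on one harness task.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=nicholasKluge/TeenyTinyLlama-460m",
    tasks=["hellaswag"],
    num_fewshot=0,
)
print(results["results"])
```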
Evaluations on Brazilian Portuguese benchmarks were performed using a [Portuguese implementation of the EleutherAI LM Evaluation Harness](https://github.com/eduagarcia/lm-evaluation-harness-pt) (created by [Eduardo Garcia](https://github.com/eduagarcia/lm-evaluation-harness-pt)).
| | **ASSIN2 RTE** | **ASSIN2 STS** | **BLUEX** | **ENEM** | **FAQUAD NLI** | **HateBR** | **OAB Exams** | **Average** |
|----------------|----------------|----------------|-----------|----------|----------------|------------|---------------|-------------|
| Qwen-1.8B | 64.83 | 19.53 | 26.15 | 30.23 | 43.97 | 33.33 | 27.20 | 35.03 |
| TinyLlama-1.1B | 58.93 | 13.57 | 22.81 | 22.25 | 43.97 | 36.92 | 23.64 | 31.72 |
| **TTL-460m** | 53.93 | 12.66 | 22.81 | 19.87 | 49.01 | 33.59 | 27.06 | 31.27 |
| XGLM-564m | 49.61 | 22.91 | 19.61 | 19.38 | 43.97 | 33.99 | 23.42 | 30.41 |
| Bloom-1b7 | 53.60 | 4.81 | 21.42 | 18.96 | 43.97 | 34.89 | 23.05 | 28.67 |
| **TTL-160m** | 53.36 | 2.58 | 21.84 | 18.75 | 43.97 | 36.88 | 22.60 | 28.56 |
| OPT-125m | 39.77 | 2.00 | 21.84 | 17.42 | 43.97 | 47.04 | 22.78 | 27.83 |
| Pythia-160 | 33.33 | 12.81 | 16.13 | 16.66 | 50.36 | 41.09 | 22.82 | 27.60 |
| OLMo-1b | 34.12 | 9.28 | 18.92 | 20.29 | 43.97 | 41.33 | 22.96 | 27.26 |
| Bloom-560m | 33.33 | 8.48 | 18.92 | 19.03 | 43.97 | 37.07 | 23.05 | 26.26 |
| Pythia-410m | 33.33 | 4.80 | 19.47 | 19.45 | 43.97 | 33.33 | 23.01 | 25.33 |
| OPT-350m | 33.33 | 3.65 | 20.72 | 17.35 | 44.71 | 33.33 | 23.01 | 25.15 |
| GPT-2 small | 33.26 | 0.00 | 10.43 | 11.20 | 43.52 | 33.68 | 13.12 | 20.74 |
| GPorTuguese | 33.33 | 3.85 | 14.74 | 3.01 | 28.81 | 33.33 | 21.23 | 19.75 |
| Samba-1.1B | 33.33 | 1.30 | 8.07 | 10.22 | 17.72 | 35.79 | 15.03 | 17.35 |
## Fine-Tuning Comparisons
To further evaluate the downstream capabilities of our models, we employed a basic fine-tuning procedure for the TTL pair on a subset of tasks from the Poeta benchmark. For comparison, we applied the same procedure to both [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) models, given that they are also LLMs trained from scratch in Brazilian Portuguese with a size range similar to our models. We used these comparisons to assess whether our pre-training runs produced LLMs capable of good results ("good" here meaning "close to BERTimbau") when used for downstream applications.
| Models | IMDB | FaQuAD-NLI | HateBr | Assin2 | AgNews | Average |
|-----------------|-----------|------------|-----------|-----------|-----------|---------|
| BERTimbau-large | **93.58** | 92.26 | 91.57 | **88.97** | 94.11 | 92.10 |
| BERTimbau-small | 92.22 | **93.07** | 91.28 | 87.45 | 94.19 | 91.64 |
| **TTL-460m** | 91.64 | 91.18 | **92.28** | 86.43 | **94.42** | 91.19 |
| **TTL-160m** | 91.14 | 90.00 | 90.71 | 85.78 | 94.05 | 90.34 |
All results shown are the highest accuracy scores achieved on the respective task test sets after fine-tuning the models on the training sets. All fine-tuning runs used the same hyperparameters, and the code implementation can be found in the [model cards](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m-HateBR) of our fine-tuned models.
## Cite as 🤗
```latex
@misc{correa24ttllama,
title = {TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese},
author = {Corr{\^e}a, Nicholas Kluge and Falk, Sophia and Fatimah, Shiza and Sen, Aniket and De Oliveira, Nythamar},
journal={arXiv preprint arXiv:2401.16640},
year={2024}
}
@misc{correa24ttllama,
doi = {10.1016/j.mlwa.2024.100558},
url = {https://www.sciencedirect.com/science/article/pii/S2666827024000343},
title = {TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese},
author = {Corr{\^e}a, Nicholas Kluge and Falk, Sophia and Fatimah, Shiza and Sen, Aniket and De Oliveira, Nythamar},
journal={Machine Learning With Applications},
publisher = {Springer},
year={2024}
}
```
## Funding
This repository was built as part of the RAIES ([Rede de Inteligência Artificial Ética e Segura](https://www.raies.org/)) initiative, a project supported by FAPERGS - ([Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul](https://fapergs.rs.gov.br/inicial)), Brazil.
## License
TeenyTinyLlama-460m is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
|
CarlosJefte/gemma-7b-bnb-4bit | CarlosJefte | "2024-07-02T21:43:47Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-07-02T16:24:44Z" | ---
base_model: unsloth/gemma-7b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
- sft
---
# Uploaded model
- **Developed by:** CarlosJefte
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
karpathy/gpt2_1558M_final2_hf | karpathy | "2024-07-02T16:26:57Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T16:25:28Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
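Pending an official snippet, a hedged sketch based on the repo's `gpt2`/`text-generation` tags (assumed usage; nothing here is documented by the author):

```python
# Hedged sketch: standard transformers pipeline usage, inferred from the tags.
from transformers import pipeline

generator = pipeline("text-generation", model="karpathy/gpt2_1558M_final2_hf")
print(generator("The meaning of life is", max_new_tokens=50)[0]["generated_text"])
```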
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lucasbalponti/split6 | lucasbalponti | "2024-07-02T16:26:47Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:neuralmind/bert-large-portuguese-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-02T16:25:40Z" | ---
license: mit
base_model: neuralmind/bert-large-portuguese-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: split6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# split6
This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1595
- Accuracy: 0.9586
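A hedged sketch of running inference with this classifier (assumed usage; the label set and task are not documented in this card, and the example sentence is illustrative):

```python
# Hedged sketch: standard text-classification pipeline on the fine-tuned model.
from transformers import pipeline

classifier = pipeline("text-classification", model="lucasbalponti/split6")
print(classifier("Exemplo de texto em português para classificação."))
```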
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.29 | 1.0 | 8509 | 0.1733 | 0.9394 |
| 0.2369 | 2.0 | 17018 | 0.1300 | 0.9642 |
| 0.2126 | 3.0 | 25527 | 0.1595 | 0.9586 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu118
- Datasets 2.20.0
- Tokenizers 0.19.1
|
hassanaitnacer/wav2vec2-large-xlsr-moroccan-darija-v1 | hassanaitnacer | "2024-07-02T20:23:55Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-07-02T16:25:52Z" | Entry not found |
allSafe101/test | allSafe101 | "2024-07-02T16:26:19Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T16:26:19Z" | Entry not found |
rhuang1/fraud-classification-18-llama-2-7b | rhuang1 | "2024-07-02T16:39:25Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T16:30:40Z" | Entry not found |
somashekar2002/LLM-for-quiz-gen | somashekar2002 | "2024-07-02T16:31:04Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T16:31:04Z" | Entry not found |
RichardErkhov/phanerozoic_-_Tiny-Pirate-1.1b-v0.1-gguf | RichardErkhov | "2024-07-02T16:40:33Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T16:31:22Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Tiny-Pirate-1.1b-v0.1 - GGUF
- Model creator: https://huggingface.co/phanerozoic/
- Original model: https://huggingface.co/phanerozoic/Tiny-Pirate-1.1b-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Tiny-Pirate-1.1b-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Pirate-1.1b-v0.1-gguf/blob/main/Tiny-Pirate-1.1b-v0.1.Q2_K.gguf) | Q2_K | 0.4GB |
| [Tiny-Pirate-1.1b-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Pirate-1.1b-v0.1-gguf/blob/main/Tiny-Pirate-1.1b-v0.1.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [Tiny-Pirate-1.1b-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Pirate-1.1b-v0.1-gguf/blob/main/Tiny-Pirate-1.1b-v0.1.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [Tiny-Pirate-1.1b-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Pirate-1.1b-v0.1-gguf/blob/main/Tiny-Pirate-1.1b-v0.1.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [Tiny-Pirate-1.1b-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Pirate-1.1b-v0.1-gguf/blob/main/Tiny-Pirate-1.1b-v0.1.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [Tiny-Pirate-1.1b-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Pirate-1.1b-v0.1-gguf/blob/main/Tiny-Pirate-1.1b-v0.1.Q3_K.gguf) | Q3_K | 0.51GB |
| [Tiny-Pirate-1.1b-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Pirate-1.1b-v0.1-gguf/blob/main/Tiny-Pirate-1.1b-v0.1.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [Tiny-Pirate-1.1b-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Pirate-1.1b-v0.1-gguf/blob/main/Tiny-Pirate-1.1b-v0.1.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [Tiny-Pirate-1.1b-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Pirate-1.1b-v0.1-gguf/blob/main/Tiny-Pirate-1.1b-v0.1.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [Tiny-Pirate-1.1b-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Pirate-1.1b-v0.1-gguf/blob/main/Tiny-Pirate-1.1b-v0.1.Q4_0.gguf) | Q4_0 | 0.59GB |
| [Tiny-Pirate-1.1b-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Pirate-1.1b-v0.1-gguf/blob/main/Tiny-Pirate-1.1b-v0.1.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [Tiny-Pirate-1.1b-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Pirate-1.1b-v0.1-gguf/blob/main/Tiny-Pirate-1.1b-v0.1.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [Tiny-Pirate-1.1b-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Pirate-1.1b-v0.1-gguf/blob/main/Tiny-Pirate-1.1b-v0.1.Q4_K.gguf) | Q4_K | 0.62GB |
| [Tiny-Pirate-1.1b-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Pirate-1.1b-v0.1-gguf/blob/main/Tiny-Pirate-1.1b-v0.1.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [Tiny-Pirate-1.1b-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Pirate-1.1b-v0.1-gguf/blob/main/Tiny-Pirate-1.1b-v0.1.Q4_1.gguf) | Q4_1 | 0.65GB |
| [Tiny-Pirate-1.1b-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Pirate-1.1b-v0.1-gguf/blob/main/Tiny-Pirate-1.1b-v0.1.Q5_0.gguf) | Q5_0 | 0.71GB |
| [Tiny-Pirate-1.1b-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Pirate-1.1b-v0.1-gguf/blob/main/Tiny-Pirate-1.1b-v0.1.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [Tiny-Pirate-1.1b-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Pirate-1.1b-v0.1-gguf/blob/main/Tiny-Pirate-1.1b-v0.1.Q5_K.gguf) | Q5_K | 0.73GB |
| [Tiny-Pirate-1.1b-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Pirate-1.1b-v0.1-gguf/blob/main/Tiny-Pirate-1.1b-v0.1.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [Tiny-Pirate-1.1b-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Pirate-1.1b-v0.1-gguf/blob/main/Tiny-Pirate-1.1b-v0.1.Q5_1.gguf) | Q5_1 | 0.77GB |
| [Tiny-Pirate-1.1b-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Pirate-1.1b-v0.1-gguf/blob/main/Tiny-Pirate-1.1b-v0.1.Q6_K.gguf) | Q6_K | 0.84GB |
| [Tiny-Pirate-1.1b-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Pirate-1.1b-v0.1-gguf/blob/main/Tiny-Pirate-1.1b-v0.1.Q8_0.gguf) | Q8_0 | 1.09GB |
Original model description:
---
license: cc-by-nc-4.0
language:
- en
widget:
- text: |
What is best in life?
example_title: "Healthy Eating Tips"
---
![tinypirate.png](https://huggingface.co/phanerozoic/Tiny-Pirate-1.1b-v0.1/resolve/main/tinypirate.png)
# Tiny-Pirate-1.1b-v0.1
Tiny-Pirate-1.1b-v0.1 is a compact and specialized language model designed for generating authentic pirate-themed content. This version is fine-tuned from the TinyLlama-1.1B model, specifically adapted to operate efficiently in CPU-only and resource-limited environments.
- **Developed by**: phanerozoic
- **License**: cc-by-nc-4.0
- **Finetuned from**: TinyLlama-1.1B
### Version Control
Introducing Tiny-Pirate-1.1b-v0.1 to mark the initial release of this specialized language model.
### Performance
The Tiny-Pirate-1.1B model exhibits a robust ability to generate pirate-themed content, demonstrating a strong grasp of pirate vernacular and thematic elements. The responses are notably coherent and contextually appropriate, reflecting the model's adeptness at maintaining a consistent pirate tone. However, there are instances where the responses could benefit from more precise and direct answers to the questions posed, suggesting a potential area for further fine-tuning.
### Direct Use
Ideal for applications requiring thematic language generation in resource-constrained environments, such as edge computing, mobile devices, and lightweight AI applications.
### Training Data
Utilized the same pirate-themed dataset as MistralPirate-7b-v0.3, ensuring rich and diverse inputs for fine-tuning.
### Custom Stopping Strings
To enhance output quality, the following custom stopping strings were employed:
- "},"
- "User:"
- "You:"
- "\nUser"
- "\nUser:"
- "me:"
- "\n"
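A hedged sketch of one way to enforce such stop strings at generation time with transformers (illustrative only; not necessarily how the author applied them):

```python
# Hedged sketch: a custom stopping criterion that halts generation when the
# decoded text ends with any of the configured stop strings.
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnStrings(StoppingCriteria):
    def __init__(self, tokenizer, stops):
        self.tokenizer = tokenizer
        self.stops = stops

    def __call__(self, input_ids, scores, **kwargs):
        text = self.tokenizer.decode(input_ids[0], skip_special_tokens=True)
        return any(text.endswith(stop) for stop in self.stops)

# stops = ["},", "User:", "You:", "\nUser", "\nUser:", "me:", "\n"]
# model.generate(**inputs, stopping_criteria=StoppingCriteriaList([StopOnStrings(tokenizer, stops)]))
```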
### Training Hyperparameters and Fine-Tuning Details
- **LoRA Rank**: 16
- **LoRA Alpha**: 32
- **True Batch Size**: 4
- **Gradient Accumulation Steps**: 1
- **Epochs**: 1
- **Learning Rate**: 3e-4
- **LR Scheduler**: Linear
- **LLaMA Target Projections**: All targets modified
- **Fine-Tuning Approach**: LoRA peft merged back into the base model
### Limitations
While adept at generating pirate-themed content, Tiny-Pirate-v0.1 may not handle highly complex language tasks as larger models do. Its specialization in pirate dialect limits its use in general language applications.
### Compute Infrastructure
Efficiently trained on an RTX 6000 Ada GPU, taking approximately 2-3 minutes, showcasing resource-effective training for specialized models.
### Results
The model successfully produced responses that are thematically aligned with typical pirate lore and language. The outputs are engaging and largely relevant to the queries, showcasing the model's capacity to handle a variety of pirate-related topics from navigation to mythology. The use of pirate dialect is consistent and immersive, contributing to the overall thematic experience. However, the depth of responses varies, indicating room for improvement in handling more complex queries or providing more detailed explanations.
### Summary
Tiny-Pirate-1.1B stands out as an effective tool for generating pirate-themed content, particularly suitable for applications where thematic consistency and lighter computational demands are key. While the model shows competence in creating thematically rich and linguistically coherent outputs, there is potential for enhancing its ability to handle complex scenarios and provide more detailed, context-specific responses. Overall, Tiny-Pirate-1.1B represents a promising step in the realm of specialized, lightweight language models, combining thematic accuracy with operational efficiency.
### Acknowledgments
Gratitude is extended to the developers of TinyLlama-1.1B for their foundational work, which was instrumental in the creation of Tiny-Pirate-v0.1.
|
RichardErkhov/Sao10K_-_Fimbulvetr-10.7B-v1-gguf | RichardErkhov | "2024-07-03T00:07:34Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T16:33:22Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Fimbulvetr-10.7B-v1 - GGUF
- Model creator: https://huggingface.co/Sao10K/
- Original model: https://huggingface.co/Sao10K/Fimbulvetr-10.7B-v1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Fimbulvetr-10.7B-v1.Q2_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-10.7B-v1-gguf/blob/main/Fimbulvetr-10.7B-v1.Q2_K.gguf) | Q2_K | 3.73GB |
| [Fimbulvetr-10.7B-v1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-10.7B-v1-gguf/blob/main/Fimbulvetr-10.7B-v1.IQ3_XS.gguf) | IQ3_XS | 4.14GB |
| [Fimbulvetr-10.7B-v1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-10.7B-v1-gguf/blob/main/Fimbulvetr-10.7B-v1.IQ3_S.gguf) | IQ3_S | 4.37GB |
| [Fimbulvetr-10.7B-v1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-10.7B-v1-gguf/blob/main/Fimbulvetr-10.7B-v1.Q3_K_S.gguf) | Q3_K_S | 4.34GB |
| [Fimbulvetr-10.7B-v1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-10.7B-v1-gguf/blob/main/Fimbulvetr-10.7B-v1.IQ3_M.gguf) | IQ3_M | 4.51GB |
| [Fimbulvetr-10.7B-v1.Q3_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-10.7B-v1-gguf/blob/main/Fimbulvetr-10.7B-v1.Q3_K.gguf) | Q3_K | 4.84GB |
| [Fimbulvetr-10.7B-v1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-10.7B-v1-gguf/blob/main/Fimbulvetr-10.7B-v1.Q3_K_M.gguf) | Q3_K_M | 4.84GB |
| [Fimbulvetr-10.7B-v1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-10.7B-v1-gguf/blob/main/Fimbulvetr-10.7B-v1.Q3_K_L.gguf) | Q3_K_L | 5.26GB |
| [Fimbulvetr-10.7B-v1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-10.7B-v1-gguf/blob/main/Fimbulvetr-10.7B-v1.IQ4_XS.gguf) | IQ4_XS | 5.43GB |
| [Fimbulvetr-10.7B-v1.Q4_0.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-10.7B-v1-gguf/blob/main/Fimbulvetr-10.7B-v1.Q4_0.gguf) | Q4_0 | 5.66GB |
| [Fimbulvetr-10.7B-v1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-10.7B-v1-gguf/blob/main/Fimbulvetr-10.7B-v1.IQ4_NL.gguf) | IQ4_NL | 5.72GB |
| [Fimbulvetr-10.7B-v1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-10.7B-v1-gguf/blob/main/Fimbulvetr-10.7B-v1.Q4_K_S.gguf) | Q4_K_S | 5.7GB |
| [Fimbulvetr-10.7B-v1.Q4_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-10.7B-v1-gguf/blob/main/Fimbulvetr-10.7B-v1.Q4_K.gguf) | Q4_K | 6.02GB |
| [Fimbulvetr-10.7B-v1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-10.7B-v1-gguf/blob/main/Fimbulvetr-10.7B-v1.Q4_K_M.gguf) | Q4_K_M | 6.02GB |
| [Fimbulvetr-10.7B-v1.Q4_1.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-10.7B-v1-gguf/blob/main/Fimbulvetr-10.7B-v1.Q4_1.gguf) | Q4_1 | 6.27GB |
| [Fimbulvetr-10.7B-v1.Q5_0.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-10.7B-v1-gguf/blob/main/Fimbulvetr-10.7B-v1.Q5_0.gguf) | Q5_0 | 6.89GB |
| [Fimbulvetr-10.7B-v1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-10.7B-v1-gguf/blob/main/Fimbulvetr-10.7B-v1.Q5_K_S.gguf) | Q5_K_S | 6.89GB |
| [Fimbulvetr-10.7B-v1.Q5_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-10.7B-v1-gguf/blob/main/Fimbulvetr-10.7B-v1.Q5_K.gguf) | Q5_K | 7.08GB |
| [Fimbulvetr-10.7B-v1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-10.7B-v1-gguf/blob/main/Fimbulvetr-10.7B-v1.Q5_K_M.gguf) | Q5_K_M | 7.08GB |
| [Fimbulvetr-10.7B-v1.Q5_1.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-10.7B-v1-gguf/blob/main/Fimbulvetr-10.7B-v1.Q5_1.gguf) | Q5_1 | 7.51GB |
| [Fimbulvetr-10.7B-v1.Q6_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-10.7B-v1-gguf/blob/main/Fimbulvetr-10.7B-v1.Q6_K.gguf) | Q6_K | 8.2GB |
| [Fimbulvetr-10.7B-v1.Q8_0.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-10.7B-v1-gguf/blob/main/Fimbulvetr-10.7B-v1.Q8_0.gguf) | Q8_0 | 10.62GB |
Original model description:
---
license: cc-by-nc-4.0
language:
- en
---
My current low-budget daily driver, so far.
Frostwindv2 + Sensualize v1.1 + ___ data on uncen Instruct Solar.
This is meant to be a verbose, smart Roleplaying model. I think I captured those two parts this time. Well, for my own cards and scenarios anyway, it passed with flying colours.
I recommend using min-p; I liked the Universal-Light preset in SillyTavern.
Experimental.
***
### Prompt Format: Alpaca
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
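A minimal sketch of filling this template before tokenization (the prompt and context are illustrative):

```python
# Build an Alpaca-style prompt matching the format above.
ALPACA_TEMPLATE = (
    "### Instruction:\n{prompt}\n"
    "### Input:\n{context}\n"
    "### Response:\n"
)

text = ALPACA_TEMPLATE.format(
    prompt="Continue the story in character.",
    context="The longship slid out of the fjord at dawn.",
)
```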
### 31-Jan-24 Update:
forgot to link GGUF quants here: https://huggingface.co/Sao10K/Fimbulvetr-10.7B-v1-GGUF
or thebloke already quantized them to gptq and others. lonestriker did exl2 quants so ty ty a lot.
anyway, I have a few solar-based ideas before I move to the new InternLM, Yi, Mixtral or back to 70B.
|
isanthosh2004/llama3-fake-news | isanthosh2004 | "2024-07-02T20:15:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T16:33:46Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** isanthosh2004
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
PereLluis13/relik-reader-deberta-large-wikipedia-aida-full-interleave-cont | PereLluis13 | "2024-07-02T16:37:33Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"relik-reader",
"feature-extraction",
"custom_code",
"region:us"
] | feature-extraction | "2024-07-02T16:36:37Z" | Entry not found |
styalai/XTmath-0.2b | styalai | "2024-07-02T16:47:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T16:37:24Z" | ---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
PereLluis13/relik-entity-linking-large-wikipedia-aida-interleave-cont | PereLluis13 | "2024-07-02T17:02:25Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T16:38:07Z" | Entry not found |
nalf3in/gemma-2-9b-Q4_K_M-GGUF | nalf3in | "2024-07-02T16:38:45Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:google/gemma-2-9b",
"license:gemma",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T16:38:20Z" | ---
base_model: google/gemma-2-9b
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# nalf3in/gemma-2-9b-Q4_K_M-GGUF
This model was converted to GGUF format from [`google/gemma-2-9b`](https://huggingface.co/google/gemma-2-9b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/gemma-2-9b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo nalf3in/gemma-2-9b-Q4_K_M-GGUF --hf-file gemma-2-9b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo nalf3in/gemma-2-9b-Q4_K_M-GGUF --hf-file gemma-2-9b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo nalf3in/gemma-2-9b-Q4_K_M-GGUF --hf-file gemma-2-9b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo nalf3in/gemma-2-9b-Q4_K_M-GGUF --hf-file gemma-2-9b-q4_k_m.gguf -c 2048
```
|
InfiniteEcho/q-FrozenLake-v1-4x4-noSlippery | InfiniteEcho | "2024-07-02T16:38:50Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-07-02T16:38:46Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook;
# it downloads and unpickles the saved model dict.
model = load_from_hub(repo_id="InfiniteEcho/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
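A short greedy-rollout sketch, assuming (as in the course notebook) that the pickled dict stores the Q-table under `"qtable"`:
```python
import numpy as np

state, info = env.reset()
terminated = truncated = False
total_reward = 0.0
while not (terminated or truncated):
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
print(total_reward)
```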
|
KasuleTrevor/wav2vec2-large-xls-r-300m-lg-cv-100hr-v2 | KasuleTrevor | "2024-07-02T20:21:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-07-02T16:39:39Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
juanpablomesa/bge-small-bioasq-batch64 | juanpablomesa | "2024-07-02T16:40:23Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:4012",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-small-en-v1.5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | "2024-07-02T16:40:19Z" | ---
base_model: BAAI/bge-small-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4012
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 'Extensive messenger RNA editing generates transcript and protein
diversity in genes involved in neural excitability, as previously described, as
well as in genes participating in a broad range of other cellular functions. '
sentences:
- Do cephalopods use RNA editing less frequently than other species?
- GV1001 vaccine targets which enzyme?
- Which event results in the acetylation of S6K1?
- source_sentence: Yes, exposure to household furry pets influences the gut microbiota
of infants.
sentences:
- Can pets affect infant microbiomed?
- What is the mode of action of Thiazovivin?
- What are the effects of CAMK4 inhibition?
- source_sentence: "In children with heart failure, evidence of the effect of enalapril\
    \ is empirical. Enalapril was clinically safe and effective in 50% to 80% of\
    \ children with cardiac failure secondary to congenital heart malformations before\
    \ and after cardiac surgery, impaired ventricular function, valvar regurgitation,\
    \ congestive cardiomyopathy, arterial hypertension, and life-threatening arrhythmias\
    \ coexisting with circulatory insufficiency.\nACE inhibitors have shown a transient\
    \ beneficial effect on heart failure due to anticancer drugs and possibly a beneficial\
    \ effect in muscular dystrophy-associated cardiomyopathy, which deserves further\
    \ studies."
sentences:
- Which receptors can be evaluated with the [18F]altanserin?
- In what proportion of children with heart failure has Enalapril been shown to
be safe and effective?
- Which major signaling pathways are regulated by RIP1?
- source_sentence: Cellular senescence-associated heterochromatic foci (SAHFS) are
a novel type of chromatin condensation involving alterations of linker histone
H1 and linker DNA-binding proteins. SAHFS can be formed by a variety of cell types,
but their mechanism of action remains unclear.
sentences:
- What is the relationship between the X chromosome and a neutrophil drumstick?
- Which microRNAs are involved in exercise adaptation?
- How are SAHFS created?
- source_sentence: Multicluster Pcdh diversity is required for mouse olfactory neural
circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins
are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although
deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss
of all three clusters (tricluster deletion) led to a severe axonal arborization
defect and loss of self-avoidance.
sentences:
- What are the effects of the deletion of all three Pcdh clusters (tricluster deletion)
in mice?
- what is the role of MEF-2 in cardiomyocyte differentiation?
- How many periods of regulatory innovation led to the evolution of vertebrates?
---
# BGE small finetuned BIOASQ
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) <!-- at revision 5c38ec7c405ec4b44b94cc5a9bb96e735b38267a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("juanpablomesa/bge-small-bioasq-batch64")
# Run inference
sentences = [
'Multicluster Pcdh diversity is required for mouse olfactory neural circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss of all three clusters (tricluster deletion) led to a severe axonal arborization defect and loss of self-avoidance.',
'What are the effects of the deletion of all three Pcdh clusters (tricluster deletion) in mice?',
'How many periods of regulatory innovation led to the evolution of vertebrates?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 4,012 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 63.38 tokens</li><li>max: 485 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 16.13 tokens</li><li>max: 49 tokens</li></ul> |
* Samples:
| positive | anchor |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
| <code>Aberrant patterns of H3K4, H3K9, and H3K27 histone lysine methylation were shown to result in histone code alterations, which induce changes in gene expression, and affect the proliferation rate of cells in medulloblastoma.</code> | <code>What is the implication of histone lysine methylation in medulloblastoma?</code> |
| <code>STAG1/STAG2 proteins are tumour suppressor proteins that suppress cell proliferation and are essential for differentiation.</code> | <code>What is the role of STAG1/STAG2 proteins in differentiation?</code> |
| <code>The association between cell phone use and incident glioblastoma remains unclear. Some studies have reported that cell phone use was associated with incident glioblastoma, and with reduced survival of patients diagnosed with glioblastoma. However, other studies have repeatedly failed to replicate an association between cell phone use and glioblastoma.</code> | <code>What is the association between cell phone use and glioblastoma?</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
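A minimal sketch of wiring this loss into a Sentence Transformers v3 training run, using the non-default hyperparameters listed in the next section (the output directory and the single sample row are placeholders):
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("BAAI/bge-small-en-v1.5")
# Columns mirror the dataset description above; one placeholder row shown.
train_dataset = Dataset.from_dict({
    "positive": ["STAG1/STAG2 proteins are tumour suppressor proteins ..."],
    "anchor": ["What is the role of STAG1/STAG2 proteins in differentiation?"],
})
loss = MultipleNegativesRankingLoss(model)  # defaults: scale=20.0, cos_sim

args = SentenceTransformerTrainingArguments(
    output_dir="bge-small-bioasq-batch64",  # placeholder
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```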
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Framework Versions
- Python: 3.11.5
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
RichardErkhov/MaziyarPanahi_-_Llama-3-11B-Instruct-v0.1-gguf | RichardErkhov | "2024-07-03T00:28:19Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T16:40:23Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3-11B-Instruct-v0.1 - GGUF
- Model creator: https://huggingface.co/MaziyarPanahi/
- Original model: https://huggingface.co/MaziyarPanahi/Llama-3-11B-Instruct-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3-11B-Instruct-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-11B-Instruct-v0.1-gguf/blob/main/Llama-3-11B-Instruct-v0.1.Q2_K.gguf) | Q2_K | 4.16GB |
| [Llama-3-11B-Instruct-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-11B-Instruct-v0.1-gguf/blob/main/Llama-3-11B-Instruct-v0.1.IQ3_XS.gguf) | IQ3_XS | 4.61GB |
| [Llama-3-11B-Instruct-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-11B-Instruct-v0.1-gguf/blob/main/Llama-3-11B-Instruct-v0.1.IQ3_S.gguf) | IQ3_S | 4.83GB |
| [Llama-3-11B-Instruct-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-11B-Instruct-v0.1-gguf/blob/main/Llama-3-11B-Instruct-v0.1.Q3_K_S.gguf) | Q3_K_S | 4.81GB |
| [Llama-3-11B-Instruct-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-11B-Instruct-v0.1-gguf/blob/main/Llama-3-11B-Instruct-v0.1.IQ3_M.gguf) | IQ3_M | 4.98GB |
| [Llama-3-11B-Instruct-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-11B-Instruct-v0.1-gguf/blob/main/Llama-3-11B-Instruct-v0.1.Q3_K.gguf) | Q3_K | 5.3GB |
| [Llama-3-11B-Instruct-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-11B-Instruct-v0.1-gguf/blob/main/Llama-3-11B-Instruct-v0.1.Q3_K_M.gguf) | Q3_K_M | 5.3GB |
| [Llama-3-11B-Instruct-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-11B-Instruct-v0.1-gguf/blob/main/Llama-3-11B-Instruct-v0.1.Q3_K_L.gguf) | Q3_K_L | 5.73GB |
| [Llama-3-11B-Instruct-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-11B-Instruct-v0.1-gguf/blob/main/Llama-3-11B-Instruct-v0.1.IQ4_XS.gguf) | IQ4_XS | 5.93GB |
| [Llama-3-11B-Instruct-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-11B-Instruct-v0.1-gguf/blob/main/Llama-3-11B-Instruct-v0.1.Q4_0.gguf) | Q4_0 | 6.17GB |
| [Llama-3-11B-Instruct-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-11B-Instruct-v0.1-gguf/blob/main/Llama-3-11B-Instruct-v0.1.IQ4_NL.gguf) | IQ4_NL | 6.23GB |
| [Llama-3-11B-Instruct-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-11B-Instruct-v0.1-gguf/blob/main/Llama-3-11B-Instruct-v0.1.Q4_K_S.gguf) | Q4_K_S | 6.21GB |
| [Llama-3-11B-Instruct-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-11B-Instruct-v0.1-gguf/blob/main/Llama-3-11B-Instruct-v0.1.Q4_K.gguf) | Q4_K | 6.53GB |
| [Llama-3-11B-Instruct-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-11B-Instruct-v0.1-gguf/blob/main/Llama-3-11B-Instruct-v0.1.Q4_K_M.gguf) | Q4_K_M | 6.53GB |
| [Llama-3-11B-Instruct-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-11B-Instruct-v0.1-gguf/blob/main/Llama-3-11B-Instruct-v0.1.Q4_1.gguf) | Q4_1 | 6.81GB |
| [Llama-3-11B-Instruct-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-11B-Instruct-v0.1-gguf/blob/main/Llama-3-11B-Instruct-v0.1.Q5_0.gguf) | Q5_0 | 7.45GB |
| [Llama-3-11B-Instruct-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-11B-Instruct-v0.1-gguf/blob/main/Llama-3-11B-Instruct-v0.1.Q5_K_S.gguf) | Q5_K_S | 7.45GB |
| [Llama-3-11B-Instruct-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-11B-Instruct-v0.1-gguf/blob/main/Llama-3-11B-Instruct-v0.1.Q5_K.gguf) | Q5_K | 7.64GB |
| [Llama-3-11B-Instruct-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-11B-Instruct-v0.1-gguf/blob/main/Llama-3-11B-Instruct-v0.1.Q5_K_M.gguf) | Q5_K_M | 7.64GB |
| [Llama-3-11B-Instruct-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-11B-Instruct-v0.1-gguf/blob/main/Llama-3-11B-Instruct-v0.1.Q5_1.gguf) | Q5_1 | 8.09GB |
| [Llama-3-11B-Instruct-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-11B-Instruct-v0.1-gguf/blob/main/Llama-3-11B-Instruct-v0.1.Q6_K.gguf) | Q6_K | 8.81GB |
| [Llama-3-11B-Instruct-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-11B-Instruct-v0.1-gguf/blob/main/Llama-3-11B-Instruct-v0.1.Q8_0.gguf) | Q8_0 | 11.41GB |
Original model description:
---
base_model: "meta-llama/Meta-Llama-3-8B-Instruct"
library_name: transformers
tags:
- mergekit
- merge
- facebook
- meta
- pytorch
- llama
- llama-3
language:
- en
pipeline_tag: text-generation
license: other
license_name: llama3
license_link: LICENSE
inference: false
model_creator: MaziyarPanahi
model_name: Llama-3-11B-Instruct-v0.1
quantized_by: MaziyarPanahi
---
<img src="./llama-3-merges.webp" alt="Goku 8x22B v0.1 Logo" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Llama-3-11B-Instruct-v0.1
This model is a self-merge of the `meta-llama/Meta-Llama-3-8B-Instruct` model.
# How to use
You can use this model by using `MaziyarPanahi/Llama-3-11B-Instruct-v0.1` as the model name in Hugging Face's
transformers library.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
from transformers import pipeline
import torch
model_id = "MaziyarPanahi/Llama-3-11B-Instruct-v0.1"
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
device_map="auto",
trust_remote_code=True,
# attn_implementation="flash_attention_2"
)
tokenizer = AutoTokenizer.from_pretrained(
model_id,
trust_remote_code=True
)
streamer = TextStreamer(tokenizer)
pipeline = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
model_kwargs={"torch_dtype": torch.bfloat16},
streamer=streamer
)
# Then you can use the pipeline to generate text.
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.95,
)
print(outputs[0]["generated_text"][len(prompt):])
```
## Prompt template
```text
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>
what's 25-4*2+3<|eot_id|><|start_header_id|>assistant<|end_header_id|>
To evaluate this expression, we need to follow the order of operations (PEMDAS):
1. First, multiply 4 and 2: 4*2 = 8
2. Then, subtract 8 from 25: 25 - 8 = 17
3. Finally, add 3: 17 + 3 = 20
So, 25-4*2+3 = 20!<|eot_id|>
```
|
InfiniteEcho/Taxi-v3 | InfiniteEcho | "2024-07-02T16:40:49Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-07-02T16:40:46Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.78
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook;
# it downloads and unpickles the saved model dict.
model = load_from_hub(repo_id="InfiniteEcho/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
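To reproduce a mean-reward estimate like the one reported above, a short evaluation sketch (assuming, as in the course notebook, that the pickled dict stores the Q-table under `"qtable"`):
```python
import numpy as np

episode_rewards = []
for _ in range(100):
    state, info = env.reset()
    terminated = truncated = False
    total = 0.0
    while not (terminated or truncated):
        action = int(np.argmax(model["qtable"][state]))  # greedy policy
        state, reward, terminated, truncated, info = env.step(action)
        total += reward
    episode_rewards.append(total)
print(f"{np.mean(episode_rewards):.2f} +/- {np.std(episode_rewards):.2f}")
```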
|
orangeX/TEST | orangeX | "2024-07-02T16:41:12Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-07-02T16:41:12Z" | ---
license: openrail
---
|
KUD-genai/TAIDE_healthedu_v6_gguf | KUD-genai | "2024-07-02T16:58:16Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T16:41:32Z" | Entry not found |
manbeast3b/ZZZZZZZZdriver136c | manbeast3b | "2024-07-02T16:49:59Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T16:41:41Z" | Entry not found |
gisang-lee/mistral-7b-qlora-arc-wandb-test-arc-challenge-train-val | gisang-lee | "2024-07-02T16:52:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-07-02T16:41:50Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hungkvbn/naschainhk6 | hungkvbn | "2024-07-02T17:40:39Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T16:41:56Z" | Entry not found |
dyada/mistral-Multiclass-company-industry-V0 | dyada | "2024-07-02T19:15:17Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-02T16:42:16Z" | Entry not found |
mradermacher/Nethena-13B-GGUF | mradermacher | "2024-07-02T17:31:51Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:NeverSleep/Nethena-13B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T16:42:17Z" | ---
base_model: NeverSleep/Nethena-13B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NeverSleep/Nethena-13B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
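For a single-file quant, a minimal download sketch with `huggingface_hub` (the quant picked here is just an example from the table below):
```python
from huggingface_hub import hf_hub_download

# Grab one quant file from this repo; any row of the table below works.
path = hf_hub_download(
    repo_id="mradermacher/Nethena-13B-GGUF",
    filename="Nethena-13B.Q4_K_M.gguf",
)
print(path)  # hand this path to llama.cpp, e.g. `llama-cli -m <path>`
```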
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Nethena-13B-GGUF/resolve/main/Nethena-13B.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Nethena-13B-GGUF/resolve/main/Nethena-13B.IQ3_XS.gguf) | IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Nethena-13B-GGUF/resolve/main/Nethena-13B.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Nethena-13B-GGUF/resolve/main/Nethena-13B.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Nethena-13B-GGUF/resolve/main/Nethena-13B.IQ3_M.gguf) | IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Nethena-13B-GGUF/resolve/main/Nethena-13B.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nethena-13B-GGUF/resolve/main/Nethena-13B.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Nethena-13B-GGUF/resolve/main/Nethena-13B.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Nethena-13B-GGUF/resolve/main/Nethena-13B.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nethena-13B-GGUF/resolve/main/Nethena-13B.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nethena-13B-GGUF/resolve/main/Nethena-13B.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Nethena-13B-GGUF/resolve/main/Nethena-13B.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/Nethena-13B-GGUF/resolve/main/Nethena-13B.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Nethena-13B-GGUF/resolve/main/Nethena-13B.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-gguf | RichardErkhov | "2024-07-02T16:52:26Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T16:42:52Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TinyLlama-1.1B-2.5T-chat - GGUF
- Model creator: https://huggingface.co/AIGym/
- Original model: https://huggingface.co/AIGym/TinyLlama-1.1B-2.5T-chat/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TinyLlama-1.1B-2.5T-chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-gguf/blob/main/TinyLlama-1.1B-2.5T-chat.Q2_K.gguf) | Q2_K | 0.4GB |
| [TinyLlama-1.1B-2.5T-chat.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-gguf/blob/main/TinyLlama-1.1B-2.5T-chat.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [TinyLlama-1.1B-2.5T-chat.IQ3_S.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-gguf/blob/main/TinyLlama-1.1B-2.5T-chat.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [TinyLlama-1.1B-2.5T-chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-gguf/blob/main/TinyLlama-1.1B-2.5T-chat.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [TinyLlama-1.1B-2.5T-chat.IQ3_M.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-gguf/blob/main/TinyLlama-1.1B-2.5T-chat.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [TinyLlama-1.1B-2.5T-chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-gguf/blob/main/TinyLlama-1.1B-2.5T-chat.Q3_K.gguf) | Q3_K | 0.51GB |
| [TinyLlama-1.1B-2.5T-chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-gguf/blob/main/TinyLlama-1.1B-2.5T-chat.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [TinyLlama-1.1B-2.5T-chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-gguf/blob/main/TinyLlama-1.1B-2.5T-chat.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [TinyLlama-1.1B-2.5T-chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-gguf/blob/main/TinyLlama-1.1B-2.5T-chat.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [TinyLlama-1.1B-2.5T-chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-gguf/blob/main/TinyLlama-1.1B-2.5T-chat.Q4_0.gguf) | Q4_0 | 0.59GB |
| [TinyLlama-1.1B-2.5T-chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-gguf/blob/main/TinyLlama-1.1B-2.5T-chat.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [TinyLlama-1.1B-2.5T-chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-gguf/blob/main/TinyLlama-1.1B-2.5T-chat.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [TinyLlama-1.1B-2.5T-chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-gguf/blob/main/TinyLlama-1.1B-2.5T-chat.Q4_K.gguf) | Q4_K | 0.62GB |
| [TinyLlama-1.1B-2.5T-chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-gguf/blob/main/TinyLlama-1.1B-2.5T-chat.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [TinyLlama-1.1B-2.5T-chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-gguf/blob/main/TinyLlama-1.1B-2.5T-chat.Q4_1.gguf) | Q4_1 | 0.65GB |
| [TinyLlama-1.1B-2.5T-chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-gguf/blob/main/TinyLlama-1.1B-2.5T-chat.Q5_0.gguf) | Q5_0 | 0.71GB |
| [TinyLlama-1.1B-2.5T-chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-gguf/blob/main/TinyLlama-1.1B-2.5T-chat.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [TinyLlama-1.1B-2.5T-chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-gguf/blob/main/TinyLlama-1.1B-2.5T-chat.Q5_K.gguf) | Q5_K | 0.73GB |
| [TinyLlama-1.1B-2.5T-chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-gguf/blob/main/TinyLlama-1.1B-2.5T-chat.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [TinyLlama-1.1B-2.5T-chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-gguf/blob/main/TinyLlama-1.1B-2.5T-chat.Q5_1.gguf) | Q5_1 | 0.77GB |
| [TinyLlama-1.1B-2.5T-chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-gguf/blob/main/TinyLlama-1.1B-2.5T-chat.Q6_K.gguf) | Q6_K | 0.84GB |
| [TinyLlama-1.1B-2.5T-chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-gguf/blob/main/TinyLlama-1.1B-2.5T-chat.Q8_0.gguf) | Q8_0 | 1.09GB |
Original model description:
---
license: apache-2.0
tags:
- finetuned
pipeline_tag: text-generation
model-index:
- name: TinyLlama-1.1B-2.5T-chat
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 34.47
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/TinyLlama-1.1B-2.5T-chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 59.71
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/TinyLlama-1.1B-2.5T-chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 26.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/TinyLlama-1.1B-2.5T-chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 38.8
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/TinyLlama-1.1B-2.5T-chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.01
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/TinyLlama-1.1B-2.5T-chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 1.14
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/TinyLlama-1.1B-2.5T-chat
name: Open LLM Leaderboard
---
# TinyLlama-1.1B-2.5T-chat
It was created by starting from the TinyLlama-1.1B 2.5T checkpoint and training it on a llama-style dataset. We have attached the wandb report in PDF form so you can view the training run at a glance.
# Reason
This model was fine-tuned to follow directions and is a stepping stone to further training.
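A minimal inference sketch with `transformers` (the plain-text prompt is an assumption, since the card does not document a chat template):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AIGym/TinyLlama-1.1B-2.5T-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Plain-text prompt; swap in the model's chat template if one is defined.
inputs = tokenizer("Explain what a stepping-stone model is.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```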
# Referrals
Run Pod - This is what I use to train the models on Hugging Face. If you use it, we both get free credits. - <a href="https://runpod.io?ref=kilq83n1" target="_blank" style="color: #3498db; text-decoration: none; font-weight: bold;">Visit Runpod's Website!</a>
Paypal - If you want to leave a tip, it is appreciated. - <a href="https://paypal.me/OpenSourceTraining" target="_blank" style="color: #3498db; text-decoration: none; font-weight: bold;">Visit My Paypal!</a>
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_AIGym__TinyLlama-1.1B-2.5T-chat)
| Metric |Value|
|---------------------------------|----:|
|Avg. |36.93|
|AI2 Reasoning Challenge (25-Shot)|34.47|
|HellaSwag (10-Shot) |59.71|
|MMLU (5-Shot) |26.45|
|TruthfulQA (0-shot) |38.80|
|Winogrande (5-shot) |61.01|
|GSM8k (5-shot) | 1.14|
|
ProElectro07/subbb | ProElectro07 | "2024-07-02T16:43:56Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T16:43:34Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Nikhilbk/ppo-LunarLander-v2 | Nikhilbk | "2024-07-02T16:44:04Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-07-02T16:43:45Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.31 +/- 16.53
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; adjust it to the actual file in this repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (the filename is assumed, not confirmed).
checkpoint = load_from_hub(
    repo_id="Nikhilbk/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
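To sanity-check the reported mean reward, an evaluation sketch (assumes a stable-baselines3 version built on `gymnasium`, with the Box2D extra installed):
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# LunarLander-v2 requires Box2D: pip install "gymnasium[box2d]"
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```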
|
jaich/identify_countries | jaich | "2024-07-02T16:44:19Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-07-02T16:44:04Z" | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: identify_countries
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.0
---
# identify_countries
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
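Since this is a standard ViT image-classification checkpoint, inference should work with the `transformers` pipeline; a sketch (the image path is a placeholder):
```python
from transformers import pipeline

# Load the classifier from the Hub.
classifier = pipeline("image-classification", model="jaich/identify_countries")

# "photo.jpg" is a placeholder path to a local image.
predictions = classifier("photo.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```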
## Example Images
#### german
![german](images/german.jpg)
#### india
![india](images/india.jpg)
#### italy
![italy](images/italy.jpg)
#### london
![london](images/london.jpg)
#### paris
![paris](images/paris.jpg) |
Ammartatox/sentientscribe | Ammartatox | "2024-07-02T16:54:11Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T16:44:21Z" | ---
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** Ammartatox
- **License:** apache-2.0
- **Finetuned from model:** NousResearch/Nous-Hermes-2-Mistral-7B-DPO
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
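No usage snippet is provided; as a sketch, the checkpoint should load with the standard `transformers` causal-LM API (an untested assumption for this particular repo):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Ammartatox/sentientscribe")
model = AutoModelForCausalLM.from_pretrained("Ammartatox/sentientscribe")

# Placeholder prompt for illustration only.
prompt = "Write a short story about a lighthouse keeper."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```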
|
twright8/setfit-oversample-labels-lobbying | twright8 | "2024-07-02T20:33:42Z" | 0 | 0 | setfit | [
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"model-index",
"region:us"
] | text-classification | "2024-07-02T16:44:23Z" | ---
library_name: setfit
metrics:
- f1
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: To make introductions between Camelot's Chairman and the Cabinet Secretary.
We discussed the operation of the UK National Lottery and how to maximise returns
to National Lottery Good Causes as well as our plans to celebrate the 25th birthday
of The National Lottery.
- text: Discussion on crime
- text: To discuss Northern Powerhouse Rail and HS2
- text: To discuss food security
- text: Electricity market
inference: false
model-index:
- name: SetFit
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: f1
value: 0.9056603773584904
name: F1
- type: accuracy
value: 0.9572649572649573
name: Accuracy
---
# SetFit
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. A [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
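As an illustration of that two-stage recipe, a minimal `setfit` training sketch (the base checkpoint, the tiny dataset built from this card's widget examples, and the label ids are all placeholders, not this model's actual training setup):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder few-shot data; the real training set and label scheme are not documented here.
train_dataset = Dataset.from_dict({
    "text": [
        "Discussion on crime",
        "To discuss food security",
        "Electricity market",
        "To discuss Northern Powerhouse Rail and HS2",
    ],
    "label": [0, 1, 2, 3],
})

# Placeholder base checkpoint; the actual Sentence Transformer is unknown.
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-mpnet-base-v2",
    use_differentiable_head=True,     # gives a SetFitHead, as this card reports
    head_params={"out_features": 4},  # the card reports 4 classes
)

args = TrainingArguments(batch_size=16, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()  # step 1: contrastive body fine-tuning; step 2: head training
```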
## Model Details
### Model Description
- **Model Type:** SetFit
<!-- - **Sentence Transformer:** [Unknown](https://huggingface.co/unknown) -->
- **Classification head:** a [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 4 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Evaluation
### Metrics
| Label | F1 | Accuracy |
|:--------|:-------|:---------|
| **all** | 0.9057 | 0.9573 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("twright8/setfit-oversample-labels-lobbying")
# Run inference
preds = model("Electricity market")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 2 | 21.5644 | 153 |
### Training Hyperparameters
- batch_size: (16, 2)
- num_epochs: (6, 9)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (7.928034854554858e-06, 2.7001088851580374e-05)
- head_learning_rate: 0.009321171293151879
- loss: CoSENTLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: True
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
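These values map directly onto `setfit.TrainingArguments`; a sketch reconstructing the same configuration (assuming the SetFit 1.0.x API; `distance_metric` is omitted since it only applies to triplet-style losses):
```python
from setfit import TrainingArguments
from sentence_transformers.losses import CoSENTLoss

# Reconstruction of the hyperparameters listed above.
# Tuples are (embedding phase, classifier phase).
args = TrainingArguments(
    batch_size=(16, 2),
    num_epochs=(6, 9),
    sampling_strategy="oversampling",
    body_learning_rate=(7.928034854554858e-06, 2.7001088851580374e-05),
    head_learning_rate=0.009321171293151879,
    loss=CoSENTLoss,
    margin=0.25,  # only used by margin-based losses
    end_to_end=True,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    load_best_model_at_end=True,
)
```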
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:--------:|:-------------:|:---------------:|
| 0.0018 | 1 | 8.669 | - |
| 0.0880 | 50 | 8.6617 | - |
| 0.1761 | 100 | 12.5549 | - |
| 0.2641 | 150 | 3.1895 | - |
| 0.3521 | 200 | 16.3181 | - |
| 0.4401 | 250 | 0.7513 | - |
| 0.5282 | 300 | 4.6653 | - |
| 0.0018 | 1 | 0.0059 | - |
| 0.0880 | 50 | 3.4564 | - |
| 0.1761 | 100 | 0.5523 | - |
| 0.2641 | 150 | 0.2372 | - |
| 0.3521 | 200 | 4.288 | - |
| 0.4401 | 250 | 0.0027 | - |
| 0.5282 | 300 | 0.0002 | - |
| 0.6162 | 350 | 0.0002 | - |
| 0.7042 | 400 | 0.0001 | - |
| 0.7923 | 450 | 0.0015 | - |
| 0.8803 | 500 | 3.5596 | - |
| 0.9683 | 550 | 0.0 | - |
| 1.0 | 568 | - | 10.2261 |
| 1.0563 | 600 | 0.0 | - |
| 1.1444 | 650 | 0.0011 | - |
| 1.2324 | 700 | 0.0013 | - |
| 1.3204 | 750 | 0.0037 | - |
| 1.4085 | 800 | 0.0013 | - |
| 1.4965 | 850 | 0.0002 | - |
| 1.5845 | 900 | 0.0 | - |
| 1.6725 | 950 | 0.0 | - |
| 1.7606 | 1000 | 0.0001 | - |
| 1.8486 | 1050 | 0.0001 | - |
| 1.9366 | 1100 | 0.0001 | - |
| 2.0 | 1136 | - | 8.4908 |
| 2.0246 | 1150 | 0.0001 | - |
| 2.1127 | 1200 | 0.0 | - |
| 2.2007 | 1250 | 0.0005 | - |
| 2.2887 | 1300 | 0.0004 | - |
| 2.3768 | 1350 | 0.0 | - |
| 2.4648 | 1400 | 0.0009 | - |
| 2.5528 | 1450 | 0.0 | - |
| 2.6408 | 1500 | 0.0 | - |
| 2.7289 | 1550 | 0.0 | - |
| 2.8169 | 1600 | 0.0 | - |
| 2.9049 | 1650 | 0.0001 | - |
| 2.9930 | 1700 | 0.0003 | - |
| 3.0 | 1704 | - | 8.5594 |
| 3.0810 | 1750 | 0.0001 | - |
| 3.1690 | 1800 | 0.0 | - |
| 3.2570 | 1850 | 0.0002 | - |
| 3.3451 | 1900 | 0.0001 | - |
| 3.4331 | 1950 | 0.0 | - |
| 3.5211 | 2000 | 0.0 | - |
| 3.6092 | 2050 | 0.0 | - |
| 3.6972 | 2100 | 0.0 | - |
| 3.7852 | 2150 | 0.0 | - |
| 3.8732 | 2200 | 0.0002 | - |
| 3.9613 | 2250 | 0.0001 | - |
| **4.0** | **2272** | **-** | **8.4573** |
| 4.0493 | 2300 | 0.0 | - |
| 4.1373 | 2350 | 0.0 | - |
| 4.2254 | 2400 | 0.0002 | - |
| 4.3134 | 2450 | 0.0 | - |
| 4.4014 | 2500 | 0.0003 | - |
| 4.4894 | 2550 | 0.0001 | - |
| 4.5775 | 2600 | 0.0001 | - |
| 4.6655 | 2650 | 0.0001 | - |
| 4.7535 | 2700 | 0.0001 | - |
| 4.8415 | 2750 | 0.0001 | - |
| 4.9296 | 2800 | 0.0012 | - |
| 5.0 | 2840 | - | 8.6305 |
| 5.0176 | 2850 | 0.0009 | - |
| 5.1056 | 2900 | 0.0 | - |
| 5.1937 | 2950 | 0.0001 | - |
| 5.2817 | 3000 | 0.0 | - |
| 5.3697 | 3050 | 0.0 | - |
| 5.4577 | 3100 | 0.0001 | - |
| 5.5458 | 3150 | 0.0007 | - |
| 5.6338 | 3200 | 0.0002 | - |
| 5.7218 | 3250 | 0.0 | - |
| 5.8099 | 3300 | 0.0001 | - |
| 5.8979 | 3350 | 0.0002 | - |
| 5.9859 | 3400 | 0.0 | - |
| 6.0 | 3408 | - | 8.9528 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 3.0.1
- Transformers: 4.39.0
- PyTorch: 2.3.1+cu118
- Datasets: 2.20.0
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |