modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
youknownothing/Fluently-v4 | youknownothing | "2024-07-02T17:46:13Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"sd1.5",
"fluently",
"text-to-image",
"base_model:runwayml/stable-diffusion-v1-5",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-07-02T17:46:13Z" | ---
license: other
license_name: fluently-license
license_link: https://huggingface.co/spaces/fluently/License
library_name: diffusers
pipeline_tag: text-to-image
base_model: runwayml/stable-diffusion-v1-5
tags:
- safetensors
- stable-diffusion
- sd1.5
- fluently
inference:
parameters:
num_inference_steps: 30
guidance_scale: 5.5
negative_prompt: >-
(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong
anatomy, extra limb, missing limb, floating limbs, (mutated hands and
fingers:1.4), disconnected limbs, mutation, mutated, ugly, disgusting,
blurry, amputation
---
# **Fluently** V4.0 (Global Release) - one model for all tasks ([Fluently XL](https://huggingface.co/fluently/Fluently-XL-v1))
![preview](images/preview.png)
I would like to introduce my model - **Fluently**! This model was made by merging: I combined many checkpoints and LoRAs.
## About this model
In a nutshell, I took a few checkpoints and a bunch of LoRAs and merged them with the SuperMerger extension for AUTOMATIC1111 (available in its extensions list). Many factors were taken into account, such as eye quality, correct anatomy, and reducing the prompt needed for a good result.
### What makes my model different from others
- Correct anatomy: my model has the correct anatomy
- Face details: my model generates beautifully detailed faces and eyes even without AfterDetailer
- Artistic: my model activates beautiful artistry when it needs it
- Inpainting: my model is pretty good at inpainting/outpainting without needing a specially designed inpainting model
- Anime & Comic Book style: my model can draw great Anime and Comic Book art
### Merge details
Below is a heavily truncated list of the models and LoRAs that went into this merge:
*Models*:
- Juggernaut Final
- Deliberate V2
- RPG
- Realistic Vision V1.3
- DreamDrop V1
and more models...
*Loras*:
- LowRA V2
- Intricate Details
- Detail Slider
- Xeno Detailer
and more loras...
## How to use this model
### Quick Start
1. Install this model and start AUTOMATIC1111
2. Select this checkpoint
3. Generate images!
#### Optimal Parameters
- Steps: 20-30
- Sampler: DPM++ 2M Karras/Euler a
- CFG Scale: 5-7
- CLIP-skip: 1
- Negative Prompt: practically unnecessary
#### Additions for this model
style.csv for this model: [click](https://drive.google.com/file/d/1KZrWX66A2byBAdtcVPkBTMU0g00Hm6Ta/view?usp=sharing)
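#### Diffusers usage
A minimal 🤗 Diffusers sketch, not from the original card, for use outside AUTOMATIC1111; the steps and guidance scale follow the inference metadata above:
```python
# Hedged example: load the checkpoint with the StableDiffusionPipeline named in the metadata.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "youknownothing/Fluently-v4", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a cozy cabin in a snowy forest, golden hour",  # illustrative prompt
    num_inference_steps=30,
    guidance_scale=5.5,
).images[0]
image.save("result.png")
```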
|
fecia/cates_phi3_1 | fecia | "2024-07-02T17:51:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T17:46:45Z" | ---
license: apache-2.0
---
# cates_phi3_1
cates_phi3_1 is an SFT fine-tuned version of microsoft/Phi-3-mini-4k-instruct using a custom training dataset.
This model was made with [Phinetune]()
## Process
- Learning Rate: 1.41e-05
- Maximum Sequence Length: 2048
- Dataset: fecia/cates
- Split: train
## 💻 Usage
```python
!pip install -qU transformers

from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "fecia/cates_phi3_1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Phi-3 repos ship custom modeling code, so trust_remote_code is needed.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Example prompt
prompt = "Your example prompt here"

# Generate a response
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
outputs = generator(prompt, max_length=50, num_return_sequences=1)
print(outputs[0]["generated_text"])
``` |
youknownothing/Fluently-XL-Final | youknownothing | "2024-07-02T17:46:58Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"sdxl",
"fluetnly-xl",
"fluently",
"trained",
"text-to-image",
"dataset:ehristoforu/midjourney-images",
"dataset:ehristoforu/dalle-3-images",
"dataset:ehristoforu/fav_images",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-07-02T17:46:57Z" | ---
license: other
license_name: fluently-license
license_link: https://huggingface.co/spaces/fluently/License
extra_gated_prompt: >-
By clicking "Agree", you agree to the [License
Agreement](https://huggingface.co/spaces/fluently/License/blob/main/LICENSE.md)
extra_gated_fields:
Name: text
Email: text
Country: country
Who you are?:
type: select
options:
- 'Researcher'
- 'Student'
- 'Teacher'
- 'Model creator'
- 'Non-profit company'
- 'Commercial company'
datasets:
- ehristoforu/midjourney-images
- ehristoforu/dalle-3-images
- ehristoforu/fav_images
library_name: diffusers
pipeline_tag: text-to-image
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- safetensors
- stable-diffusion
- sdxl
- fluetnly-xl
- fluently
- trained
inference:
parameters:
num_inference_steps: 25
guidance_scale: 5
negative_prompt: "(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation"
---
# **Fluently XL** FINAL - the best XL-model
![preview](images/preview.png)
*This is the **final release**. Improved overall aesthetics, improved lighting and more.*
Introducing Fluently XL. You are probably ready to argue with the model's tagline, “the best XL-model”, but let me prove to you why it is true.
## About this model
The model was obtained through training on *expensive graphics accelerators*. A lot of work went into it, and below we show why this XL model is better than others.
### Features
- Correct anatomy
- Art and realism in one
- Controlling contrast
- Great nature
- Great faces without AfterDetailer
### More info
Our model is better than others because we do not merge but **train**. At first the model may not seem very good, but if you are a real professional you will like it.
## Using
Optimal parameters in Automatic1111/ComfyUI:
- Sampling steps: 20-35
- Sampler method: Euler a/Euler
- CFG Scale: 4-6.5
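A minimal 🤗 Diffusers sketch (not from the original card; steps and guidance scale follow the inference metadata above):
```python
# Hedged example: load the SDXL checkpoint with the pipeline named in the metadata.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "youknownothing/Fluently-XL-Final", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "portrait photo of an astronaut in a sunflower field",  # illustrative prompt
    num_inference_steps=25,
    guidance_scale=5.0,
).images[0]
image.save("result.png")
```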
## End
Let's remove models that copy each other from the top and promote one that is actually developing. Thank you! |
youknownothing/Fluently-XL-Final-onnx | youknownothing | "2024-07-02T17:47:06Z" | 0 | 0 | null | [
"onnx",
"text-to-image",
"region:us"
] | text-to-image | "2024-07-02T17:47:06Z" | ---
pipeline_tag: text-to-image
---
# Fluently XL Final - Onnx Olive DirectML Optimized
## Original Model
https://huggingface.co/fluently/Fluently-XL-Final
## C# Inference Demo
https://github.com/TensorStack-AI/OnnxStack
```csharp
// Create Pipeline
var pipeline = StableDiffusionXLPipeline.CreatePipeline("D:\\Models\\Fluently-XL-Final-onnx");
// Prompt
var promptOptions = new PromptOptions
{
Prompt = "Craft an image of a nurse taking care of a patient in a hospital room, with medical equipment and a warm smile."
};
// Run pipeline
var result = await pipeline.GenerateImageAsync(promptOptions);
// Save Image Result
await result.SaveAsync("Result.png");
```
## Inference Result
![Intro Image](Sample.png) |
qsdcfqsdfcxqfqs/Sunak-to-emphasise-importance-of-voting-in-final-stretch-plea-to-5d-updated | qsdcfqsdfcxqfqs | "2024-07-02T17:48:02Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T17:48:02Z" | Entry not found |
qsdcfqsdfcxqfqs/11th-Airborne-gets-first-new-commander-since-Armys-Arctic-command-created-2-years-ago-1e-updated | qsdcfqsdfcxqfqs | "2024-07-02T17:48:02Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T17:48:02Z" | Entry not found |
CodeZero123/llama3-8b-bnb-4bit-niv-ai-instruct-16bit | CodeZero123 | "2024-07-02T17:54:06Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T17:48:22Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** CodeZero123
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
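A minimal loading sketch (assumed usage, not from the original card), treating the repo as merged 16-bit weights loadable with plain `transformers`:
```python
# Hedged example: load the merged 16-bit checkpoint with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CodeZero123/llama3-8b-bnb-4bit-niv-ai-instruct-16bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```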
|
Solomonik/flan-t5-base-vin-validation | Solomonik | "2024-07-03T01:24:03Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-07-02T17:48:36Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kheopss/kheops_fr_en_epoch1_4bits_GPTQ_V2 | kheopss | "2024-07-02T17:53:14Z" | 0 | 0 | transformers | [
"transformers",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-07-02T17:51:03Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Weni/ZeroShot-Agents-Llama3-4.0.43-ORPO-AWQ | Weni | "2024-07-02T20:51:47Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] | text-generation | "2024-07-02T17:52:44Z" | Entry not found |
REPLACE/separated | REPLACE | "2024-07-02T19:01:58Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T17:54:26Z" | # SEPARATED
---
## Introduction
SEPARATED is a 2D platformer game where you can talk to NPCs. Most of the game is not yet implemented.
## Table of Contents
- [SEPARATED](#separated)
- [Introduction](#introduction)
- [Table of Contents](#table-of-contents)
- [Player Inputs ∆](#player-inputs-)
- [Debugging Keyboard Shortcuts](#debugging-keyboard-shortcuts)
- [TODO](#todo)
- [`filesystem_watcher` and `asset_processor`](#filesystem_watcher-and-asset_processor)
- [Rust Things 🦀](#rust-things-)
- [Run in Wolf Mode (Debug)](#run-in-wolf-mode-debug)
- [Pedantic linting](#pedantic-linting)
- [Linting on all packages, treating warnings as errors](#linting-on-all-packages-treating-warnings-as-errors)
- [Format code](#format-code)
- [Test without default features](#test-without-default-features)
- [Test with only the `bevy_ui` features](#test-with-only-the-bevy_ui-features)
- [Test with all features enabled](#test-with-all-features-enabled)
- [Build with all features enabled on nightly](#build-with-all-features-enabled-on-nightly)
- [Generate documentation with all features enabled](#generate-documentation-with-all-features-enabled)
- [`seldom_state` + `input_manager` Example](#seldom_state--input_manager-example)
## Player Inputs ∆
| Input | KeyCode | Gamepad Button/Axis |
| :----------- | :-----------------------: | :-------------------------: |
| **Run** | **Shift** | Xbox: **X** PS5: **Square** |
| **Interact** | **E** | Xbox: **B** PS5: **◯** |
| **Attack1** | **Q** | Xbox/PS5: **L1** |
| **Jump** | **Space** | Xbox: **A** PS5: **╳** |
| **Move** | **WASD** + **Arrow Keys** | **Any Axis + D-Pad** |
## Debugging Keyboard Shortcuts
| Action | KeyCode |
| :----------------------------- | :-----: |
| Toggle Physics Wireframes | F9 |
| StateInspector (**GameState**) | F10 |
| WorldInspector | F11 |
## TODO
---
- **Use WyRand instead of `thread_rng()`**
```rust
use bevy::prelude::*;
use bevy_rand::prelude::{GlobalEntropy, ForkableRng};
use bevy_rand::WyRand;

// Read a value from the global WyRand entropy source.
fn print_random_value(mut rng: ResMut<GlobalEntropy<WyRand>>) {
    println!("Random value: {}", rng.next_u32());
}

#[derive(Component)]
struct Source;

// Fork the global RNG so this entity carries its own deterministic stream.
fn setup_source(mut commands: Commands, mut global: ResMut<GlobalEntropy<WyRand>>) {
    commands.spawn((
        Source,
        global.fork_rng(),
    ));
}
```
---
```rust
// Jump-hang pseudocode: ease gravity near the peak of the jump.
if jumping || falling {
    if velocity.y.abs() < jump_hang_time_threshold {
        // Increase acceleration for this duration also.
        // Reduce gravity.
    }
}
// If the player is moving downwards..
if velocity.y < 0.0 {
    // Increase gravity while falling.
    gravity_scale *= fall_gravity_multiplier;
    // Cap maximum fall speed, so when falling over large distances,
    // we don't accelerate to insanely high speeds.
}
```
- **Localization**
- ⚠️ Started work by integrating `bevy_device_lang`. Still requires a proper system that saves this value and lets the player change it in the game menu, plus starting work on localization itself and on saving and loading settings.
- **`bevy_asepritesheet` + `bevy_ecs_ldtk` integration.**
- **Patrol**
- Flip sprite when turning around!
- **Movement Improvements**
- Movement animations.
- Movement particle effects.
- Coyote (Grace) Time after falling off a ledge (a minimal sketch follows this list).
- Maybe needs a raycast in front of the player? Timer needs to start before falling off a ledge.
- **Jump Improvements**
- Jumping animations.
- Jumping particle effects.
- Wall Jumping
- ~~Prevent player movement for a short duration during the wall jump.~~ Reduce run force? Maybe a lerp between the wall jump speed and running speed?
- Air Time
- Jump Height
- Increase the player's jump height the longer the jump button is being held down.
- Clamp maximum falling speed.
- Coyote Time while jumping and pressing the jump button.
- There is already some check for being in the air we just need the input part I think.
- Bonus Air Time
- Peak Control
- Fast Fall
- Increase Player's falling speed after the peak of their jump by adjusting gravity.
- **Game Feel Improvements**
This is kinda broad but always iterate over every small mechanic towards more fun.
- **AI Stuff** ⚠️ Started work
- Pass player input(s) to ai-brain so it can use it for prediction.
- Basic Timer with Action Scheduling
- Thirst ✅
- Fatigue ⚠️
- **Pathfinding** ⚠️ Started work
- Use something to copy `dxil.dll` and `dxcompiler.dll` to Windows builds.
- **YarnSpinner**
- Begin YarnSpinner integration ✅
- YarnSpinner+LDTK integration ⚠️ Started work
- **UI**
- sickle_ui
- labels ✅
- keycap/gamepad button switching ⚠️
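A minimal Bevy sketch of the coyote-time idea from the jump TODOs above (component and system names are hypothetical, not the project's actual code):
```rust
use bevy::prelude::*;

// Hypothetical components for illustration only.
#[derive(Component)]
struct Grounded(bool);

#[derive(Component)]
struct CoyoteTimer(Timer); // e.g. Timer::from_seconds(0.1, TimerMode::Once)

// Keep the grace window full while grounded; let it run down in the air.
fn tick_coyote_timer(time: Res<Time>, mut query: Query<(&Grounded, &mut CoyoteTimer)>) {
    for (grounded, mut coyote) in &mut query {
        if grounded.0 {
            coyote.0.reset();
        } else {
            coyote.0.tick(time.delta());
        }
    }
}

// Jumping is allowed while grounded or shortly after walking off a ledge.
fn can_jump(grounded: &Grounded, coyote: &CoyoteTimer) -> bool {
    grounded.0 || !coyote.0.finished()
}
```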
## `filesystem_watcher` and `asset_processor`
???
## Rust Things 🦀
---
### Run in Wolf Mode (Debug)
```pwsh
cargo run --profile awoo 2>&1 | Out-String -Stream | Where-Object { $_ -notmatch "ID3D12Device::CreateCommittedResource:" -and $_ -notmatch "Live Object at" -and $_ -notmatch "LineGizmo" -and $_ -notmatch "End of Frame" -and $_ -notmatch "prepare_windows" -and $_ -notmatch "cleanup" -and $_ -notmatch "SwapChain" -and $_ -notmatch "create_view" }
```
### Pedantic linting
```bash
cargo clippy -- -W clippy::pedantic
```
### Linting on all packages, treating warnings as errors
```bash
cargo clippy --workspace --all-targets --all-features -- -D warnings
```
This command runs the `clippy` linter on all packages in the workspace, for all targets and features. The `-D warnings` option treats any warnings as errors.
### Format code
```bash
cargo fmt --all
```
This command formats the code in every package using the default formatting rules provided by `rustfmt`.
### Test without default features
```bash
cargo test --no-default-features
```
This command runs tests in the package, but disables the default features.
### Test with only the `bevy_ui` features
```bash
cargo test --no-default-features --features="bevy_ui"
```
This command runs tests with only the `bevy_ui` feature enabled.
### Test with all features enabled
```bash
cargo test --all-features
```
This command runs tests with all features enabled.
### Build with all features enabled on nightly
```bash
cargo +nightly build --all-features
```
This command builds the package with all features enabled using the nightly version of the Rust compiler. This is typically used for generating documentation on docs.rs.
### Generate documentation with all features enabled
```bash
cargo +nightly doc --all-features --no-deps
```
This command generates documentation for the package with all features enabled, without including dependencies, using the nightly version of the Rust compiler.
## `seldom_state` + `input_manager` Example
```rust
// In this game, you can move with the left and right arrow keys, and jump with space.
// `input-manager` handles the input.
use bevy::prelude::*;
use input_manager::{ axislike::VirtualAxis, prelude::* };
use seldom_state::prelude::*;
fn main() {
App::new()
.add_plugins((DefaultPlugins, InputManagerPlugin::<Action>::default(), StateMachinePlugin))
.add_systems(Startup, init)
.add_systems(Update, (walk, fall))
.run();
}
const JUMP_VELOCITY: f32 = 500.0;
fn init(mut commands: Commands, asset_server: Res<AssetServer>) {
commands.spawn(Camera2dBundle::default());
commands.spawn((
SpriteBundle {
transform: Transform::from_xyz(500.0, 0.0, 0.0),
texture: asset_server.load("player.png"),
..default()
},
// From `input-manager`
InputManagerBundle {
input_map: InputMap::default()
.insert(Action::Move, VirtualAxis::horizontal_arrow_keys())
.insert(Action::Move, SingleAxis::symmetric(GamepadAxisType::LeftStickX, 0.0))
.insert(Action::Jump, KeyCode::Space)
.insert(Action::Jump, GamepadButtonType::South)
.build(),
..default()
},
// This state machine achieves a very rigid movement system. Consider a state machine for
// whatever parts of your player controller that involve discrete states. Like the movement
// in Castlevania and Celeste, and the attacks in a fighting game.
StateMachine::default()
// Whenever the player presses jump, jump
.trans::<Grounded, _>(just_pressed(Action::Jump), Falling {
velocity: JUMP_VELOCITY,
})
// When the player hits the ground, idle
.trans::<Falling, _>(grounded, Grounded::Idle)
// When the player is grounded, set their movement direction
.trans_builder(value_unbounded(Action::Move), |_: &Grounded, value| {
Some(match value {
value if value > 0.5 => Grounded::Right,
value if value < -0.5 => Grounded::Left,
_ => Grounded::Idle,
})
}),
Grounded::Idle,
));
}
#[derive(Actionlike, Clone, Eq, Hash, PartialEq, Reflect)]
enum Action {
Move,
Jump,
}
fn grounded(In(entity): In<Entity>, fallings: Query<(&Transform, &Falling)>) -> bool {
let (transform, falling) = fallings.get(entity).unwrap();
transform.translation.y <= 0.0 && falling.velocity <= 0.0
}
#[derive(Clone, Copy, Component, Reflect)]
#[component(storage = "SparseSet")]
enum Grounded {
Left = -1,
Idle = 0,
Right = 1,
}
#[derive(Clone, Component, Reflect)]
#[component(storage = "SparseSet")]
struct Falling {
velocity: f32,
}
const PLAYER_SPEED: f32 = 200.0;
fn walk(mut groundeds: Query<(&mut Transform, &Grounded)>, time: Res<Time>) {
for (mut transform, grounded) in &mut groundeds {
transform.translation.x += (*grounded as i32 as f32) * time.delta_seconds() * PLAYER_SPEED;
}
}
const GRAVITY: f32 = -1000.0;
fn fall(mut fallings: Query<(&mut Transform, &mut Falling)>, time: Res<Time>) {
for (mut transform, mut falling) in &mut fallings {
let dt = time.delta_seconds();
falling.velocity += dt * GRAVITY;
transform.translation.y += dt * falling.velocity;
}
}
```
|
RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf | RichardErkhov | "2024-07-02T18:05:23Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T17:55:20Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
MELT-TinyLlama-1.1B-Chat-v1.0 - GGUF
- Model creator: https://huggingface.co/IBI-CAAI/
- Original model: https://huggingface.co/IBI-CAAI/MELT-TinyLlama-1.1B-Chat-v1.0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q2_K.gguf) | Q2_K | 0.4GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q3_K.gguf) | Q3_K | 0.51GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q4_0.gguf) | Q4_0 | 0.59GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q4_K.gguf) | Q4_K | 0.62GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q4_1.gguf) | Q4_1 | 0.65GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q5_0.gguf) | Q5_0 | 0.71GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q5_K.gguf) | Q5_K | 0.73GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q5_1.gguf) | Q5_1 | 0.77GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q6_K.gguf) | Q6_K | 0.84GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q8_0.gguf) | Q8_0 | 1.09GB |
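A minimal sketch of running one of these quants locally (assumed usage via `llama-cpp-python`; the file name comes from the table above):
```python
# Hedged example: load a GGUF quant from this repo with llama-cpp-python.
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(model_path="MELT-TinyLlama-1.1B-Chat-v1.0.Q4_K_M.gguf", n_ctx=2048)
output = llm("What are common symptoms of influenza?", max_tokens=128)
print(output["choices"][0]["text"])
```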
Original model description:
---
license: apache-2.0
language:
- en
library_name: transformers
---
# Model Card MELT-TinyLlama-1.1B-Chat-v1.0
The MELT-TinyLlama-1.1B-Chat-v1.0 Large Language Model (LLM) is a generative text model pre-trained and fine-tuned using publicly available medical data.
MELT-TinyLlama-1.1B-Chat-v1.0 demonstrates a 13.76% improvement over TinyLlama-1.1B-Chat-v1.0 across 3 medical benchmarks, including USMLE, Indian AIIMS, and NEET medical examination examples.
## Model Details
The Medical Education Language Transformer (MELT) models have been trained on a wide range of text, chat, Q/A, and instruction data in the medical domain.
While the model was evaluated using publicly available [USMLE](https://www.usmle.org/), Indian AIIMS, and NEET medical examination example questions, its use is intended to be more broadly applicable.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Center for Applied AI](https://caai.ai.uky.edu/)
- **Funded by:** [Institute for Biomedical Informatics](https://www.research.uky.edu/IBI)
- **Model type:** LLM
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
## Uses
MELT is intended for research purposes only. MELT models are best suited for prompts using a QA or chat format.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
MELT is intended for research purposes only and should not be used for medical advice.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
MELT was trained using publicly available collections, which likely contain biased and inaccurate information. The training and evaluation datasets have not been evaluated for content or accuracy.
## How to Get Started with the Model
Use this model like you would any TinyLlama-1.1B-Chat-v1.0 model.
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The following datasets were used for training:
[Expert Med](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/Q3A969)
[MedQA train](https://huggingface.co/datasets/bigbio/med_qa)
[MedMCQA train](https://github.com/MedMCQA/MedMCQA?tab=readme-ov-file#data-download-and-preprocessing)
[LiveQA](https://github.com/abachaa/LiveQA_MedicalTask_TREC2017)
[MedicationQA](https://huggingface.co/datasets/truehealth/medicationqa)
[MMLU clinical topics](https://huggingface.co/datasets/Stevross/mmlu)
[Medical Flashcards](https://huggingface.co/datasets/medalpaca/medical_meadow_medical_flashcards)
[Wikidoc](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc)
[Wikidoc Patient Information](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc_patient_information)
[MEDIQA](https://huggingface.co/datasets/medalpaca/medical_meadow_mediqa)
[MMMLU](https://huggingface.co/datasets/medalpaca/medical_meadow_mmmlu)
[icliniq 10k](https://drive.google.com/file/d/1ZKbqgYqWc7DJHs3N9TQYQVPdDQmZaClA/view?usp=sharing)
[HealthCare Magic 100k](https://drive.google.com/file/d/1lyfqIwlLSClhgrCutWuEe_IACNq6XNUt/view?usp=sharing)
[GenMedGPT-5k](https://drive.google.com/file/d/1nDTKZ3wZbZWTkFMBkxlamrzbNz0frugg/view?usp=sharing)
[Mental Health Conversational](https://huggingface.co/datasets/heliosbrahma/mental_health_conversational_dataset)
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Training Hyperparameters
- **Lora Rank:** 64
- **Lora Alpha:** 16
- **Lora Targets:** "o_proj","down_proj","v_proj","gate_proj","up_proj","k_proj","q_proj"
- **LR:** 2e-4
- **Epoch:** 3
- **Precision:** bf16 <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
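Expressed as a PEFT configuration, the listed hyperparameters would look roughly like this (a sketch, not the authors' training script):
```python
# Hedged reconstruction of the LoRA settings listed above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,           # Lora Rank
    lora_alpha=16,  # Lora Alpha
    target_modules=[
        "o_proj", "down_proj", "v_proj",
        "gate_proj", "up_proj", "k_proj", "q_proj",
    ],
    task_type="CAUSAL_LM",
)
```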
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
MELT-TinyLlama-1.1B-Chat-v1.0 demonstrates an average 13.76% improvement over TinyLlama-1.1B-Chat-v1.0 across 3 USMLE, Indian AIIMS, and NEET medical examination benchmarks.
### TinyLlama-1.1B-Chat-v1.0
- **medqa:** {'base': {'Average': 25.49, 'STEP-1': 24.48, 'STEP-2&3': 26.64}}
- **mausmle:** {'base': {'Average': 19.71, 'STEP-1': 21.18, 'STEP-2': 20.69, 'STEP-3': 17.76}}
- **medmcqa:** {'base': {'Average': 28.52, 'MEDICINE': 29.35, 'OPHTHALMOLOGY': 28.57, 'ANATOMY': 30.82, 'PATHOLOGY': 29.07, 'PHYSIOLOGY': 20.45, 'DENTAL': 30.09, 'RADIOLOGY': 14.29, 'BIOCHEMISTRY': 22.31, 'ANAESTHESIA': 26.09, 'GYNAECOLOGY': 24.84, 'PHARMACOLOGY': 32.02, 'SOCIAL': 31.11, 'PEDIATRICS': 31.82, 'ENT': 28.95, 'SURGERY': 31.45, 'MICROBIOLOGY': 26.03, 'FORENSIC': 16.28, 'PSYCHIATRY': 22.22, 'SKIN': 40.0, 'ORTHOPAEDICS': 21.43, 'UNKNOWN': 0.0}}
- **average:** 24.57%
### MELT-TinyLlama-1.1B-Chat-v1.0
- **medqa:** {'base': {'Average': 29.5, 'STEP-1': 28.17, 'STEP-2&3': 31.03}}
- **mausmle:** {'base': {'Average': 21.51, 'STEP-1': 27.06, 'STEP-2': 19.54, 'STEP-3': 18.69}}
- **medmcqa:** {'base': {'Average': 32.84, 'MEDICINE': 27.72, 'OPHTHALMOLOGY': 38.1, 'ANATOMY': 39.73, 'PATHOLOGY': 32.56, 'PHYSIOLOGY': 35.61, 'DENTAL': 32.23, 'RADIOLOGY': 41.07, 'BIOCHEMISTRY': 33.06, 'ANAESTHESIA': 39.13, 'GYNAECOLOGY': 22.88, 'PHARMACOLOGY': 32.58, 'SOCIAL': 26.67, 'PEDIATRICS': 34.09, 'ENT': 42.11, 'SURGERY': 33.47, 'MICROBIOLOGY': 30.14, 'FORENSIC': 41.86, 'PSYCHIATRY': 55.56, 'SKIN': 60.0, 'ORTHOPAEDICS': 35.71, 'UNKNOWN': 100.0}}
- **average:** 27.95%
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[MedQA test](https://huggingface.co/datasets/bigbio/med_qa)
[MedMCQA test](https://github.com/MedMCQA/MedMCQA?tab=readme-ov-file#data-download-and-preprocessing)
[MA USMLE](https://huggingface.co/datasets/medalpaca/medical_meadow_usmle_self_assessment)
## Disclaimer:
The use of large language models, such as this one, is provided without warranties or guarantees of any kind. While every effort has been made to ensure accuracy, completeness, and reliability of the information generated, it should be noted that these models may produce responses that are inaccurate, outdated, or inappropriate for specific purposes. Users are advised to exercise discretion and judgment when relying on the information generated by these models. The outputs should not be considered as professional, legal, medical, financial, or any other form of advice. It is recommended to seek expert advice or consult appropriate sources for specific queries or critical decision-making. The creators, developers, and providers of these models disclaim any liability for damages, losses, or any consequences arising from the use, reliance upon, or interpretation of the information provided by these models. The user assumes full responsibility for their interactions and usage of the generated content. By using these language models, users agree to indemnify and hold harmless the developers, providers, and affiliates from any claims, damages, or liabilities that may arise from their use. Please be aware that these models are constantly evolving, and their capabilities, limitations, and outputs may change over time without prior notice. Your use of this language model signifies your acceptance and understanding of this disclaimer.
|
Anujgr8/Whisper-Anuj-small-Odia-final | Anujgr8 | "2024-07-02T22:50:33Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-07-02T17:55:26Z" | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper-Anuj-small-Odia-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper-Anuj-small-Odia-final
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1031
- Wer: 43.2742
- Cer: 24.0740
## Model description
More information needed
## Intended uses & limitations
More information needed
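A minimal usage sketch (assumed, since the card does not document usage) for transcribing Odia audio with the 🤗 pipeline API:
```python
# Hedged example: run the fine-tuned checkpoint through the ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Anujgr8/Whisper-Anuj-small-Odia-final",
)
print(asr("sample_odia_audio.wav")["text"])
```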
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 6
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1800
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:-------:|
| 0.0352 | 4.7244 | 600 | 0.0984 | 41.3566 | 16.3829 |
| 0.0024 | 9.4488 | 1200 | 0.1041 | 49.1414 | 27.9848 |
| 0.0002 | 14.1732 | 1800 | 0.1031 | 43.2742 | 24.0740 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
jw-hf-test/jw5 | jw-hf-test | "2024-07-02T17:58:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T17:55:45Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ovieyra21/speecht5_tts_mabama_nl | ovieyra21 | "2024-07-02T17:55:45Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T17:55:45Z" | Entry not found |
ferrazzipietro/Meta-Llama-3-8B-Instruct_en.layer1_NoQuant_32_16_0.02_8 | ferrazzipietro | "2024-07-02T17:56:12Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T17:55:59Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
starnet/02-star-07-02-01 | starnet | "2024-07-02T18:02:15Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T17:58:22Z" | Entry not found |
fifala/03-fifa-07-02-01 | fifala | "2024-07-02T18:01:36Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T17:58:34Z" | Entry not found |
healtori/01-heal-07-02-01 | healtori | "2024-07-02T18:02:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T17:58:58Z" | Entry not found |
miggwp/distilbert-base-uncased-finetuned-the-fire-flower | miggwp | "2024-07-02T17:59:23Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-07-02T17:59:01Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
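Pending that information, a minimal sketch assuming the standard 🤗 Transformers fill-mask pipeline (the example sentence and the `[MASK]` token are illustrative assumptions based on the uncased BERT tokenizer):
```python
from transformers import pipeline

# Load this repository's checkpoint with the standard fill-mask pipeline.
fill_mask = pipeline(
    "fill-mask",
    model="miggwp/distilbert-base-uncased-finetuned-the-fire-flower",
)

# BERT-style uncased tokenizers use the [MASK] token (an assumption here).
for prediction in fill_mask("The fire flower blooms in the [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```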
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vrathi101/moe-merged-model-auto_qtz-v0.gguf | vrathi101 | "2024-07-02T18:11:57Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T17:59:44Z" | Entry not found |
juanpablomesa/bge-small-bioasq-3epochs-batch32 | juanpablomesa | "2024-07-02T18:00:02Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:4012",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-small-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | "2024-07-02T17:59:57Z" | ---
base_model: BAAI/bge-small-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4012
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 'Extensive messenger RNA editing generates transcript and protein
diversity in genes involved in neural excitability, as previously described, as
well as in genes participating in a broad range of other cellular functions. '
sentences:
- Do cephalopods use RNA editing less frequently than other species?
- GV1001 vaccine targets which enzyme?
- Which event results in the acetylation of S6K1?
- source_sentence: Yes, exposure to household furry pets influences the gut microbiota
of infants.
sentences:
- Can pets affect infant microbiomed?
- What is the mode of action of Thiazovivin?
- What are the effects of CAMK4 inhibition?
- source_sentence: "In children with heart failure evidence of the effect of enalapril\
\ is empirical. Enalapril was clinically safe and effective in 50% to 80% of for\
\ children with cardiac failure secondary to congenital heart malformations before\
\ and after cardiac surgery, impaired ventricular function , valvar regurgitation,\
\ congestive cardiomyopathy, , arterial hypertension, life-threatening arrhythmias\
\ coexisting with circulatory insufficiency. \nACE inhibitors have shown a transient\
\ beneficial effect on heart failure due to anticancer drugs and possibly a beneficial\
\ effect in muscular dystrophy-associated cardiomyopathy, which deserves further\
\ studies."
sentences:
- Which receptors can be evaluated with the [18F]altanserin?
- In what proportion of children with heart failure has Enalapril been shown to
be safe and effective?
- Which major signaling pathways are regulated by RIP1?
- source_sentence: Cellular senescence-associated heterochromatic foci (SAHFS) are
a novel type of chromatin condensation involving alterations of linker histone
H1 and linker DNA-binding proteins. SAHFS can be formed by a variety of cell types,
but their mechanism of action remains unclear.
sentences:
- What is the relationship between the X chromosome and a neutrophil drumstick?
- Which microRNAs are involved in exercise adaptation?
- How are SAHFS created?
- source_sentence: Multicluster Pcdh diversity is required for mouse olfactory neural
circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins
are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although
deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss
of all three clusters (tricluster deletion) led to a severe axonal arborization
defect and loss of self-avoidance.
sentences:
- What are the effects of the deletion of all three Pcdh clusters (tricluster deletion)
in mice?
- what is the role of MEF-2 in cardiomyocyte differentiation?
- How many periods of regulatory innovation led to the evolution of vertebrates?
model-index:
- name: BGE small finetuned BIOASQ
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: BAAI/bge small en v1.5
type: BAAI/bge-small-en-v1.5
metrics:
- type: cosine_accuracy@1
value: 0.8373408769448374
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.925035360678925
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9476661951909476
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9618104667609618
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8373408769448374
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.30834512022630833
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18953323903818953
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09618104667609619
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8373408769448374
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.925035360678925
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9476661951909476
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9618104667609618
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9048218842329923
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8860235513347253
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.886766844616012
name: Cosine Map@100
---
# BGE small finetuned BIOASQ
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) <!-- at revision 5c38ec7c405ec4b44b94cc5a9bb96e735b38267a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("juanpablomesa/bge-small-bioasq-3epochs-batch32")
# Run inference
sentences = [
'Multicluster Pcdh diversity is required for mouse olfactory neural circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss of all three clusters (tricluster deletion) led to a severe axonal arborization defect and loss of self-avoidance.',
'What are the effects of the deletion of all three Pcdh clusters (tricluster deletion) in mice?',
'How many periods of regulatory innovation led to the evolution of vertebrates?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `BAAI/bge-small-en-v1.5`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.8373 |
| cosine_accuracy@3 | 0.925 |
| cosine_accuracy@5 | 0.9477 |
| cosine_accuracy@10 | 0.9618 |
| cosine_precision@1 | 0.8373 |
| cosine_precision@3 | 0.3083 |
| cosine_precision@5 | 0.1895 |
| cosine_precision@10 | 0.0962 |
| cosine_recall@1 | 0.8373 |
| cosine_recall@3 | 0.925 |
| cosine_recall@5 | 0.9477 |
| cosine_recall@10 | 0.9618 |
| cosine_ndcg@10 | 0.9048 |
| cosine_mrr@10 | 0.886 |
| **cosine_map@100** | **0.8868** |
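For reference, a minimal sketch of how a table like this can be produced with the evaluator named above — the `queries`, `corpus`, and `relevant_docs` dictionaries below are hypothetical toy placeholders, not the BIOASQ evaluation split itself:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("juanpablomesa/bge-small-bioasq-3epochs-batch32")

# Hypothetical toy data; the real evaluation uses the BIOASQ-derived split.
queries = {"q1": "Can pets affect the infant microbiome?"}
corpus = {"d1": "Exposure to household furry pets influences the gut microbiota of infants."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="BAAI/bge-small-en-v1.5")
results = evaluator(model)
print(results)  # metric dict (cosine_accuracy@k, cosine_precision@k, cosine_map@100, ...)
```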
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 4,012 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 63.38 tokens</li><li>max: 485 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 16.13 tokens</li><li>max: 49 tokens</li></ul> |
* Samples:
| positive | anchor |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
| <code>Aberrant patterns of H3K4, H3K9, and H3K27 histone lysine methylation were shown to result in histone code alterations, which induce changes in gene expression, and affect the proliferation rate of cells in medulloblastoma.</code> | <code>What is the implication of histone lysine methylation in medulloblastoma?</code> |
| <code>STAG1/STAG2 proteins are tumour suppressor proteins that suppress cell proliferation and are essential for differentiation.</code> | <code>What is the role of STAG1/STAG2 proteins in differentiation?</code> |
| <code>The association between cell phone use and incident glioblastoma remains unclear. Some studies have reported that cell phone use was associated with incident glioblastoma, and with reduced survival of patients diagnosed with glioblastoma. However, other studies have repeatedly replicated to find an association between cell phone use and glioblastoma.</code> | <code>What is the association between cell phone use and glioblastoma?</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
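For reference, a minimal sketch of instantiating this loss with the parameters above, where `util.cos_sim` is the library function that the `"cos_sim"` string refers to and `model` is the base checkpoint being fine-tuned:
```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("BAAI/bge-small-en-v1.5")

# Matches the parameters listed above: scale=20.0, similarity_fct=cos_sim.
train_loss = losses.MultipleNegativesRankingLoss(
    model=model,
    scale=20.0,
    similarity_fct=util.cos_sim,
)
```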
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | BAAI/bge-small-en-v1.5_cosine_map@100 |
|:------:|:----:|:-------------:|:-------------------------------------:|
| 0.0794 | 10 | 0.5344 | - |
| 0.1587 | 20 | 0.4615 | - |
| 0.2381 | 30 | 0.301 | - |
| 0.3175 | 40 | 0.2169 | - |
| 0.3968 | 50 | 0.1053 | - |
| 0.4762 | 60 | 0.1432 | - |
| 0.5556 | 70 | 0.1589 | - |
| 0.6349 | 80 | 0.1458 | - |
| 0.7143 | 90 | 0.1692 | - |
| 0.7937 | 100 | 0.1664 | - |
| 0.8730 | 110 | 0.1252 | - |
| 0.9524 | 120 | 0.1243 | - |
| 1.0 | 126 | - | 0.8858 |
| 0.0794 | 10 | 0.1393 | - |
| 0.1587 | 20 | 0.1504 | - |
| 0.2381 | 30 | 0.1009 | - |
| 0.3175 | 40 | 0.0689 | - |
| 0.3968 | 50 | 0.0301 | - |
| 0.4762 | 60 | 0.0647 | - |
| 0.5556 | 70 | 0.0748 | - |
| 0.6349 | 80 | 0.0679 | - |
| 0.7143 | 90 | 0.1091 | - |
| 0.7937 | 100 | 0.0953 | - |
| 0.8730 | 110 | 0.089 | - |
| 0.9524 | 120 | 0.0758 | - |
| 1.0 | 126 | - | 0.8878 |
| 0.0794 | 10 | 0.092 | - |
| 0.1587 | 20 | 0.0748 | - |
| 0.2381 | 30 | 0.0392 | - |
| 0.3175 | 40 | 0.014 | - |
| 0.3968 | 50 | 0.0057 | - |
| 0.4762 | 60 | 0.0208 | - |
| 0.5556 | 70 | 0.0173 | - |
| 0.6349 | 80 | 0.0195 | - |
| 0.7143 | 90 | 0.0349 | - |
| 0.7937 | 100 | 0.0483 | - |
| 0.8730 | 110 | 0.0254 | - |
| 0.9524 | 120 | 0.0325 | - |
| 1.0 | 126 | - | 0.8883 |
| 1.0317 | 130 | 0.0582 | - |
| 1.1111 | 140 | 0.0475 | - |
| 1.1905 | 150 | 0.0325 | - |
| 1.2698 | 160 | 0.0058 | - |
| 1.3492 | 170 | 0.0054 | - |
| 1.4286 | 180 | 0.0047 | - |
| 1.5079 | 190 | 0.0076 | - |
| 1.5873 | 200 | 0.0091 | - |
| 1.6667 | 210 | 0.0232 | - |
| 1.7460 | 220 | 0.0147 | - |
| 1.8254 | 230 | 0.0194 | - |
| 1.9048 | 240 | 0.0186 | - |
| 1.9841 | 250 | 0.0141 | - |
| 2.0 | 252 | - | 0.8857 |
| 2.0635 | 260 | 0.037 | - |
| 2.1429 | 270 | 0.0401 | - |
| 2.2222 | 280 | 0.0222 | - |
| 2.3016 | 290 | 0.0134 | - |
| 2.3810 | 300 | 0.008 | - |
| 2.4603 | 310 | 0.0199 | - |
| 2.5397 | 320 | 0.017 | - |
| 2.6190 | 330 | 0.0164 | - |
| 2.6984 | 340 | 0.0344 | - |
| 2.7778 | 350 | 0.0352 | - |
| 2.8571 | 360 | 0.0346 | - |
| 2.9365 | 370 | 0.0256 | - |
| 3.0 | 378 | - | 0.8868 |
### Framework Versions
- Python: 3.11.5
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
Niggendar/wowXLPD_wowPDV2 | Niggendar | "2024-07-02T18:05:21Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-07-02T18:00:32Z" | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
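Pending that information, a minimal sketch assuming the `StableDiffusionXLPipeline` class named in this repository's tags (the prompt and fp16/CUDA settings are illustrative assumptions):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# The pipeline class comes from this repo's tags; fp16 on CUDA is an assumption.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "Niggendar/wowXLPD_wowPDV2",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a scenic mountain landscape at sunset").images[0]
image.save("sample.png")
```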
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/double7_-_vicuna-160m-gguf | RichardErkhov | "2024-07-02T18:04:26Z" | 0 | 0 | null | [
"gguf",
"arxiv:2401.06706",
"region:us"
] | null | "2024-07-02T18:00:52Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
vicuna-160m - GGUF
- Model creator: https://huggingface.co/double7/
- Original model: https://huggingface.co/double7/vicuna-160m/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [vicuna-160m.Q2_K.gguf](https://huggingface.co/RichardErkhov/double7_-_vicuna-160m-gguf/blob/main/vicuna-160m.Q2_K.gguf) | Q2_K | 0.07GB |
| [vicuna-160m.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/double7_-_vicuna-160m-gguf/blob/main/vicuna-160m.IQ3_XS.gguf) | IQ3_XS | 0.07GB |
| [vicuna-160m.IQ3_S.gguf](https://huggingface.co/RichardErkhov/double7_-_vicuna-160m-gguf/blob/main/vicuna-160m.IQ3_S.gguf) | IQ3_S | 0.07GB |
| [vicuna-160m.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/double7_-_vicuna-160m-gguf/blob/main/vicuna-160m.Q3_K_S.gguf) | Q3_K_S | 0.07GB |
| [vicuna-160m.IQ3_M.gguf](https://huggingface.co/RichardErkhov/double7_-_vicuna-160m-gguf/blob/main/vicuna-160m.IQ3_M.gguf) | IQ3_M | 0.08GB |
| [vicuna-160m.Q3_K.gguf](https://huggingface.co/RichardErkhov/double7_-_vicuna-160m-gguf/blob/main/vicuna-160m.Q3_K.gguf) | Q3_K | 0.08GB |
| [vicuna-160m.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/double7_-_vicuna-160m-gguf/blob/main/vicuna-160m.Q3_K_M.gguf) | Q3_K_M | 0.08GB |
| [vicuna-160m.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/double7_-_vicuna-160m-gguf/blob/main/vicuna-160m.Q3_K_L.gguf) | Q3_K_L | 0.08GB |
| [vicuna-160m.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/double7_-_vicuna-160m-gguf/blob/main/vicuna-160m.IQ4_XS.gguf) | IQ4_XS | 0.09GB |
| [vicuna-160m.Q4_0.gguf](https://huggingface.co/RichardErkhov/double7_-_vicuna-160m-gguf/blob/main/vicuna-160m.Q4_0.gguf) | Q4_0 | 0.09GB |
| [vicuna-160m.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/double7_-_vicuna-160m-gguf/blob/main/vicuna-160m.IQ4_NL.gguf) | IQ4_NL | 0.09GB |
| [vicuna-160m.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/double7_-_vicuna-160m-gguf/blob/main/vicuna-160m.Q4_K_S.gguf) | Q4_K_S | 0.09GB |
| [vicuna-160m.Q4_K.gguf](https://huggingface.co/RichardErkhov/double7_-_vicuna-160m-gguf/blob/main/vicuna-160m.Q4_K.gguf) | Q4_K | 0.1GB |
| [vicuna-160m.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/double7_-_vicuna-160m-gguf/blob/main/vicuna-160m.Q4_K_M.gguf) | Q4_K_M | 0.1GB |
| [vicuna-160m.Q4_1.gguf](https://huggingface.co/RichardErkhov/double7_-_vicuna-160m-gguf/blob/main/vicuna-160m.Q4_1.gguf) | Q4_1 | 0.1GB |
| [vicuna-160m.Q5_0.gguf](https://huggingface.co/RichardErkhov/double7_-_vicuna-160m-gguf/blob/main/vicuna-160m.Q5_0.gguf) | Q5_0 | 0.11GB |
| [vicuna-160m.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/double7_-_vicuna-160m-gguf/blob/main/vicuna-160m.Q5_K_S.gguf) | Q5_K_S | 0.11GB |
| [vicuna-160m.Q5_K.gguf](https://huggingface.co/RichardErkhov/double7_-_vicuna-160m-gguf/blob/main/vicuna-160m.Q5_K.gguf) | Q5_K | 0.11GB |
| [vicuna-160m.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/double7_-_vicuna-160m-gguf/blob/main/vicuna-160m.Q5_K_M.gguf) | Q5_K_M | 0.11GB |
| [vicuna-160m.Q5_1.gguf](https://huggingface.co/RichardErkhov/double7_-_vicuna-160m-gguf/blob/main/vicuna-160m.Q5_1.gguf) | Q5_1 | 0.12GB |
| [vicuna-160m.Q6_K.gguf](https://huggingface.co/RichardErkhov/double7_-_vicuna-160m-gguf/blob/main/vicuna-160m.Q6_K.gguf) | Q6_K | 0.12GB |
| [vicuna-160m.Q8_0.gguf](https://huggingface.co/RichardErkhov/double7_-_vicuna-160m-gguf/blob/main/vicuna-160m.Q8_0.gguf) | Q8_0 | 0.16GB |
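A minimal sketch of fetching one of the quantized files above with `huggingface_hub` — the chosen filename is just one row from the table, and any of the others works the same way:

```python
from huggingface_hub import hf_hub_download

# Download a single quant from this repo; Q4_K_M is an arbitrary pick from the table.
path = hf_hub_download(
    repo_id="RichardErkhov/double7_-_vicuna-160m-gguf",
    filename="vicuna-160m.Q4_K_M.gguf",
)
print(path)  # local cache path of the GGUF file, ready for a llama.cpp-based runtime
```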
Original model description:
---
license: apache-2.0
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
language:
- en
pipeline_tag: text-generation
---
## Model description
This is a Vicuna-like model with only 160M parameters, fine-tuned from [LLaMA-160m](https://huggingface.co/JackFram/llama-160m) on [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) data.
The training setup follows the [Vicuna suite](https://github.com/lm-sys/FastChat).
The model was developed mainly as a base small speculative (draft) model for the [MCSD paper](https://arxiv.org/pdf/2401.06706.pdf). Compared with LLaMA-160m, it can be aligned more closely to the Vicuna models with little loss of alignment to the LLaMA models.
| Draft Model | Target Model | Alignment |
| -------------- | ------------- | --------- |
| LLaMA-68/160M | LLaMA-13/33B | 😃 |
| LLaMA-68/160M | Vicuna-13/33B | 😟 |
| Vicuna-68/160M | LLaMA-13/33B | 😃 |
| Vicuna-68/160M | Vicuna-13/33B | 😃 |
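As an illustration of the draft-model use case, a minimal sketch of assisted generation with 🤗 Transformers, where this model serves as the assistant for a larger Vicuna target — the target model id and generation settings are illustrative choices, not the MCSD implementation itself:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Target and draft checkpoints; the 13B target id is an illustrative choice.
tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-13b-v1.3")
target = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-13b-v1.3")
draft = AutoModelForCausalLM.from_pretrained("double7/vicuna-160m")

inputs = tokenizer("What is speculative decoding?", return_tensors="pt")

# `assistant_model` enables Transformers' assisted (speculative) generation.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```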
|
KamalJamwal/Florence-2-ft-docVQA | KamalJamwal | "2024-07-02T18:45:29Z" | 0 | 0 | transformers | [
"transformers",
"florence2",
"text-generation",
"custom_code",
"license:mit",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-07-02T18:01:11Z" | ---
license: mit
---
|
mradermacher/Llama-3-Swallow-70B-Instruct-v0.1-i1-GGUF | mradermacher | "2024-07-03T00:56:57Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"ja",
"base_model:tokyotech-llm/Llama-3-Swallow-70B-Instruct-v0.1",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T18:01:38Z" | ---
base_model: tokyotech-llm/Llama-3-Swallow-70B-Instruct-v0.1
language:
- en
- ja
library_name: transformers
license: llama3
model_type: llama
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/tokyotech-llm/Llama-3-Swallow-70B-Instruct-v0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3-Swallow-70B-Instruct-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
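For a concrete starting point, a minimal sketch using `llama-cpp-python` on a single-file quant — the filename is one of the entries below, and multi-part files must be concatenated first, as the READMEs above explain:
```python
from llama_cpp import Llama

# Load a single-file quant downloaded from this repository.
llm = Llama(model_path="Llama-3-Swallow-70B-Instruct-v0.1.i1-Q4_K_S.gguf", n_ctx=4096)

output = llm("Translate 'good morning' into Japanese.", max_tokens=64)
print(output["choices"][0]["text"])
```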
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-Instruct-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-Instruct-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-Instruct-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-70B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-70B-Instruct-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
impossibleexchange/curbstomp2 | impossibleexchange | "2024-07-02T18:01:54Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-07-02T18:01:54Z" | ---
license: mit
---
|
juanpablomesa/bge-small-bioasq-1epochs-batch32 | juanpablomesa | "2024-07-02T18:02:07Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:4012",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-small-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | "2024-07-02T18:02:03Z" | ---
base_model: BAAI/bge-small-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4012
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 'Extensive messenger RNA editing generates transcript and protein
diversity in genes involved in neural excitability, as previously described, as
well as in genes participating in a broad range of other cellular functions. '
sentences:
- Do cephalopods use RNA editing less frequently than other species?
- GV1001 vaccine targets which enzyme?
- Which event results in the acetylation of S6K1?
- source_sentence: Yes, exposure to household furry pets influences the gut microbiota
of infants.
sentences:
- Can pets affect infant microbiomed?
- What is the mode of action of Thiazovivin?
- What are the effects of CAMK4 inhibition?
- source_sentence: "In children with heart failure evidence of the effect of enalapril\
\ is empirical. Enalapril was clinically safe and effective in 50% to 80% of for\
\ children with cardiac failure secondary to congenital heart malformations before\
\ and after cardiac surgery, impaired ventricular function , valvar regurgitation,\
\ congestive cardiomyopathy, , arterial hypertension, life-threatening arrhythmias\
\ coexisting with circulatory insufficiency. \nACE inhibitors have shown a transient\
\ beneficial effect on heart failure due to anticancer drugs and possibly a beneficial\
\ effect in muscular dystrophy-associated cardiomyopathy, which deserves further\
\ studies."
sentences:
- Which receptors can be evaluated with the [18F]altanserin?
- In what proportion of children with heart failure has Enalapril been shown to
be safe and effective?
- Which major signaling pathways are regulated by RIP1?
- source_sentence: Cellular senescence-associated heterochromatic foci (SAHFS) are
a novel type of chromatin condensation involving alterations of linker histone
H1 and linker DNA-binding proteins. SAHFS can be formed by a variety of cell types,
but their mechanism of action remains unclear.
sentences:
- What is the relationship between the X chromosome and a neutrophil drumstick?
- Which microRNAs are involved in exercise adaptation?
- How are SAHFS created?
- source_sentence: Multicluster Pcdh diversity is required for mouse olfactory neural
circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins
are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although
deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss
of all three clusters (tricluster deletion) led to a severe axonal arborization
defect and loss of self-avoidance.
sentences:
- What are the effects of the deletion of all three Pcdh clusters (tricluster deletion)
in mice?
- what is the role of MEF-2 in cardiomyocyte differentiation?
- How many periods of regulatory innovation led to the evolution of vertebrates?
model-index:
- name: BGE small finetuned BIOASQ
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: BAAI/bge small en v1.5
type: BAAI/bge-small-en-v1.5
metrics:
- type: cosine_accuracy@1
value: 0.8415841584158416
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.925035360678925
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.942008486562942
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.958981612446959
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8415841584158416
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.30834512022630833
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18840169731258838
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09589816124469587
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8415841584158416
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.925035360678925
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.942008486562942
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.958981612446959
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9047357964584107
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.886919916481444
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8877807671526188
name: Cosine Map@100
---
# BGE small finetuned BIOASQ
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) <!-- at revision 5c38ec7c405ec4b44b94cc5a9bb96e735b38267a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("juanpablomesa/bge-small-bioasq-1epochs-batch32")
# Run inference
sentences = [
'Multicluster Pcdh diversity is required for mouse olfactory neural circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss of all three clusters (tricluster deletion) led to a severe axonal arborization defect and loss of self-avoidance.',
'What are the effects of the deletion of all three Pcdh clusters (tricluster deletion) in mice?',
'How many periods of regulatory innovation led to the evolution of vertebrates?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `BAAI/bge-small-en-v1.5`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.8416 |
| cosine_accuracy@3 | 0.925 |
| cosine_accuracy@5 | 0.942 |
| cosine_accuracy@10 | 0.959 |
| cosine_precision@1 | 0.8416 |
| cosine_precision@3 | 0.3083 |
| cosine_precision@5 | 0.1884 |
| cosine_precision@10 | 0.0959 |
| cosine_recall@1 | 0.8416 |
| cosine_recall@3 | 0.925 |
| cosine_recall@5 | 0.942 |
| cosine_recall@10 | 0.959 |
| cosine_ndcg@10 | 0.9047 |
| cosine_mrr@10 | 0.8869 |
| **cosine_map@100** | **0.8878** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 4,012 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 63.38 tokens</li><li>max: 485 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 16.13 tokens</li><li>max: 49 tokens</li></ul> |
* Samples:
| positive | anchor |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
| <code>Aberrant patterns of H3K4, H3K9, and H3K27 histone lysine methylation were shown to result in histone code alterations, which induce changes in gene expression, and affect the proliferation rate of cells in medulloblastoma.</code> | <code>What is the implication of histone lysine methylation in medulloblastoma?</code> |
| <code>STAG1/STAG2 proteins are tumour suppressor proteins that suppress cell proliferation and are essential for differentiation.</code> | <code>What is the role of STAG1/STAG2 proteins in differentiation?</code> |
| <code>The association between cell phone use and incident glioblastoma remains unclear. Some studies have reported that cell phone use was associated with incident glioblastoma, and with reduced survival of patients diagnosed with glioblastoma. However, other studies have repeatedly replicated to find an association between cell phone use and glioblastoma.</code> | <code>What is the association between cell phone use and glioblastoma?</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
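A minimal sketch of expressing these non-default values with the Sentence Transformers v3 trainer API — the output directory is a hypothetical placeholder:
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

# Mirrors the non-default hyperparameters listed above; output_dir is a placeholder.
args = SentenceTransformerTrainingArguments(
    output_dir="bge-small-bioasq-1epochs-batch32",
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```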
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | BAAI/bge-small-en-v1.5_cosine_map@100 |
|:------:|:----:|:-------------:|:-------------------------------------:|
| 0.0794 | 10 | 0.5344 | - |
| 0.1587 | 20 | 0.4615 | - |
| 0.2381 | 30 | 0.301 | - |
| 0.3175 | 40 | 0.2169 | - |
| 0.3968 | 50 | 0.1053 | - |
| 0.4762 | 60 | 0.1432 | - |
| 0.5556 | 70 | 0.1589 | - |
| 0.6349 | 80 | 0.1458 | - |
| 0.7143 | 90 | 0.1692 | - |
| 0.7937 | 100 | 0.1664 | - |
| 0.8730 | 110 | 0.1252 | - |
| 0.9524 | 120 | 0.1243 | - |
| 1.0 | 126 | - | 0.8858 |
| 0.0794 | 10 | 0.1393 | - |
| 0.1587 | 20 | 0.1504 | - |
| 0.2381 | 30 | 0.1009 | - |
| 0.3175 | 40 | 0.0689 | - |
| 0.3968 | 50 | 0.0301 | - |
| 0.4762 | 60 | 0.0647 | - |
| 0.5556 | 70 | 0.0748 | - |
| 0.6349 | 80 | 0.0679 | - |
| 0.7143 | 90 | 0.1091 | - |
| 0.7937 | 100 | 0.0953 | - |
| 0.8730 | 110 | 0.089 | - |
| 0.9524 | 120 | 0.0758 | - |
| 1.0 | 126 | - | 0.8878 |
| 0.0794 | 10 | 0.092 | - |
| 0.1587 | 20 | 0.0748 | - |
| 0.2381 | 30 | 0.0392 | - |
| 0.3175 | 40 | 0.014 | - |
| 0.3968 | 50 | 0.0057 | - |
| 0.4762 | 60 | 0.0208 | - |
| 0.5556 | 70 | 0.0173 | - |
| 0.6349 | 80 | 0.0195 | - |
| 0.7143 | 90 | 0.0349 | - |
| 0.7937 | 100 | 0.0483 | - |
| 0.8730 | 110 | 0.0254 | - |
| 0.9524 | 120 | 0.0325 | - |
| 1.0 | 126 | - | 0.8883 |
| 1.0317 | 130 | 0.0582 | - |
| 1.1111 | 140 | 0.0475 | - |
| 1.1905 | 150 | 0.0325 | - |
| 1.2698 | 160 | 0.0058 | - |
| 1.3492 | 170 | 0.0054 | - |
| 1.4286 | 180 | 0.0047 | - |
| 1.5079 | 190 | 0.0076 | - |
| 1.5873 | 200 | 0.0091 | - |
| 1.6667 | 210 | 0.0232 | - |
| 1.7460 | 220 | 0.0147 | - |
| 1.8254 | 230 | 0.0194 | - |
| 1.9048 | 240 | 0.0186 | - |
| 1.9841 | 250 | 0.0141 | - |
| 2.0 | 252 | - | 0.8857 |
| 2.0635 | 260 | 0.037 | - |
| 2.1429 | 270 | 0.0401 | - |
| 2.2222 | 280 | 0.0222 | - |
| 2.3016 | 290 | 0.0134 | - |
| 2.3810 | 300 | 0.008 | - |
| 2.4603 | 310 | 0.0199 | - |
| 2.5397 | 320 | 0.017 | - |
| 2.6190 | 330 | 0.0164 | - |
| 2.6984 | 340 | 0.0344 | - |
| 2.7778 | 350 | 0.0352 | - |
| 2.8571 | 360 | 0.0346 | - |
| 2.9365 | 370 | 0.0256 | - |
| 3.0 | 378 | - | 0.8868 |
| 0.7937 | 100 | 0.0064 | 0.8878 |
### Framework Versions
- Python: 3.11.5
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
fifala/06-fifa-07-02-01 | fifala | "2024-07-02T18:05:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:02:31Z" | Entry not found |
Creatorin/jacobiCNN | Creatorin | "2024-07-02T20:13:16Z" | 0 | 0 | null | [
"en",
"license:apache-2.0",
"region:us"
] | null | "2024-07-02T18:02:51Z" | ---
license: apache-2.0
language:
- en
metrics:
- accuracy
--- |
tapan247/fine-tuned-llama-2-7b-chat | tapan247 | "2024-07-02T18:03:17Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2024-07-02T18:03:08Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
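For reference, this config corresponds to the following `BitsAndBytesConfig` when loading the adapter; a minimal sketch, assuming the base model is `meta-llama/Llama-2-7b-chat-hf` (the card does not state it):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the quantization settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # assumed base model, not stated in this card
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "tapan247/fine-tuned-llama-2-7b-chat")
```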
### Framework versions
- PEFT 0.4.0
|
healtori/02-heal-07-02-01 | healtori | "2024-07-02T18:06:09Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:03:13Z" | Entry not found |
juanpablomesa/bge-small-bioasq-1epoch-batch32 | juanpablomesa | "2024-07-02T18:04:32Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:4012",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-small-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | "2024-07-02T18:04:28Z" | ---
base_model: BAAI/bge-small-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4012
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 'Extensive messenger RNA editing generates transcript and protein
diversity in genes involved in neural excitability, as previously described, as
well as in genes participating in a broad range of other cellular functions. '
sentences:
- Do cephalopods use RNA editing less frequently than other species?
- GV1001 vaccine targets which enzyme?
- Which event results in the acetylation of S6K1?
- source_sentence: Yes, exposure to household furry pets influences the gut microbiota
of infants.
sentences:
- Can pets affect infant microbiomed?
- What is the mode of action of Thiazovivin?
- What are the effects of CAMK4 inhibition?
- source_sentence: "In children with heart failure evidence of the effect of enalapril\
\ is empirical. Enalapril was clinically safe and effective in 50% to 80% of for\
\ children with cardiac failure secondary to congenital heart malformations before\
\ and after cardiac surgery, impaired ventricular function , valvar regurgitation,\
\ congestive cardiomyopathy, , arterial hypertension, life-threatening arrhythmias\
\ coexisting with circulatory insufficiency. \nACE inhibitors have shown a transient\
\ beneficial effect on heart failure due to anticancer drugs and possibly a beneficial\
\ effect in muscular dystrophy-associated cardiomyopathy, which deserves further\
\ studies."
sentences:
- Which receptors can be evaluated with the [18F]altanserin?
- In what proportion of children with heart failure has Enalapril been shown to
be safe and effective?
- Which major signaling pathways are regulated by RIP1?
- source_sentence: Cellular senescence-associated heterochromatic foci (SAHFS) are
a novel type of chromatin condensation involving alterations of linker histone
H1 and linker DNA-binding proteins. SAHFS can be formed by a variety of cell types,
but their mechanism of action remains unclear.
sentences:
- What is the relationship between the X chromosome and a neutrophil drumstick?
- Which microRNAs are involved in exercise adaptation?
- How are SAHFS created?
- source_sentence: Multicluster Pcdh diversity is required for mouse olfactory neural
circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins
are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although
deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss
of all three clusters (tricluster deletion) led to a severe axonal arborization
defect and loss of self-avoidance.
sentences:
- What are the effects of the deletion of all three Pcdh clusters (tricluster deletion)
in mice?
- what is the role of MEF-2 in cardiomyocyte differentiation?
- How many periods of regulatory innovation led to the evolution of vertebrates?
model-index:
- name: BGE small finetuned BIOASQ
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: BAAI/bge small en v1.5
type: BAAI/bge-small-en-v1.5
metrics:
- type: cosine_accuracy@1
value: 0.8345120226308345
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9222065063649222
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.942008486562942
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9575671852899575
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8345120226308345
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3074021687883074
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18840169731258838
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09575671852899574
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8345120226308345
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9222065063649222
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.942008486562942
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9575671852899575
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9010271342291756
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8824010462270717
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8834285782752825
name: Cosine Map@100
---
# BGE small finetuned BIOASQ
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) <!-- at revision 5c38ec7c405ec4b44b94cc5a9bb96e735b38267a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("juanpablomesa/bge-small-bioasq-1epoch-batch32")
# Run inference
sentences = [
'Multicluster Pcdh diversity is required for mouse olfactory neural circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss of all three clusters (tricluster deletion) led to a severe axonal arborization defect and loss of self-avoidance.',
'What are the effects of the deletion of all three Pcdh clusters (tricluster deletion) in mice?',
'How many periods of regulatory innovation led to the evolution of vertebrates?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `BAAI/bge-small-en-v1.5`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.8345 |
| cosine_accuracy@3 | 0.9222 |
| cosine_accuracy@5 | 0.942 |
| cosine_accuracy@10 | 0.9576 |
| cosine_precision@1 | 0.8345 |
| cosine_precision@3 | 0.3074 |
| cosine_precision@5 | 0.1884 |
| cosine_precision@10 | 0.0958 |
| cosine_recall@1 | 0.8345 |
| cosine_recall@3 | 0.9222 |
| cosine_recall@5 | 0.942 |
| cosine_recall@10 | 0.9576 |
| cosine_ndcg@10 | 0.901 |
| cosine_mrr@10 | 0.8824 |
| **cosine_map@100** | **0.8834** |
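As an illustration, the evaluator named above can be reproduced with a toy corpus; the actual BIOASQ queries and corpus used for these numbers are not included here:

```python
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Toy stand-ins for the BIOASQ evaluation data (illustrative only)
queries = {"q1": "Can pets affect infant microbiome?"}
corpus = {"d1": "Exposure to household furry pets influences the gut microbiota of infants."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="BAAI/bge-small-en-v1.5")
print(evaluator(model))  # `model` as loaded in the usage section above
```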
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 4,012 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 63.38 tokens</li><li>max: 485 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 16.13 tokens</li><li>max: 49 tokens</li></ul> |
* Samples:
| positive | anchor |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
| <code>Aberrant patterns of H3K4, H3K9, and H3K27 histone lysine methylation were shown to result in histone code alterations, which induce changes in gene expression, and affect the proliferation rate of cells in medulloblastoma.</code> | <code>What is the implication of histone lysine methylation in medulloblastoma?</code> |
| <code>STAG1/STAG2 proteins are tumour suppressor proteins that suppress cell proliferation and are essential for differentiation.</code> | <code>What is the role of STAG1/STAG2 proteins in differentiation?</code> |
| <code>The association between cell phone use and incident glioblastoma remains unclear. Some studies have reported that cell phone use was associated with incident glioblastoma, and with reduced survival of patients diagnosed with glioblastoma. However, other studies have repeatedly replicated to find an association between cell phone use and glioblastoma.</code> | <code>What is the association between cell phone use and glioblastoma?</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
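A minimal sketch of instantiating this loss with the parameters above (`cos_sim` is the default similarity function):
```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("BAAI/bge-small-en-v1.5")
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)  # cosine similarity by default
```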
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
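The non-default hyperparameters above map onto `SentenceTransformerTrainingArguments` roughly as follows; a sketch, with `output_dir` as a placeholder:
```python
from sentence_transformers.training_args import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="bge-small-bioasq-1epoch-batch32",  # placeholder output path
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler="no_duplicates",
)
```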
### Training Logs
| Epoch | Step | Training Loss | BAAI/bge-small-en-v1.5_cosine_map@100 |
|:------:|:----:|:-------------:|:-------------------------------------:|
| 0.0794 | 10 | 0.5344 | - |
| 0.1587 | 20 | 0.4615 | - |
| 0.2381 | 30 | 0.301 | - |
| 0.3175 | 40 | 0.2169 | - |
| 0.3968 | 50 | 0.1053 | - |
| 0.4762 | 60 | 0.1432 | - |
| 0.5556 | 70 | 0.1589 | - |
| 0.6349 | 80 | 0.1458 | - |
| 0.7143 | 90 | 0.1692 | - |
| 0.7937 | 100 | 0.1664 | - |
| 0.8730 | 110 | 0.1252 | - |
| 0.9524 | 120 | 0.1243 | - |
| 1.0 | 126 | - | 0.8858 |
| 0.0794 | 10 | 0.1393 | - |
| 0.1587 | 20 | 0.1504 | - |
| 0.2381 | 30 | 0.1009 | - |
| 0.3175 | 40 | 0.0689 | - |
| 0.3968 | 50 | 0.0301 | - |
| 0.4762 | 60 | 0.0647 | - |
| 0.5556 | 70 | 0.0748 | - |
| 0.6349 | 80 | 0.0679 | - |
| 0.7143 | 90 | 0.1091 | - |
| 0.7937 | 100 | 0.0953 | - |
| 0.8730 | 110 | 0.089 | - |
| 0.9524 | 120 | 0.0758 | - |
| 1.0 | 126 | - | 0.8878 |
| 0.0794 | 10 | 0.092 | - |
| 0.1587 | 20 | 0.0748 | - |
| 0.2381 | 30 | 0.0392 | - |
| 0.3175 | 40 | 0.014 | - |
| 0.3968 | 50 | 0.0057 | - |
| 0.4762 | 60 | 0.0208 | - |
| 0.5556 | 70 | 0.0173 | - |
| 0.6349 | 80 | 0.0195 | - |
| 0.7143 | 90 | 0.0349 | - |
| 0.7937 | 100 | 0.0483 | - |
| 0.8730 | 110 | 0.0254 | - |
| 0.9524 | 120 | 0.0325 | - |
| 1.0 | 126 | - | 0.8883 |
| 1.0317 | 130 | 0.0582 | - |
| 1.1111 | 140 | 0.0475 | - |
| 1.1905 | 150 | 0.0325 | - |
| 1.2698 | 160 | 0.0058 | - |
| 1.3492 | 170 | 0.0054 | - |
| 1.4286 | 180 | 0.0047 | - |
| 1.5079 | 190 | 0.0076 | - |
| 1.5873 | 200 | 0.0091 | - |
| 1.6667 | 210 | 0.0232 | - |
| 1.7460 | 220 | 0.0147 | - |
| 1.8254 | 230 | 0.0194 | - |
| 1.9048 | 240 | 0.0186 | - |
| 1.9841 | 250 | 0.0141 | - |
| 2.0 | 252 | - | 0.8857 |
| 2.0635 | 260 | 0.037 | - |
| 2.1429 | 270 | 0.0401 | - |
| 2.2222 | 280 | 0.0222 | - |
| 2.3016 | 290 | 0.0134 | - |
| 2.3810 | 300 | 0.008 | - |
| 2.4603 | 310 | 0.0199 | - |
| 2.5397 | 320 | 0.017 | - |
| 2.6190 | 330 | 0.0164 | - |
| 2.6984 | 340 | 0.0344 | - |
| 2.7778 | 350 | 0.0352 | - |
| 2.8571 | 360 | 0.0346 | - |
| 2.9365 | 370 | 0.0256 | - |
| 3.0 | 378 | - | 0.8868 |
| 0.7937 | 100 | 0.0064 | 0.8878 |
| 0.0794 | 10 | 0.003 | 0.8858 |
| 0.1587 | 20 | 0.0026 | 0.8811 |
| 0.2381 | 30 | 0.0021 | 0.8817 |
| 0.3175 | 40 | 0.0017 | 0.8818 |
| 0.3968 | 50 | 0.0015 | 0.8818 |
| 0.4762 | 60 | 0.0019 | 0.8814 |
| 0.5556 | 70 | 0.0019 | 0.8798 |
| 0.6349 | 80 | 0.0024 | 0.8811 |
| 0.7143 | 90 | 0.0029 | 0.8834 |
| 0.7937 | 100 | 0.006 | 0.8827 |
| 0.8730 | 110 | 0.0028 | 0.8827 |
| 0.9524 | 120 | 0.005 | 0.8834 |
### Framework Versions
- Python: 3.11.5
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
RichardErkhov/mlabonne_-_chesspythia-70m-gguf | RichardErkhov | "2024-07-02T18:07:20Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T18:04:47Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
chesspythia-70m - GGUF
- Model creator: https://huggingface.co/mlabonne/
- Original model: https://huggingface.co/mlabonne/chesspythia-70m/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [chesspythia-70m.Q2_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_chesspythia-70m-gguf/blob/main/chesspythia-70m.Q2_K.gguf) | Q2_K | 0.04GB |
| [chesspythia-70m.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_chesspythia-70m-gguf/blob/main/chesspythia-70m.IQ3_XS.gguf) | IQ3_XS | 0.04GB |
| [chesspythia-70m.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_chesspythia-70m-gguf/blob/main/chesspythia-70m.IQ3_S.gguf) | IQ3_S | 0.04GB |
| [chesspythia-70m.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_chesspythia-70m-gguf/blob/main/chesspythia-70m.Q3_K_S.gguf) | Q3_K_S | 0.04GB |
| [chesspythia-70m.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_chesspythia-70m-gguf/blob/main/chesspythia-70m.IQ3_M.gguf) | IQ3_M | 0.04GB |
| [chesspythia-70m.Q3_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_chesspythia-70m-gguf/blob/main/chesspythia-70m.Q3_K.gguf) | Q3_K | 0.04GB |
| [chesspythia-70m.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_chesspythia-70m-gguf/blob/main/chesspythia-70m.Q3_K_M.gguf) | Q3_K_M | 0.04GB |
| [chesspythia-70m.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_chesspythia-70m-gguf/blob/main/chesspythia-70m.Q3_K_L.gguf) | Q3_K_L | 0.04GB |
| [chesspythia-70m.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_chesspythia-70m-gguf/blob/main/chesspythia-70m.IQ4_XS.gguf) | IQ4_XS | 0.04GB |
| [chesspythia-70m.Q4_0.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_chesspythia-70m-gguf/blob/main/chesspythia-70m.Q4_0.gguf) | Q4_0 | 0.04GB |
| [chesspythia-70m.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_chesspythia-70m-gguf/blob/main/chesspythia-70m.IQ4_NL.gguf) | IQ4_NL | 0.04GB |
| [chesspythia-70m.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_chesspythia-70m-gguf/blob/main/chesspythia-70m.Q4_K_S.gguf) | Q4_K_S | 0.04GB |
| [chesspythia-70m.Q4_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_chesspythia-70m-gguf/blob/main/chesspythia-70m.Q4_K.gguf) | Q4_K | 0.05GB |
| [chesspythia-70m.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_chesspythia-70m-gguf/blob/main/chesspythia-70m.Q4_K_M.gguf) | Q4_K_M | 0.05GB |
| [chesspythia-70m.Q4_1.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_chesspythia-70m-gguf/blob/main/chesspythia-70m.Q4_1.gguf) | Q4_1 | 0.05GB |
| [chesspythia-70m.Q5_0.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_chesspythia-70m-gguf/blob/main/chesspythia-70m.Q5_0.gguf) | Q5_0 | 0.05GB |
| [chesspythia-70m.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_chesspythia-70m-gguf/blob/main/chesspythia-70m.Q5_K_S.gguf) | Q5_K_S | 0.05GB |
| [chesspythia-70m.Q5_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_chesspythia-70m-gguf/blob/main/chesspythia-70m.Q5_K.gguf) | Q5_K | 0.05GB |
| [chesspythia-70m.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_chesspythia-70m-gguf/blob/main/chesspythia-70m.Q5_K_M.gguf) | Q5_K_M | 0.05GB |
| [chesspythia-70m.Q5_1.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_chesspythia-70m-gguf/blob/main/chesspythia-70m.Q5_1.gguf) | Q5_1 | 0.05GB |
| [chesspythia-70m.Q6_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_chesspythia-70m-gguf/blob/main/chesspythia-70m.Q6_K.gguf) | Q6_K | 0.06GB |
| [chesspythia-70m.Q8_0.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_chesspythia-70m-gguf/blob/main/chesspythia-70m.Q8_0.gguf) | Q8_0 | 0.07GB |
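Any of the files above can be run locally; for example, a minimal sketch with `llama-cpp-python` (not part of this card — install it separately and download one of the quantized files first):

```python
from llama_cpp import Llama

llm = Llama(model_path="chesspythia-70m.Q4_K_M.gguf")  # local path to a file from the table
out = llm("1. e4 e5 2. Nf3 ", max_tokens=32)  # PGN-style prompt; the model name suggests chess data
print(out["choices"][0]["text"])
```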
Original model description:
---
license: apache-2.0
base_model: EleutherAI/pythia-70m-deduped
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2691
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 100
- eval_batch_size: 100
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
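
For reference, these settings correspond roughly to the following `TrainingArguments`; a sketch, with `output_dir` taken from the card title:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="results",              # from the card title; otherwise a placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=100,
    per_device_eval_batch_size=100,
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=5,
    # Adam betas (0.9, 0.999) and epsilon 1e-08 are the transformers defaults
)
```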
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.852 | 0.1 | 1 | 3.1074 |
| 3.0923 | 0.2 | 2 | 2.3879 |
| 2.3371 | 0.3 | 3 | 2.1025 |
| 2.1166 | 0.4 | 4 | 1.9761 |
| 2.0538 | 0.5 | 5 | 1.8446 |
| 1.8972 | 0.6 | 6 | 1.7470 |
| 1.8356 | 0.7 | 7 | 1.6615 |
| 1.702 | 0.8 | 8 | 1.6187 |
| 1.6907 | 0.9 | 9 | 1.6626 |
| 1.5877 | 1.0 | 10 | 1.6192 |
| 1.6332 | 1.1 | 11 | 1.5464 |
| 1.4906 | 1.2 | 12 | 1.5091 |
| 1.5267 | 1.3 | 13 | 1.4850 |
| 1.4857 | 1.4 | 14 | 1.4572 |
| 1.4247 | 1.5 | 15 | 1.4319 |
| 1.4815 | 1.6 | 16 | 1.4207 |
| 1.3584 | 1.7 | 17 | 1.4092 |
| 1.4812 | 1.8 | 18 | 1.4196 |
| 1.4381 | 1.9 | 19 | 1.4021 |
| 1.453 | 2.0 | 20 | 1.4013 |
| 1.3468 | 2.1 | 21 | 1.3781 |
| 1.3327 | 2.2 | 22 | 1.3598 |
| 1.3623 | 2.3 | 23 | 1.3516 |
| 1.2876 | 2.4 | 24 | 1.3384 |
| 1.374 | 2.5 | 25 | 1.3366 |
| 1.3863 | 2.6 | 26 | 1.3265 |
| 1.3327 | 2.7 | 27 | 1.3186 |
| 1.2886 | 2.8 | 28 | 1.3130 |
| 1.3842 | 2.9 | 29 | 1.3024 |
| 1.3105 | 3.0 | 30 | 1.2986 |
| 1.2331 | 3.1 | 31 | 1.2966 |
| 1.3227 | 3.2 | 32 | 1.2954 |
| 1.2923 | 3.3 | 33 | 1.2928 |
| 1.2976 | 3.4 | 34 | 1.2901 |
| 1.3207 | 3.5 | 35 | 1.2879 |
| 1.2455 | 3.6 | 36 | 1.2834 |
| 1.2546 | 3.7 | 37 | 1.2779 |
| 1.2999 | 3.8 | 38 | 1.2744 |
| 1.2484 | 3.9 | 39 | 1.2723 |
| 1.281 | 4.0 | 40 | 1.2720 |
| 1.2134 | 4.1 | 41 | 1.2722 |
| 1.214 | 4.2 | 42 | 1.2721 |
| 1.3031 | 4.3 | 43 | 1.2715 |
| 1.2174 | 4.4 | 44 | 1.2708 |
| 1.2359 | 4.5 | 45 | 1.2703 |
| 1.2578 | 4.6 | 46 | 1.2699 |
| 1.2815 | 4.7 | 47 | 1.2695 |
| 1.2866 | 4.8 | 48 | 1.2693 |
| 1.2878 | 4.9 | 49 | 1.2691 |
| 1.2214 | 5.0 | 50 | 1.2691 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
tapan247/fine-tuned-llama-2-7b-chat-1 | tapan247 | "2024-07-02T18:05:12Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T18:05:12Z" | Entry not found |
fifala/07-fifa-07-02-01 | fifala | "2024-07-02T18:09:10Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:06:19Z" | Entry not found |
RichardErkhov/lrds-code_-_samba-1.1B-gguf | RichardErkhov | "2024-07-02T18:17:46Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T18:06:33Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
samba-1.1B - GGUF
- Model creator: https://huggingface.co/lrds-code/
- Original model: https://huggingface.co/lrds-code/samba-1.1B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [samba-1.1B.Q2_K.gguf](https://huggingface.co/RichardErkhov/lrds-code_-_samba-1.1B-gguf/blob/main/samba-1.1B.Q2_K.gguf) | Q2_K | 0.4GB |
| [samba-1.1B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/lrds-code_-_samba-1.1B-gguf/blob/main/samba-1.1B.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [samba-1.1B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/lrds-code_-_samba-1.1B-gguf/blob/main/samba-1.1B.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [samba-1.1B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/lrds-code_-_samba-1.1B-gguf/blob/main/samba-1.1B.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [samba-1.1B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/lrds-code_-_samba-1.1B-gguf/blob/main/samba-1.1B.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [samba-1.1B.Q3_K.gguf](https://huggingface.co/RichardErkhov/lrds-code_-_samba-1.1B-gguf/blob/main/samba-1.1B.Q3_K.gguf) | Q3_K | 0.51GB |
| [samba-1.1B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/lrds-code_-_samba-1.1B-gguf/blob/main/samba-1.1B.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [samba-1.1B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/lrds-code_-_samba-1.1B-gguf/blob/main/samba-1.1B.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [samba-1.1B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/lrds-code_-_samba-1.1B-gguf/blob/main/samba-1.1B.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [samba-1.1B.Q4_0.gguf](https://huggingface.co/RichardErkhov/lrds-code_-_samba-1.1B-gguf/blob/main/samba-1.1B.Q4_0.gguf) | Q4_0 | 0.59GB |
| [samba-1.1B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/lrds-code_-_samba-1.1B-gguf/blob/main/samba-1.1B.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [samba-1.1B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/lrds-code_-_samba-1.1B-gguf/blob/main/samba-1.1B.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [samba-1.1B.Q4_K.gguf](https://huggingface.co/RichardErkhov/lrds-code_-_samba-1.1B-gguf/blob/main/samba-1.1B.Q4_K.gguf) | Q4_K | 0.62GB |
| [samba-1.1B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/lrds-code_-_samba-1.1B-gguf/blob/main/samba-1.1B.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [samba-1.1B.Q4_1.gguf](https://huggingface.co/RichardErkhov/lrds-code_-_samba-1.1B-gguf/blob/main/samba-1.1B.Q4_1.gguf) | Q4_1 | 0.65GB |
| [samba-1.1B.Q5_0.gguf](https://huggingface.co/RichardErkhov/lrds-code_-_samba-1.1B-gguf/blob/main/samba-1.1B.Q5_0.gguf) | Q5_0 | 0.71GB |
| [samba-1.1B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/lrds-code_-_samba-1.1B-gguf/blob/main/samba-1.1B.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [samba-1.1B.Q5_K.gguf](https://huggingface.co/RichardErkhov/lrds-code_-_samba-1.1B-gguf/blob/main/samba-1.1B.Q5_K.gguf) | Q5_K | 0.73GB |
| [samba-1.1B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/lrds-code_-_samba-1.1B-gguf/blob/main/samba-1.1B.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [samba-1.1B.Q5_1.gguf](https://huggingface.co/RichardErkhov/lrds-code_-_samba-1.1B-gguf/blob/main/samba-1.1B.Q5_1.gguf) | Q5_1 | 0.77GB |
| [samba-1.1B.Q6_K.gguf](https://huggingface.co/RichardErkhov/lrds-code_-_samba-1.1B-gguf/blob/main/samba-1.1B.Q6_K.gguf) | Q6_K | 0.84GB |
| [samba-1.1B.Q8_0.gguf](https://huggingface.co/RichardErkhov/lrds-code_-_samba-1.1B-gguf/blob/main/samba-1.1B.Q8_0.gguf) | Q8_0 | 1.09GB |
Original model description:
---
language:
- pt
license: llama2
tags:
- Portuguese
- Tiny-Llama
- PEFT
widget:
- example_title: Pedro Álvares Cabral
messages:
- role: system
content: Você é um historiador que é especialista em história do Brasil.
- role: user
content: Quem foi Pedro Álvares Cabral?
---
<hr>
# README
<hr>
<p align="center">
<img width="250" alt="Samba Logo" src="https://cdn-uploads.huggingface.co/production/uploads/658c21f4c1229bf113295773/xH3K8H4qu2ps_IzAl9cgz.png">
</p>
Samba is an LLM trained on Portuguese-language data. The model is based on [TinyLlama-1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0), a 1.1B-parameter model built on the LLaMA-2 architecture.
<p align="center">
<img width="250" alt="Countries Logo" src="https://cdn-uploads.huggingface.co/production/uploads/658c21f4c1229bf113295773/d3twZrXng5eDjg_LbH4pF.png">
</p>
## Model Description
- **Developed by:** [Leonardo Souza](https://huggingface.co/lrds-code)
- **Model Type:** LLaMA-Based
- **License:** Academic Free License v3.0
- **Fine-tuned from model:** [TinyLlama-1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
## How to use
```python
import torch
from transformers import pipeline
samba = pipeline('text-generation', model='lrds-code/samba-1.1B', torch_dtype=torch.bfloat16, device_map='auto')
messages = [{'role':'system',
'content':''},
{'role':'user',
'content':'Quantos planetas existem no sistema solar?'}]
prompt = samba.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = samba(prompt, max_new_tokens=256, do_sample=False, temperature=0.1, top_k=50, top_p=0.95, repetition_penalty=1.1)
print(outputs[0]['generated_text'])
```
## Important Parameters
- **repetition_penalty:** used to avoid repetition of words or phrases. When this value is set above 1, the model tries to lower the probability of generating words that have already appeared. Basically, the higher the value, the harder the model tries to avoid repetition.
- **do_sample:** determines whether or not the model samples the next word randomly based on the computed probabilities. **do_sample=True** introduces variation and unpredictability into the generated text, whereas with **do_sample=False** the model always picks the most probable word as the next word, which can lead to more deterministic and possibly more repetitive outputs.
- **temperature:** affects the randomness in choosing the next word. A low value (close to 0) makes the model more "confident" in its choices, favoring high-probability words and leading to more predictable outputs. Conversely, a high value increases randomness, allowing the model to pick less probable words, which can make the generated text more varied and creative.
|
healtori/03-heal-07-02-01 | healtori | "2024-07-02T18:10:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:07:12Z" | Entry not found |
starnet/04-star-07-02-01 | starnet | "2024-07-02T18:11:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:07:44Z" | Entry not found |
kaveri1184/gemma-7b-ft-test | kaveri1184 | "2024-07-02T18:08:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T18:08:01Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-rag-gguf | RichardErkhov | "2024-07-02T18:18:21Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T18:08:36Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
malaysian-tinyllama-1.1b-16k-instructions-rag - GGUF
- Model creator: https://huggingface.co/mesolitica/
- Original model: https://huggingface.co/mesolitica/malaysian-tinyllama-1.1b-16k-instructions-rag/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [malaysian-tinyllama-1.1b-16k-instructions-rag.Q2_K.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-rag-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-rag.Q2_K.gguf) | Q2_K | 0.4GB |
| [malaysian-tinyllama-1.1b-16k-instructions-rag.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-rag-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-rag.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [malaysian-tinyllama-1.1b-16k-instructions-rag.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-rag-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-rag.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [malaysian-tinyllama-1.1b-16k-instructions-rag.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-rag-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-rag.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [malaysian-tinyllama-1.1b-16k-instructions-rag.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-rag-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-rag.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [malaysian-tinyllama-1.1b-16k-instructions-rag.Q3_K.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-rag-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-rag.Q3_K.gguf) | Q3_K | 0.51GB |
| [malaysian-tinyllama-1.1b-16k-instructions-rag.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-rag-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-rag.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [malaysian-tinyllama-1.1b-16k-instructions-rag.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-rag-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-rag.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [malaysian-tinyllama-1.1b-16k-instructions-rag.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-rag-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-rag.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [malaysian-tinyllama-1.1b-16k-instructions-rag.Q4_0.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-rag-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-rag.Q4_0.gguf) | Q4_0 | 0.59GB |
| [malaysian-tinyllama-1.1b-16k-instructions-rag.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-rag-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-rag.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [malaysian-tinyllama-1.1b-16k-instructions-rag.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-rag-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-rag.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [malaysian-tinyllama-1.1b-16k-instructions-rag.Q4_K.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-rag-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-rag.Q4_K.gguf) | Q4_K | 0.62GB |
| [malaysian-tinyllama-1.1b-16k-instructions-rag.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-rag-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-rag.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [malaysian-tinyllama-1.1b-16k-instructions-rag.Q4_1.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-rag-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-rag.Q4_1.gguf) | Q4_1 | 0.65GB |
| [malaysian-tinyllama-1.1b-16k-instructions-rag.Q5_0.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-rag-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-rag.Q5_0.gguf) | Q5_0 | 0.71GB |
| [malaysian-tinyllama-1.1b-16k-instructions-rag.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-rag-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-rag.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [malaysian-tinyllama-1.1b-16k-instructions-rag.Q5_K.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-rag-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-rag.Q5_K.gguf) | Q5_K | 0.73GB |
| [malaysian-tinyllama-1.1b-16k-instructions-rag.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-rag-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-rag.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [malaysian-tinyllama-1.1b-16k-instructions-rag.Q5_1.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-rag-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-rag.Q5_1.gguf) | Q5_1 | 0.77GB |
| [malaysian-tinyllama-1.1b-16k-instructions-rag.Q6_K.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-rag-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-rag.Q6_K.gguf) | Q6_K | 0.84GB |
| [malaysian-tinyllama-1.1b-16k-instructions-rag.Q8_0.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-rag-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-rag.Q8_0.gguf) | Q8_0 | 1.09GB |
Original model description:
---
language:
- ms
---
# Full Parameter Finetuning of TinyLlama with 16384 context length on a Malaysian instructions RAG dataset
We use the exact Mistral Instruct chat template.
## Dataset
Dataset gathered at https://huggingface.co/collections/mesolitica/malaysian-synthetic-dataset-656c2673fe7fe0b1e9e25fe2
Notebook to prepare dataset at https://github.com/mesolitica/malaysian-dataset/blob/master/llm-instruction/combine-malay-no-alignment-multitasks-partial-ultrachat-v2.ipynb
## how-to
```python
# from https://sdb.mosti.gov.my/sdbcms/wp-content/uploads/2024/03/GARIS-PANDUAN-SRF-RFP-2024-v1.pdf
s = """
PENGENALAN
1.1 Dana Penyelidikan Strategik (SRF) adalah skim pembiayaan berbentuk
geran bagi membiayai penyelidikan strategik dan inisiatif top- down
berimpak tinggi kepada negara berdasarkan bidang keutamaan semasa
yang telah dikenal pasti.
1.2 Kementerian Sains, Teknologi dan Inovasi (MOSTI) telah mengambil
inisiatif memperkasakan lagi skim SRF dengan melaksanakan permohonan
melalui kaedah request for proposal (RFP). Melalui kaedah ini,
penyelesaian bagi sesuatu permasalahan atau tujuan khusus dapat
diperolehi.
2. OBJEKTIF
2.1 Dana Penyelidikan Strategik – Request for Proposal (SRF-RFP) bertujuan
untuk menyediakan dana bagi membiayai projek-projek yang menyokong
pelaksanaan dasar, pelan hala tuju, pelan tindakan atau insiatif kerajaan
melalui RFP yang dibangunkan.
2.2 Penyelesaian bagi penyataan masalah khusus dalam bentuk teknologi,
produk atau proses baharu yang berinovatif dijangka akan menghasilkan
impak yang besar kepada sosioekonomi negara selari dengan Dasar Sains,
Teknologi dan Inovasi Negara (DSTIN).
2.3 Tahap Kesediaan Teknologi (TRL) bagi skim SRF-RFP hendaklah
sekurang-kurangnya berada pada TRL 3 dan perlu dibangunkan ke
TRL yang lebih tinggi antara TRL 6 hingga 9 seperti skop TRL SRF-RFP
di Rajah 1. Penerangan mengenai TRL adalah seperti di Lampiran 1
(TRL 1- 9).
Rajah 1: Skop dana SRF-RFP mengikut TRL
SRF- RFP
GARIS PANDUAN DANA PENYELIDIKAN STRATEGIK – REQUEST FOR PROPOSAL (SRF-RFP)
(Mac 2024)
5
3. BIDANG KEUTAMAAN
3.1 Bidang keutamaan dan tajuk RFP khusus bagi skim SRF-RFP yang telah
dikenal pasti berdasarkan Rangka Kerja 10-10 Malaysian Science,
Technology, Innovation & Economy (MySTIE) adalah berdasarkan
dokumen RFP seperti di pautan berikut:
DANA PENYELIDIKAN STRATEGIK – DOKUMEN RFP
4. KATEGORI PEMOHON
4.1 Skim SRF-RFP adalah terbuka kepada:
i. Syarikat Perusahaan Kecil dan Sederhana (PKS);
ii. Syarikat Pemula (start-up);
iii. Syarikat Multinasional (MNC);
iv. Large Companies;
v. Institusi Penyelidikan Kerajaan (GRI);
vi. Institusi Pengajian Tinggi (IPT) Awam dan Swasta; dan
vii. Agensi Sains, Teknologi dan Inovasi Kerajaan (Agensi STI)
5. KRITERIA KELAYAKAN
5.1 Syarikat Perusahaan Kecil dan Sederhana (PKS) dan Syarikat Pemula
(start-up) yang berasaskan/ berkaitan teknologi dan inovasi perlu mematuhi
syarat di bawah bagi permohonan SRF-RFP:
5.1.1 Terbuka kepada Syarikat dan Perniagaan yang berdaftar dengan
Suruhanjaya Syarikat Malaysia (SSM) manakala Perniagaan di Sabah
dan Sarawak perlu berdaftar dengan Pihak Berkuasa Tempatan.
5.1.2 Definisi Syarikat Perusahaan Kecil dan Sederhana seperti di
Jadual 1.
GARIS PANDUAN DANA PENYELIDIKAN STRATEGIK – REQUEST FOR PROPOSAL (SRF-RFP)
(Mac 2024)
6
Jadual 1: Definisi Perusahaan Kecil dan Sederhana berdasarkan Saiz Operasi
Sumber: SME Corporation Malaysia
5.1.3 Definisi Syarikat Pemula (start-up): A technology- or innovationenabled business at early stage with a scalable business model and a
high-growth strategy.
5.1.4 Kriteria kelayakan bagi Syarikat Pemula (start-up) adalah seperti
berikut:
i. Berdaftar dengan Suruhanjaya Syarikat Malaysia (SSM);
i. Pemilikan majoriti warganegara Malaysia (>50%);
ii. Modal berbayar sekurang-kurangnya RM10,000.00;
iii. Mempunyai sekurang-kurangnya dua (2) pengarah syarikat;
iv. Perniagaan berasaskan teknologi/ berkaitan teknologi dan
inovasi; dan
v. Operasi syarikat tidak melebihi 5 tahun.
5.2 Bagi Syarikat PKS atau Syarikat pemula (start-up), yang pemilikan tidak
mencapai majoriti warganegara Malaysia (<50%), syarat-syarat tambahan
berikut hendaklah dipatuhi, iaitu:
i. Pemohon mempunyai kelayakan minima dari segi pembuktian konsep
(proof of concept, POC) atau prototaip yang telah berfungsi (working
prototype),
ii. Syarikat beroperasi di Malaysia; dan
iii. Sekurang-kurangnya 70% pekerja adalah warganegara Malaysia.
Kategori Perusahaan Kecil Perusahaan Sederhana
Pembuatan • Jualan tahunan daripada
RM300,000 hingga
kurang daripada RM15
juta; atau
• Bilangan pekerja sepenuh
masa daripada 5 orang
hingga kurang daripada
75 orang.
• Jualan tahunan daripada RM15
juta hingga tidak melebihi RM50
juta; atau
• Bilangan pekerja sepenuh masa
daripada 75 orang hingga tidak
melebihi 200 orang.
Perkhidmatan
dan Sektor
Lain
• Jualan tahunan daripada
RM300,000 hingga
kurang daripada RM3 juta;
atau
• Bilangan pekerja sepenuh
masa daripada 5 orang
hingga kurang daripada
30 orang.
• Jualan tahunan daripada RM3
juta hingga tidak melebihi RM20
juta; atau
• Bilangan pekerja sepenuh masa
daripada 30 orang hingga tidak
melebihi 75 orang.
GARIS PANDUAN DANA PENYELIDIKAN STRATEGIK – REQUEST FOR PROPOSAL (SRF-RFP)
(Mac 2024)
7
5.3 Bagi permohonan daripada syarikat PKS, Syarikat Multinasional (MNC) dan
Large Companies, dana ini ditawarkan secara geran padanan di mana
syarikat hendaklah membiayai sekurang-kurangnya 35% (monetary atau
in-kind) daripada jumlah keseluruhan kos projek.
5.4 Agensi STI adalah merujuk kepada agensi yang menjalankan fungsi
penyelidikan dan perkhidmatan berkaitan STI di bawah MOSTI.
5.5 Permohonan daripada Institusi Pengajian Tinggi (IPT) Awam dan Swasta
hendaklah berkolaborasi dengan Syarikat Pemula/Syarikat Perusahaan
Kecil dan Sederhana (PKS) (bukti dokumen adalah sekurang-kurangnya
surat persetujuan (Letter of Acceptance (LoA)) atau lain-lain dokumen yang
setara).
5.6 Permohonan daripada Syarikat Pemula dan PKS digalakkan
berkolaborasi dengan IPTA, IPTS, GRI atau Agensi STI.
5.7 Pemohon yang berkolaborasi dengan IPTA, IPTS, GRI atau Agensi STI,
hendaklah melantik Research Officer (RO)/ Graduate Research Assistant
(GRA). (bukti dokumen adalah sekurang-kurangnya surat persetujuan
(Letter of Acceptance, LoA) atau lain-lain dokumen yang setara).
5.8 Manakala Institusi Penyelidikan Kerajaan/Agensi STI Kerajaan digalakkan
berkolaborasi dengan Syarikat Pemula/Syarikat Perusahaan Kecil dan
Sederhana (PKS) (bukti dokumen adalah sekurang-kurangnya surat
persetujuan (Letter of Acceptance, LoA) atau lain-lain dokumen yang
setara).
5.9 Semua pemohon hendaklah berdaftar di Malaysia.
5.10 Pengarah syarikat atau anggota pasukan projek tidak pernah disabitkan
atas kegiatan penipuan atau syarikat diisytihar muflis, atau dalam
pembubaran atau di bawah receivership.
5.11 Ketua Projek yang terdiri daripada warganegara Malaysia boleh melibatkan
ahli projek daripada organisasi antarabangsa atau ekspatriat yang bekerja
dari institusi yang sama.
5.12 Manakala Ketua Projek yang bukan warganegara Malaysia dibenarkan
untuk memohon dengan syarat:
i. permit kerja adalah sah sepanjang tempoh pelaksanaan projek; dan
GARIS PANDUAN DANA PENYELIDIKAN STRATEGIK – REQUEST FOR PROPOSAL (SRF-RFP)
(Mac 2024)
8
ii. ahli projek mestilah terdiri daripada warganegara Malaysia yang
mempunyai bidang kepakaran yang sama dan dari institusi yang sama.
5.13 Ketua Projek hanya dibenarkan mengetuai satu projek sahaja di bawah
kelulusan MOSTI pada satu masa.
5.14 Penyelidik yang bekerja di bawah kontrak Institusi Penyelidikan Kerajaan/
Agensi STI Kerajaan/ Institusi Pengajian Tinggi (IPT) Awam dan Swasta/
hendaklah memastikan bahawa kontrak pekerjaan masih sah sepanjang
tempoh projek.
5.15 Pasukan projek harus terdiri daripada ahli yang berkelayakan dan cekap
dalam aspek teknikal bagi keseluruhan projek. Setiap ahli pasukan
hendaklah menyediakan resume (curriculum vitae) yang jelas mengenai
bidang penyelidikan, pengalaman dan kejayaan yang telah dicapai.
5.16 Jika ahli projek adalah daripada institusi yang berlainan, surat kebenaran
daripada ketua jabatan hendaklah dikemukakan.
5.17 Pemohon dibenarkan mengemukakan beberapa permohonan bagi projekprojek yang berbeza dengan syarat pemohon mempunyai kemampuan dari
segi sumber manusia dan kewangan yang kukuh.
5.18 Projek mesti dilaksanakan di Malaysia kecuali mendapat kelulusan
daripada MOSTI.
5.19 Projek yang dicadangkan perlu mengandungi elemen pembangunan
eksperimental (experimental development) yang menghala kepada
pengkomersialan.
5.20 Projek yang dicadangkan perlu berada pada tahap pra-pengkomersialan
dengan sekurang-kurangnya mempunyai experimental proof of concept
(TRL 3).
5.21 Ketua Projek perlu memaklumkan kepada pihak MOSTI sekiranya telah
menerima dana daripada pihak-pihak yang lain bagi projek yang sama.
5.22 Permohonan projek yang berkaitan dengan penguatkuasaan keselamatan
dan pertahanan (polis dan tentera) tidak akan dibiaya di bawah skim ini.
GARIS PANDUAN DANA PENYELIDIKAN STRATEGIK – REQUEST FOR PROPOSAL (SRF-RFP)
(Mac 2024)
9
6. PROSES PERMOHONAN
6.1 Permohonan SRF-RFP melibatkan lima (5) peringkat utama seperti
ditunjukkan di Rajah 2:
Rajah 2: Peringkat proses permohonan
6.1.1 Peringkat 1: Nota Konsep
i. Pemohon perlu berdaftar sebagai pengguna portal Sistem Dana
Bersepadu (SDB) di pautan https://sdb.mosti.gov.my/sdbcms/
ii. Pemohon hendaklah menyediakan nota konsep dengan
melengkapkan borang dalam portal SDB dengan merujuk kepada
dokumen RFP dan garis panduan permohonan skim SRF-RFP
serta skop pembiayaan yang telah ditetapkan.
6.1.2 Peringkat 2: Saringan Awal
i. Nota konsep yang diterima akan melalui proses saringan awal bagi
menilai pematuhan kepada spesifikasi dan jangkaan hasil projek
selaras dengan keperluan RFP.
ii. Pemohon bagi nota konsep yang disenarai pendek akan diminta
unuk membentangkan cadangan projek kepada jawatankuasa di
peringkat MOSTI.
iii. Hanya pemohon yang berjaya melepasi saringan awal sahaja akan
dipelawa untuk mengemukakan permohonan penuh.
"""
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained('mesolitica/malaysian-tinyllama-1.1b-16k-instructions-rag')
model = AutoModelForCausalLM.from_pretrained(
'mesolitica/malaysian-tinyllama-1.1b-16k-instructions-rag',
use_flash_attention_2 = True,
torch_dtype = torch.float16
)
_ = model.cuda()
prompt = """
knowledge base is below.
---------------------
{best_doc}
---------------------
Given the knowledge base and not prior knowledge, answer the question.
Question: {question}
""".strip()
# 'camne nak dapat grant' is colloquial Malay for "how do I get the grant?"
messages = [
    {'role': 'user', 'content': prompt.format(best_doc = s, question = 'camne nak dapat grant')}
]
inputs = tokenizer.apply_chat_template(messages, tokenize = False)
inputs = tokenizer([inputs], return_tensors='pt', add_special_tokens=False).to('cuda')
generate_kwargs = dict(
    inputs,
    max_new_tokens=1024,
    top_p=0.95,
    top_k=50,
    temperature=0.9,
    do_sample=True,
    num_beams=1,
)
r = model.generate(**generate_kwargs)
# Everything after the [/INST] tag is the model's answer
print(tokenizer.decode(r[0]).split('[/INST]')[1])
```
```
To obtain a grant from the Strategic Research Fund – Request for Proposal (SRF-RFP), you need to follow these steps:
1. Carefully read the SRF-RFP Guidelines and Application Procedures to understand the terms and conditions.
2. Prepare the required documents, including the following:
	* A complete, certified application form
	* A sample grant draft
	* The grant brochure and any related information
	* Consultation and production-design confirmation forms
	* A sample research proposal
	* Allocation confirmation forms for each research team (with participation where applicable)
3. Submit the complete documents to MOSTI.
4. Attend the research team selection, as stated in item 3.2 of the knowledge base.
5. If selected, you will receive an offer letter setting out the grant terms and conditions, including any other relevant details.
6. Follow the guidelines and procedures provided in the offer letter.
7. Once the project is complete, MOSTI will review and assess the project's progress and impact.
8. Pay the grant within the stipulated time period.
It is important to note that the SRF-RFP covers a wide range of research areas and the project will be assessed on its merits. The lengthy application and approval process may not suit all prospective applicants, so it is recommended to review the guidelines carefully and follow the instructions closely.
I hope this helps you with the SRF-RFP application process. If you need further clarification or have any other questions, please do not hesitate to ask.</s>
```
```python
# 'camne nak tanam pokok manggis' is colloquial Malay for
# "how do I plant a mangosteen tree?" — an out-of-domain question
messages = [
    {'role': 'user', 'content': prompt.format(best_doc = s, question = 'camne nak tanam pokok manggis')}
]
inputs = tokenizer.apply_chat_template(messages, tokenize = False)
inputs = tokenizer([inputs], return_tensors='pt', add_special_tokens=False).to('cuda')
generate_kwargs = dict(
inputs,
max_new_tokens=1024,
top_p=0.95,
top_k=50,
temperature=0.9,
do_sample=True,
num_beams=1,
)
r = model.generate(**generate_kwargs)
print(tokenizer.decode(r[0]).split('[/INST]')[1])
```
```
Based on the knowledge base provided, there is no information about projects or initiatives related to planting mangosteen trees. The knowledge base contains guidelines and eligibility requirements for a strategic funding programme in Malaysia, with an emphasis on the development of small and medium enterprises (SMEs) and new companies. For information on planting mangosteen trees, I would recommend that you check with the relevant agricultural sources or government agencies in Malaysia.</s>
```
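For interactive use, the same generation call can also stream tokens as they are produced. This sketch is not part of the original card; it assumes the `model`, `tokenizer`, and `inputs` objects set up above are still in scope.
```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated, skipping the echoed prompt
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=1024,
                   do_sample=True, temperature=0.9, top_p=0.95, top_k=50)
```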
|
gisang-lee/mistral-7b-qlora-arc-wandb-test-arc-challenge-all-origin | gisang-lee | "2024-07-02T18:19:34Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-07-02T18:08:47Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ayucel/my_awesome_wnut_model | ayucel | "2024-07-02T18:13:29Z" | 0 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"base_model:distilbert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-07-02T18:08:48Z" | ---
license: apache-2.0
base_model: distilbert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: ayucel/my_awesome_wnut_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ayucel/my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0351
- Validation Loss: 0.0292
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 875.9, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 0.1, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
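For readers who want to rebuild this optimizer outside the training script, the configuration above corresponds roughly to the following sketch with `transformers.create_optimizer`; the rounded step count and zero warmup are assumptions read off the config, not taken from the original code.
```python
from transformers import create_optimizer

# AdamWeightDecay with a linear (polynomial, power=1.0) decay from 2e-5 to 0
# over ~876 steps and weight decay of 0.01, matching the config above
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=876,  # 'decay_steps' from the config, rounded
    num_warmup_steps=0,   # the fractional 'warmup_steps': 0.1 amounts to no warmup
    weight_decay_rate=0.01,
)
```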
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1386 | 0.0431 | 0 |
| 0.0351 | 0.0292 | 1 |
### Framework versions
- Transformers 4.41.2
- TensorFlow 2.15.0
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Maxivi/x | Maxivi | "2024-07-02T19:15:07Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T18:09:14Z" | Entry not found |
yousefg/ppo-LunarLander-v2 | yousefg | "2024-07-02T18:13:53Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-07-02T18:10:03Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 159.81 +/- 108.68
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
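Since the card leaves the snippet as a TODO, here is a hedged sketch of how such a checkpoint is usually loaded and evaluated; the checkpoint filename follows the common huggingface_sb3 naming convention and is an assumption, not confirmed by the repository.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; check the repo's file list for the actual name
checkpoint = load_from_hub(repo_id="yousefg/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```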
|
fifala/08-fifa-07-02-01 | fifala | "2024-07-02T18:13:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:10:09Z" | Entry not found |
manohar02/quantized-llama2-model-new | manohar02 | "2024-07-02T18:10:30Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T18:10:30Z" | Entry not found |
healtori/04-heal-07-02-01 | healtori | "2024-07-02T18:14:01Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:11:12Z" | Entry not found |
juanpablomesa/bge-base-bioasq-matryoshka | juanpablomesa | "2024-07-02T18:11:26Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:4012",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | "2024-07-02T18:11:15Z" | ---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4012
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 'Extensive messenger RNA editing generates transcript and protein
diversity in genes involved in neural excitability, as previously described, as
well as in genes participating in a broad range of other cellular functions. '
sentences:
- Do cephalopods use RNA editing less frequently than other species?
- GV1001 vaccine targets which enzyme?
- Which event results in the acetylation of S6K1?
- source_sentence: Yes, exposure to household furry pets influences the gut microbiota
of infants.
sentences:
- Can pets affect infant microbiomed?
- What is the mode of action of Thiazovivin?
- What are the effects of CAMK4 inhibition?
- source_sentence: "In children with heart failure evidence of the effect of enalapril\
\ is empirical. Enalapril was clinically safe and effective in 50% to 80% of for\
\ children with cardiac failure secondary to congenital heart malformations before\
\ and after cardiac surgery, impaired ventricular function , valvar regurgitation,\
\ congestive cardiomyopathy, , arterial hypertension, life-threatening arrhythmias\
\ coexisting with circulatory insufficiency. \nACE inhibitors have shown a transient\
\ beneficial effect on heart failure due to anticancer drugs and possibly a beneficial\
\ effect in muscular dystrophy-associated cardiomyopathy, which deserves further\
\ studies."
sentences:
- Which receptors can be evaluated with the [18F]altanserin?
- In what proportion of children with heart failure has Enalapril been shown to
be safe and effective?
- Which major signaling pathways are regulated by RIP1?
- source_sentence: Cellular senescence-associated heterochromatic foci (SAHFS) are
a novel type of chromatin condensation involving alterations of linker histone
H1 and linker DNA-binding proteins. SAHFS can be formed by a variety of cell types,
but their mechanism of action remains unclear.
sentences:
- What is the relationship between the X chromosome and a neutrophil drumstick?
- Which microRNAs are involved in exercise adaptation?
- How are SAHFS created?
- source_sentence: Multicluster Pcdh diversity is required for mouse olfactory neural
circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins
are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although
deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss
of all three clusters (tricluster deletion) led to a severe axonal arborization
defect and loss of self-avoidance.
sentences:
- What are the effects of the deletion of all three Pcdh clusters (tricluster deletion)
in mice?
- what is the role of MEF-2 in cardiomyocyte differentiation?
- How many periods of regulatory innovation led to the evolution of vertebrates?
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.8528995756718529
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9264497878359265
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9462517680339463
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.958981612446959
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8528995756718529
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3088165959453088
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18925035360678924
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09589816124469587
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8528995756718529
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9264497878359265
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9462517680339463
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.958981612446959
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9106149406529569
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8946105835073304
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8959864574088351
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.8472418670438473
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9321074964639321
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9476661951909476
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9603960396039604
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8472418670438473
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3107024988213107
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1895332390381895
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09603960396039603
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8472418670438473
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9321074964639321
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9476661951909476
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9603960396039604
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9095270940461391
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8926230888394963
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8939142126576148
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.8359264497878359
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.925035360678925
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9405940594059405
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9533239038189534
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8359264497878359
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.30834512022630833
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1881188118811881
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09533239038189532
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8359264497878359
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.925035360678925
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9405940594059405
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9533239038189534
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9003866854175698
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8828006780269864
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8839707936250328
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.8175388967468176
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9108910891089109
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9264497878359265
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9434229137199435
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8175388967468176
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.30363036303630364
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18528995756718525
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09434229137199433
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8175388967468176
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9108910891089109
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9264497878359265
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9434229137199435
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8862907631297875
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8674047506791496
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8686719824449951
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.7779349363507779
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8868458274398868
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9066478076379066
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9207920792079208
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7779349363507779
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2956152758132956
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1813295615275813
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09207920792079208
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7779349363507779
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8868458274398868
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9066478076379066
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9207920792079208
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8570476590886804
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.835792303720168
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8374166888522218
name: Cosine Map@100
---
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("juanpablomesa/bge-base-bioasq-matryoshka")
# Run inference
sentences = [
'Multicluster Pcdh diversity is required for mouse olfactory neural circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss of all three clusters (tricluster deletion) led to a severe axonal arborization defect and loss of self-avoidance.',
'What are the effects of the deletion of all three Pcdh clusters (tricluster deletion) in mice?',
'How many periods of regulatory innovation led to the evolution of vertebrates?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
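Because this is a Matryoshka model evaluated at 768/512/256/128/64 dimensions, the smaller output sizes can be used directly via Sentence Transformers' `truncate_dim` option. The snippet below is a sketch added for illustration, not part of the original card.
```python
from sentence_transformers import SentenceTransformer

# Truncate embeddings to the first 256 dimensions, matching the dim_256 results below
model_256 = SentenceTransformer("juanpablomesa/bge-base-bioasq-matryoshka", truncate_dim=256)
embeddings = model_256.encode(["What is the role of STAG1/STAG2 proteins in differentiation?"])
print(embeddings.shape)
# (1, 256)
```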
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.8529 |
| cosine_accuracy@3 | 0.9264 |
| cosine_accuracy@5 | 0.9463 |
| cosine_accuracy@10 | 0.959 |
| cosine_precision@1 | 0.8529 |
| cosine_precision@3 | 0.3088 |
| cosine_precision@5 | 0.1893 |
| cosine_precision@10 | 0.0959 |
| cosine_recall@1 | 0.8529 |
| cosine_recall@3 | 0.9264 |
| cosine_recall@5 | 0.9463 |
| cosine_recall@10 | 0.959 |
| cosine_ndcg@10 | 0.9106 |
| cosine_mrr@10 | 0.8946 |
| **cosine_map@100** | **0.896** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.8472 |
| cosine_accuracy@3 | 0.9321 |
| cosine_accuracy@5 | 0.9477 |
| cosine_accuracy@10 | 0.9604 |
| cosine_precision@1 | 0.8472 |
| cosine_precision@3 | 0.3107 |
| cosine_precision@5 | 0.1895 |
| cosine_precision@10 | 0.096 |
| cosine_recall@1 | 0.8472 |
| cosine_recall@3 | 0.9321 |
| cosine_recall@5 | 0.9477 |
| cosine_recall@10 | 0.9604 |
| cosine_ndcg@10 | 0.9095 |
| cosine_mrr@10 | 0.8926 |
| **cosine_map@100** | **0.8939** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.8359 |
| cosine_accuracy@3 | 0.925 |
| cosine_accuracy@5 | 0.9406 |
| cosine_accuracy@10 | 0.9533 |
| cosine_precision@1 | 0.8359 |
| cosine_precision@3 | 0.3083 |
| cosine_precision@5 | 0.1881 |
| cosine_precision@10 | 0.0953 |
| cosine_recall@1 | 0.8359 |
| cosine_recall@3 | 0.925 |
| cosine_recall@5 | 0.9406 |
| cosine_recall@10 | 0.9533 |
| cosine_ndcg@10 | 0.9004 |
| cosine_mrr@10 | 0.8828 |
| **cosine_map@100** | **0.884** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.8175 |
| cosine_accuracy@3 | 0.9109 |
| cosine_accuracy@5 | 0.9264 |
| cosine_accuracy@10 | 0.9434 |
| cosine_precision@1 | 0.8175 |
| cosine_precision@3 | 0.3036 |
| cosine_precision@5 | 0.1853 |
| cosine_precision@10 | 0.0943 |
| cosine_recall@1 | 0.8175 |
| cosine_recall@3 | 0.9109 |
| cosine_recall@5 | 0.9264 |
| cosine_recall@10 | 0.9434 |
| cosine_ndcg@10 | 0.8863 |
| cosine_mrr@10 | 0.8674 |
| **cosine_map@100** | **0.8687** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7779 |
| cosine_accuracy@3 | 0.8868 |
| cosine_accuracy@5 | 0.9066 |
| cosine_accuracy@10 | 0.9208 |
| cosine_precision@1 | 0.7779 |
| cosine_precision@3 | 0.2956 |
| cosine_precision@5 | 0.1813 |
| cosine_precision@10 | 0.0921 |
| cosine_recall@1 | 0.7779 |
| cosine_recall@3 | 0.8868 |
| cosine_recall@5 | 0.9066 |
| cosine_recall@10 | 0.9208 |
| cosine_ndcg@10 | 0.857 |
| cosine_mrr@10 | 0.8358 |
| **cosine_map@100** | **0.8374** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 4,012 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 63.38 tokens</li><li>max: 485 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 16.13 tokens</li><li>max: 49 tokens</li></ul> |
* Samples:
| positive | anchor |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
| <code>Aberrant patterns of H3K4, H3K9, and H3K27 histone lysine methylation were shown to result in histone code alterations, which induce changes in gene expression, and affect the proliferation rate of cells in medulloblastoma.</code> | <code>What is the implication of histone lysine methylation in medulloblastoma?</code> |
| <code>STAG1/STAG2 proteins are tumour suppressor proteins that suppress cell proliferation and are essential for differentiation.</code> | <code>What is the role of STAG1/STAG2 proteins in differentiation?</code> |
| <code>The association between cell phone use and incident glioblastoma remains unclear. Some studies have reported that cell phone use was associated with incident glioblastoma, and with reduced survival of patients diagnosed with glioblastoma. However, other studies have repeatedly replicated to find an association between cell phone use and glioblastoma.</code> | <code>What is the association between cell phone use and glioblastoma?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
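For reference, the JSON configuration above corresponds to roughly the following construction in Sentence Transformers; this is a sketch of the equivalent code, not an excerpt from the actual training script.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Apply the in-batch-negatives loss at each nested embedding size
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss,
                      matryoshka_dims=[768, 512, 256, 128, 64])
```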
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.8889 | 7 | - | 0.8674 | 0.8951 | 0.8991 | 0.8236 | 0.8996 |
| 1.2698 | 10 | 1.6285 | - | - | - | - | - |
| 1.9048 | 15 | - | 0.8662 | 0.8849 | 0.8951 | 0.8334 | 0.8945 |
| 2.5397 | 20 | 0.7273 | - | - | - | - | - |
| 2.9206 | 23 | - | 0.8681 | 0.8849 | 0.8946 | 0.8362 | 0.8967 |
| **3.5556** | **28** | **-** | **0.8687** | **0.884** | **0.8939** | **0.8374** | **0.896** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.5
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
Magpie-Align/Llama-3-8B-Magpie-Pro-MT-UltraDPO3 | Magpie-Align | "2024-07-03T00:34:40Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T18:11:16Z" | |
starnet/05-star-07-02-01 | starnet | "2024-07-02T18:15:40Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:12:22Z" | Entry not found |
acunamartin1426/llama3-chess-finetune | acunamartin1426 | "2024-07-02T18:28:56Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-07-02T18:13:48Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fifala/09-fifa-07-02-01 | fifala | "2024-07-02T18:16:45Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:13:56Z" | Entry not found |
crrodrvi/mbart-simplificacion | crrodrvi | "2024-07-03T00:25:11Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/mbart-large-50",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-07-02T18:14:29Z" | ---
license: mit
base_model: facebook/mbart-large-50
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-simplificacion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-simplificacion
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2347
- Bleu: 6.2645
- Gen Len: 24.551
## Model description
More information needed
## Intended uses & limitations
More information needed
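The card leaves usage unspecified; as a hedged illustration (assuming the target task is Spanish text simplification, which the model name suggests), inference might look like the sketch below. The language codes and example sentence are assumptions.
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

tokenizer = MBart50TokenizerFast.from_pretrained("crrodrvi/mbart-simplificacion", src_lang="es_XX")
model = MBartForConditionalGeneration.from_pretrained("crrodrvi/mbart-simplificacion")

text = "La fotosíntesis es el proceso mediante el cual las plantas convierten la luz solar en energía química."
inputs = tokenizer(text, return_tensors="pt")
out = model.generate(
    **inputs,
    max_length=64,
    forced_bos_token_id=tokenizer.lang_code_to_id["es_XX"],  # assumed Spanish-to-Spanish
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```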
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 109 | 2.9828 | 6.2563 | 20.6939 |
| No log | 2.0 | 218 | 2.7680 | 6.5679 | 25.0612 |
| No log | 3.0 | 327 | 3.3097 | 5.801 | 26.6531 |
| No log | 4.0 | 436 | 3.8920 | 6.5828 | 25.7347 |
| 1.4478 | 5.0 | 545 | 4.2347 | 6.2645 | 24.551 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
manohar02/Llama-2-7b-quantize | manohar02 | "2024-07-02T18:49:53Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T18:14:40Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
healtori/07-heal-07-02-01 | healtori | "2024-07-02T18:17:59Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:14:57Z" | Entry not found |
RichardErkhov/Alphacode-AI_-_Alphacode-MALI-11B-gguf | RichardErkhov | "2024-07-03T01:06:09Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T18:15:48Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Alphacode-MALI-11B - GGUF
- Model creator: https://huggingface.co/Alphacode-AI/
- Original model: https://huggingface.co/Alphacode-AI/Alphacode-MALI-11B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Alphacode-MALI-11B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Alphacode-AI_-_Alphacode-MALI-11B-gguf/blob/main/Alphacode-MALI-11B.Q2_K.gguf) | Q2_K | 3.8GB |
| [Alphacode-MALI-11B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Alphacode-AI_-_Alphacode-MALI-11B-gguf/blob/main/Alphacode-MALI-11B.IQ3_XS.gguf) | IQ3_XS | 4.23GB |
| [Alphacode-MALI-11B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Alphacode-AI_-_Alphacode-MALI-11B-gguf/blob/main/Alphacode-MALI-11B.IQ3_S.gguf) | IQ3_S | 4.46GB |
| [Alphacode-MALI-11B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Alphacode-AI_-_Alphacode-MALI-11B-gguf/blob/main/Alphacode-MALI-11B.Q3_K_S.gguf) | Q3_K_S | 4.43GB |
| [Alphacode-MALI-11B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Alphacode-AI_-_Alphacode-MALI-11B-gguf/blob/main/Alphacode-MALI-11B.IQ3_M.gguf) | IQ3_M | 4.6GB |
| [Alphacode-MALI-11B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Alphacode-AI_-_Alphacode-MALI-11B-gguf/blob/main/Alphacode-MALI-11B.Q3_K.gguf) | Q3_K | 4.94GB |
| [Alphacode-MALI-11B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Alphacode-AI_-_Alphacode-MALI-11B-gguf/blob/main/Alphacode-MALI-11B.Q3_K_M.gguf) | Q3_K_M | 4.94GB |
| [Alphacode-MALI-11B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Alphacode-AI_-_Alphacode-MALI-11B-gguf/blob/main/Alphacode-MALI-11B.Q3_K_L.gguf) | Q3_K_L | 5.37GB |
| [Alphacode-MALI-11B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Alphacode-AI_-_Alphacode-MALI-11B-gguf/blob/main/Alphacode-MALI-11B.IQ4_XS.gguf) | IQ4_XS | 5.54GB |
| [Alphacode-MALI-11B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Alphacode-AI_-_Alphacode-MALI-11B-gguf/blob/main/Alphacode-MALI-11B.Q4_0.gguf) | Q4_0 | 5.77GB |
| [Alphacode-MALI-11B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Alphacode-AI_-_Alphacode-MALI-11B-gguf/blob/main/Alphacode-MALI-11B.IQ4_NL.gguf) | IQ4_NL | 5.83GB |
| [Alphacode-MALI-11B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Alphacode-AI_-_Alphacode-MALI-11B-gguf/blob/main/Alphacode-MALI-11B.Q4_K_S.gguf) | Q4_K_S | 5.81GB |
| [Alphacode-MALI-11B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Alphacode-AI_-_Alphacode-MALI-11B-gguf/blob/main/Alphacode-MALI-11B.Q4_K.gguf) | Q4_K | 6.15GB |
| [Alphacode-MALI-11B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Alphacode-AI_-_Alphacode-MALI-11B-gguf/blob/main/Alphacode-MALI-11B.Q4_K_M.gguf) | Q4_K_M | 6.15GB |
| [Alphacode-MALI-11B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Alphacode-AI_-_Alphacode-MALI-11B-gguf/blob/main/Alphacode-MALI-11B.Q4_1.gguf) | Q4_1 | 6.4GB |
| [Alphacode-MALI-11B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Alphacode-AI_-_Alphacode-MALI-11B-gguf/blob/main/Alphacode-MALI-11B.Q5_0.gguf) | Q5_0 | 7.03GB |
| [Alphacode-MALI-11B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Alphacode-AI_-_Alphacode-MALI-11B-gguf/blob/main/Alphacode-MALI-11B.Q5_K_S.gguf) | Q5_K_S | 7.03GB |
| [Alphacode-MALI-11B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Alphacode-AI_-_Alphacode-MALI-11B-gguf/blob/main/Alphacode-MALI-11B.Q5_K.gguf) | Q5_K | 7.22GB |
| [Alphacode-MALI-11B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Alphacode-AI_-_Alphacode-MALI-11B-gguf/blob/main/Alphacode-MALI-11B.Q5_K_M.gguf) | Q5_K_M | 7.22GB |
| [Alphacode-MALI-11B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Alphacode-AI_-_Alphacode-MALI-11B-gguf/blob/main/Alphacode-MALI-11B.Q5_1.gguf) | Q5_1 | 7.66GB |
| [Alphacode-MALI-11B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Alphacode-AI_-_Alphacode-MALI-11B-gguf/blob/main/Alphacode-MALI-11B.Q6_K.gguf) | Q6_K | 8.37GB |
| [Alphacode-MALI-11B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Alphacode-AI_-_Alphacode-MALI-11B-gguf/blob/main/Alphacode-MALI-11B.Q8_0.gguf) | Q8_0 | 10.84GB |
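One common way to run any of the files above is llama-cpp-python; the sketch below is an assumption about typical usage, not part of the original card, and the local path must point at a downloaded quant.
```python
from llama_cpp import Llama

# Q4_K_M is a common size/quality trade-off from the table above
llm = Llama(model_path="Alphacode-MALI-11B.Q4_K_M.gguf", n_ctx=4096)

# The original card lists Korean as the model language
out = llm("안녕하세요, 자기소개를 해주세요.", max_tokens=128)
print(out["choices"][0]["text"])
```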
Original model description:
---
license: cc-by-4.0
language:
- ko
pipeline_tag: text-generation
tags:
- merge
---
![alphacode](logo.png)
![mali](Alphacode_MALI.jpeg)
MALI-11B (Model with Auto Learning Ideation) is a merged version of Alphacode's models, fine-tuned on our in-house custom data.
Train Spec: We utilized 8x A100 GPUs to train our model with DeepSpeed / HuggingFace TRL Trainer / HuggingFace Accelerate.
Contact: Alphacode Co. [https://alphacode.ai/]
|
starnet/06-star-07-02-01 | starnet | "2024-07-02T18:19:53Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:16:41Z" | Entry not found |
YYYYYYibo/full_vanilla_dpo_iter_2 | YYYYYYibo | "2024-07-02T18:17:21Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T18:17:21Z" | Entry not found |
fifala/10-fifa-07-02-01 | fifala | "2024-07-02T18:20:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:17:31Z" | Entry not found |
nightsornram/food_classifier | nightsornram | "2024-07-02T18:50:48Z" | 0 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-07-02T18:17:52Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: nightsornram/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nightsornram/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3745
- Validation Loss: 0.3281
- Train Accuracy: 0.918
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.7892 | 1.6582 | 0.814 | 0 |
| 1.2074 | 0.8517 | 0.885 | 1 |
| 0.6957 | 0.5030 | 0.918 | 2 |
| 0.4869 | 0.4189 | 0.912 | 3 |
| 0.3745 | 0.3281 | 0.918 | 4 |
### Framework versions
- Transformers 4.41.2
- TensorFlow 2.15.0
- Datasets 2.20.0
- Tokenizers 0.19.1
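A minimal TensorFlow inference sketch, assuming the checkpoint is published under the repo id in the card title and that the fine-tuned label mapping is stored in the model config:
```python
from PIL import Image
from transformers import AutoImageProcessor, TFAutoModelForImageClassification

# Load the fine-tuned ViT checkpoint (repo id assumed from the card title).
processor = AutoImageProcessor.from_pretrained("nightsornram/food_classifier")
model = TFAutoModelForImageClassification.from_pretrained("nightsornram/food_classifier")

# Classify a local image (the file path is a placeholder).
image = Image.open("example_dish.jpg")
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
predicted_class = int(logits.numpy().argmax(axis=-1)[0])
print(model.config.id2label[predicted_class])
```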
|
healtori/08-heal-07-02-01 | healtori | "2024-07-02T18:21:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:18:58Z" | Entry not found |
ironlanderl/phi-3-mini-4k-f16-Q5_K_M-GGUF | ironlanderl | "2024-07-02T18:19:22Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"dataset:CohereForAI/aya_collection_language_split",
"base_model:ironlanderl/phi-3-mini-4k-f16",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T18:19:07Z" | ---
base_model: ironlanderl/phi-3-mini-4k-f16
datasets:
- CohereForAI/aya_collection_language_split
library_name: transformers
tags:
- unsloth
- trl
- sft
- llama-cpp
- gguf-my-repo
---
# ironlanderl/phi-3-mini-4k-f16-Q5_K_M-GGUF
This model was converted to GGUF format from [`ironlanderl/phi-3-mini-4k-f16`](https://huggingface.co/ironlanderl/phi-3-mini-4k-f16) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ironlanderl/phi-3-mini-4k-f16) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ironlanderl/phi-3-mini-4k-f16-Q5_K_M-GGUF --hf-file phi-3-mini-4k-f16-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ironlanderl/phi-3-mini-4k-f16-Q5_K_M-GGUF --hf-file phi-3-mini-4k-f16-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ironlanderl/phi-3-mini-4k-f16-Q5_K_M-GGUF --hf-file phi-3-mini-4k-f16-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ironlanderl/phi-3-mini-4k-f16-Q5_K_M-GGUF --hf-file phi-3-mini-4k-f16-q5_k_m.gguf -c 2048
```
|
InfiniteEcho/dqn-SpaceInvadersNoFrameskip-v4 | InfiniteEcho | "2024-07-02T18:21:00Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-07-02T18:20:28Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 597.00 +/- 219.18
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga InfiniteEcho -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga InfiniteEcho -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga InfiniteEcho
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
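The checkpoint can also be loaded directly in Python, outside the RL Zoo CLI. A minimal sketch using `huggingface_sb3` — the checkpoint filename is an assumption; check the repository's file listing for the exact name:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the zipped agent from the Hub (filename is assumed).
checkpoint = load_from_hub(
    repo_id="InfiniteEcho/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```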
|
ferrazzipietro/Meta-Llama-3-8B-Instruct_en.layer1_NoQuant_32_32_0.02_8 | ferrazzipietro | "2024-07-02T18:20:43Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T18:20:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
starnet/01-star21-07-02 | starnet | "2024-07-02T18:28:20Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | null | "2024-07-02T18:20:36Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-Chat-gguf | RichardErkhov | "2024-07-02T18:27:18Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T18:20:49Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TeenyTinyLlama-460m-Chat - GGUF
- Model creator: https://huggingface.co/nicholasKluge/
- Original model: https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m-Chat/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TeenyTinyLlama-460m-Chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-Chat-gguf/blob/main/TeenyTinyLlama-460m-Chat.Q2_K.gguf) | Q2_K | 0.17GB |
| [TeenyTinyLlama-460m-Chat.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-Chat-gguf/blob/main/TeenyTinyLlama-460m-Chat.IQ3_XS.gguf) | IQ3_XS | 0.19GB |
| [TeenyTinyLlama-460m-Chat.IQ3_S.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-Chat-gguf/blob/main/TeenyTinyLlama-460m-Chat.IQ3_S.gguf) | IQ3_S | 0.2GB |
| [TeenyTinyLlama-460m-Chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-Chat-gguf/blob/main/TeenyTinyLlama-460m-Chat.Q3_K_S.gguf) | Q3_K_S | 0.2GB |
| [TeenyTinyLlama-460m-Chat.IQ3_M.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-Chat-gguf/blob/main/TeenyTinyLlama-460m-Chat.IQ3_M.gguf) | IQ3_M | 0.21GB |
| [TeenyTinyLlama-460m-Chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-Chat-gguf/blob/main/TeenyTinyLlama-460m-Chat.Q3_K.gguf) | Q3_K | 0.22GB |
| [TeenyTinyLlama-460m-Chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-Chat-gguf/blob/main/TeenyTinyLlama-460m-Chat.Q3_K_M.gguf) | Q3_K_M | 0.22GB |
| [TeenyTinyLlama-460m-Chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-Chat-gguf/blob/main/TeenyTinyLlama-460m-Chat.Q3_K_L.gguf) | Q3_K_L | 0.24GB |
| [TeenyTinyLlama-460m-Chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-Chat-gguf/blob/main/TeenyTinyLlama-460m-Chat.IQ4_XS.gguf) | IQ4_XS | 0.24GB |
| [TeenyTinyLlama-460m-Chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-Chat-gguf/blob/main/TeenyTinyLlama-460m-Chat.Q4_0.gguf) | Q4_0 | 0.25GB |
| [TeenyTinyLlama-460m-Chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-Chat-gguf/blob/main/TeenyTinyLlama-460m-Chat.IQ4_NL.gguf) | IQ4_NL | 0.26GB |
| [TeenyTinyLlama-460m-Chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-Chat-gguf/blob/main/TeenyTinyLlama-460m-Chat.Q4_K_S.gguf) | Q4_K_S | 0.26GB |
| [TeenyTinyLlama-460m-Chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-Chat-gguf/blob/main/TeenyTinyLlama-460m-Chat.Q4_K.gguf) | Q4_K | 0.27GB |
| [TeenyTinyLlama-460m-Chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-Chat-gguf/blob/main/TeenyTinyLlama-460m-Chat.Q4_K_M.gguf) | Q4_K_M | 0.27GB |
| [TeenyTinyLlama-460m-Chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-Chat-gguf/blob/main/TeenyTinyLlama-460m-Chat.Q4_1.gguf) | Q4_1 | 0.28GB |
| [TeenyTinyLlama-460m-Chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-Chat-gguf/blob/main/TeenyTinyLlama-460m-Chat.Q5_0.gguf) | Q5_0 | 0.3GB |
| [TeenyTinyLlama-460m-Chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-Chat-gguf/blob/main/TeenyTinyLlama-460m-Chat.Q5_K_S.gguf) | Q5_K_S | 0.3GB |
| [TeenyTinyLlama-460m-Chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-Chat-gguf/blob/main/TeenyTinyLlama-460m-Chat.Q5_K.gguf) | Q5_K | 0.31GB |
| [TeenyTinyLlama-460m-Chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-Chat-gguf/blob/main/TeenyTinyLlama-460m-Chat.Q5_K_M.gguf) | Q5_K_M | 0.31GB |
| [TeenyTinyLlama-460m-Chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-Chat-gguf/blob/main/TeenyTinyLlama-460m-Chat.Q5_1.gguf) | Q5_1 | 0.33GB |
| [TeenyTinyLlama-460m-Chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-Chat-gguf/blob/main/TeenyTinyLlama-460m-Chat.Q6_K.gguf) | Q6_K | 0.36GB |
| [TeenyTinyLlama-460m-Chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-Chat-gguf/blob/main/TeenyTinyLlama-460m-Chat.Q8_0.gguf) | Q8_0 | 0.46GB |
Original model description:
---
license: apache-2.0
datasets:
- nicholasKluge/instruct-aira-dataset-v2
language:
- pt
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-generation
tags:
- alignment
- instruction tuned
- text generation
- conversation
- assistant
widget:
- text: "<s><instruction>Cite algumas bandas de rock famosas da década de 1960.</instruction>"
example_title: Exemplo
- text: "<s><instruction>Quantos planetas existem no sistema solar?</instruction>"
example_title: Exemplo
- text: "<s><instruction>Qual é o futuro do ser humano?</instruction>"
example_title: Exemplo
- text: "<s><instruction>Qual o sentido da vida?</instruction>"
example_title: Exemplo
- text: "<s><instruction>Como imprimir hello world em python?</instruction>"
example_title: Exemplo
- text: "<s><instruction>Invente uma história sobre um encanador com poderes mágicos.</instruction>"
example_title: Exemplo
inference:
parameters:
repetition_penalty: 1.2
temperature: 0.2
top_k: 30
top_p: 0.3
max_new_tokens: 200
length_penalty: 0.3
early_stopping: true
co2_eq_emissions:
emissions: 2530
source: CodeCarbon
training_type: fine-tuning
geographical_location: United States of America
hardware_used: NVIDIA A100-SXM4-40GB
---
# TeenyTinyLlama-460m-Chat
TeenyTinyLlama is a pair of small foundational models trained in Brazilian Portuguese.
This repository contains a version of [TeenyTinyLlama-460m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m) (`TeenyTinyLlama-460m-Chat`) fine-tuned on the [Instruct-Aira Dataset version 2.0](https://huggingface.co/datasets/nicholasKluge/instruct-aira-dataset-v2).
## Details
- **Number of Epochs:** 3
- **Batch size:** 4
- **Optimizer:** `torch.optim.AdamW` (warmup_steps = 1e3, learning_rate = 1e-5, epsilon = 1e-8)
- **GPU:** 1 NVIDIA A100-SXM4-40GB
- **Carbon emissions** stats are logged in this [file](emissions.csv).
This repository has the [source code](https://github.com/Nkluge-correa/TeenyTinyLlama) used to train this model.
## Intended Uses
The primary intended use of TeenyTinyLlama is to research the challenges related to developing language models for low-resource languages. Checkpoints saved during training are intended to provide a controlled setting for performing scientific experiments. You may also further fine-tune and adapt TeenyTinyLlama for deployment, as long as your use follows the Apache 2.0 license. If you decide to use pre-trained TeenyTinyLlama as a basis for your fine-tuned model, please conduct your own risk and bias assessment.
## Out-of-scope Use
TeenyTinyLlama is not intended for deployment. It is not a product and should not be used for human-facing interactions.
TeenyTinyLlama models are Brazilian Portuguese-only and are not suitable for translation or for generating text in other languages.
TeenyTinyLlama has not been fine-tuned for downstream contexts in which language models are commonly deployed.
## Usage
The following special tokens are used to mark the user side of the interaction and the model's response:
`<instruction>`What is a language model?`</instruction>`A language model is a probability distribution over a vocabulary.`</s>`
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/TeenyTinyLlama-460m-Chat')
model = AutoModelForCausalLM.from_pretrained('nicholasKluge/TeenyTinyLlama-460m-Chat')
model.eval()
model.to(device)
question = input("Entre seu prompt aqui: ")
inputs = tokenizer("<instruction>" + question + "</instruction>", return_tensors="pt").to(device)
responses = model.generate(**inputs, num_return_sequences=2)
print(f"Pergunta: 👤 {question}\n")
for i, response in enumerate(responses):
print(f'Resposta {i+1}: 🤖 {tokenizer.decode(response, skip_special_tokens=True).replace(question, "")}')
```
The model will output something like:
```markdown
>>>Question: 👤 Qual a capital do Brasil?
>>>Response 1: 🤖 A capital do Brasil é Brasília.
>>>Response 2: 🤖 A capital do Brasil é Brasília.
```
The chat template for this model is:
```bash
{{bos_token}}
{% for message in messages %}
{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}
{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}
{% endif %}
{% if message['role'] == 'user' %}
{{ '<instruction>' + message['content'].strip() + '</instruction>'}}
{% elif message['role'] == 'assistant' %}
{{ message['content'].strip() + eos_token}}
{% else %}
{{ raise_exception('Only user and assistant roles are supported!') }}
{% endif %}
{% endfor %}
```
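A minimal sketch of building a prompt from this template with `tokenizer.apply_chat_template` (available in recent transformers releases); the expected output string is an assumption based on the template above:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/TeenyTinyLlama-460m-Chat")

# User turns are wrapped in <instruction> tags; assistant turns end with </s>.
messages = [{"role": "user", "content": "Qual a capital do Brasil?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)  # <s><instruction>Qual a capital do Brasil?</instruction>
```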
## Limitations
Like almost all other language models trained on large text datasets scraped from the web, the TTL pair exhibits behaviors that make them unsuitable as an out-of-the-box solution for many real-world applications, especially those requiring factual, reliable, nontoxic text generation. Our models are all subject to the following:
- **Hallucinations:** This model can produce content that can be mistaken for truth but is, in fact, misleading or entirely false, i.e., hallucination.
- **Biases and Toxicity:** This model inherits the social and historical stereotypes from the data used to train it. Given these biases, the model can produce toxic content, i.e., harmful, offensive, or detrimental to individuals, groups, or communities.
- **Unreliable Code:** The model may produce incorrect code snippets and statements. These code generations should not be treated as suggestions or accurate solutions.
- **Language Limitations:** The model is primarily designed to understand standard Brazilian Portuguese. Other languages might challenge its comprehension, leading to potential misinterpretations or errors in response.
- **Repetition and Verbosity:** The model may get stuck on repetition loops (especially if the repetition penalty during generations is set to a meager value) or produce verbose responses unrelated to the prompt it was given.
Hence, even though our models are released with a permissive license, we urge users to perform their own risk analysis on these models if they intend to use them for real-world applications, and to have humans moderate the outputs of these models in applications where they interact with an audience, ensuring users are always aware they are interacting with a language model.
## Evaluations
During our training runs, both models showed consistent convergence. At no point did our evaluation curves show signs of overfitting or saturation. In the case of our 460m parameter model, we intentionally trained past the optimal point by approximately 75,000 steps to assess whether there were any signs of saturation, but our evaluations consistently gave better results. We hypothesize that our models are under-trained but can improve if trained further, past the Chinchilla-optimal range.
| Processed Tokens | Perplexity | Energy Consumption (kWh) | Emissions (KgCO2eq) |
|------------------|------------|---------------------------|----------------------|
| 8.1M | 20.49 | 9.40 | 3.34 |
| 1.6B | 16.90 | 18.82 | 6.70 |
| 2.4B | 15.43 | 28.59 | 10.16 |
| 3.2B | 14.64 | 38.20 | 13.57 |
| 4.0B | 14.08 | 48.04 | 17.07 |
| 4.9B | 13.61 | 57.74 | 20.52 |
| 5.7B | 13.25 | 67.32 | 23.92 |
| 6.5B | 12.87 | 76.84 | 27.30 |
| 7.3B | 12.57 | 86.40 | 30.70 |
| 8.1B | 12.27 | 96.19 | 34.18 |
| 9.0B | 11.96 | 106.06 | 37.70 |
| 9.8B | 11.77 | 115.69 | 41.31 |
## Benchmarks
Evaluations on benchmarks were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)). [Laiviet](https://github.com/laiviet/lm-evaluation-harness) translated the tasks from the LM-Evaluation-Harness we used. The results of models marked with an "*" were extracted from the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| | **ARC** | **HellaSwag** | **MMLU** | **TruthfulQA** | **Average** |
|------------------|-----------|---------------|-----------|----------------|-------------|
| Pythia-410m | 24.83* | 41.29* | 25.99* | 40.95* | 33.26 |
| **TTL-460m** | 29.40 | 33.00 | 28.55 | 41.10 | 33.01 |
| Bloom-560m | 24.74* | 37.15* | 24.22* | 42.44* | 32.13 |
| Xglm-564M | 25.56 | 34.64* | 25.18* | 42.53 | 31.97 |
| OPT-350m | 23.55* | 36.73* | 26.02* | 40.83* | 31.78 |
| **TTL-160m** | 26.15 | 29.29 | 28.11 | 41.12 | 31.16 |
| Pythia-160m | 24.06* | 31.39* | 24.86* | 44.34* | 31.16 |
| OPT-125m | 22.87* | 31.47* | 26.02* | 42.87* | 30.80 |
| GPorTuguese-2 | 22.48 | 29.62 | 27.36 | 41.44 | 30.22 |
| Gpt2-small | 21.48* | 31.60* | 25.79* | 40.65* | 29.97 |
| Multilingual GPT | 23.81 | 26.37* | 25.17* | 39.62 | 28.73 |
Evaluations on Brazilian Portuguese benchmarks were performed using a [Portuguese implementation of the EleutherAI LM Evaluation Harness](https://github.com/eduagarcia/lm-evaluation-harness-pt) (created by [Eduardo Garcia](https://github.com/eduagarcia/lm-evaluation-harness-pt)).
| | **ASSIN2 RTE** | **ASSIN2 STS** | **BLUEX** | **ENEM** | **FAQUAD NLI** | **HateBR** | **OAB Exams** | **Average** |
|----------------|----------------|----------------|-----------|----------|----------------|------------|---------------|-------------|
| Qwen-1.8B | 64.83 | 19.53 | 26.15 | 30.23 | 43.97 | 33.33 | 27.20 | 35.03 |
| TinyLlama-1.1B | 58.93 | 13.57 | 22.81 | 22.25 | 43.97 | 36.92 | 23.64 | 31.72 |
| **TTL-460m** | 53.93 | 12.66 | 22.81 | 19.87 | 49.01 | 33.59 | 27.06 | 31.27 |
| XGLM-564m | 49.61 | 22.91 | 19.61 | 19.38 | 43.97 | 33.99 | 23.42 | 30.41 |
| Bloom-1b7 | 53.60 | 4.81 | 21.42 | 18.96 | 43.97 | 34.89 | 23.05 | 28.67 |
| **TTL-160m** | 53.36 | 2.58 | 21.84 | 18.75 | 43.97 | 36.88 | 22.60 | 28.56 |
| OPT-125m | 39.77 | 2.00 | 21.84 | 17.42 | 43.97 | 47.04 | 22.78 | 27.83 |
| Pythia-160 | 33.33 | 12.81 | 16.13 | 16.66 | 50.36 | 41.09 | 22.82 | 27.60 |
| OLMo-1b | 34.12 | 9.28 | 18.92 | 20.29 | 43.97 | 41.33 | 22.96 | 27.26 |
| Bloom-560m | 33.33 | 8.48 | 18.92 | 19.03 | 43.97 | 37.07 | 23.05 | 26.26 |
| Pythia-410m | 33.33 | 4.80 | 19.47 | 19.45 | 43.97 | 33.33 | 23.01 | 25.33 |
| OPT-350m | 33.33 | 3.65 | 20.72 | 17.35 | 44.71 | 33.33 | 23.01 | 25.15 |
| GPT-2 small | 33.26 | 0.00 | 10.43 | 11.20 | 43.52 | 33.68 | 13.12 | 20.74 |
| GPorTuguese | 33.33 | 3.85 | 14.74 | 3.01 | 28.81 | 33.33 | 21.23 | 19.75 |
| Samba-1.1B | 33.33 | 1.30 | 8.07 | 10.22 | 17.72 | 35.79 | 15.03 | 17.35 |
## Fine-Tuning Comparisons
To further evaluate the downstream capabilities of our models, we employed a basic fine-tuning procedure for our TTL pair on a subset of tasks from the Poeta benchmark. For comparison, we applied the same procedure to both [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) models, given that they are also LLMs trained from scratch in Brazilian Portuguese and fall in a similar size range to our models. We used these comparisons to assess whether our pre-training runs produced LLMs capable of producing good results ("good" here means "close to BERTimbau") when utilized for downstream applications.
| Models | IMDB | FaQuAD-NLI | HateBr | Assin2 | AgNews | Average |
|-----------------|-----------|------------|-----------|-----------|-----------|---------|
| BERTimbau-large | **93.58** | 92.26 | 91.57 | **88.97** | 94.11 | 92.10 |
| BERTimbau-small | 92.22 | **93.07** | 91.28 | 87.45 | 94.19 | 91.64 |
| **TTL-460m** | 91.64 | 91.18 | **92.28** | 86.43 | **94.42** | 91.19 |
| **TTL-160m** | 91.14 | 90.00 | 90.71 | 85.78 | 94.05 | 90.34 |
All results shown are the highest accuracy scores achieved on the respective task test sets after fine-tuning the models on the training sets. All fine-tuning runs used the same hyperparameters, and the code implementation can be found in the [model cards](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m-HateBR) of our fine-tuned models.
## Cite as 🤗
```latex
@misc{correa24ttllama,
title = {TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese},
author = {Corr{\^e}a, Nicholas Kluge and Falk, Sophia and Fatimah, Shiza and Sen, Aniket and De Oliveira, Nythamar},
journal={arXiv preprint arXiv:2401.16640},
year={2024}
}
@misc{correa24ttllama,
doi = {10.1016/j.mlwa.2024.100558},
url = {https://www.sciencedirect.com/science/article/pii/S2666827024000343},
title = {TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese},
author = {Corr{\^e}a, Nicholas Kluge and Falk, Sophia and Fatimah, Shiza and Sen, Aniket and De Oliveira, Nythamar},
journal={Machine Learning With Applications},
publisher = {Springer},
year={2024}
}
```
## Funding
This repository was built as part of the RAIES ([Rede de Inteligência Artificial Ética e Segura](https://www.raies.org/)) initiative, a project supported by FAPERGS - ([Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul](https://fapergs.rs.gov.br/inicial)), Brazil.
## License
TeenyTinyLlama-460m-Chat is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
|
starnet/07-star-07-02-01 | starnet | "2024-07-02T18:24:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:20:52Z" | Entry not found |
RichardErkhov/phanerozoic_-_Tiny-Cowboy-1.1b-v0.1-gguf | RichardErkhov | "2024-07-02T18:30:42Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T18:21:08Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Tiny-Cowboy-1.1b-v0.1 - GGUF
- Model creator: https://huggingface.co/phanerozoic/
- Original model: https://huggingface.co/phanerozoic/Tiny-Cowboy-1.1b-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Tiny-Cowboy-1.1b-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Cowboy-1.1b-v0.1-gguf/blob/main/Tiny-Cowboy-1.1b-v0.1.Q2_K.gguf) | Q2_K | 0.4GB |
| [Tiny-Cowboy-1.1b-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Cowboy-1.1b-v0.1-gguf/blob/main/Tiny-Cowboy-1.1b-v0.1.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [Tiny-Cowboy-1.1b-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Cowboy-1.1b-v0.1-gguf/blob/main/Tiny-Cowboy-1.1b-v0.1.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [Tiny-Cowboy-1.1b-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Cowboy-1.1b-v0.1-gguf/blob/main/Tiny-Cowboy-1.1b-v0.1.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [Tiny-Cowboy-1.1b-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Cowboy-1.1b-v0.1-gguf/blob/main/Tiny-Cowboy-1.1b-v0.1.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [Tiny-Cowboy-1.1b-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Cowboy-1.1b-v0.1-gguf/blob/main/Tiny-Cowboy-1.1b-v0.1.Q3_K.gguf) | Q3_K | 0.51GB |
| [Tiny-Cowboy-1.1b-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Cowboy-1.1b-v0.1-gguf/blob/main/Tiny-Cowboy-1.1b-v0.1.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [Tiny-Cowboy-1.1b-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Cowboy-1.1b-v0.1-gguf/blob/main/Tiny-Cowboy-1.1b-v0.1.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [Tiny-Cowboy-1.1b-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Cowboy-1.1b-v0.1-gguf/blob/main/Tiny-Cowboy-1.1b-v0.1.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [Tiny-Cowboy-1.1b-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Cowboy-1.1b-v0.1-gguf/blob/main/Tiny-Cowboy-1.1b-v0.1.Q4_0.gguf) | Q4_0 | 0.59GB |
| [Tiny-Cowboy-1.1b-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Cowboy-1.1b-v0.1-gguf/blob/main/Tiny-Cowboy-1.1b-v0.1.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [Tiny-Cowboy-1.1b-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Cowboy-1.1b-v0.1-gguf/blob/main/Tiny-Cowboy-1.1b-v0.1.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [Tiny-Cowboy-1.1b-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Cowboy-1.1b-v0.1-gguf/blob/main/Tiny-Cowboy-1.1b-v0.1.Q4_K.gguf) | Q4_K | 0.62GB |
| [Tiny-Cowboy-1.1b-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Cowboy-1.1b-v0.1-gguf/blob/main/Tiny-Cowboy-1.1b-v0.1.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [Tiny-Cowboy-1.1b-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Cowboy-1.1b-v0.1-gguf/blob/main/Tiny-Cowboy-1.1b-v0.1.Q4_1.gguf) | Q4_1 | 0.65GB |
| [Tiny-Cowboy-1.1b-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Cowboy-1.1b-v0.1-gguf/blob/main/Tiny-Cowboy-1.1b-v0.1.Q5_0.gguf) | Q5_0 | 0.71GB |
| [Tiny-Cowboy-1.1b-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Cowboy-1.1b-v0.1-gguf/blob/main/Tiny-Cowboy-1.1b-v0.1.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [Tiny-Cowboy-1.1b-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Cowboy-1.1b-v0.1-gguf/blob/main/Tiny-Cowboy-1.1b-v0.1.Q5_K.gguf) | Q5_K | 0.73GB |
| [Tiny-Cowboy-1.1b-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Cowboy-1.1b-v0.1-gguf/blob/main/Tiny-Cowboy-1.1b-v0.1.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [Tiny-Cowboy-1.1b-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Cowboy-1.1b-v0.1-gguf/blob/main/Tiny-Cowboy-1.1b-v0.1.Q5_1.gguf) | Q5_1 | 0.77GB |
| [Tiny-Cowboy-1.1b-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Cowboy-1.1b-v0.1-gguf/blob/main/Tiny-Cowboy-1.1b-v0.1.Q6_K.gguf) | Q6_K | 0.84GB |
| [Tiny-Cowboy-1.1b-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Cowboy-1.1b-v0.1-gguf/blob/main/Tiny-Cowboy-1.1b-v0.1.Q8_0.gguf) | Q8_0 | 1.09GB |
Original model description:
---
license: cc-by-nc-4.0
language:
- en
widget:
- text: |
Howdy! What is best about the prairie, cowpoke?
example_title: "Color of a Typical Cowboy Hat"
---
![tinycowboy.png](https://huggingface.co/phanerozoic/Tiny-Cowboy-1.1b-v0.1/resolve/main/tinycowboy.png)
# Tiny-Cowboy-1.1b-v0.1
Tiny-Cowboy-1.1b-v0.1 is a specialized language model designed for generating cowboy-themed content. Developed by phanerozoic, this model is fine-tuned from TinyLlama-1.1B-Chat-v1.0, optimized for environments with limited computing resources.
### Performance
The model excels in generating engaging cowboy narratives and demonstrates a strong grasp of cowboy culture and lifestyle. However, it is less effective in general language tasks, especially in scientific and technical domains.
### Direct Use
Ideal for thematic language generation, particularly in applications where cowboy culture and storytelling are central. Less suited for general-purpose use or scenarios requiring detailed, accurate scientific explanations.
### Context Setting and Interaction Guidelines
Tiny-Cowboy-1.1b-v0.1, being a narrowly focused and somewhat limited-performance model, benefits from an initial context-setting message. This setup involves a predefined assistant message that establishes its cowboy identity at the start of each interaction. This strategy is crucial for priming the model to maintain its cowboy theme throughout the conversation. It's important to note that the model has been fine-tuned for a cowboy style of speaking, so explicit instructions on how to respond in a cowboy manner are unnecessary.
#### Initial Context Setting:
- text: |
Assistant: Howdy! I'm your cowboy assistant, ready to talk all things Wild West. What cowboy queries can I lasso for you today?
example_title: "Initiating Cowboy Themed Conversation"
- text: |
Assistant: Yeehaw! Let's dive into the cowboy world. Ask me anything about cowboys, ranches, or the Wild West!
example_title: "Engaging in Cowboy Themed Dialogue"
The introduction by the assistant sets the thematic tone, guiding the user to interact within the cowboy context.
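A minimal generation sketch that prepends the context-setting assistant message; the plain `User:`/`Assistant:` turn format is an assumption (it matches the custom stopping strings listed below):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "phanerozoic/Tiny-Cowboy-1.1b-v0.1"  # repo id assumed from the card title
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prime the model with the cowboy-identity assistant message first.
context = ("Assistant: Howdy! I'm your cowboy assistant, ready to talk all things "
           "Wild West. What cowboy queries can I lasso for you today?\n")
prompt = context + "User: Howdy! What is best about the prairie, cowpoke?\nAssistant:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```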
### Training Data
The model incorporates a dataset focused on cowboy and Wild West themes and is derived from the foundational TinyLlama-1.1B model.
### Custom Stopping Strings
Custom stopping strings were used to refine output quality:
- "},"
- "User:"
- "You:"
- "\nUser"
- "\nUser:"
- "me:"
- "user"
- "\n"
### Training Hyperparameters and Fine-Tuning Details
- **Base Model Name**: TinyLlama-1.1B-Chat-v1.0
- **Base Model Class**: LlamaForCausalLM
- **Projections**: gate, down, up, q, k, v, o
- **LoRA Rank**: 16
- **LoRA Alpha**: 32
- **True Batch Size**: 4
- **Gradient Accumulation Steps**: 1
- **Epochs**: 1
- **Learning Rate**: 3e-4
- **LR Scheduler**: Linear
- **LLaMA Target Projections**: All targets modified
- **Loss**: 2.096
- **Stop Step**: 42
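For illustration, the listed adapter settings map onto a PEFT `LoraConfig` roughly as follows — the `*_proj` module names follow the usual LLaMA naming and are an assumption here:
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,            # LoRA Rank
    lora_alpha=32,   # LoRA Alpha
    target_modules=[  # gate, down, up, q, k, v, o projections
        "gate_proj", "down_proj", "up_proj",
        "q_proj", "k_proj", "v_proj", "o_proj",
    ],
    task_type="CAUSAL_LM",
)
```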
### Limitations
While adept at cowboy-themed content, Tiny-Cowboy-1.1b-v0.1 struggles with topics outside its specialty, particularly in scientific and technical areas. The model tends to incorporate cowboy elements into responses regardless of the question's relevance.
### Compute Infrastructure
Efficiently trained, demonstrating the feasibility of specialized model training in resource-constrained environments.
### Results
Successfully generates cowboy-themed responses, maintaining thematic consistency. However, it shows limitations in handling more complex, non-cowboy-related queries.
### Summary
Tiny-Cowboy-1.1b-v0.1 is a significant development in thematic, lightweight language models, ideal for cowboy-themed storytelling and educational purposes. Its specialization, however, limits its applicability in broader contexts, particularly where accurate, technical knowledge is required.
### Acknowledgments
Special thanks to the TinyLlama-1.1B team, whose foundational work was instrumental in the development of Tiny-Cowboy-1.1b-v0.1.
|
fifala/12-fifa-07-02-01 | fifala | "2024-07-02T18:24:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:21:08Z" | Entry not found |
GitBag/rebel_multiturn-hh-turn-1-5_last_512_1719873017 | GitBag | "2024-07-02T18:21:24Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T18:21:24Z" | Entry not found |
hcy5561/distilbert-base-uncased-qa-model-v1 | hcy5561 | "2024-07-02T21:12:29Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2024-07-02T18:21:29Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-qa-model-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-qa-model-v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4419 | 1.0 | 1369 | 1.2113 |
| 1.0695 | 2.0 | 2738 | 1.1351 |
| 0.9043 | 3.0 | 4107 | 1.1275 |
| 0.8004 | 4.0 | 5476 | 1.1568 |
| 0.7256 | 5.0 | 6845 | 1.1713 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
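A minimal extractive-QA sketch, assuming the checkpoint is published under the repo id in the card title:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="hcy5561/distilbert-base-uncased-qa-model-v1")

result = qa(
    question="What kind of answers does the model return?",
    context="This extractive QA model returns the answer span it finds in the context.",
)
print(result["answer"], result["score"])
```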
|
healtori/09-heal-07-02-01 | healtori | "2024-07-02T18:25:42Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:22:45Z" | Entry not found |
BikeshKun/idefics2-8b-docvqa-finetuned | BikeshKun | "2024-07-02T19:39:32Z" | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:HuggingFaceM4/idefics2-8b-chatty",
"license:apache-2.0",
"region:us"
] | null | "2024-07-02T18:23:25Z" | ---
license: apache-2.0
base_model: HuggingFaceM4/idefics2-8b-chatty
tags:
- generated_from_trainer
model-index:
- name: idefics2-8b-docvqa-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# idefics2-8b-docvqa-finetuned
This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b-chatty](https://huggingface.co/HuggingFaceM4/idefics2-8b-chatty) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2645 | 0.992 | 62 | 0.3258 |
### Framework versions
- Transformers 4.43.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
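A minimal inference sketch, assuming the fine-tune keeps the base idefics2 chat API and that the full model (not just an adapter) is stored under the repo id in the card title:
```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "BikeshKun/idefics2-8b-docvqa-finetuned"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id)

# Ask a question about a document page (the image path is a placeholder).
image = Image.open("document_page.png")
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "What is the invoice total?"},
    ],
}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```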
|
sims2k/Saul_GDPR_v1.1-GGUF | sims2k | "2024-07-02T18:24:00Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T18:23:50Z" | Entry not found |
fifala/13-fifa-07-02-01 | fifala | "2024-07-02T18:27:42Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:24:54Z" | Entry not found |
keethu/results | keethu | "2024-07-02T18:57:14Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T18:25:04Z" | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Results
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the Kubernetes dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
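A minimal generation sketch, assuming the fine-tuned checkpoint is published under the repo id in the card title:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "keethu/results"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "To scale a Kubernetes deployment,"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```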
|
starnet/08-star-07-02-01 | starnet | "2024-07-02T18:28:15Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:25:05Z" | Entry not found |
mradermacher/Mistral-11B-SynthIAirOmniMix-GGUF | mradermacher | "2024-07-02T19:08:40Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:NeverSleep/Mistral-11B-SynthIAirOmniMix",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T18:25:40Z" | ---
base_model: NeverSleep/Mistral-11B-SynthIAirOmniMix
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NeverSleep/Mistral-11B-SynthIAirOmniMix
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-SynthIAirOmniMix-GGUF/resolve/main/Mistral-11B-SynthIAirOmniMix.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-SynthIAirOmniMix-GGUF/resolve/main/Mistral-11B-SynthIAirOmniMix.IQ3_XS.gguf) | IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-SynthIAirOmniMix-GGUF/resolve/main/Mistral-11B-SynthIAirOmniMix.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-SynthIAirOmniMix-GGUF/resolve/main/Mistral-11B-SynthIAirOmniMix.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-SynthIAirOmniMix-GGUF/resolve/main/Mistral-11B-SynthIAirOmniMix.IQ3_M.gguf) | IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-SynthIAirOmniMix-GGUF/resolve/main/Mistral-11B-SynthIAirOmniMix.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-SynthIAirOmniMix-GGUF/resolve/main/Mistral-11B-SynthIAirOmniMix.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-SynthIAirOmniMix-GGUF/resolve/main/Mistral-11B-SynthIAirOmniMix.IQ4_XS.gguf) | IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-SynthIAirOmniMix-GGUF/resolve/main/Mistral-11B-SynthIAirOmniMix.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-SynthIAirOmniMix-GGUF/resolve/main/Mistral-11B-SynthIAirOmniMix.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-SynthIAirOmniMix-GGUF/resolve/main/Mistral-11B-SynthIAirOmniMix.Q5_K_S.gguf) | Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-SynthIAirOmniMix-GGUF/resolve/main/Mistral-11B-SynthIAirOmniMix.Q5_K_M.gguf) | Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-SynthIAirOmniMix-GGUF/resolve/main/Mistral-11B-SynthIAirOmniMix.Q6_K.gguf) | Q6_K | 8.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-SynthIAirOmniMix-GGUF/resolve/main/Mistral-11B-SynthIAirOmniMix.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
liminerity/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v3 | liminerity | "2024-07-03T01:07:52Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T18:26:15Z" | Entry not found |
healtori/10-heal-07-02-01 | healtori | "2024-07-02T18:29:22Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:26:34Z" | Entry not found |
sangar-1028/btdev-ai-gen-v1 | sangar-1028 | "2024-07-02T19:04:45Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"codegen",
"text-generation",
"arxiv:2203.13474",
"license:bsd-3-clause",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:27:16Z" | ---
license: bsd-3-clause
---
# CodeGen (CodeGen-Mono 350M)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models were originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-Mono 350M** in the paper, where "Mono" means the model is initialized with *CodeGen-Multi 350M* and further pre-trained on a Python programming language dataset, and "350M" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-Mono 350M) was first initialized with *CodeGen-Multi 350M* and then pre-trained on the BigPython dataset. The data consists of 71.7B tokens of Python programming language. See Section 2.1 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models was trained on multiple TPU-v4-512 pods by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and of calculating their likelihood.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
|
fifala/14-fifa-07-02-01 | fifala | "2024-07-02T18:31:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:28:31Z" | Entry not found |
RichardErkhov/jan-hq_-_LlamaCorn-1.1B-Chat-gguf | RichardErkhov | "2024-07-02T18:39:11Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T18:28:39Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
LlamaCorn-1.1B-Chat - GGUF
- Model creator: https://huggingface.co/jan-hq/
- Original model: https://huggingface.co/jan-hq/LlamaCorn-1.1B-Chat/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [LlamaCorn-1.1B-Chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_LlamaCorn-1.1B-Chat-gguf/blob/main/LlamaCorn-1.1B-Chat.Q2_K.gguf) | Q2_K | 0.4GB |
| [LlamaCorn-1.1B-Chat.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_LlamaCorn-1.1B-Chat-gguf/blob/main/LlamaCorn-1.1B-Chat.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [LlamaCorn-1.1B-Chat.IQ3_S.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_LlamaCorn-1.1B-Chat-gguf/blob/main/LlamaCorn-1.1B-Chat.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [LlamaCorn-1.1B-Chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_LlamaCorn-1.1B-Chat-gguf/blob/main/LlamaCorn-1.1B-Chat.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [LlamaCorn-1.1B-Chat.IQ3_M.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_LlamaCorn-1.1B-Chat-gguf/blob/main/LlamaCorn-1.1B-Chat.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [LlamaCorn-1.1B-Chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_LlamaCorn-1.1B-Chat-gguf/blob/main/LlamaCorn-1.1B-Chat.Q3_K.gguf) | Q3_K | 0.51GB |
| [LlamaCorn-1.1B-Chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_LlamaCorn-1.1B-Chat-gguf/blob/main/LlamaCorn-1.1B-Chat.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [LlamaCorn-1.1B-Chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_LlamaCorn-1.1B-Chat-gguf/blob/main/LlamaCorn-1.1B-Chat.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [LlamaCorn-1.1B-Chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_LlamaCorn-1.1B-Chat-gguf/blob/main/LlamaCorn-1.1B-Chat.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [LlamaCorn-1.1B-Chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_LlamaCorn-1.1B-Chat-gguf/blob/main/LlamaCorn-1.1B-Chat.Q4_0.gguf) | Q4_0 | 0.59GB |
| [LlamaCorn-1.1B-Chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_LlamaCorn-1.1B-Chat-gguf/blob/main/LlamaCorn-1.1B-Chat.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [LlamaCorn-1.1B-Chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_LlamaCorn-1.1B-Chat-gguf/blob/main/LlamaCorn-1.1B-Chat.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [LlamaCorn-1.1B-Chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_LlamaCorn-1.1B-Chat-gguf/blob/main/LlamaCorn-1.1B-Chat.Q4_K.gguf) | Q4_K | 0.62GB |
| [LlamaCorn-1.1B-Chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_LlamaCorn-1.1B-Chat-gguf/blob/main/LlamaCorn-1.1B-Chat.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [LlamaCorn-1.1B-Chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_LlamaCorn-1.1B-Chat-gguf/blob/main/LlamaCorn-1.1B-Chat.Q4_1.gguf) | Q4_1 | 0.65GB |
| [LlamaCorn-1.1B-Chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_LlamaCorn-1.1B-Chat-gguf/blob/main/LlamaCorn-1.1B-Chat.Q5_0.gguf) | Q5_0 | 0.71GB |
| [LlamaCorn-1.1B-Chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_LlamaCorn-1.1B-Chat-gguf/blob/main/LlamaCorn-1.1B-Chat.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [LlamaCorn-1.1B-Chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_LlamaCorn-1.1B-Chat-gguf/blob/main/LlamaCorn-1.1B-Chat.Q5_K.gguf) | Q5_K | 0.73GB |
| [LlamaCorn-1.1B-Chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_LlamaCorn-1.1B-Chat-gguf/blob/main/LlamaCorn-1.1B-Chat.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [LlamaCorn-1.1B-Chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_LlamaCorn-1.1B-Chat-gguf/blob/main/LlamaCorn-1.1B-Chat.Q5_1.gguf) | Q5_1 | 0.77GB |
| [LlamaCorn-1.1B-Chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_LlamaCorn-1.1B-Chat-gguf/blob/main/LlamaCorn-1.1B-Chat.Q6_K.gguf) | Q6_K | 0.84GB |
| [LlamaCorn-1.1B-Chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/jan-hq_-_LlamaCorn-1.1B-Chat-gguf/blob/main/LlamaCorn-1.1B-Chat.Q8_0.gguf) | Q8_0 | 1.09GB |
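These GGUF files can be loaded by any llama.cpp-based runtime. Below is a minimal sketch using the `llama-cpp-python` bindings; the file name and parameters are illustrative — pick whichever quant from the table above fits your hardware.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Illustrative sketch: the path assumes you downloaded the Q4_K_M file from the table above.
llm = Llama(model_path="LlamaCorn-1.1B-Chat.Q4_K_M.gguf", n_ctx=2048)

# The original model uses the ChatML prompt format (see the description below).
prompt = "<|im_start|>user\nTell me about NVIDIA in 20 words<|im_end|>\n<|im_start|>assistant\n"
output = llm(prompt, max_tokens=64, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```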
Original model description:
---
license: apache-2.0
tags:
- alignment-handbook
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- jan-hq/bagel_sft_binarized
- jan-hq/dolphin_binarized
- jan-hq/openhermes_binarized
- jan-hq/bagel_dpo_binarized
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
pipeline_tag: text-generation
inference:
parameters:
temperature: 0.7
max_new_tokens: 40
widget:
- messages:
- role: user
content: Tell me about NVIDIA in 20 words
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto"
>
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner"
style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a
>
- <a
href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Model description
- Finetuned [TinyLlama-1.1B](TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) further to handle simple tasks with acceptable conversational quality
- Utilizes high-quality open-source datasets
- Can be run on [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) on consumer devices
- Can fit on laptop dGPUs with as little as 6 GB of VRAM
# Prompt template
ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
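If you are using the `transformers` tokenizer, the template can also be rendered programmatically. A minimal sketch, assuming the repo's tokenizer ships this ChatML layout as its chat template:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("jan-hq/LlamaCorn-1.1B-Chat")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me about NVIDIA in 20 words"},
]

# Renders the messages into the ChatML layout shown above and appends
# the `<|im_start|>assistant` generation prompt.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```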
# Run this model
You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux.
Jan is an open-source ChatGPT alternative that is:
- 💻 **100% offline on your machine**: Your conversations remain confidential, and visible only to you.
- 🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
- 🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI-compatible endpoints (see the sketch below)
- 🌍 **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq)
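Because the local server speaks the OpenAI protocol, the standard `openai` Python client can point at it. A minimal sketch, assuming the server is running on the default port; the model id below is hypothetical — use whatever id Jan lists:

```python
from openai import OpenAI

# Sketch only: assumes Jan's local server is running and exposes
# OpenAI-compatible endpoints on port 1337, as described above.
client = OpenAI(base_url="http://localhost:1337/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="llamacorn-1.1b-chat",  # hypothetical model id
    messages=[{"role": "user", "content": "Tell me about NVIDIA in 20 words"}],
)
print(response.choices[0].message.content)
```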
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/r7VmEBLGXpPLTu2MImM7S.png)
# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life.
# LlamaCorn-1.1B-Chat
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:-----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.9958 | 0.03 | 100 | 1.0003 | -0.0002 | -0.0002 | 0.4930 | -0.0001 | -180.9232 | -195.6078 | -2.6876 | -2.6924 |
| 0.9299 | 1.02 | 3500 | 0.9439 | -0.1570 | -0.2195 | 0.5770 | 0.0625 | -183.1160 | -197.1755 | -2.6612 | -2.6663 |
| 0.9328 | 2.01 | 6900 | 0.9313 | -0.2127 | -0.2924 | 0.5884 | 0.0798 | -183.8456 | -197.7321 | -2.6296 | -2.6352 |
| 0.9321 | 2.98 | 10200 | 0.9305 | -0.2149 | -0.2955 | 0.5824 | 0.0805 | -183.8759 | -197.7545 | -2.6439 | -2.6493 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jan-hq__LlamaCorn-1.1B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |36.94|
|AI2 Reasoning Challenge (25-Shot)|34.13|
|HellaSwag (10-Shot) |59.33|
|MMLU (5-Shot) |29.01|
|TruthfulQA (0-shot) |36.78|
|Winogrande (5-shot) |61.96|
|GSM8k (5-shot) | 0.45|
|
starnet/02-star21-07-02 | starnet | "2024-07-02T18:37:01Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | null | "2024-07-02T18:29:12Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
starnet/09-star-07-02-01 | starnet | "2024-07-02T18:33:22Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:29:18Z" | Entry not found |
Jbbok/FrozenLake-v1 | Jbbok | "2024-07-02T18:29:40Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-07-02T18:29:19Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: FrozenLake-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="Jbbok/FrozenLake-v1", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
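Once loaded, the Q-table can drive a greedy rollout. A sketch continuing the snippet above, assuming the pickled dict stores the Q-table under the `"qtable"` key (the Deep RL course convention):

```python
import numpy as np

# Sketch only: "qtable" is assumed to be the key holding the learned Q-table.
qtable = np.array(model["qtable"])

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(qtable[state]))  # act greedily w.r.t. the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```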
|
manbeast3b/ZZZZZZZZdriver140 | manbeast3b | "2024-07-02T18:32:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:29:28Z" | Entry not found |
healtori/11-heal-07-02-01 | healtori | "2024-07-02T18:33:09Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:30:26Z" | Entry not found |
tsarasa/sarasa | tsarasa | "2024-07-02T18:30:31Z" | 0 | 0 | null | [
"license:cc0-1.0",
"region:us"
] | null | "2024-07-02T18:30:31Z" | ---
license: cc0-1.0
---
|
RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch1-gguf | RichardErkhov | "2024-07-02T18:46:00Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T18:32:00Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Mixsmol-4x400M-v0.1-epoch1 - GGUF
- Model creator: https://huggingface.co/vilm/
- Original model: https://huggingface.co/vilm/Mixsmol-4x400M-v0.1-epoch1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Mixsmol-4x400M-v0.1-epoch1.Q2_K.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch1-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch1.Q2_K.gguf) | Q2_K | 0.62GB |
| [Mixsmol-4x400M-v0.1-epoch1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch1-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch1.IQ3_XS.gguf) | IQ3_XS | 0.7GB |
| [Mixsmol-4x400M-v0.1-epoch1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch1-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch1.IQ3_S.gguf) | IQ3_S | 0.73GB |
| [Mixsmol-4x400M-v0.1-epoch1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch1-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch1.Q3_K_S.gguf) | Q3_K_S | 0.73GB |
| [Mixsmol-4x400M-v0.1-epoch1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch1-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch1.IQ3_M.gguf) | IQ3_M | 0.74GB |
| [Mixsmol-4x400M-v0.1-epoch1.Q3_K.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch1-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch1.Q3_K.gguf) | Q3_K | 0.8GB |
| [Mixsmol-4x400M-v0.1-epoch1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch1-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch1.Q3_K_M.gguf) | Q3_K_M | 0.8GB |
| [Mixsmol-4x400M-v0.1-epoch1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch1-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch1.Q3_K_L.gguf) | Q3_K_L | 0.87GB |
| [Mixsmol-4x400M-v0.1-epoch1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch1-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch1.IQ4_XS.gguf) | IQ4_XS | 0.9GB |
| [Mixsmol-4x400M-v0.1-epoch1.Q4_0.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch1-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch1.Q4_0.gguf) | Q4_0 | 0.94GB |
| [Mixsmol-4x400M-v0.1-epoch1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch1-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch1.IQ4_NL.gguf) | IQ4_NL | 0.95GB |
| [Mixsmol-4x400M-v0.1-epoch1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch1-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch1.Q4_K_S.gguf) | Q4_K_S | 0.95GB |
| [Mixsmol-4x400M-v0.1-epoch1.Q4_K.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch1-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch1.Q4_K.gguf) | Q4_K | 1.01GB |
| [Mixsmol-4x400M-v0.1-epoch1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch1-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch1.Q4_K_M.gguf) | Q4_K_M | 1.01GB |
| [Mixsmol-4x400M-v0.1-epoch1.Q4_1.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch1-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch1.Q4_1.gguf) | Q4_1 | 1.04GB |
| [Mixsmol-4x400M-v0.1-epoch1.Q5_0.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch1-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch1.Q5_0.gguf) | Q5_0 | 1.14GB |
| [Mixsmol-4x400M-v0.1-epoch1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch1-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch1.Q5_K_S.gguf) | Q5_K_S | 1.14GB |
| [Mixsmol-4x400M-v0.1-epoch1.Q5_K.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch1-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch1.Q5_K.gguf) | Q5_K | 1.18GB |
| [Mixsmol-4x400M-v0.1-epoch1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch1-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch1.Q5_K_M.gguf) | Q5_K_M | 1.18GB |
| [Mixsmol-4x400M-v0.1-epoch1.Q5_1.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch1-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch1.Q5_1.gguf) | Q5_1 | 1.24GB |
| [Mixsmol-4x400M-v0.1-epoch1.Q6_K.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch1-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch1.Q6_K.gguf) | Q6_K | 1.36GB |
| [Mixsmol-4x400M-v0.1-epoch1.Q8_0.gguf](https://huggingface.co/RichardErkhov/vilm_-_Mixsmol-4x400M-v0.1-epoch1-gguf/blob/main/Mixsmol-4x400M-v0.1-epoch1.Q8_0.gguf) | Q8_0 | 1.76GB |
Original model description:
---
license: apache-2.0
widget:
- text: My name is El Microondas the Wise, and
example_title: El Microondas
- text: Kennesaw State University is a public
example_title: Kennesaw State University
- text: Bungie Studios is an American video game developer. They are most famous for
developing the award winning Halo series of video games. They also made Destiny.
The studio was founded
example_title: Bungie
- text: The Mona Lisa is a world-renowned painting created by
example_title: Mona Lisa
- text: The Harry Potter series, written by J.K. Rowling, begins with the book titled
example_title: Harry Potter Series
- text: 'Question: I have cities, but no houses. I have mountains, but no trees. I
have water, but no fish. What am I?
Answer:'
example_title: Riddle
- text: The process of photosynthesis involves the conversion of
example_title: Photosynthesis
- text: Jane went to the store to buy some groceries. She picked up apples, oranges,
and a loaf of bread. When she got home, she realized she forgot
example_title: Story Continuation
- text: 'Problem 2: If a train leaves Station A at 9:00 AM and travels at 60 mph,
and another train leaves Station B at 10:00 AM and travels at 80 mph, when will
they meet if the distance between the stations is 300 miles?
To determine'
example_title: Math Problem
- text: In the context of computer programming, an algorithm is
example_title: Algorithm Definition
---
# Mixsmol-4x400M-v0.1 by Ontocord
This is the first checkpoint (Epoch 1) of Mixsmol-4x400M-v0.1
Note that this is an experiment in data mixing. Therefore, we only trained the model on 50B tokens (95% English and 5% Vietnamese) to test the following:
- Reasoning capabilities through high-quality synthetic textbook data pretraining
- Cross-lingual understanding through machine translation and multilingual, multi-task pretraining
After verifying our hypotheses with this run, we will schedule a second run on bigger data and compute for it to achieve its maximum capability.
## Data
- Synthetic Textbooks: 8M samples
- RefinedWeb: 1M samples
- RedPajama-v2: 500K samples
- MathPile: Everything
- ThePile: MiniPile Subset
- GoodWiki
- The Stack Smol XL
- The Vault: train_small split
- Instruction Pretraining: 250k samples
## Evaluation

| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------|-------|------|-----:|--------|-----:|---|-----:|
|arc_challenge|Yaml |none | 25|acc |0.1937|± |0.0115|
| | |none | 25|acc_norm|0.2329|± |0.0124|
|hellaswag|Yaml |none | 10|acc |0.2856|± |0.0045|
| | |none | 10|acc_norm|0.3090|± |0.0046|
|mmlu |N/A |none | 0|acc |0.2536|± |0.0483|
| - humanities |N/A |none | 5|acc |0.2408|± |0.0341|
| - other |N/A |none | 5|acc |0.2475|± |0.0443|
| - social_sciences|N/A |none | 5|acc |0.2567|± |0.0456|
| - stem |N/A |none | 5|acc |0.2756|± |0.0653|
|truthfulqa_mc2|Yaml |none | 0|acc |0.3909|± |0.0148|
|winogrande|Yaml |none | 5|acc |0.5107|± | 0.014|
|gsm8k|Yaml |get-answer| 5|exact_match| 0|± | 0|
## Contribution
This work is a shared contribution between **Ontocord, BEE-spoke-data and VILM**
|
fifala/15-fifa-07-02-01 | fifala | "2024-07-02T18:35:07Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:32:19Z" | Entry not found |
quilter0/kor-Qwen2-1.5B-bnb-4bit | quilter0 | "2024-07-02T18:48:22Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T18:32:24Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
z3n7r4ck3r/filtered_dataset_20240702_203225 | z3n7r4ck3r | "2024-07-02T18:32:25Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T18:32:25Z" | Entry not found |
coolcat21/notlora_kanji_2100_2e05set | coolcat21 | "2024-07-02T18:33:04Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T18:33:04Z" | Entry not found |
glp500/Archivaris | glp500 | "2024-07-02T18:33:26Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T18:33:10Z" | ---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** glp500
- **License:** apache-2.0
- **Finetuned from model :** meta-llama/Meta-Llama-3-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
abmorton/wall-potfiller-v2 | abmorton | "2024-07-02T18:38:17Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-07-02T18:33:25Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### wall-potfiller-v2 Dreambooth model trained by abmorton with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
healtori/12-heal-07-02-01 | healtori | "2024-07-02T18:36:48Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:33:59Z" | Entry not found |
Litzy0619/app_reviews_0.003_32_5_6 | Litzy0619 | "2024-07-02T19:15:06Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2024-07-02T18:34:28Z" | Entry not found |
starnet/10-star-07-02-01 | starnet | "2024-07-02T18:37:42Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T18:34:31Z" | Entry not found |