| modelId (string, 5–122 chars) | author (string, 2–42 chars) | last_modified (unknown) | downloads (int64, 0–738M) | likes (int64, 0–11k) | library_name (245 classes) | tags (sequence, 1–4.05k items) | pipeline_tag (48 classes) | createdAt (unknown) | card (string, 1–901k chars) |
|---|---|---|---|---|---|---|---|---|---|
RichardErkhov/kuotient_-_Llama-3-11B-Instruct-attenuated-gguf | RichardErkhov | "2024-07-03T00:55:22Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T16:45:06Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3-11B-Instruct-attenuated - GGUF
- Model creator: https://huggingface.co/kuotient/
- Original model: https://huggingface.co/kuotient/Llama-3-11B-Instruct-attenuated/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3-11B-Instruct-attenuated.Q2_K.gguf](https://huggingface.co/RichardErkhov/kuotient_-_Llama-3-11B-Instruct-attenuated-gguf/blob/main/Llama-3-11B-Instruct-attenuated.Q2_K.gguf) | Q2_K | 4.16GB |
| [Llama-3-11B-Instruct-attenuated.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/kuotient_-_Llama-3-11B-Instruct-attenuated-gguf/blob/main/Llama-3-11B-Instruct-attenuated.IQ3_XS.gguf) | IQ3_XS | 4.61GB |
| [Llama-3-11B-Instruct-attenuated.IQ3_S.gguf](https://huggingface.co/RichardErkhov/kuotient_-_Llama-3-11B-Instruct-attenuated-gguf/blob/main/Llama-3-11B-Instruct-attenuated.IQ3_S.gguf) | IQ3_S | 4.83GB |
| [Llama-3-11B-Instruct-attenuated.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/kuotient_-_Llama-3-11B-Instruct-attenuated-gguf/blob/main/Llama-3-11B-Instruct-attenuated.Q3_K_S.gguf) | Q3_K_S | 4.81GB |
| [Llama-3-11B-Instruct-attenuated.IQ3_M.gguf](https://huggingface.co/RichardErkhov/kuotient_-_Llama-3-11B-Instruct-attenuated-gguf/blob/main/Llama-3-11B-Instruct-attenuated.IQ3_M.gguf) | IQ3_M | 4.98GB |
| [Llama-3-11B-Instruct-attenuated.Q3_K.gguf](https://huggingface.co/RichardErkhov/kuotient_-_Llama-3-11B-Instruct-attenuated-gguf/blob/main/Llama-3-11B-Instruct-attenuated.Q3_K.gguf) | Q3_K | 5.3GB |
| [Llama-3-11B-Instruct-attenuated.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/kuotient_-_Llama-3-11B-Instruct-attenuated-gguf/blob/main/Llama-3-11B-Instruct-attenuated.Q3_K_M.gguf) | Q3_K_M | 5.3GB |
| [Llama-3-11B-Instruct-attenuated.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/kuotient_-_Llama-3-11B-Instruct-attenuated-gguf/blob/main/Llama-3-11B-Instruct-attenuated.Q3_K_L.gguf) | Q3_K_L | 5.73GB |
| [Llama-3-11B-Instruct-attenuated.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/kuotient_-_Llama-3-11B-Instruct-attenuated-gguf/blob/main/Llama-3-11B-Instruct-attenuated.IQ4_XS.gguf) | IQ4_XS | 5.93GB |
| [Llama-3-11B-Instruct-attenuated.Q4_0.gguf](https://huggingface.co/RichardErkhov/kuotient_-_Llama-3-11B-Instruct-attenuated-gguf/blob/main/Llama-3-11B-Instruct-attenuated.Q4_0.gguf) | Q4_0 | 6.17GB |
| [Llama-3-11B-Instruct-attenuated.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/kuotient_-_Llama-3-11B-Instruct-attenuated-gguf/blob/main/Llama-3-11B-Instruct-attenuated.IQ4_NL.gguf) | IQ4_NL | 6.23GB |
| [Llama-3-11B-Instruct-attenuated.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/kuotient_-_Llama-3-11B-Instruct-attenuated-gguf/blob/main/Llama-3-11B-Instruct-attenuated.Q4_K_S.gguf) | Q4_K_S | 6.21GB |
| [Llama-3-11B-Instruct-attenuated.Q4_K.gguf](https://huggingface.co/RichardErkhov/kuotient_-_Llama-3-11B-Instruct-attenuated-gguf/blob/main/Llama-3-11B-Instruct-attenuated.Q4_K.gguf) | Q4_K | 6.53GB |
| [Llama-3-11B-Instruct-attenuated.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/kuotient_-_Llama-3-11B-Instruct-attenuated-gguf/blob/main/Llama-3-11B-Instruct-attenuated.Q4_K_M.gguf) | Q4_K_M | 6.53GB |
| [Llama-3-11B-Instruct-attenuated.Q4_1.gguf](https://huggingface.co/RichardErkhov/kuotient_-_Llama-3-11B-Instruct-attenuated-gguf/blob/main/Llama-3-11B-Instruct-attenuated.Q4_1.gguf) | Q4_1 | 6.81GB |
| [Llama-3-11B-Instruct-attenuated.Q5_0.gguf](https://huggingface.co/RichardErkhov/kuotient_-_Llama-3-11B-Instruct-attenuated-gguf/blob/main/Llama-3-11B-Instruct-attenuated.Q5_0.gguf) | Q5_0 | 7.45GB |
| [Llama-3-11B-Instruct-attenuated.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/kuotient_-_Llama-3-11B-Instruct-attenuated-gguf/blob/main/Llama-3-11B-Instruct-attenuated.Q5_K_S.gguf) | Q5_K_S | 7.45GB |
| [Llama-3-11B-Instruct-attenuated.Q5_K.gguf](https://huggingface.co/RichardErkhov/kuotient_-_Llama-3-11B-Instruct-attenuated-gguf/blob/main/Llama-3-11B-Instruct-attenuated.Q5_K.gguf) | Q5_K | 7.64GB |
| [Llama-3-11B-Instruct-attenuated.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/kuotient_-_Llama-3-11B-Instruct-attenuated-gguf/blob/main/Llama-3-11B-Instruct-attenuated.Q5_K_M.gguf) | Q5_K_M | 7.64GB |
| [Llama-3-11B-Instruct-attenuated.Q5_1.gguf](https://huggingface.co/RichardErkhov/kuotient_-_Llama-3-11B-Instruct-attenuated-gguf/blob/main/Llama-3-11B-Instruct-attenuated.Q5_1.gguf) | Q5_1 | 8.09GB |
| [Llama-3-11B-Instruct-attenuated.Q6_K.gguf](https://huggingface.co/RichardErkhov/kuotient_-_Llama-3-11B-Instruct-attenuated-gguf/blob/main/Llama-3-11B-Instruct-attenuated.Q6_K.gguf) | Q6_K | 8.81GB |
| [Llama-3-11B-Instruct-attenuated.Q8_0.gguf](https://huggingface.co/RichardErkhov/kuotient_-_Llama-3-11B-Instruct-attenuated-gguf/blob/main/Llama-3-11B-Instruct-attenuated.Q8_0.gguf) | Q8_0 | 11.41GB |
Original model description:
---
base_model:
- kuotient/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
license: other
license_name: llama3
---
# Llama-3-11.5B-Instruct-attenuated
The core idea came from @jukofyork; see this [issue](https://github.com/arcee-ai/mergekit/issues/198).
As I understand it, the idea is to make the model "think twice" through the duplicated layers while still covering the same overall distance as the original. But why 0.7071067812?
> The scale factor to use, eg: solve x^2 = 1/2 --> x = 1/sqrt(2) ≈ 0.7071067812
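A quick numerical check of why that constant works: attention logits are dot products q·k, so scaling both the `q_proj` and `k_proj` outputs by 1/√2 scales every logit by exactly 1/2, splitting each doubled block's attention contribution in half. A minimal sketch with toy vectors (not from the original card):

```python
import math

scale = 1 / math.sqrt(2)  # ≈ 0.7071067812, the scale_factor used in the config below

q = [0.3, -1.2, 0.8]  # toy query vector
k = [0.5, 0.1, -0.4]  # toy key vector

dot = sum(qi * ki for qi, ki in zip(q, k))
scaled_dot = sum((scale * qi) * (scale * ki) for qi, ki in zip(q, k))

# Scaling q and k each by 1/sqrt(2) halves the attention logit:
assert abs(scaled_dot - dot / 2) < 1e-12
```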
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [kuotient/Meta-Llama-3-8B-Instruct](https://huggingface.co/kuotient/Meta-Llama-3-8B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
###############################
# llama-3-attenuated.yaml     #
###############################
# Use: mergekit-yaml --clone-tensors ./llama-3-attenuated.yaml ./llama-3-attenuated
# See: https://github.com/arcee-ai/mergekit/issues/198 for discussion/reasoning behind this idea.
# ---
# The scale factor to use, eg: solve x^2 = 1/2 --> x = 1/sqrt(2) ≈ 0.7071067812
const_tag: &scale_factor 0.7071067812  # 1/sqrt(2)

# The filter parameters of a scaled block.
attenuate-env: &attenuated_env
  parameters:
    scale:
      - filter: q_proj
        value: *scale_factor
      - filter: k_proj
        value: *scale_factor
      - value: 1.0

# ---
slices:
  ###########################
  # Block 1: layers [0, 16] #
  ###########################
  - sources:
      - model: kuotient/Meta-Llama-3-8B-Instruct
        layer_range: [0, 8]   # The first 8 layers of Block 1 are not duplicated
  - sources:
      - model: kuotient/Meta-Llama-3-8B-Instruct
        layer_range: [8, 16]  # The last 8 layers of Block 1 are duplicated
        <<: *attenuated_env
  ###########################
  # Block 2: layers [8, 24] #
  ###########################
  - sources:
      - model: kuotient/Meta-Llama-3-8B-Instruct
        layer_range: [8, 24]  # All the layers of Block 2 are duplicated
        <<: *attenuated_env
  ############################
  # Block 3: layers [16, 32] #
  ############################
  - sources:
      - model: kuotient/Meta-Llama-3-8B-Instruct
        layer_range: [16, 24] # The first 8 layers of Block 3 are duplicated
        <<: *attenuated_env
  - sources:
      - model: kuotient/Meta-Llama-3-8B-Instruct
        layer_range: [24, 32] # The last 8 layers of Block 3 are not duplicated

merge_method: passthrough
dtype: bfloat16
```
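The slice arithmetic above can be sanity-checked in a few lines: the five layer ranges contribute 8 + 8 + 16 + 8 + 8 = 48 layers (versus 32 in the base model), with layers 8–23 of the base each appearing twice, which is what grows the 8B base to roughly 11.5B parameters. A small check (assuming end-exclusive ranges, as mergekit uses):

```python
from collections import Counter

# Layer ranges from the mergekit slices above (end-exclusive, like Python ranges)
slices = [(0, 8), (8, 16), (8, 24), (16, 24), (24, 32)]

total_layers = sum(end - start for start, end in slices)
print(total_layers)  # 48 layers in the merge, vs. 32 in Llama-3-8B-Instruct

counts = Counter(layer for start, end in slices for layer in range(start, end))
duplicated = sorted(l for l, n in counts.items() if n == 2)
print(duplicated == list(range(8, 24)))  # True: only layers 8..23 are doubled
```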
|
CoprolaliacPress/Thoth-2-Q6_K-GGUF | CoprolaliacPress | "2024-07-02T16:46:20Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:CoprolaliacPress/Thoth-2",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T16:45:56Z" | ---
base_model: CoprolaliacPress/Thoth-2
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# CoprolaliacPress/Thoth-2-Q6_K-GGUF
This model was converted to GGUF format from [`CoprolaliacPress/Thoth-2`](https://huggingface.co/CoprolaliacPress/Thoth-2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/CoprolaliacPress/Thoth-2) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CoprolaliacPress/Thoth-2-Q6_K-GGUF --hf-file thoth-2-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CoprolaliacPress/Thoth-2-Q6_K-GGUF --hf-file thoth-2-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly by following the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo CoprolaliacPress/Thoth-2-Q6_K-GGUF --hf-file thoth-2-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo CoprolaliacPress/Thoth-2-Q6_K-GGUF --hf-file thoth-2-q6_k.gguf -c 2048
```
|
susaneileenfox/detr-resnet-50_finetuned_cppe5 | susaneileenfox | "2024-07-02T16:46:16Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T16:46:16Z" | Entry not found |
kheopss/kheops_fr_en_epoch1_2bits_GPTQ_V2 | kheopss | "2024-07-02T16:47:32Z" | 0 | 0 | transformers | [
"transformers",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"2-bit",
"gptq",
"region:us"
] | text-generation | "2024-07-02T16:46:19Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yemen2016/memobert3_NC_02 | yemen2016 | "2024-07-02T17:08:39Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:MiMe-MeMo/MeMo-BERT-03",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-02T16:47:31Z" | ---
base_model: MiMe-MeMo/MeMo-BERT-03
tags:
- generated_from_trainer
model-index:
- name: memobert3_NC_02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# memobert3_NC_02
This model is a fine-tuned version of [MiMe-MeMo/MeMo-BERT-03](https://huggingface.co/MiMe-MeMo/MeMo-BERT-03) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7842
- F1-score: 0.7244
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
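The step counts reported per epoch also imply the size of the training set: 82 optimizer steps per epoch at a batch size of 8 corresponds to roughly 650 training examples. A quick back-of-the-envelope check (inferred from the reported numbers, not stated in the card):

```python
steps_per_epoch = 82   # optimizer steps logged at epoch 1.0
train_batch_size = 8   # from the hyperparameters above

approx_examples = steps_per_epoch * train_batch_size
print(approx_examples)  # 656 — an upper bound, since the last batch may be partial
```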
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1-score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 82 | 0.5387 | 0.6753 |
| No log | 2.0 | 164 | 0.5441 | 0.6753 |
| No log | 3.0 | 246 | 0.5721 | 0.6753 |
| No log | 4.0 | 328 | 0.5607 | 0.6753 |
| No log | 5.0 | 410 | 0.5970 | 0.6904 |
| No log | 6.0 | 492 | 0.6772 | 0.6904 |
| 0.4736 | 7.0 | 574 | 0.6971 | 0.7034 |
| 0.4736 | 8.0 | 656 | 0.7425 | 0.7125 |
| 0.4736 | 9.0 | 738 | 0.7842 | 0.7244 |
| 0.4736 | 10.0 | 820 | 0.7960 | 0.7244 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
skai24/test_1_0.5 | skai24 | "2024-07-03T01:20:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T16:48:21Z" | Invalid username or password. |
RichardErkhov/Yash21_-_SuperChat-7B-gguf | RichardErkhov | "2024-07-02T22:46:01Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T16:50:43Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
SuperChat-7B - GGUF
- Model creator: https://huggingface.co/Yash21/
- Original model: https://huggingface.co/Yash21/SuperChat-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [SuperChat-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Yash21_-_SuperChat-7B-gguf/blob/main/SuperChat-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [SuperChat-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Yash21_-_SuperChat-7B-gguf/blob/main/SuperChat-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [SuperChat-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Yash21_-_SuperChat-7B-gguf/blob/main/SuperChat-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [SuperChat-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Yash21_-_SuperChat-7B-gguf/blob/main/SuperChat-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [SuperChat-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Yash21_-_SuperChat-7B-gguf/blob/main/SuperChat-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [SuperChat-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Yash21_-_SuperChat-7B-gguf/blob/main/SuperChat-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [SuperChat-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Yash21_-_SuperChat-7B-gguf/blob/main/SuperChat-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [SuperChat-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Yash21_-_SuperChat-7B-gguf/blob/main/SuperChat-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [SuperChat-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Yash21_-_SuperChat-7B-gguf/blob/main/SuperChat-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [SuperChat-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Yash21_-_SuperChat-7B-gguf/blob/main/SuperChat-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [SuperChat-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Yash21_-_SuperChat-7B-gguf/blob/main/SuperChat-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [SuperChat-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Yash21_-_SuperChat-7B-gguf/blob/main/SuperChat-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [SuperChat-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Yash21_-_SuperChat-7B-gguf/blob/main/SuperChat-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [SuperChat-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Yash21_-_SuperChat-7B-gguf/blob/main/SuperChat-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [SuperChat-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Yash21_-_SuperChat-7B-gguf/blob/main/SuperChat-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [SuperChat-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Yash21_-_SuperChat-7B-gguf/blob/main/SuperChat-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [SuperChat-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Yash21_-_SuperChat-7B-gguf/blob/main/SuperChat-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [SuperChat-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Yash21_-_SuperChat-7B-gguf/blob/main/SuperChat-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [SuperChat-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Yash21_-_SuperChat-7B-gguf/blob/main/SuperChat-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [SuperChat-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Yash21_-_SuperChat-7B-gguf/blob/main/SuperChat-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [SuperChat-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Yash21_-_SuperChat-7B-gguf/blob/main/SuperChat-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [SuperChat-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Yash21_-_SuperChat-7B-gguf/blob/main/SuperChat-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: apache-2.0
tags:
- merge
---
# SuperChat-7B
SuperChat-7B is a merge of the following models:
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
* [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227)
* [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
* [jan-hq/supermario-v2](https://huggingface.co/jan-hq/supermario-v2)
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Yash21/SuperChat-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
To support my efforts, please reach out at maratheyash108@gmail.com.
|
abwabai/gemma-2-9b-it-4bit | abwabai | "2024-07-02T16:59:26Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-07-02T16:54:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/Edentns_-_DataVortexTL-1.1B-v0.1-gguf | RichardErkhov | "2024-07-02T17:05:51Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T16:54:57Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
DataVortexTL-1.1B-v0.1 - GGUF
- Model creator: https://huggingface.co/Edentns/
- Original model: https://huggingface.co/Edentns/DataVortexTL-1.1B-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [DataVortexTL-1.1B-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexTL-1.1B-v0.1-gguf/blob/main/DataVortexTL-1.1B-v0.1.Q2_K.gguf) | Q2_K | 0.4GB |
| [DataVortexTL-1.1B-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexTL-1.1B-v0.1-gguf/blob/main/DataVortexTL-1.1B-v0.1.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [DataVortexTL-1.1B-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexTL-1.1B-v0.1-gguf/blob/main/DataVortexTL-1.1B-v0.1.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [DataVortexTL-1.1B-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexTL-1.1B-v0.1-gguf/blob/main/DataVortexTL-1.1B-v0.1.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [DataVortexTL-1.1B-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexTL-1.1B-v0.1-gguf/blob/main/DataVortexTL-1.1B-v0.1.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [DataVortexTL-1.1B-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexTL-1.1B-v0.1-gguf/blob/main/DataVortexTL-1.1B-v0.1.Q3_K.gguf) | Q3_K | 0.51GB |
| [DataVortexTL-1.1B-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexTL-1.1B-v0.1-gguf/blob/main/DataVortexTL-1.1B-v0.1.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [DataVortexTL-1.1B-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexTL-1.1B-v0.1-gguf/blob/main/DataVortexTL-1.1B-v0.1.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [DataVortexTL-1.1B-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexTL-1.1B-v0.1-gguf/blob/main/DataVortexTL-1.1B-v0.1.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [DataVortexTL-1.1B-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexTL-1.1B-v0.1-gguf/blob/main/DataVortexTL-1.1B-v0.1.Q4_0.gguf) | Q4_0 | 0.59GB |
| [DataVortexTL-1.1B-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexTL-1.1B-v0.1-gguf/blob/main/DataVortexTL-1.1B-v0.1.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [DataVortexTL-1.1B-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexTL-1.1B-v0.1-gguf/blob/main/DataVortexTL-1.1B-v0.1.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [DataVortexTL-1.1B-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexTL-1.1B-v0.1-gguf/blob/main/DataVortexTL-1.1B-v0.1.Q4_K.gguf) | Q4_K | 0.62GB |
| [DataVortexTL-1.1B-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexTL-1.1B-v0.1-gguf/blob/main/DataVortexTL-1.1B-v0.1.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [DataVortexTL-1.1B-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexTL-1.1B-v0.1-gguf/blob/main/DataVortexTL-1.1B-v0.1.Q4_1.gguf) | Q4_1 | 0.65GB |
| [DataVortexTL-1.1B-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexTL-1.1B-v0.1-gguf/blob/main/DataVortexTL-1.1B-v0.1.Q5_0.gguf) | Q5_0 | 0.71GB |
| [DataVortexTL-1.1B-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexTL-1.1B-v0.1-gguf/blob/main/DataVortexTL-1.1B-v0.1.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [DataVortexTL-1.1B-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexTL-1.1B-v0.1-gguf/blob/main/DataVortexTL-1.1B-v0.1.Q5_K.gguf) | Q5_K | 0.73GB |
| [DataVortexTL-1.1B-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexTL-1.1B-v0.1-gguf/blob/main/DataVortexTL-1.1B-v0.1.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [DataVortexTL-1.1B-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexTL-1.1B-v0.1-gguf/blob/main/DataVortexTL-1.1B-v0.1.Q5_1.gguf) | Q5_1 | 0.77GB |
| [DataVortexTL-1.1B-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexTL-1.1B-v0.1-gguf/blob/main/DataVortexTL-1.1B-v0.1.Q6_K.gguf) | Q6_K | 0.84GB |
| [DataVortexTL-1.1B-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexTL-1.1B-v0.1-gguf/blob/main/DataVortexTL-1.1B-v0.1.Q8_0.gguf) | Q8_0 | 1.09GB |
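When choosing among the quantizations above, a common rule of thumb is to take the highest-quality file that still fits your memory budget. The sketch below hardcodes a subset of the file sizes transcribed from the table; it is an illustrative heuristic only, not a tool shipped with this repo, and the budget threshold is an assumption you should adjust for your own hardware.

```python
# Illustrative helper: pick the largest (highest-quality) quant that fits a
# memory budget. Sizes (GB) are transcribed from the table above; this is a
# rough heuristic, not an official tool.

QUANT_SIZES_GB = {
    "Q3_K_L": 0.55,
    "Q4_0": 0.59,
    "Q4_K_M": 0.62,
    "Q5_K_M": 0.73,
    "Q6_K": 0.84,
    "Q8_0": 1.09,
}

def pick_quant(budget_gb):
    """Return the name of the largest quant whose file fits in budget_gb."""
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items() if size <= budget_gb]
    if not fitting:
        return None
    # max() on (size, name) tuples selects the biggest file that still fits
    return max(fitting)[1]

print(pick_quant(0.75))  # prints Q5_K_M
```

Remember the on-disk file size is a lower bound: inference also needs room for the KV cache and runtime overhead.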
Original model description:
---
tags:
- text-generation
license: cc-by-nc-sa-4.0
language:
- ko
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
pipeline_tag: text-generation
datasets:
- beomi/KoAlpaca-v1.1a
- jojo0217/korean_rlhf_dataset
- kyujinpy/OpenOrca-KO
- nlpai-lab/kullm-v2
widget:
- text: >
<|system|>
You are a chatbot who answers User's questions.
<|user|>
대한민국의 수도는 어디야?
<|assistant|>
---
# **DataVortexTL-1.1B-v0.1**
<img src="./DataVortex.png" alt="DataVortex" style="height: 8em;">
## Our Team
| Research & Engineering | Product Management |
| :--------------------: | :----------------: |
| Kwangseok Yang | Seunghyun Choi |
| Jeongwon Choi | Hyoseok Choi |
## **Model Details**
### **Base Model**
[TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
### **Trained On**
- **OS**: Ubuntu 20.04
- **GPU**: H100 80GB 1ea
- **transformers**: v4.36.2
### **Dataset**
- [beomi/KoAlpaca-v1.1a](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a)
- [jojo0217/korean_rlhf_dataset](https://huggingface.co/datasets/jojo0217/korean_rlhf_dataset)
- [kyujinpy/OpenOrca-KO](https://huggingface.co/datasets/kyujinpy/OpenOrca-KO)
- [nlpai-lab/kullm-v2](https://huggingface.co/datasets/nlpai-lab/kullm-v2)
### **Instruction format**
It follows the **TinyLlama** chat format, e.g.:
```python
text = """\
<|system|>
당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다.</s>
<|user|>
대한민국의 수도는 어디야?</s>
<|assistant|>
대한민국의 수도는 서울입니다.</s>
<|user|>
서울 인구는 총 몇 명이야?</s>
"""
```
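The template above can also be assembled programmatically. Below is a minimal sketch that renders a list of message dicts into the TinyLlama-style string; the helper name `build_prompt` is an illustration and not part of the model's API — in practice `tokenizer.apply_chat_template` (shown in the Implementation Code section of this card) does this for you.

```python
# Minimal sketch of the TinyLlama chat format shown above.
# `build_prompt` is an illustrative helper, not part of the model's API;
# in practice, tokenizer.apply_chat_template handles this.

def build_prompt(messages, eos="</s>"):
    """Render a list of {'role', 'content'} dicts into the TinyLlama format."""
    parts = []
    for m in messages:
        # Each turn is rendered as "<|role|>\n{content}</s>\n"
        parts.append(f"<|{m['role']}|>\n{m['content']}{eos}\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다."},
    {"role": "user", "content": "대한민국의 수도는 어디야?"},
]
print(build_prompt(messages))
```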
## **Model Benchmark**
### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)**
| Task | 0-shot | 5-shot | 10-shot | 50-shot |
| :--------------- | -------------: | -------------: | -------------: | -----------: |
| kobest_boolq | 0.334282 | 0.516446 | 0.500478 | 0.498941 |
| kobest_copa | 0.515061 | 0.504321 | 0.492927 | 0.50809 |
| kobest_hellaswag | 0.36253 | 0.357733 | 0.355873 | 0.376502 |
| kobest_sentineg | 0.481146 | 0.657411 | 0.687417 | 0.635703 |
| **Average** | **0.42325475** | **0.50897775** | **0.50917375** | **0.504809** |
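The **Average** row is the unweighted mean of the four kobest task scores. A quick check for the 0-shot column (values copied from the table above):

```python
# Verify the 0-shot Average in the table above is the unweighted mean
# of the four kobest task scores (values copied from the table).

zero_shot = {
    "kobest_boolq": 0.334282,
    "kobest_copa": 0.515061,
    "kobest_hellaswag": 0.36253,
    "kobest_sentineg": 0.481146,
}
avg = sum(zero_shot.values()) / len(zero_shot)
print(round(avg, 8))  # matches the reported 0.42325475
```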
### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)**
| Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| ------: | -----: | -----------: | ------: | ------------: | --------------: |
| 31.5 | 25.26 | 33.53 | 24.56 | 43.34 | 30.81 |
## **Implementation Code**
The tokenizer ships with a `chat_template` for the instruction format above, so you can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexTL-1.1B-v0.1")
tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexTL-1.1B-v0.1")
messages = [
{"role": "system", "content": "당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다."},
{"role": "user", "content": "대한민국의 수도는 어디야?"},
{"role": "assistant", "content": "대한민국의 수도는 서울입니다."},
{"role": "user", "content": "서울 인구는 총 몇 명이야?"}
]
encodeds = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")  # append the assistant prefix so the model replies as the assistant
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## **License**
The model is licensed under the [cc-by-nc-sa-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license, which allows others to copy, modify, and share the work non-commercially, as long as they give appropriate credit and distribute any derivative works under the same license.
<div align="center">
<a href="https://edentns.com/">
<img src="./Logo.png" alt="Logo" style="height: 3em;">
</a>
</div>
|
adhityaprimandhika/fine-tuned-bge-category-by-notes | adhityaprimandhika | "2024-07-02T17:13:42Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-02T16:55:40Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
root-signals/sentiment_analysis_auto_transformer | root-signals | "2024-07-02T17:01:20Z" | 0 | 0 | null | [
"tensorboard",
"region:us"
] | null | "2024-07-02T17:01:13Z" | Entry not found |
ferrazzipietro/Meta-Llama-3-8B-Instruct_en.layer1_NoQuant_16_16_0.02_8 | ferrazzipietro | "2024-07-02T17:01:53Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T17:01:45Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NESPED-GEN/TinyLlama1B-spider-all-8500steps | NESPED-GEN | "2024-07-02T17:04:36Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T17:01:57Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kaenoob/trained-sd3 | Kaenoob | "2024-07-02T17:03:19Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T17:03:19Z" | Entry not found |
whizzzzkid/whizzzzkid_424_5 | whizzzzkid | "2024-07-02T17:04:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T17:03:47Z" | Entry not found |
whizzzzkid/whizzzzkid_425_3 | whizzzzkid | "2024-07-02T17:05:26Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T17:04:59Z" | Entry not found |
arianam2607/generative_ai | arianam2607 | "2024-07-02T17:05:50Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T17:05:50Z" | Entry not found |
whizzzzkid/whizzzzkid_426_4 | whizzzzkid | "2024-07-02T17:06:33Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T17:06:11Z" | Entry not found |
abwabai/Phi-3-small-8k-instruct-4bit | abwabai | "2024-07-02T20:02:18Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3small",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-07-02T17:06:12Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
manbeast3b/ZZZZZZZZdriver136cd | manbeast3b | "2024-07-02T17:06:54Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T17:06:18Z" | Entry not found |
kr-manish/mistral_unsloth_hrpolicy_combine_raw_QA | kr-manish | "2024-07-02T17:08:01Z" | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | "2024-07-02T17:06:22Z" | ---
license: apache-2.0
---
|
RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-and-function-calling-gguf | RichardErkhov | "2024-07-02T17:15:22Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T17:06:52Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TinyLlama-1.1B-2.5T-chat-and-function-calling - GGUF
- Model creator: https://huggingface.co/AIGym/
- Original model: https://huggingface.co/AIGym/TinyLlama-1.1B-2.5T-chat-and-function-calling/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TinyLlama-1.1B-2.5T-chat-and-function-calling.Q2_K.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-and-function-calling-gguf/blob/main/TinyLlama-1.1B-2.5T-chat-and-function-calling.Q2_K.gguf) | Q2_K | 0.4GB |
| [TinyLlama-1.1B-2.5T-chat-and-function-calling.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-and-function-calling-gguf/blob/main/TinyLlama-1.1B-2.5T-chat-and-function-calling.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [TinyLlama-1.1B-2.5T-chat-and-function-calling.IQ3_S.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-and-function-calling-gguf/blob/main/TinyLlama-1.1B-2.5T-chat-and-function-calling.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [TinyLlama-1.1B-2.5T-chat-and-function-calling.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-and-function-calling-gguf/blob/main/TinyLlama-1.1B-2.5T-chat-and-function-calling.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [TinyLlama-1.1B-2.5T-chat-and-function-calling.IQ3_M.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-and-function-calling-gguf/blob/main/TinyLlama-1.1B-2.5T-chat-and-function-calling.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [TinyLlama-1.1B-2.5T-chat-and-function-calling.Q3_K.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-and-function-calling-gguf/blob/main/TinyLlama-1.1B-2.5T-chat-and-function-calling.Q3_K.gguf) | Q3_K | 0.51GB |
| [TinyLlama-1.1B-2.5T-chat-and-function-calling.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-and-function-calling-gguf/blob/main/TinyLlama-1.1B-2.5T-chat-and-function-calling.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [TinyLlama-1.1B-2.5T-chat-and-function-calling.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-and-function-calling-gguf/blob/main/TinyLlama-1.1B-2.5T-chat-and-function-calling.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [TinyLlama-1.1B-2.5T-chat-and-function-calling.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-and-function-calling-gguf/blob/main/TinyLlama-1.1B-2.5T-chat-and-function-calling.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [TinyLlama-1.1B-2.5T-chat-and-function-calling.Q4_0.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-and-function-calling-gguf/blob/main/TinyLlama-1.1B-2.5T-chat-and-function-calling.Q4_0.gguf) | Q4_0 | 0.59GB |
| [TinyLlama-1.1B-2.5T-chat-and-function-calling.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-and-function-calling-gguf/blob/main/TinyLlama-1.1B-2.5T-chat-and-function-calling.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [TinyLlama-1.1B-2.5T-chat-and-function-calling.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-and-function-calling-gguf/blob/main/TinyLlama-1.1B-2.5T-chat-and-function-calling.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [TinyLlama-1.1B-2.5T-chat-and-function-calling.Q4_K.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-and-function-calling-gguf/blob/main/TinyLlama-1.1B-2.5T-chat-and-function-calling.Q4_K.gguf) | Q4_K | 0.62GB |
| [TinyLlama-1.1B-2.5T-chat-and-function-calling.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-and-function-calling-gguf/blob/main/TinyLlama-1.1B-2.5T-chat-and-function-calling.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [TinyLlama-1.1B-2.5T-chat-and-function-calling.Q4_1.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-and-function-calling-gguf/blob/main/TinyLlama-1.1B-2.5T-chat-and-function-calling.Q4_1.gguf) | Q4_1 | 0.65GB |
| [TinyLlama-1.1B-2.5T-chat-and-function-calling.Q5_0.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-and-function-calling-gguf/blob/main/TinyLlama-1.1B-2.5T-chat-and-function-calling.Q5_0.gguf) | Q5_0 | 0.71GB |
| [TinyLlama-1.1B-2.5T-chat-and-function-calling.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-and-function-calling-gguf/blob/main/TinyLlama-1.1B-2.5T-chat-and-function-calling.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [TinyLlama-1.1B-2.5T-chat-and-function-calling.Q5_K.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-and-function-calling-gguf/blob/main/TinyLlama-1.1B-2.5T-chat-and-function-calling.Q5_K.gguf) | Q5_K | 0.73GB |
| [TinyLlama-1.1B-2.5T-chat-and-function-calling.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-and-function-calling-gguf/blob/main/TinyLlama-1.1B-2.5T-chat-and-function-calling.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [TinyLlama-1.1B-2.5T-chat-and-function-calling.Q5_1.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-and-function-calling-gguf/blob/main/TinyLlama-1.1B-2.5T-chat-and-function-calling.Q5_1.gguf) | Q5_1 | 0.77GB |
| [TinyLlama-1.1B-2.5T-chat-and-function-calling.Q6_K.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-and-function-calling-gguf/blob/main/TinyLlama-1.1B-2.5T-chat-and-function-calling.Q6_K.gguf) | Q6_K | 0.84GB |
| [TinyLlama-1.1B-2.5T-chat-and-function-calling.Q8_0.gguf](https://huggingface.co/RichardErkhov/AIGym_-_TinyLlama-1.1B-2.5T-chat-and-function-calling-gguf/blob/main/TinyLlama-1.1B-2.5T-chat-and-function-calling.Q8_0.gguf) | Q8_0 | 1.09GB |
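As a rough sanity check on the table above, file size divided by parameter count gives an approximate bits-per-weight figure for each quant. A minimal sketch — the ~1.1B parameter count is an assumption taken from the model name, and GGUF metadata adds some overhead:

```python
# Approximate bits per weight for a few quant files from the table above.
N_PARAMS = 1.1e9  # assumed from the "1.1B" in the model name

quants = {
    "Q4_K_M": 0.62,  # file sizes in GB, copied from the table
    "Q5_K_M": 0.73,
    "Q8_0": 1.09,
}

for name, size_gb in quants.items():
    bits_per_weight = size_gb * 8e9 / N_PARAMS
    print(f"{name}: ~{bits_per_weight:.1f} bits/weight")
```

The estimates land near the nominal 4/5/8-bit levels, which is a quick way to spot a mislabeled or truncated download.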
Original model description:
---
license: apache-2.0
tags:
- finetuned
pipeline_tag: text-generation
model-index:
- name: TinyLlama-1.1B-2.5T-chat-and-function-calling
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 34.39
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/TinyLlama-1.1B-2.5T-chat-and-function-calling
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 59.61
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/TinyLlama-1.1B-2.5T-chat-and-function-calling
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 26.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/TinyLlama-1.1B-2.5T-chat-and-function-calling
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 38.92
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/TinyLlama-1.1B-2.5T-chat-and-function-calling
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.96
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/TinyLlama-1.1B-2.5T-chat-and-function-calling
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 1.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/TinyLlama-1.1B-2.5T-chat-and-function-calling
name: Open LLM Leaderboard
---
# TinyLlama-1.1B-2.5T-chat-and-function-calling
It was created by starting with a TinyLlama-1.1B (2.5T-token) base model, training it on the Open Assistant dataset, and then training that on function calling. We have attached the wandb report in PDF form so the training run can be viewed at a glance.
# Reason
This model was fine-tuned to work with the OpenAI syntax and will return a function call when appropriate.
# Template
Use the following template when interacting with the fine-tuned model.
# Referrals
Runpod - This is who I use to train the models on Hugging Face. If you use it, we both get free credits. - <a href="https://runpod.io?ref=kilq83n1" target="_blank" style="color: #3498db; text-decoration: none; font-weight: bold;">Visit Runpod's Website!</a>
Paypal - If you want to leave a tip, it is appreciated. - <a href="https://paypal.me/OpenSourceTraining" target="_blank" style="color: #3498db; text-decoration: none; font-weight: bold;">Visit My Paypal!</a>
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_AIGym__TinyLlama-1.1B-2.5T-chat-and-function-calling)
| Metric |Value|
|---------------------------------|----:|
|Avg. |37.16|
|AI2 Reasoning Challenge (25-Shot)|34.39|
|HellaSwag (10-Shot) |59.61|
|MMLU (5-Shot) |26.32|
|TruthfulQA (0-shot) |38.92|
|Winogrande (5-shot) |61.96|
|GSM8k (5-shot) | 1.74|
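The `Avg.` row is simply the mean of the six benchmark scores above; a quick check:

```python
# Verify the leaderboard average from the six benchmark scores in the table.
scores = {
    "ARC (25-shot)": 34.39,
    "HellaSwag (10-shot)": 59.61,
    "MMLU (5-shot)": 26.32,
    "TruthfulQA (0-shot)": 38.92,
    "Winogrande (5-shot)": 61.96,
    "GSM8k (5-shot)": 1.74,
}
avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # matches the 37.16 reported above
```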
|
maxrmorrison/promonet | maxrmorrison | "2024-07-02T17:20:31Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-07-02T17:07:01Z" | ---
license: mit
---
|
whizzzzkid/whizzzzkid_427_1 | whizzzzkid | "2024-07-02T17:07:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T17:07:11Z" | Entry not found |
Zak-Soussi/finbert_peft | Zak-Soussi | "2024-07-02T17:08:45Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T17:07:21Z" | Entry not found |
akashcsd/1111 | akashcsd | "2024-07-02T17:07:38Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T17:07:38Z" | Entry not found |
TatvaJoshi-AHS/peft-InstructionTuning-training-1719936615 | TatvaJoshi-AHS | "2024-07-02T17:08:05Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"license:apache-2.0",
"region:us"
] | null | "2024-07-02T17:08:03Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: google/flan-t5-base
model-index:
- name: peft-InstructionTuning-training-1719936615
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# peft-InstructionTuning-training-1719936615
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1 |
whizzzzkid/whizzzzkid_428_7 | whizzzzkid | "2024-07-02T17:08:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T17:08:10Z" | Entry not found |
whizzzzkid/whizzzzkid_429_6 | whizzzzkid | "2024-07-02T17:09:35Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T17:09:11Z" | Entry not found |
thangvip/thedeep-1.8b | thangvip | "2024-07-02T17:10:24Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T17:10:24Z" | Entry not found |
ZeroWw/Phi-3-mini-4k-instruct-GGUF | ZeroWw | "2024-07-02T17:16:16Z" | 0 | 0 | null | [
"gguf",
"en",
"license:mit",
"region:us"
] | null | "2024-07-02T17:10:26Z" |
---
license: mit
language:
- en
---
My own (ZeroWw) quantizations.
Output and embed tensors are quantized to f16; all other tensors are quantized to q5_k or q6_k.
Result:
both f16.q6 and f16.q5 are smaller than the standard q8_0 quantization,
and they perform as well as pure f16.
Updated on: Tue Jul 2, 20:00:00
|
juanpablomesa/bge-base-financial-matryoshka | juanpablomesa | "2024-07-02T17:10:50Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:9600",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | "2024-07-02T17:10:34Z" | ---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:9600
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: The median home value in San Carlos, CA is $2,350,000.
sentences:
- What does the console property of the WorkerGlobalScope interface provide access
to?
- What is the last sold price and date for the property at 4372 W 14th Street Dr,
Greeley, CO 80634?
- What is the median home value in San Carlos, CA?
- source_sentence: The four new principals hired by Superintendent of Schools Ken
Kenworthy for the Okeechobee school system are Joseph Stanley at Central Elementary,
Jody Hays at Yearling Middle School, Tuuli Robinson at North Elementary, and Dr.
Thelma Jackson at Seminole Elementary School.
sentences:
- Who won the gold medal in the men's 1,500m final at the speed skating World Cup?
- What is the purpose of the 1,2,3 bowling activity for toddlers?
- Who are the four new principals hired by Superintendent of Schools Ken Kenworthy
for the Okeechobee school system?
- source_sentence: Twitter Audit is used to scan your followers and find out what
percentage of them are real people.
sentences:
- What is the main product discussed in the context of fair trade?
- What is the software mentioned in the context suitable for?
- What is the purpose of the Twitter Audit tool?
- source_sentence: Michael Czysz made the 2011 E1pc lighter and more powerful than
the 2010 version, and also improved the software controlling the bike’s D1g1tal
powertrain.
sentences:
- What changes did Michael Czysz make to the 2011 E1pc compared to the 2010 version?
- What is the author's suggestion for leaving a legacy for future generations?
- What is the most affordable and reliable option to fix a MacBook according to
the technician?
- source_sentence: HTC called the Samsung Galaxy S4 “mainstream”.
sentences:
- What is the essential aspect of the vocation to marriage according to Benedict
XVI's message on the 40th Anniversary of Humanae Vitae?
- What did HTC announce about the Samsung Galaxy S4?
- What was Allan Cox's First Class Delivery launched on for his Level 1 certification
flight?
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.9675
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9791666666666666
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9829166666666667
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.98875
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9675
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3263888888888889
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1965833333333333
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09887499999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9675
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9791666666666666
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9829166666666667
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.98875
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9776735843960416
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9741727843915341
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.974471752833939
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.9641666666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9775
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9816666666666667
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.98875
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9641666666666666
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3258333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1963333333333333
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09887499999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9641666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9775
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9816666666666667
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.98875
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9758504869144781
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9717977843915344
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9720465527215371
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.9620833333333333
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9741666666666666
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9804166666666667
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.98625
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9620833333333333
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.32472222222222225
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1960833333333333
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09862499999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9620833333333333
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9741666666666666
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9804166666666667
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.98625
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9737941784937224
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9698406084656085
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9702070899963996
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.9554166666666667
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.97
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9766666666666667
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.98375
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9554166666666667
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3233333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1953333333333333
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09837499999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9554166666666667
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.97
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9766666666666667
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.98375
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.969307497603498
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9647410714285715
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9652034022263717
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.9391666666666667
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9616666666666667
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9666666666666667
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9758333333333333
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9391666666666667
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3205555555555556
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1933333333333333
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09758333333333333
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9391666666666667
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9616666666666667
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9666666666666667
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9758333333333333
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9577277779716886
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9519417989417989
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9525399354798056
name: Cosine Map@100
---
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("juanpablomesa/bge-base-financial-matryoshka")
# Run inference
sentences = [
'HTC called the Samsung Galaxy S4 “mainstream”.',
'What did HTC announce about the Samsung Galaxy S4?',
"What is the essential aspect of the vocation to marriage according to Benedict XVI's message on the 40th Anniversary of Humanae Vitae?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9675 |
| cosine_accuracy@3 | 0.9792 |
| cosine_accuracy@5 | 0.9829 |
| cosine_accuracy@10 | 0.9888 |
| cosine_precision@1 | 0.9675 |
| cosine_precision@3 | 0.3264 |
| cosine_precision@5 | 0.1966 |
| cosine_precision@10 | 0.0989 |
| cosine_recall@1 | 0.9675 |
| cosine_recall@3 | 0.9792 |
| cosine_recall@5 | 0.9829 |
| cosine_recall@10 | 0.9888 |
| cosine_ndcg@10 | 0.9777 |
| cosine_mrr@10 | 0.9742 |
| **cosine_map@100** | **0.9745** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.9642 |
| cosine_accuracy@3 | 0.9775 |
| cosine_accuracy@5 | 0.9817 |
| cosine_accuracy@10 | 0.9888 |
| cosine_precision@1 | 0.9642 |
| cosine_precision@3 | 0.3258 |
| cosine_precision@5 | 0.1963 |
| cosine_precision@10 | 0.0989 |
| cosine_recall@1 | 0.9642 |
| cosine_recall@3 | 0.9775 |
| cosine_recall@5 | 0.9817 |
| cosine_recall@10 | 0.9888 |
| cosine_ndcg@10 | 0.9759 |
| cosine_mrr@10 | 0.9718 |
| **cosine_map@100** | **0.972** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9621 |
| cosine_accuracy@3 | 0.9742 |
| cosine_accuracy@5 | 0.9804 |
| cosine_accuracy@10 | 0.9862 |
| cosine_precision@1 | 0.9621 |
| cosine_precision@3 | 0.3247 |
| cosine_precision@5 | 0.1961 |
| cosine_precision@10 | 0.0986 |
| cosine_recall@1 | 0.9621 |
| cosine_recall@3 | 0.9742 |
| cosine_recall@5 | 0.9804 |
| cosine_recall@10 | 0.9862 |
| cosine_ndcg@10 | 0.9738 |
| cosine_mrr@10 | 0.9698 |
| **cosine_map@100** | **0.9702** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9554 |
| cosine_accuracy@3 | 0.97 |
| cosine_accuracy@5 | 0.9767 |
| cosine_accuracy@10 | 0.9838 |
| cosine_precision@1 | 0.9554 |
| cosine_precision@3 | 0.3233 |
| cosine_precision@5 | 0.1953 |
| cosine_precision@10 | 0.0984 |
| cosine_recall@1 | 0.9554 |
| cosine_recall@3 | 0.97 |
| cosine_recall@5 | 0.9767 |
| cosine_recall@10 | 0.9838 |
| cosine_ndcg@10 | 0.9693 |
| cosine_mrr@10 | 0.9647 |
| **cosine_map@100** | **0.9652** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9392 |
| cosine_accuracy@3 | 0.9617 |
| cosine_accuracy@5 | 0.9667 |
| cosine_accuracy@10 | 0.9758 |
| cosine_precision@1 | 0.9392 |
| cosine_precision@3 | 0.3206 |
| cosine_precision@5 | 0.1933 |
| cosine_precision@10 | 0.0976 |
| cosine_recall@1 | 0.9392 |
| cosine_recall@3 | 0.9617 |
| cosine_recall@5 | 0.9667 |
| cosine_recall@10 | 0.9758 |
| cosine_ndcg@10 | 0.9577 |
| cosine_mrr@10 | 0.9519 |
| **cosine_map@100** | **0.9525** |
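For reference, `cosine_mrr@10` in the tables above is the mean reciprocal rank of the first relevant document within the top 10 retrieved results. A minimal sketch on toy ranks (hypothetical data, not this model's evaluation):

```python
# Toy MRR@10: each entry is the 1-based rank of the first relevant hit
# for one query, or None if no relevant hit appeared in the top 10.
ranks = [1, 1, 3, None, 2]

mrr_at_10 = sum(1.0 / r for r in ranks if r is not None) / len(ranks)
print(f"MRR@10 = {mrr_at_10:.3f}")  # → MRR@10 = 0.567
```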
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 9,600 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 50.19 tokens</li><li>max: 435 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 18.66 tokens</li><li>max: 43 tokens</li></ul> |
* Samples:
| positive | anchor |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------|
| <code>The Berry Export Summary 2028 is a dedicated export plan for the Australian strawberry, raspberry, and blackberry industries. It maps the sectors’ current position, where they want to be, high-opportunity markets, and next steps. The purpose of this plan is to grow their global presence over the next 10 years.</code> | <code>What is the Berry Export Summary 2028 and what is its purpose?</code> |
| <code>Benefits reported from having access to Self-supply water sources include convenience, less time spent for fetching water and access to more and better quality water. In some areas, Self-supply sources offer important added values such as water for productive use, income generation, family safety and improved food security.</code> | <code>What are some of the benefits reported from having access to Self-supply water sources?</code> |
| <code>The unique features of the Coolands for Twitter app include Real-Time updates without the need for a refresh button, Avatar Indicator which shows small avatars on the title bar for new messages, Direct Link for intuitive and convenient link opening, Smart Bookmark to easily return to previous reading position, and User Level Notification which allows customized notification settings for different users.</code> | <code>What are the unique features of the Coolands for Twitter app?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
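The `matryoshka_dims` above mean the leading coordinates of each embedding form usable sub-embeddings: truncate to a trained prefix length and re-normalize. A minimal numpy sketch with a hypothetical vector (real embeddings come from `model.encode(...)`):

```python
import numpy as np

# A hypothetical 768-d embedding standing in for a model output.
rng = np.random.default_rng(0)
emb = rng.standard_normal(768)
emb /= np.linalg.norm(emb)  # sentence-transformers normalizes outputs

# Truncate to one of the trained Matryoshka dims and re-normalize.
small = emb[:256]
small = small / np.linalg.norm(small)
print(small.shape)  # (256,) — a unit vector usable for cosine similarity
```

This is why the evaluation sections report metrics at 768, 512, 256, 128, and 64 dimensions from a single model.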
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:--------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.5333 | 10 | 0.6065 | - | - | - | - | - |
| 0.96 | 18 | - | 0.9583 | 0.9674 | 0.9695 | 0.9372 | 0.9708 |
| 1.0667 | 20 | 0.3313 | - | - | - | - | - |
| 1.6 | 30 | 0.144 | - | - | - | - | - |
| 1.9733 | 37 | - | 0.9630 | 0.9699 | 0.9716 | 0.9488 | 0.9745 |
| 2.1333 | 40 | 0.1317 | - | - | - | - | - |
| 2.6667 | 50 | 0.0749 | - | - | - | - | - |
| 2.9867 | 56 | - | 0.9650 | 0.9701 | 0.9721 | 0.9522 | 0.9747 |
| 3.2 | 60 | 0.088 | - | - | - | - | - |
| 3.7333 | 70 | 0.0598 | - | - | - | - | - |
| **3.84** | **72** | **-** | **0.9652** | **0.9702** | **0.972** | **0.9525** | **0.9745** |
* The bold row denotes the saved checkpoint.
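The per-dimension columns above (`dim_768` down to `dim_64`) evaluate the same model after truncating its embeddings to smaller Matryoshka dimensions. A minimal, pure-Python sketch of that truncation step (for illustration only; in practice sentence-transformers handles this via its `truncate_dim` option):

```python
import math

def truncate_embedding(vec, dim):
    """Keep the first `dim` components of a Matryoshka embedding and
    L2-normalize the result before computing cosine similarity."""
    v = vec[:dim]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

# A toy 3-d "embedding" truncated to its first 2 components:
print(truncate_embedding([3.0, 4.0, 12.0], 2))  # [0.6, 0.8]
```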
### Framework Versions
- Python: 3.11.5
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
MubarakB/nllb-3.3b-ug | MubarakB | "2024-07-02T17:13:06Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-07-02T17:10:47Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ethedeltae/mistral-7b-oig-unsloth-iitg | ethedeltae | "2024-07-02T17:11:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T17:10:48Z" | ---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** ethedeltae
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Adriana213/gpt2-xl-finetuned-wikitext-2 | Adriana213 | "2024-07-02T17:11:03Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T17:11:03Z" | Entry not found |
RichardErkhov/Kendamarron_-_Tokara-0.5B-Chat-v0.1-gguf | RichardErkhov | "2024-07-02T17:36:37Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T17:11:15Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Tokara-0.5B-Chat-v0.1 - GGUF
- Model creator: https://huggingface.co/Kendamarron/
- Original model: https://huggingface.co/Kendamarron/Tokara-0.5B-Chat-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Tokara-0.5B-Chat-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/Kendamarron_-_Tokara-0.5B-Chat-v0.1-gguf/blob/main/Tokara-0.5B-Chat-v0.1.Q2_K.gguf) | Q2_K | 0.23GB |
| [Tokara-0.5B-Chat-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Kendamarron_-_Tokara-0.5B-Chat-v0.1-gguf/blob/main/Tokara-0.5B-Chat-v0.1.IQ3_XS.gguf) | IQ3_XS | 0.24GB |
| [Tokara-0.5B-Chat-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Kendamarron_-_Tokara-0.5B-Chat-v0.1-gguf/blob/main/Tokara-0.5B-Chat-v0.1.IQ3_S.gguf) | IQ3_S | 0.25GB |
| [Tokara-0.5B-Chat-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Kendamarron_-_Tokara-0.5B-Chat-v0.1-gguf/blob/main/Tokara-0.5B-Chat-v0.1.Q3_K_S.gguf) | Q3_K_S | 0.25GB |
| [Tokara-0.5B-Chat-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Kendamarron_-_Tokara-0.5B-Chat-v0.1-gguf/blob/main/Tokara-0.5B-Chat-v0.1.IQ3_M.gguf) | IQ3_M | 0.26GB |
| [Tokara-0.5B-Chat-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/Kendamarron_-_Tokara-0.5B-Chat-v0.1-gguf/blob/main/Tokara-0.5B-Chat-v0.1.Q3_K.gguf) | Q3_K | 0.26GB |
| [Tokara-0.5B-Chat-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Kendamarron_-_Tokara-0.5B-Chat-v0.1-gguf/blob/main/Tokara-0.5B-Chat-v0.1.Q3_K_M.gguf) | Q3_K_M | 0.26GB |
| [Tokara-0.5B-Chat-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Kendamarron_-_Tokara-0.5B-Chat-v0.1-gguf/blob/main/Tokara-0.5B-Chat-v0.1.Q3_K_L.gguf) | Q3_K_L | 0.28GB |
| [Tokara-0.5B-Chat-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Kendamarron_-_Tokara-0.5B-Chat-v0.1-gguf/blob/main/Tokara-0.5B-Chat-v0.1.IQ4_XS.gguf) | IQ4_XS | 0.28GB |
| [Tokara-0.5B-Chat-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/Kendamarron_-_Tokara-0.5B-Chat-v0.1-gguf/blob/main/Tokara-0.5B-Chat-v0.1.Q4_0.gguf) | Q4_0 | 0.29GB |
| [Tokara-0.5B-Chat-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Kendamarron_-_Tokara-0.5B-Chat-v0.1-gguf/blob/main/Tokara-0.5B-Chat-v0.1.IQ4_NL.gguf) | IQ4_NL | 0.29GB |
| [Tokara-0.5B-Chat-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Kendamarron_-_Tokara-0.5B-Chat-v0.1-gguf/blob/main/Tokara-0.5B-Chat-v0.1.Q4_K_S.gguf) | Q4_K_S | 0.29GB |
| [Tokara-0.5B-Chat-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/Kendamarron_-_Tokara-0.5B-Chat-v0.1-gguf/blob/main/Tokara-0.5B-Chat-v0.1.Q4_K.gguf) | Q4_K | 0.3GB |
| [Tokara-0.5B-Chat-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Kendamarron_-_Tokara-0.5B-Chat-v0.1-gguf/blob/main/Tokara-0.5B-Chat-v0.1.Q4_K_M.gguf) | Q4_K_M | 0.3GB |
| [Tokara-0.5B-Chat-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/Kendamarron_-_Tokara-0.5B-Chat-v0.1-gguf/blob/main/Tokara-0.5B-Chat-v0.1.Q4_1.gguf) | Q4_1 | 0.3GB |
| [Tokara-0.5B-Chat-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/Kendamarron_-_Tokara-0.5B-Chat-v0.1-gguf/blob/main/Tokara-0.5B-Chat-v0.1.Q5_0.gguf) | Q5_0 | 0.32GB |
| [Tokara-0.5B-Chat-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Kendamarron_-_Tokara-0.5B-Chat-v0.1-gguf/blob/main/Tokara-0.5B-Chat-v0.1.Q5_K_S.gguf) | Q5_K_S | 0.32GB |
| [Tokara-0.5B-Chat-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/Kendamarron_-_Tokara-0.5B-Chat-v0.1-gguf/blob/main/Tokara-0.5B-Chat-v0.1.Q5_K.gguf) | Q5_K | 0.33GB |
| [Tokara-0.5B-Chat-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Kendamarron_-_Tokara-0.5B-Chat-v0.1-gguf/blob/main/Tokara-0.5B-Chat-v0.1.Q5_K_M.gguf) | Q5_K_M | 0.33GB |
| [Tokara-0.5B-Chat-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/Kendamarron_-_Tokara-0.5B-Chat-v0.1-gguf/blob/main/Tokara-0.5B-Chat-v0.1.Q5_1.gguf) | Q5_1 | 0.34GB |
| [Tokara-0.5B-Chat-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/Kendamarron_-_Tokara-0.5B-Chat-v0.1-gguf/blob/main/Tokara-0.5B-Chat-v0.1.Q6_K.gguf) | Q6_K | 0.36GB |
| [Tokara-0.5B-Chat-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/Kendamarron_-_Tokara-0.5B-Chat-v0.1-gguf/blob/main/Tokara-0.5B-Chat-v0.1.Q8_0.gguf) | Q8_0 | 0.47GB |
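To use one of the files above, download it and run it with a GGUF-capable runtime such as llama.cpp. As a rough aid for choosing a file, the sketch below picks the largest quant that fits a memory budget (sizes copied from a subset of the table above; the repo and file names are as listed, and the commented-out download call assumes the `huggingface_hub` package):

```python
# Sizes (GB) for a subset of the quants in the table above.
QUANT_SIZES_GB = {
    "Q2_K": 0.23, "IQ3_XS": 0.24, "Q3_K_S": 0.25, "Q3_K_M": 0.26,
    "Q3_K_L": 0.28, "IQ4_XS": 0.28, "Q4_K_S": 0.29, "Q4_K_M": 0.30,
    "Q5_K_S": 0.32, "Q5_K_M": 0.33, "Q6_K": 0.36, "Q8_0": 0.47,
}

def best_quant(budget_gb):
    """Return the largest (roughly highest-quality) quant within the budget."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

quant = best_quant(0.35)
print(quant)  # Q5_K_M

# Download the chosen file (requires `pip install huggingface_hub`):
# from huggingface_hub import hf_hub_download
# path = hf_hub_download(
#     repo_id="RichardErkhov/Kendamarron_-_Tokara-0.5B-Chat-v0.1-gguf",
#     filename=f"Tokara-0.5B-Chat-v0.1.{quant}.gguf",
# )
```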
Original model description:
---
license: other
license_name: tongyi-qianwen-research
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-0.5B/blob/main/LICENSE
language:
- ja
- en
pipeline_tag: text-generation
---
## About the Model
This model adds conversational ability, via a chat vector, to [Tokara-0.5B-v0.1](https://huggingface.co/Kendamarron/Tokara-0.5B-v0.1), which is [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B) continually pretrained on 5B tokens of Japanese and English data.
For a model of only 0.5B parameters, it is capable of holding a conversation.
The model used for the chat vector was trained on multi-turn data, so multi-turn conversations should work as well.
Possibly due to the model size, the output starts repeating early unless `repetition_penalty` is set to around 1.15-1.25.
See [this article](https://zenn.dev/kendama/articles/55564e12da6e82) (in Japanese) for details.
## Recipe
- [Tokara-0.5B-v0.1](https://huggingface.co/Kendamarron/Tokara-0.5B-v0.1)
- 0.24*([Qwen/Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) - [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B))
- 0.56*([Kendamarron/Tokara-0.5B-Chat-dolly-jimba](https://huggingface.co/Kendamarron/Tokara-0.5B-Chat-dolly-jimba) - [Kendamarron/Tokara-0.5B-v0.1](https://huggingface.co/Kendamarron/Tokara-0.5B-v0.1))
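In other words, the weights of the two Chat models are diffed against their respective base models, scaled, and added onto Tokara-0.5B-v0.1. A rough, framework-agnostic sketch of that arithmetic (not the exact script used; in practice the values are `torch` state-dict tensors, and the toy scalars below stand in for whole tensors):

```python
def apply_chat_vectors(base_sd, deltas):
    """Add weighted chat vectors to a base state dict.

    base_sd: mapping of parameter name -> value (torch tensors in practice).
    deltas:  list of (weight, chat_sd, chat_base_sd) tuples; each chat
             vector is chat_sd - chat_base_sd, scaled by weight.
    """
    merged = dict(base_sd)
    for w, chat_sd, chat_base_sd in deltas:
        merged = {k: merged[k] + w * (chat_sd[k] - chat_base_sd[k]) for k in merged}
    return merged

# Toy example mirroring the recipe above:
merged = apply_chat_vectors(
    {"w": 1.0},
    [(0.24, {"w": 3.0}, {"w": 1.0}),   # 0.24 * (Qwen-Chat - Qwen)
     (0.56, {"w": 2.0}, {"w": 1.0})],  # 0.56 * (Tokara-Chat - Tokara)
)
print(merged["w"])  # approximately 2.04
```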
## Benchmark
Single-turn evaluation on the six categories of Japanese MT-Bench
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63075d83cb09c0a9042a82c2/8Mg54DXeRBFcnF0Xgka68.png)
| Extraction | Humanities | Reasoning | Roleplay | STEM | Writing |
| ---------- | ---------- | --------- | -------- | ---- | ------- |
| 1.3 | 2.6 | 2.5 | 3.8 | 2.3 | 3.2 |
## About the Name
Named after the Tokara horse, a horse breed native to Japan.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = "cuda"
model = AutoModelForCausalLM.from_pretrained(
'Kendamarron/Tokara-0.5B-Chat-v0.1',
torch_dtype=torch.bfloat16,
device_map=device,
)
tokenizer = AutoTokenizer.from_pretrained('Kendamarron/Tokara-0.5B-Chat-v0.1')
messages = [
{"role": "system", "content": "あなたは誠実で優秀な日本人のアシスタントです。"},
{"role": "user", "content": "野菜は体にいいですか?"}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=256,
do_sample=True,
top_p=0.95,
top_k=40,
temperature=0.7,
repetition_penalty=1.1,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=2
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
# はい、野食が健康や美容に良いと言われています。身体を健康的なものにするには、バランス良く摂取することが重要です。
# 野球少年であれば、1日500g程度の野味(たんぱく質)を一日3回以上摌る方法があります。また、ビタミンB群やCも含んでおり、お子様向けには食物繊維(食物中含まれる植物性糖)や亜鉛、カリウムなどの栄養素も豊富です。
#
# 一方、過剰な野草農業や化学肥料の使用によって生じる不自然な添加物・残留農薬(有害な生物的散布物)などは、人体にとって害になる可能性がありますので、科学的に根拠のある安全な方法で野生活をしてください。野遊びや野炊き以外にも、季節ごとに特徴ある食材が利用可能ですが、その内容や量は慎重に行われるようにしましょう。ぜひ、手軽に自宅でも楽しめる方法と知識をお伝えできるよう、心から応援いたします!
```
|
zhhan/Phi-3-mini-4k-instruct_gguf_derived | zhhan | "2024-07-02T18:57:14Z" | 0 | 0 | peft | [
"peft",
"gguf",
"phi3",
"lora",
"llama.cpp",
"summarization",
"en",
"license:apache-2.0",
"region:us"
] | summarization | "2024-07-02T17:11:27Z" | ---
license: apache-2.0
language:
- en
pipeline_tag: summarization
tags:
- phi3
- peft
- lora
- llama.cpp
--- |
chaley22/gemma-captioning | chaley22 | "2024-07-02T17:11:34Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T17:11:34Z" | Entry not found |
InderV94/sf_unsloth_adapter | InderV94 | "2024-07-02T17:14:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T17:12:54Z" | ---
base_model: unsloth/gemma-2b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
---
# Uploaded model
- **Developed by:** InderV94
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
cyan2k/promptvieh_chat_merged | cyan2k | "2024-07-02T17:25:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-07-02T17:13:33Z" | ---
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
# Uploaded model
- **Developed by:** cyan2k
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
axolotl-ai-co/gemma-2-27b | axolotl-ai-co | "2024-07-02T17:28:52Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:2110.08193",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:1804.06876",
"arxiv:2103.03874",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:2203.09509",
"license:gemma",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T17:14:24Z" | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-27b)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision. You can use `float16`, which may be faster on certain hardware, by specifying the `torch_dtype` when loading the model. For convenience, the `float16` revision of the repo contains a copy of the weights already converted to that precision.
You can also skip the dtype and load in `float32`, but no precision increase will occur (the model weights will simply be upcast to `float32`). See examples below.
* _Using `torch.float16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b",
device_map="auto",
torch_dtype=torch.float16,
revision="float16",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b",
device_map="auto",
torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b",
device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment: `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is specially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 |
| ------------------------------ | ------------- | ----------- | ------------ |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | --------------- | ---------------- |
| [RealToxicity][realtox] | average | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 39.30 | 38.42 |
| ------------------------ | ------------- | --------------- | ---------------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
    scrutiny, with input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance relative to other, comparably-sized
open model alternatives.
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
|
impossibleexchange/curbstomp | impossibleexchange | "2024-07-02T17:51:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T17:15:08Z" | ---
license: mit
---
|
mnsm92/whisper-small-bd-v5.5 | mnsm92 | "2024-07-02T23:23:12Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:mnsm92/whisper-small-bd-v5.4",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-07-02T17:16:51Z" | Invalid username or password. |
LWDCLS/llama3-8B-DarkIdol-2.2-Uncensored-1048K-GGUF-IQ-Imatrix-Request | LWDCLS | "2024-07-02T23:13:23Z" | 0 | 0 | null | [
"gguf",
"license:unlicense",
"region:us"
] | null | "2024-07-02T17:17:18Z" | ---
license: unlicense
---
|
qsdcfqsdfcxqfqs/China-says-US-targeting-of-AI-not-helpful-to-healthy-development-bf-updated | qsdcfqsdfcxqfqs | "2024-07-02T17:18:31Z" | 0 | 0 | null | [
"en",
"region:us"
] | null | "2024-07-02T17:17:18Z" | ---
language:
- en
---
UNITED NATIONS (Reuters) - U.S. targeting of certain investments in artificial intelligence in China is not helpful to the "healthy development" of AI technology and will be divisive when it comes to global governance, China's U.N. envoy said on Monday.
The United States last month issued draft rules for banning or requiring notification of certain investments in artificial intelligence and other technology sectors in China that could threaten U.S. national security.
"We are firmly opposed to these sanctions," Chinese U.N. Ambassador Fu Cong told reporters after the 193-member U.N. General Assembly adopted by consensus a Chinese-drafted resolution aimed at boosting international cooperation on AI capacity-building.
The U.N resolution calls upon the international community to "provide and promote a fair, open, inclusive and non-discriminatory business environment across the life cycle of safe, secure and trustworthy artificial intelligence systems."
Fu said the U.S. actions do not foster an inclusive business environment and he urged Washington to reverse its decision.
"We don't believe that the U.S. government's position or decision will be helpful to the healthy development of the AI technology, per se, and will - by extension - divide the world in terms of the standards and in terms of the rules governing the AI," he said.
The U.S. Treasury Department published the proposed rules after U.S. President Joe Biden signed an executive order last August as part of a broader push to prevent U.S. know-how from helping the Chinese to develop sophisticated technology and dominate global markets.
(Reporting by Michelle Nichols; Editing by Sandra Maler). |
YashJain/GitAI-Qwen2-0.5B-Instruct | YashJain | "2024-07-02T18:05:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"code",
"conversational",
"en",
"dataset:YashJain/GitAI",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T17:18:18Z" | ---
language:
- en
license: apache-2.0
tags:
- chat
- code
pipeline_tag: text-generation
datasets:
- YashJain/GitAI
library_name: transformers
---
# YashJain/GitAI-Qwen2-0.5B-Instruct
## Requirements
The code for Qwen2 is included in the latest Hugging Face `transformers`, so we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
```
KeyError: 'qwen2'
```
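If you want to verify this requirement programmatically before loading the model, a small version guard can help. This sketch is not part of the original card; the `4.37.0` threshold comes from the note above:

```python
# Minimal sketch: compare the installed transformers version against the
# 4.37.0 threshold required for the qwen2 architecture. Dotted version
# strings are compared numerically, not lexicographically.
def version_at_least(installed: str, required: str = "4.37.0") -> bool:
    """Return True if `installed` is at least `required` (first three parts)."""
    to_key = lambda v: tuple(int(p) for p in v.split(".")[:3])
    return to_key(installed) >= to_key(required)

# In practice you would pass transformers.__version__ here.
print(version_at_least("4.36.2"))  # too old for qwen2
print(version_at_least("4.40.1"))
```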
## Quickstart
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"YashJain/GitAI-Qwen2-0.5B-Instruct",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("YashJain/GitAI-Qwen2-0.5B-Instruct")
prompt = "How to undo my last commit"
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
``` |
RichardErkhov/s3nh_-_TinyLLama-4x1.1B-MoE-gguf | RichardErkhov | "2024-07-02T17:42:00Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T17:19:04Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TinyLLama-4x1.1B-MoE - GGUF
- Model creator: https://huggingface.co/s3nh/
- Original model: https://huggingface.co/s3nh/TinyLLama-4x1.1B-MoE/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TinyLLama-4x1.1B-MoE.Q2_K.gguf](https://huggingface.co/RichardErkhov/s3nh_-_TinyLLama-4x1.1B-MoE-gguf/blob/main/TinyLLama-4x1.1B-MoE.Q2_K.gguf) | Q2_K | 1.17GB |
| [TinyLLama-4x1.1B-MoE.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/s3nh_-_TinyLLama-4x1.1B-MoE-gguf/blob/main/TinyLLama-4x1.1B-MoE.IQ3_XS.gguf) | IQ3_XS | 1.31GB |
| [TinyLLama-4x1.1B-MoE.IQ3_S.gguf](https://huggingface.co/RichardErkhov/s3nh_-_TinyLLama-4x1.1B-MoE-gguf/blob/main/TinyLLama-4x1.1B-MoE.IQ3_S.gguf) | IQ3_S | 1.38GB |
| [TinyLLama-4x1.1B-MoE.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/s3nh_-_TinyLLama-4x1.1B-MoE-gguf/blob/main/TinyLLama-4x1.1B-MoE.Q3_K_S.gguf) | Q3_K_S | 1.38GB |
| [TinyLLama-4x1.1B-MoE.IQ3_M.gguf](https://huggingface.co/RichardErkhov/s3nh_-_TinyLLama-4x1.1B-MoE-gguf/blob/main/TinyLLama-4x1.1B-MoE.IQ3_M.gguf) | IQ3_M | 1.4GB |
| [TinyLLama-4x1.1B-MoE.Q3_K.gguf](https://huggingface.co/RichardErkhov/s3nh_-_TinyLLama-4x1.1B-MoE-gguf/blob/main/TinyLLama-4x1.1B-MoE.Q3_K.gguf) | Q3_K | 1.52GB |
| [TinyLLama-4x1.1B-MoE.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/s3nh_-_TinyLLama-4x1.1B-MoE-gguf/blob/main/TinyLLama-4x1.1B-MoE.Q3_K_M.gguf) | Q3_K_M | 1.52GB |
| [TinyLLama-4x1.1B-MoE.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/s3nh_-_TinyLLama-4x1.1B-MoE-gguf/blob/main/TinyLLama-4x1.1B-MoE.Q3_K_L.gguf) | Q3_K_L | 1.65GB |
| [TinyLLama-4x1.1B-MoE.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/s3nh_-_TinyLLama-4x1.1B-MoE-gguf/blob/main/TinyLLama-4x1.1B-MoE.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [TinyLLama-4x1.1B-MoE.Q4_0.gguf](https://huggingface.co/RichardErkhov/s3nh_-_TinyLLama-4x1.1B-MoE-gguf/blob/main/TinyLLama-4x1.1B-MoE.Q4_0.gguf) | Q4_0 | 1.79GB |
| [TinyLLama-4x1.1B-MoE.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/s3nh_-_TinyLLama-4x1.1B-MoE-gguf/blob/main/TinyLLama-4x1.1B-MoE.IQ4_NL.gguf) | IQ4_NL | 1.8GB |
| [TinyLLama-4x1.1B-MoE.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/s3nh_-_TinyLLama-4x1.1B-MoE-gguf/blob/main/TinyLLama-4x1.1B-MoE.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [TinyLLama-4x1.1B-MoE.Q4_K.gguf](https://huggingface.co/RichardErkhov/s3nh_-_TinyLLama-4x1.1B-MoE-gguf/blob/main/TinyLLama-4x1.1B-MoE.Q4_K.gguf) | Q4_K | 1.9GB |
| [TinyLLama-4x1.1B-MoE.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/s3nh_-_TinyLLama-4x1.1B-MoE-gguf/blob/main/TinyLLama-4x1.1B-MoE.Q4_K_M.gguf) | Q4_K_M | 1.9GB |
| [TinyLLama-4x1.1B-MoE.Q4_1.gguf](https://huggingface.co/RichardErkhov/s3nh_-_TinyLLama-4x1.1B-MoE-gguf/blob/main/TinyLLama-4x1.1B-MoE.Q4_1.gguf) | Q4_1 | 1.98GB |
| [TinyLLama-4x1.1B-MoE.Q5_0.gguf](https://huggingface.co/RichardErkhov/s3nh_-_TinyLLama-4x1.1B-MoE-gguf/blob/main/TinyLLama-4x1.1B-MoE.Q5_0.gguf) | Q5_0 | 2.18GB |
| [TinyLLama-4x1.1B-MoE.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/s3nh_-_TinyLLama-4x1.1B-MoE-gguf/blob/main/TinyLLama-4x1.1B-MoE.Q5_K_S.gguf) | Q5_K_S | 2.18GB |
| [TinyLLama-4x1.1B-MoE.Q5_K.gguf](https://huggingface.co/RichardErkhov/s3nh_-_TinyLLama-4x1.1B-MoE-gguf/blob/main/TinyLLama-4x1.1B-MoE.Q5_K.gguf) | Q5_K | 2.23GB |
| [TinyLLama-4x1.1B-MoE.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/s3nh_-_TinyLLama-4x1.1B-MoE-gguf/blob/main/TinyLLama-4x1.1B-MoE.Q5_K_M.gguf) | Q5_K_M | 2.23GB |
| [TinyLLama-4x1.1B-MoE.Q5_1.gguf](https://huggingface.co/RichardErkhov/s3nh_-_TinyLLama-4x1.1B-MoE-gguf/blob/main/TinyLLama-4x1.1B-MoE.Q5_1.gguf) | Q5_1 | 2.37GB |
| [TinyLLama-4x1.1B-MoE.Q6_K.gguf](https://huggingface.co/RichardErkhov/s3nh_-_TinyLLama-4x1.1B-MoE-gguf/blob/main/TinyLLama-4x1.1B-MoE.Q6_K.gguf) | Q6_K | 2.59GB |
| [TinyLLama-4x1.1B-MoE.Q8_0.gguf](https://huggingface.co/RichardErkhov/s3nh_-_TinyLLama-4x1.1B-MoE-gguf/blob/main/TinyLLama-4x1.1B-MoE.Q8_0.gguf) | Q8_0 | 3.35GB |
Original model description:
---
base_model:
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
- 78health/TinyLlama_1.1B-function-calling
- phanerozoic/Tiny-Pirate-1.1b-v0.1
- Tensoic/TinyLlama-1.1B-3T-openhermes
tags:
- mergekit
- merge
license: mit
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
Example usage:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # use "cpu" if no GPU is available

model = AutoModelForCausalLM.from_pretrained("s3nh/TinyLLama-1.1B-MoE").to(device)
tokenizer = AutoTokenizer.from_pretrained("s3nh/TinyLLama-1.1B-MoE")

input_text = """
###Input: You are a pirate. tell me a story about wrecked ship.
###Response:
"""

input_ids = tokenizer.encode(input_text, return_tensors='pt').to(device)
output = model.generate(inputs=input_ids,
                        max_length=256,
                        do_sample=True,
                        top_k=10,
                        temperature=0.7,
                        pad_token_id=tokenizer.eos_token_id,
                        attention_mask=input_ids.new_ones(input_ids.shape))
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
This model was made possible by the tremendous work of the mergekit developers. I decided to merge TinyLlama models to
create a mixture of experts.
The following config was used:
```
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
experts:
- source_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
positive_prompts:
- "chat"
- "assistant"
- "tell me"
- "explain"
- source_model: 78health/TinyLlama_1.1B-function-calling
positive_prompts:
- "code"
- "python"
- "javascript"
- "programming"
- "algorithm"
- source_model: phanerozoic/Tiny-Pirate-1.1b-v0.1
positive_prompts:
- "storywriting"
- "write"
- "scene"
- "story"
- "character"
- source_model: Tensoic/TinyLlama-1.1B-3T-openhermes
positive_prompts:
- "reason"
- "provide"
- "instruct"
- "summarize"
- "count"
```
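As a toy illustration of how the `positive_prompts` above act as routing hints (a deliberate simplification, not mergekit's actual learned gating mechanism), a keyword-based router over the four experts might look like this:

```python
# Toy sketch: route a prompt to the expert whose positive_prompts overlap it most.
# This only illustrates the intent behind the config above; real MoE routing
# is a learned gate over hidden states, not substring matching.
EXPERTS = {
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0": ["chat", "assistant", "tell me", "explain"],
    "78health/TinyLlama_1.1B-function-calling": ["code", "python", "javascript", "programming", "algorithm"],
    "phanerozoic/Tiny-Pirate-1.1b-v0.1": ["storywriting", "write", "scene", "story", "character"],
    "Tensoic/TinyLlama-1.1B-3T-openhermes": ["reason", "provide", "instruct", "summarize", "count"],
}

def route(prompt: str) -> str:
    """Pick the expert with the most positive-prompt hits in the text."""
    text = prompt.lower()
    return max(EXPERTS, key=lambda name: sum(hint in text for hint in EXPERTS[name]))
```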
|
gisang-lee/mistral-7b-qlora-arc-wandb-test-arc-easy-train-val | gisang-lee | "2024-07-02T17:30:15Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-07-02T17:19:16Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
finn03091993/naschainv207 | finn03091993 | "2024-07-02T17:20:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T17:20:03Z" | Entry not found |
M2LInES/ocean_surface_emulation | M2LInES | "2024-07-02T17:46:21Z" | 0 | 0 | null | [
"arxiv:2405.18585",
"license:mit",
"region:us"
] | null | "2024-07-02T17:20:19Z" | ---
license: mit
---
This model is a ConvNext model trained on Pre-industrial Ocean Surface data from the GFDL CM2.6 coupled climate model. More details can be found in the [paper](https://arxiv.org/abs/2405.18585) and [code](https://github.com/suryadheeshjith/Ocean_Emulator). |
whizzzzkid/whizzzzkid_430_2 | whizzzzkid | "2024-07-02T17:20:43Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T17:20:22Z" | Entry not found |
qsdcfqsdfcxqfqs/USS-Mason-to-head-home-after-9months-in-combat-at-sea-53-updated | qsdcfqsdfcxqfqs | "2024-07-02T17:22:07Z" | 0 | 0 | null | [
"en",
"region:us"
] | null | "2024-07-02T17:20:53Z" | ---
language:
- en
---
NAVAL STATION MAYPORT, Fla. - The USS Mason will return home Tuesday after extended combat operations over nine months.
The USS Mason is a guided missile destroyer armed with torpedoes, a mounted gun, missiles and a defense system.
Recommended Videos
RELATED: Mayport-based USS Mason among ships fending off attacks from Houthi rebels in Red Sea
The ship was deployed in the Red Sea, Gulf of Aden and Mediterranean Seas, protecting vital shipping channels.
It came under a lot of fire in the Middle East, destroying more than 22 Houthi targets in Yemen and 5 Iranian-launched medium-range ballistic missiles.
The USS Carney, which worked with the Mason, also returned home in June after a seven-month deployment.
The Carney destroyed Houthi-launched weapons and 65 targets in Yemen. |
maxseats/SungBeom-whisper-small-ko-set19 | maxseats | "2024-07-02T17:21:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"speech-recognition",
"ko",
"dataset:maxseats/aihub-464-preprocessed-680GB-set-19",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-07-02T17:21:11Z" |
---
language: ko
tags:
- whisper
- speech-recognition
datasets:
- maxseats/aihub-464-preprocessed-680GB-set-19
metrics:
- cer
---
# Model Name : maxseats/SungBeom-whisper-small-ko-set19
# Description
- Fine-tuning dataset: maxseats/aihub-464-preprocessed-680GB-set-19
# Notes
- This series is being fine-tuned on AI Hub's meeting speech datasets for major domains.
- This model loads the checkpoint fine-tuned through the set_0–set_18 data (190 GB of the 680 GB total) and trains it further on the set_19 data (10 GB).
- Link: https://huggingface.co/datasets/maxseats/aihub-464-preprocessed-680GB-set-19
|
qsdcfqsdfcxqfqs/Ron-Paul-The-presidential-debate-should-be-a-wakeup-call-for-Americans-cg-updated | qsdcfqsdfcxqfqs | "2024-07-02T17:22:32Z" | 0 | 0 | null | [
"en",
"region:us"
] | null | "2024-07-02T17:21:17Z" | ---
language:
- en
---
There were plenty of surprises in last week's presidential debate. For one, Americans who rely on the mainstream media for their news learned that they had been lied to for the past three years about President Biden's capability to do the job he was elected to do.
The realization that the media has been lying for years about Biden is a positive development, as, hopefully, thoughtful Americans might begin wondering what else the media has been lying about. For example, they will find out that the media has been lying to them for years about Russia and Ukraine and about the Middle East and elsewhere. They will find out that our hyper-interventionist foreign policy does not make us safer and more free, but the opposite.
Unfortunately for most Americans, foreign policy is something that happens "over there," with few direct effects back home. Dumping nearly $200 billion into the lost cause called "Ukraine" may at most seem like an annoyance to many Americans, but it's not like they are being snatched up by gangs of military recruiters and sent to the front line as is happening to Ukrainian men.
However, $200 billion is real money and the effect on our economy is also real. The bill will be paid by each American family indirectly through the inflation "tax." Each dollar created out of thin air and spent on the Ukraine debacle devalues the rest of the dollars in circulation.
The danger posed by our foreign policy seemed to escape both candidates, who each tried to convince us they were "tougher" than the other. Despite Donald Trump's sober and accurate warning that Joe Biden has taken us to the brink of World War III, his solution to the problem is doing more of the same. His stated foreign policy seems to be that were he in office the rest of the world would not dare do anything against his will.
He would have been so tough that Russian president Vladimir Putin would never have dared to invade Ukraine, he claimed. He would have been so tough that Hamas would never have dared attack Israel on October 7th. It's only Joe Biden's "weakness" that leads to these disastrous foreign policy outcomes.
But the world does not work that way. Decades of US sanctions placed on any country that fails to do what Washington demands have backfired and led to the emergence of a bloc of countries united in their resistance to American dictates. Being "tough" on less-powerful countries may work...until it doesn't. That's where we are today.
Neither candidate seems to realize that the world has changed.
I have always said that real strength in foreign policy comes from restraint. To prevent these bad outcomes everywhere, stop intervening everywhere. It is not "toughness" that would have prevented Russia from taking action against Ukraine. It is restraint. Not launching a coup in Ukraine in 2014 would have prevented the disastrous war in Ukraine. Just like not stirring up trouble in the South China Sea would prevent a war with China. Not continuing to occupy and intervene in the Middle East would prevent a major regional war which might include Iran and other big players in the region.
Restraint is the real toughness. Non-intervention is the only foreign policy that will keep us safe and free. We've tried it the other way and it does not work. Let's try something different..... |
kheopss/kheops_fr_en_epoch1_3bits_GPTQ | kheopss | "2024-07-02T17:24:32Z" | 0 | 0 | transformers | [
"transformers",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"3-bit",
"gptq",
"region:us"
] | text-generation | "2024-07-02T17:22:45Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
qsdcfqsdfcxqfqs/UN-group-demands-release-of-exPakistan-prime-minister-Imran-Khan-a1-updated | qsdcfqsdfcxqfqs | "2024-07-02T17:25:35Z" | 0 | 0 | null | [
"en",
"region:us"
] | null | "2024-07-02T17:24:22Z" | ---
language:
- en
---
The Geneva-based United Nations Working Group on Arbitrary Detention made this demand after examining Khan's case in which he was sentenced last year on charges of corruption.
Khan has been facing multiple prison sentences since 2022 when he was ousted through a vote of no-confidence in the parliament. There was no immediate comment from the government of Prime Minister Shehbaz Sharif, who replaced Khan after his ousting.
Khan has been held in prison since August 2023 when a court awarded him a three-year prison sentence after finding him guilty of hiding assets after selling state gifts. It led to a ban on Khan from taking part in politics and contesting the February 8 elections, which his party says were rigged.
The Election Commission of Pakistan, which oversaw the vote, has denied the vote-rigging allegations.
Despite his conviction in multiple cases, Khan remains the leading opposition figure.
Khan's Pakistan Tehreek-e-Insaf party, or PTI, which has a strong presence in the parliament, hailed the demand of the UN group, which said Khan's detention in the graft case "had no legal basis and appears to have been intended to disqualify him from running for office".
It said "Khan was detained for exercising his right to freedom of expression or opinion" and that he was also denied a "fair trial and due process rights".
The UN working group demanded Khan's immediate release, saying it was an "appropriate remedy".
The group further said Khan's conviction in the graft case was "part of a much larger campaign of repression targeting the PTI generally and Khan specifically".
It said: "In the lead up to Pakistan's February 2024 general elections, PTI candidates were arrested, tortured, and intimidated into leaving the party; PTI rallies were disrupted and blocked; and the party was deprived of its iconic cricket bat symbol, forcing its candidates to run as independents."
The UN group also said Khan himself was facing more than 150 politically motivated criminal cases, and just days before the election, he was convicted in three more cases and sentenced to an additional 10 years, 14 years, and seven years in prison, respectively.
"For Khan, who is 71 years old, the combined prison term of 34 years amounts to a life sentence," the group said. Khan's spokesman Zulfi Bukhari, welcomed the group's findings and demands for Khan's release.
Khan's party won the most seats in the February 8 vote but fell short of a majority to form a government..... |
ProElectro07/subbb750x1 | ProElectro07 | "2024-07-02T17:24:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T17:24:37Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
qsdcfqsdfcxqfqs/Governor-Yusuf-Queries-Refuse-Board-KAROTA-over-poor-service-eb-updated | qsdcfqsdfcxqfqs | "2024-07-02T17:25:54Z" | 0 | 0 | null | [
"en",
"region:us"
] | null | "2024-07-02T17:24:40Z" | ---
language:
- en
---
Expresses dissatisfaction over the decay of equipment at two public works agencies
Kano state Governor Abba Kabir Yusuf has expressed disappointment with performance of Refuse Management and Sanitation Board (REMASAB), and Kano Road Traffic Agency (KAROTA) in the discharge of their public responsibilities.
This was contained in a statement issued by Sanusi Bature Dawakin Tofa, the Spokesperson to the Governor.
Governor Yusuf also ordered the management of the two agencies to submit a comprehensive inventory of their equipment to ascertain the state of the facilities.
The Governor gave the orders after an unscheduled visit to the two government agencies, where he lamented inefficiency in the management of their facilities and personnel.
Yusuf, who was conspicuously worried by the inactivity of the agencies, regretted their low output despite huge investment in equipment procurement.
At REMASAB, the Governor found only seven of the 30 available waste evacuation trucks functional, and just three of the 15 payloaders working.
The Governor was also informed that only 10 workers of REMASAB are permanent and pensionable while the rest are casuals; he expressed concern over the situation of the casual staff, describing it as inhuman to keep such personnel on casual terms for 20 years.
While at the KAROTA head office, the Governor discovered a significant number of operational vehicles grounded.
Dissatisfied with the development, the Governor directed KAROTA management to submit a comprehensive inventory of their operational vehicles immediately.
Gov. Yusuf said government investment in the provision of operational vehicles to REMASAB was a demonstration of the priority attached to sanitation in the state, but the management failed to live up to expectations.
"I am not happy with what I saw at the two agencies. This calls for a total overhaul of the system; we cannot afford to fail in the discharge of the mandates of agencies like KAROTA and REMASAB," the Governor vowed during an on-the-spot visit.
The Managing Directors of REMASAB and KAROTA, Hon. Amadu Haruna Zago and Engr. Faisal Mahmoud, arrived at the scene and were directed to meet the Chief of Staff to the Governor for an emergency meeting on how to address the immediate challenges of the agencies..... |
cyan2k/promptvieh_text | cyan2k | "2024-07-02T17:24:54Z" | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"mistral",
"gguf",
"en",
"base_model:unsloth/phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T17:24:54Z" | ---
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** cyan2k
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ShakedAAA/Mixtral-8x7B-v0.1-Colleen_8k_06_10_replyOnly_5000_fixed-adapters_July | ShakedAAA | "2024-07-02T17:25:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T17:25:04Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ferrazzipietro/Meta-Llama-3-8B-Instruct_en.layer1_NoQuant_16_32_0.02_8 | ferrazzipietro | "2024-07-02T17:26:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T17:25:58Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jmg2016/Splade_PP_en_v2_onnx | jmg2016 | "2024-07-02T21:03:03Z" | 0 | 0 | null | [
"onnx",
"splade++",
"document-expansion",
"sparse representation",
"bag-of-words",
"passage-retrieval",
"knowledge-distillation",
"document encoder",
"en",
"dataset:ms_marco",
"base_model:prithivida/Splade_PP_en_v2",
"license:apache-2.0",
"region:us"
] | null | "2024-07-02T17:26:51Z" | ---
license: apache-2.0
language:
- en
datasets:
- ms_marco
tags:
- splade++
- document-expansion
- sparse representation
- bag-of-words
- passage-retrieval
- knowledge-distillation
- document encoder
pretty_name: >-
ONNX model for prithivida's Splade_PP_en_v2,
an Independent Implementation of SPLADE++ Model with some efficiency tweaks for
Industry setting.
base_model: prithivida/Splade_PP_en_v2
---
# ONNX model for Splade_PP_en_v2
See [https://huggingface.co/prithivida/Splade_PP_en_v2](https://huggingface.co/prithivida/Splade_PP_en_v2)
This repository contains a script for ONNX conversion and the resulting ONNX model, with an output format that is compatible with the [anserini](https://github.com/castorini/anserini) SparseEncoder implementations. Based on advice in this [GitHub issue](https://github.com/naver/splade/issues/47).
```
python splade_pp_en_v2_to_onnx.py splade_pp_en_v2.onnx
```
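For readers unfamiliar with how a SPLADE checkpoint's MLM logits become the sparse term weights that anserini indexes, here is a minimal sketch of the pooling step. It is illustrative only — it follows the standard SPLADE++ formulation (log-saturated ReLU followed by max-pooling over the sequence) rather than code taken from this repository, and the function name is my own:

```python
import numpy as np

def splade_pool(logits: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Turn MLM logits of shape (seq_len, vocab) into one sparse vocab-sized vector.

    SPLADE++ applies log(1 + ReLU(logit)) per token, zeroes out padding
    positions via the attention mask, then max-pools over the sequence.
    """
    weights = np.log1p(np.maximum(logits, 0.0))   # log-saturated ReLU
    weights = weights * attention_mask[:, None]   # drop padding tokens
    return weights.max(axis=0)                    # max-pool over the sequence

# The non-zero entries of the returned vector are the expanded
# "bag of words" terms a sparse index stores for the document.
```

The ONNX export simply needs to emit an output that this kind of post-processing (or anserini's equivalent) can consume.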
|
hasininawoda/output2 | hasininawoda | "2024-07-02T17:29:24Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2024-07-02T17:27:13Z" | ---
base_model: CompVis/stable-diffusion-v1-4
library_name: diffusers
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - hasininawoda/output2
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were fine-tuned on the None dataset. You can find some example images in the following.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
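Pending the author's official snippet, here is a minimal sketch of loading these LoRA weights with diffusers. It assumes a recent diffusers version (with `load_lora_weights`) and a CUDA GPU; the prompt is a placeholder, not taken from the training run:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model, then apply the LoRA weights from this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
pipe.load_lora_weights("hasininawoda/output2")
pipe.to("cuda")

# Placeholder prompt — replace with something matching the fine-tuning data.
image = pipe("a photo in the style of the fine-tuning dataset").images[0]
image.save("sample.png")
```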
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
qsdcfqsdfcxqfqs/Important-dates-in-the-2024-US-presidential-race-hh-updated | qsdcfqsdfcxqfqs | "2024-07-02T17:28:54Z" | 0 | 0 | null | [
"en",
"region:us"
] | null | "2024-07-02T17:27:41Z" | ---
language:
- en
---
By Costas Pitas
July 1 (Reuters) - President Joe Biden, a Democrat, and Republican former President Donald Trump will face each other in the 2024 presidential election in what is expected to be a divisive and closely fought contest. Here is a timeline of events related to the Nov. 5 election between now and Inauguration Day in January 2025.
2024 - July 11: Trump, the first sitting or former U.S. president to be convicted of a crime, will be sentenced in the Manhattan hush money case where he was found guilty of falsifying documents to cover up a payment to silence a porn star. He denies wrongdoing and plans to appeal.
- July 15-18: Republican National Convention in Milwaukee, Wisconsin, where the party formally chooses its candidate.
- In late July or beyond: Vice President Kamala Harris has agreed to debate Trump's yet-to-be-announced running mate on CBS on either of the proposed dates of July 23 or Aug. 13. Trump has accepted a vice presidential debate to be held by Fox News.
Fox wrote to both campaigns suggesting July 23, Aug. 13 or other dates following the party conventions. Trump's campaign has yet to say whether his running mate would debate Harris on CBS. Nor has the Biden campaign said if Harris would debate on Fox.
- By Aug. 7: The Democrats are set to hold a "virtual roll call" to nominate Biden. It will take place by Aug. 7, the original ballot certification deadline in Ohio, although that date has since been pushed back.
Biden had been due to be officially nominated at the Democratic National Convention later in August.
- Aug. 19-22: Democratic National Convention in Chicago.
- Sept. 10: The second debate between Biden and Trump will be hosted by ABC.
- Nov. 5: Election Day
- Later in November: It could take days for the election result to be known, especially if it is close and mail-in ballots are a factor.
2025
- Jan. 6: The vice president presides over the Electoral College vote count at a joint session of Congress, announces the results and declares who has been elected. Ahead of the count on Jan. 6, 2021, then-President Trump lambasted his vice president, Mike Pence, for refusing to try to prevent Congress from certifying Biden's win. On that day, the U.S. Capitol was attacked by rioters, and some people chanted, "hang Mike Pence" as they tried to stop the count. Both chambers of Congress later resumed their work and certified Biden's win.
Congress has since passed the Electoral Count Reform and Presidential Transition Improvement Act of 2022, which requires approval of one-fifth of the House and Senate to consider a challenge to a state's results - a much higher bar than existed before, when any single lawmaker from each chamber could trigger a challenge.
- Jan. 20: The inauguration of the election winner and their vice president takes place. At this ceremony, the victor and vice president are officially sworn into office. (Reporting by Costas Pitas, Tim Reid and Susan Heavey; Editing by Howard Goller and Stephen Coates).... |
qsdcfqsdfcxqfqs/Substandard-prostate-cancer-procedures-cutting-Kiwi-mens-lives-short-experts-say-df-updated | qsdcfqsdfcxqfqs | "2024-07-02T17:29:02Z" | 0 | 0 | null | [
"en",
"region:us"
] | null | "2024-07-02T17:27:49Z" | ---
language:
- en
---
"New Zealand men are not as well served as they could be in terms of the diagnostic pathway for prostate cancer, especially when compared to other countries," Dickens said.
The process to diagnose prostate cancer usually starts with a prostate-specific antigen (PSA) blood test. Patients with a persistent high PSA level are referred for an MRI scan. Should the MRI scan reveal a cancer, a biopsy is conducted to confirm.
Biopsies are a critical step in the early detection of prostate cancer. An accurate biopsy can be the difference between a long life and a life cut short.
Transrectal biopsies are facing global scrutiny for the complications they can cause, such as rectal bleeding and infection. Despite this, they're the standard practice in New Zealand.
International guidelines recommend transperineal biopsies because they "almost entirely eliminate the risk of these complications".
Dickens said transrectal techniques can be far less accurate because they cannot access the entire prostate.
"The biopsy needle can miss the cancer because transrectally means that they haven't been able to reach the place that the cancer was actually hiding out," he said.
"A man may be told that he doesn't have prostate cancer, when, in fact, he does."
According to the Prostate Cancer Outcomes Registry of Australia and New Zealand Annual Report 2023, only 29% of biopsies performed in New Zealand during 2021 were transperineal, compared to more than 80% of biopsies in Australia.
Dickens said some experts hold strong opinions against the use of transrectal biopsies.
"A senior figure from the European Association of Urology commented on social media late last year that transrectal ultrasound biopsy in Europe is almost considered medical malpractice," he said.
Dr Simon van Rij is an Auckland-based urologist pushing for New Zealand to shift towards transperineal biopsies. He said with increasing pressure on hospitals, the move would free up operating theatres allowing time for more critical surgeries.
"Under local anaesthetic, a biopsy can be done in a clinic, outside of an operating theatre setting so it could also make biopsies more accessible around the country," he said.
Both van Rij and Dickens believe the main barrier to standardising transperineal biopsies is funding.
"The reason that transrectal biopsy is still used in New Zealand is mostly an issue of cost, particularly in public practice," Dickens said.
They are calling on the Government to "improve the diagnostic pathways for men accessing the public health system, with transperineal biopsy more widely available".
"If we can find cancer early, it gives our patients the space and confidence to take their time to choose the treatment that's right for them," van Rij said.
"We're in a very stretched medical system which is under-resourced and under-financed, and as a result, unfortunately, men in New Zealand pay the price by having a method of biopsy which is not standard of care in other countries.".... |
sims2k/Saul-Instruct-v1-gdpr-finetuned-v1.1-GGUF | sims2k | "2024-07-02T18:06:19Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T17:28:26Z" | Entry not found |
maxxi146/llama-3-8b-Instruct-bnb-4bit-personalv2 | maxxi146 | "2024-07-02T17:40:14Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T17:28:40Z" | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** maxxi146
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/Alsebay_-_Narisumashi-11B-v1.5-gguf | RichardErkhov | "2024-07-03T00:39:02Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T17:28:44Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Narisumashi-11B-v1.5 - GGUF
- Model creator: https://huggingface.co/Alsebay/
- Original model: https://huggingface.co/Alsebay/Narisumashi-11B-v1.5/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Narisumashi-11B-v1.5.Q2_K.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narisumashi-11B-v1.5-gguf/blob/main/Narisumashi-11B-v1.5.Q2_K.gguf) | Q2_K | 3.73GB |
| [Narisumashi-11B-v1.5.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narisumashi-11B-v1.5-gguf/blob/main/Narisumashi-11B-v1.5.IQ3_XS.gguf) | IQ3_XS | 4.14GB |
| [Narisumashi-11B-v1.5.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narisumashi-11B-v1.5-gguf/blob/main/Narisumashi-11B-v1.5.IQ3_S.gguf) | IQ3_S | 4.37GB |
| [Narisumashi-11B-v1.5.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narisumashi-11B-v1.5-gguf/blob/main/Narisumashi-11B-v1.5.Q3_K_S.gguf) | Q3_K_S | 4.34GB |
| [Narisumashi-11B-v1.5.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narisumashi-11B-v1.5-gguf/blob/main/Narisumashi-11B-v1.5.IQ3_M.gguf) | IQ3_M | 4.51GB |
| [Narisumashi-11B-v1.5.Q3_K.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narisumashi-11B-v1.5-gguf/blob/main/Narisumashi-11B-v1.5.Q3_K.gguf) | Q3_K | 4.84GB |
| [Narisumashi-11B-v1.5.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narisumashi-11B-v1.5-gguf/blob/main/Narisumashi-11B-v1.5.Q3_K_M.gguf) | Q3_K_M | 4.84GB |
| [Narisumashi-11B-v1.5.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narisumashi-11B-v1.5-gguf/blob/main/Narisumashi-11B-v1.5.Q3_K_L.gguf) | Q3_K_L | 5.26GB |
| [Narisumashi-11B-v1.5.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narisumashi-11B-v1.5-gguf/blob/main/Narisumashi-11B-v1.5.IQ4_XS.gguf) | IQ4_XS | 5.43GB |
| [Narisumashi-11B-v1.5.Q4_0.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narisumashi-11B-v1.5-gguf/blob/main/Narisumashi-11B-v1.5.Q4_0.gguf) | Q4_0 | 5.66GB |
| [Narisumashi-11B-v1.5.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narisumashi-11B-v1.5-gguf/blob/main/Narisumashi-11B-v1.5.IQ4_NL.gguf) | IQ4_NL | 5.72GB |
| [Narisumashi-11B-v1.5.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narisumashi-11B-v1.5-gguf/blob/main/Narisumashi-11B-v1.5.Q4_K_S.gguf) | Q4_K_S | 5.7GB |
| [Narisumashi-11B-v1.5.Q4_K.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narisumashi-11B-v1.5-gguf/blob/main/Narisumashi-11B-v1.5.Q4_K.gguf) | Q4_K | 6.02GB |
| [Narisumashi-11B-v1.5.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narisumashi-11B-v1.5-gguf/blob/main/Narisumashi-11B-v1.5.Q4_K_M.gguf) | Q4_K_M | 6.02GB |
| [Narisumashi-11B-v1.5.Q4_1.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narisumashi-11B-v1.5-gguf/blob/main/Narisumashi-11B-v1.5.Q4_1.gguf) | Q4_1 | 6.27GB |
| [Narisumashi-11B-v1.5.Q5_0.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narisumashi-11B-v1.5-gguf/blob/main/Narisumashi-11B-v1.5.Q5_0.gguf) | Q5_0 | 6.89GB |
| [Narisumashi-11B-v1.5.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narisumashi-11B-v1.5-gguf/blob/main/Narisumashi-11B-v1.5.Q5_K_S.gguf) | Q5_K_S | 6.89GB |
| [Narisumashi-11B-v1.5.Q5_K.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narisumashi-11B-v1.5-gguf/blob/main/Narisumashi-11B-v1.5.Q5_K.gguf) | Q5_K | 7.08GB |
| [Narisumashi-11B-v1.5.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narisumashi-11B-v1.5-gguf/blob/main/Narisumashi-11B-v1.5.Q5_K_M.gguf) | Q5_K_M | 7.08GB |
| [Narisumashi-11B-v1.5.Q5_1.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narisumashi-11B-v1.5-gguf/blob/main/Narisumashi-11B-v1.5.Q5_1.gguf) | Q5_1 | 7.51GB |
| [Narisumashi-11B-v1.5.Q6_K.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narisumashi-11B-v1.5-gguf/blob/main/Narisumashi-11B-v1.5.Q6_K.gguf) | Q6_K | 8.2GB |
| [Narisumashi-11B-v1.5.Q8_0.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narisumashi-11B-v1.5-gguf/blob/main/Narisumashi-11B-v1.5.Q8_0.gguf) | Q8_0 | 10.62GB |
Original model description:
---
language:
- en
license: cc-by-nc-4.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- Roleplay
- roleplay
base_model: Sao10K/Fimbulvetr-11B-v2
---
# About this model
TSF content
Beta for V2 of https://huggingface.co/Alsebay/Narumashi-11B-v0.9 (the name has a typo, but I'm too lazy to fix it). This version uses only rank 32 and LoRA rank 32, which caused the model not to learn all the dataset information well; it only knows the basics. Anyway, it works well if you have a Chinese or Japanese prompt to trigger TSF content. It may not be smart; I haven't tested it yet.
- **Finetuned from model :** Sao10K/Fimbulvetr-11B-v2 . Thank Sao10K a lot :)
## I have tested and found that Sao10K/Fimbulvetr-11B-v2 can be unlocked to an 8K context length (logic may degrade a bit?), so I left the config alone to reduce RAM and VRAM usage. That means you can use an 8K context length even though this model reports only 4K.
## GGUF version? [Here it is](https://huggingface.co/Alsebay/Narisumashi-GGUF).
## Dataset
The dataset consists entirely of Chinese novels.
```
Dataset(all are novels):
60% skinsuit
25% possession
5% transform(shapeshift)
10% other
```
# Thanks to Unsloth for the good finetuning tool. This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ShakedAAA/mixstral_5000_2ndJuly | ShakedAAA | "2024-07-02T18:13:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T17:28:47Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DerenXd/Cute_Girls-V.2.0 | DerenXd | "2024-07-02T17:38:26Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"region:us"
] | text-to-image | "2024-07-02T17:29:42Z" | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/4280PDKe7M1JJmg24K4kS.png
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: null
---
# Cute_Girls V.2.0
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/DerenXd/Cute_Girls-V.2.0/tree/main) them in the Files & versions tab.
|
Koleshjr/flan-t5-base-finetuned-translation-v2 | Koleshjr | "2024-07-02T19:55:26Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-07-02T17:30:26Z" | Entry not found |
yizhujiao/llama3-8b-sft-medical | yizhujiao | "2024-07-03T01:29:31Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2024-07-02T17:30:48Z" | ---
license: llama3
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model-index:
- name: llama3-8b-sft-medical
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-8b-sft-medical
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 6
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- total_eval_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
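The effective batch sizes listed above follow directly from the per-device batch size, the device count, and gradient accumulation. A quick sketch (not part of the original card) of that arithmetic:

```python
# Effective batch sizes implied by the hyperparameters above.
per_device_train_batch_size = 1
per_device_eval_batch_size = 8
num_devices = 6
gradient_accumulation_steps = 2

# Training accumulates gradients over several micro-batches per step.
total_train_batch_size = (
    per_device_train_batch_size * num_devices * gradient_accumulation_steps
)
# Evaluation does not use gradient accumulation.
total_eval_batch_size = per_device_eval_batch_size * num_devices

print(total_train_batch_size)  # 12
print(total_eval_batch_size)   # 48
```

This matches the `total_train_batch_size: 12` and `total_eval_batch_size: 48` values reported by the Trainer.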
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.1
- Pytorch 2.5.0.dev20240624+cu118
- Datasets 2.20.0
- Tokenizers 0.19.1 |
CoprolaliacPress/Lewd-Sydney-20B-Q6_K-GGUF | CoprolaliacPress | "2024-07-02T17:42:49Z" | 0 | 0 | null | [
"gguf",
"not-for-all-audiences",
"nsfw",
"llama-cpp",
"gguf-my-repo",
"base_model:Undi95/Lewd-Sydney-20B",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-07-02T17:31:16Z" | Invalid username or password. |
TheFinAI/finllm-8B-v0.3 | TheFinAI | "2024-07-02T17:39:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T17:31:17Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
qsdcfqsdfcxqfqs/Marriage-equality-brings-joy-3b-updated | qsdcfqsdfcxqfqs | "2024-07-02T17:32:37Z" | 0 | 0 | null | [
"en",
"region:us"
] | null | "2024-07-02T17:31:24Z" | ---
language:
- en
---
[![Build Status](https://www.gzeromedia.com/media-library/image.png?id=52524411&width=1245&height=700&coordinates=0%2C214%2C0%2C215)]()
Same-sex couples have been expressing joy since the passage of Thailand's Marriage Equality Bill in the Senate two weeks ago.
They say they have been waiting patiently for this moment for a long time, and once the bill becomes law, they will apply for a marriage licence.
The law is meaningful to them because it helps guarantee a happy and healthy relationship as long as they are committed to the values that keep them together.
On June 18, the Senate endorsed the Marriage Equality Bill at its final reading.
The bill allows same-sex couples to register their marriage, with their relationship protected under the same law that applies to heterosexual couples.
The bill is expected to become law and take effect by the end of this year, when Thailand is set to become one of 37 countries in the world and the first Southeast Asian nation to legalise same-sex marriage.
A long fight
The initiative to allow same-sex marriage was first proposed in 2001 by then-interior minister Purachai Piamsomboon. However, it was shot down by Thaksin Shinawatra, prime minister at the time.
The idea was brought back in 2019 during Prime Minister Prayut Chan-o-cha's administration. Legislation on marriage equality was approved but later dropped as parliament was dissolved.
The Lower House finally passed the Marriage Equality Bill on March 27, and 84 days later, the Upper House announced its final approval of the bill on a 130:4 vote.
The law will take effect 120 days after it is published in the Royal Gazette following royal endorsement.
The Thai LGBTQ+ community considers it a victory after having fought for their rights for more than two decades.
The Bangkok Post recently sat down with some community members to seek their thoughts on the much-anticipated law.
Room to improve
Prinn Vadhanavira, 44, and Chakkrit Vadhanavira, 49, are among those looking to tie the knot, despite having been together in a relationship for 22 years.
Mr Prinn said that because there was no legal recognition for same-sex couples in the past, the couple faced many difficulties, especially when buying property, getting a loan together or listing each other as insurance beneficiaries.
They eventually solved these problems by having Mr Prinn's parents adopt Mr Chakkrit as a son so that they could have legal benefits as legal siblings.
The couple said they would register their marriage as soon as the law was in effect, adding they had already consulted legal experts and studied the procedures for changing their legal status from adoptive siblings to spouses.
The law is also expected to relieve crucial concerns for Sirorat Kanjanasumranwong, 38, and her partner Palita Areeras, 30. The couple have been in their relationship for three years.
"The fact that we couldn't sign medical approval for each other because we were not legally related always bothered us. Now we are relieved that the issue will soon be fixed," Ms Sirorat said.
While the law allows same-sex couples to register their marriage, some sections need to be improved, especially regarding gender-specific status, which may lead to the misgendering of some LGBTQ+ people, especially trans people.
Nachale Boonyapisomparn, vice president of the Foundation of Transgender Alliance for Human Rights, said that she, as a trans woman, would like to be a mother while her partner, a transman, would like to be a father if they decided to register their relationship.
Technically, they can register their marriage as a heterosexual couple. However, they have determined their gender identity is not the same as their biological sex.
However, the law still uses "father" and "mother".... |
CassioBN/roberta-base_LeNER-Br | CassioBN | "2024-07-02T18:25:55Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:lener_br",
"base_model:FacebookAI/roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-07-02T17:31:48Z" | ---
license: mit
base_model: FacebookAI/roberta-base
tags:
- generated_from_trainer
datasets:
- lener_br
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base_LeNER-Br
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: lener_br
type: lener_br
config: lener_br
split: validation
args: lener_br
metrics:
- name: Precision
type: precision
value: 0.765
- name: Recall
type: recall
value: 0.8415841584158416
- name: F1
type: f1
value: 0.8014667365112624
- name: Accuracy
type: accuracy
value: 0.9711736213348917
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base_LeNER-Br
This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on the lener_br dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Precision: 0.765
- Recall: 0.8416
- F1: 0.8015
- Accuracy: 0.9712
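As a quick sanity check (not part of the original card), the reported F1 is the harmonic mean of the precision and recall above:

```python
# Verify the reported F1 from the card's precision and recall.
precision = 0.765
recall = 0.8415841584158416

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.8015
```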
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.293 | 1.0 | 979 | nan | 0.5758 | 0.7525 | 0.6524 | 0.9542 |
| 0.0596 | 2.0 | 1958 | nan | 0.6546 | 0.7987 | 0.7195 | 0.9534 |
| 0.0376 | 3.0 | 2937 | nan | 0.7366 | 0.8339 | 0.7822 | 0.9672 |
| 0.0256 | 4.0 | 3916 | nan | 0.6975 | 0.8042 | 0.7471 | 0.9627 |
| 0.0192 | 5.0 | 4895 | nan | 0.7173 | 0.8317 | 0.7702 | 0.9646 |
| 0.013 | 6.0 | 5874 | nan | 0.7271 | 0.8498 | 0.7837 | 0.9605 |
| 0.013 | 7.0 | 6853 | nan | 0.7426 | 0.8537 | 0.7943 | 0.9680 |
| 0.0064 | 8.0 | 7832 | nan | 0.7493 | 0.8399 | 0.7920 | 0.9702 |
| 0.0052 | 9.0 | 8811 | nan | 0.7611 | 0.8273 | 0.7928 | 0.9725 |
| 0.0044 | 10.0 | 9790 | nan | 0.765 | 0.8416 | 0.8015 | 0.9712 |
### Testing results
- Loss: 0.0816
- Precision: 0.8343
- Recall: 0.8840
- F1: 0.8584
- Accuracy: 0.9864
- Runtime: 20.43 s (68.03 samples/s, 8.52 steps/s)
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
qsdcfqsdfcxqfqs/Sesame-a-vital-oilseed-crop-of-Punjab-42-updated | qsdcfqsdfcxqfqs | "2024-07-02T17:33:13Z" | 0 | 0 | null | [
"en",
"region:us"
] | null | "2024-07-02T17:31:57Z" | ---
language:
- en
---
[![Build Status](https://cdn.abcotvs.com/dip/images/15017033_070124-wtvd-dps-superintendent-cindy-5p-vid.jpg?w=1600)]()
Sesame is an important short-duration oilseed crop cultivated in Punjab for centuries. Its seeds contain more than 50% edible oil and about 22% good quality protein.
According to health experts, its properties are very similar to olive oil, which is why it is the best nutrition for humans and livestock.
Dr Hafiz Salim Akhtar Anjum says that sesame oil is used in pharmaceuticals, and dry cleaners, as well as in high-end soaps, perfumes, and fast food bakery products. Due to these characteristics, domestic and international demand for sesame seeds is increasing. The cost of cultivation of sesame seeds is low and the income is high.
Sajid Hussain, a local leader of the All Pakistan Kisan itehad, said sesame cultivation time in Pakistan is from June 1 to July 15. Light sandy and medium loam soil with good water absorption capacity is most suitable for sesame cultivation. The fertile and lowland is not suitable for this cultivation. The crop is cultivated over about 130,000 acres in the country.
According to farmer Haji Muhammad Aslam, the land is first prepared by ploughing two or three times until it is well worked. The land should be level, which is very important to avoid the ill effects of water shortage.
Sesame cultivation is done by tractor drill, usually in rows. The distance between rows is 45cm and the seed is sown to a depth of 2cm.
To speed up the growth of the crop, clean and healthy seeds are sown at 2kg per acre in suitable land. Along with this, one sack of DAP and half a sack of urea fertilizer are used at the time of sowing. A week after sowing, the germination of the seed is complete.
According to farmer Razak Hussain, the sesame crop generally requires irrigation three to four times at 15-to-20-day intervals. The sesame harvest period is 100 to 120 days, after which harvesting begins. Sesame has an average yield of 15 maunds per acre. According to agronomist Muhammad Arshad Jutt, several diseases attack the sesame crop and reduce production. Healthy, treated seed should be used, along with timely pesticide sprays, to protect the crop from these diseases.
According to Dr Anjum, sesame seeds are called the meat of the poor. People who don't eat meat should use sesame seeds. The seeds are a rich source of vitamins. All the essential building blocks of the human body are present. It is very useful for heart diseases, skin diseases, high blood pressure, stroke, lung and stomach diseases. The use of sesame seeds is useful in improving the complexion as well as lengthening and darkening the hair. For children who wet the bed while sleeping, sesame seeds are very beneficial as they strengthen the bladder.
Sesame is widely used in sweets. Sesame laddoos, reoris and Gichak are popular souvenirs, said a baker, Muhammad Shehzad.... |
MrGonk/Gonk_3 | MrGonk | "2024-07-02T17:34:09Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T17:31:58Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
akhilpavuluri/GenAI1 | akhilpavuluri | "2024-07-02T17:33:01Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T17:33:01Z" | Entry not found |
mradermacher/Echidna-13b-v0.3-GGUF | mradermacher | "2024-07-02T18:21:43Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:NeverSleep/Echidna-13b-v0.3",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T17:33:40Z" | ---
base_model: NeverSleep/Echidna-13b-v0.3
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NeverSleep/Echidna-13b-v0.3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.3-GGUF/resolve/main/Echidna-13b-v0.3.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.3-GGUF/resolve/main/Echidna-13b-v0.3.IQ3_XS.gguf) | IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.3-GGUF/resolve/main/Echidna-13b-v0.3.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.3-GGUF/resolve/main/Echidna-13b-v0.3.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.3-GGUF/resolve/main/Echidna-13b-v0.3.IQ3_M.gguf) | IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.3-GGUF/resolve/main/Echidna-13b-v0.3.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.3-GGUF/resolve/main/Echidna-13b-v0.3.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.3-GGUF/resolve/main/Echidna-13b-v0.3.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.3-GGUF/resolve/main/Echidna-13b-v0.3.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.3-GGUF/resolve/main/Echidna-13b-v0.3.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.3-GGUF/resolve/main/Echidna-13b-v0.3.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.3-GGUF/resolve/main/Echidna-13b-v0.3.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.3-GGUF/resolve/main/Echidna-13b-v0.3.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Echidna-13b-v0.3-GGUF/resolve/main/Echidna-13b-v0.3.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Niggendar/moxiePony_v13 | Niggendar | "2024-07-02T17:40:11Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-07-02T17:35:04Z" | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SimplCup/Gideon | SimplCup | "2024-07-02T17:50:00Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-07-02T17:35:14Z" | ---
license: openrail
---
|
taehyunzzz/switch-base-32-samsum-top-4-choose-1-deconly | taehyunzzz | "2024-07-02T21:03:37Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"switch_transformers",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:google/switch-base-32",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-07-02T17:35:28Z" | ---
license: apache-2.0
base_model: google/switch-base-32
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: switch-base-32-samsum-top-4-choose-1-deconly
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: validation
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 48.1483
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# switch-base-32-samsum-top-4-choose-1-deconly
This model is a fine-tuned version of [google/switch-base-32](https://huggingface.co/google/switch-base-32) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5476
- Rouge1: 48.1483
- Rouge2: 24.7832
- Rougel: 40.7375
- Rougelsum: 44.5607
- Gen Len: 16.791
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
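With `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1`, the learning rate ramps from 0 up to 5e-05 over the first 10% of optimizer steps and then decays linearly back toward 0. A pure-Python sketch of that shape (mirroring transformers' `get_linear_schedule_with_warmup`; the step counts here are illustrative):

```python
def linear_schedule_with_warmup(step, total_steps, warmup_ratio=0.1, base_lr=5e-5):
    """Linear warmup for warmup_ratio of training, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        scale = step / max(1, warmup_steps)  # ramp up
    else:
        # decay from 1.0 at end of warmup down to 0.0 at total_steps
        scale = max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
    return base_lr * scale
```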
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.4301 | 0.2172 | 200 | 2.5650 | 36.2949 | 15.7872 | 31.4771 | 33.5875 | 14.291 |
| 2.463 | 0.4343 | 400 | 1.9643 | 42.2665 | 19.5129 | 35.4042 | 39.1887 | 16.2946 |
| 2.2735 | 0.6515 | 600 | 1.8080 | 44.1285 | 20.7698 | 37.2279 | 40.8229 | 16.39 |
| 2.0163 | 0.8686 | 800 | 1.7496 | 43.767 | 21.2272 | 37.4578 | 40.5675 | 15.2604 |
| 1.9836 | 1.0858 | 1000 | 1.6872 | 45.4925 | 22.0234 | 38.4465 | 41.9728 | 16.1443 |
| 1.9816 | 1.3029 | 1200 | 1.6644 | 46.6391 | 23.2045 | 39.4297 | 43.2476 | 16.3778 |
| 2.0067 | 1.5201 | 1400 | 1.6287 | 46.692 | 22.8868 | 39.4165 | 43.1099 | 16.5391 |
| 1.8679 | 1.7372 | 1600 | 1.6210 | 46.9779 | 23.5089 | 40.0585 | 43.4129 | 16.0758 |
| 1.8658 | 1.9544 | 1800 | 1.6083 | 47.3286 | 24.0168 | 40.148 | 43.7942 | 16.78 |
| 1.7036 | 2.1716 | 2000 | 1.5961 | 47.3911 | 23.798 | 39.9685 | 43.8634 | 16.5306 |
| 1.7296 | 2.3887 | 2200 | 1.5955 | 47.9152 | 24.4805 | 40.8632 | 44.5938 | 16.2286 |
| 1.7464 | 2.6059 | 2400 | 1.5817 | 47.2239 | 23.886 | 40.3105 | 43.9387 | 16.3007 |
| 1.7085 | 2.8230 | 2600 | 1.5667 | 47.4369 | 24.0868 | 40.288 | 44.0761 | 16.3337 |
| 1.5667 | 3.0402 | 2800 | 1.5834 | 47.6073 | 24.4565 | 40.578 | 44.093 | 16.588 |
| 1.6104 | 3.2573 | 3000 | 1.5680 | 47.937 | 24.5777 | 40.7943 | 44.3661 | 16.5489 |
| 1.6556 | 3.4745 | 3200 | 1.5446 | 47.8843 | 24.6985 | 40.7419 | 44.3735 | 16.7127 |
| 1.6113 | 3.6916 | 3400 | 1.5500 | 47.6546 | 24.2782 | 40.1105 | 44.1072 | 17.0147 |
| 1.5974 | 3.9088 | 3600 | 1.5513 | 47.7263 | 24.592 | 40.7256 | 44.3474 | 16.5892 |
| 1.4848 | 4.1260 | 3800 | 1.5458 | 47.9634 | 24.7055 | 40.661 | 44.3527 | 16.7347 |
| 1.5184 | 4.3431 | 4000 | 1.5441 | 47.7037 | 24.5408 | 40.2843 | 44.2096 | 16.7237 |
| 1.5397 | 4.5603 | 4200 | 1.5417 | 48.3854 | 25.1618 | 40.7691 | 44.814 | 16.7702 |
| 1.6644 | 4.7774 | 4400 | 1.5459 | 48.2593 | 25.1185 | 40.8583 | 44.6804 | 16.835 |
| 1.5555 | 4.9946 | 4600 | 1.5476 | 48.1483 | 24.7832 | 40.7375 | 44.5607 | 16.791 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.1+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
qsdcfqsdfcxqfqs/Ron-Paul-The-presidential-debate-should-be-a-wakeup-call-for-Americans-b5-updated | qsdcfqsdfcxqfqs | "2024-07-02T17:36:56Z" | 0 | 0 | null | [
"en",
"region:us"
] | null | "2024-07-02T17:35:41Z" | ---
language:
- en
---
There were plenty of surprises in last week's presidential debate. For one, Americans who rely on the mainstream media for their news learned that they had been lied to for the past three years about President Biden's capability to do the job he was elected to do.
The realization that the media has been lying for years about Biden is a positive development, as, hopefully, thoughtful Americans might begin wondering what else the media has been lying about. For example, they will find out that the media has been lying to them for years about Russia and Ukraine and about the Middle East and elsewhere. They will find out that our hyper-interventionist foreign policy does not make us safer and more free, but the opposite.
Unfortunately for most Americans, foreign policy is something that happens "over there," with few direct effects back home. Dumping nearly $200 billion into the lost cause called "Ukraine" may at most seem like an annoyance to many Americans, but it's not like they are being snatched up by gangs of military recruiters and sent to the front line as is happening to Ukrainian men.
However, $200 billion is real money and the effect on our economy is also real. The bill will be paid by each American family indirectly through the inflation "tax." Each dollar created out of thin air and spent on the Ukraine debacle devalues the rest of the dollars in circulation.
The danger posed by our foreign policy seemed to escape both candidates, who each tried to convince us they were "tougher" than the other. Despite Donald Trump's sober and accurate warning that Joe Biden has taken us to the brink of World War III, his solution to the problem is doing more of the same. His stated foreign policy seems to be that were he in office the rest of the world would not dare do anything against his will.
He would have been so tough that Russian president Vladimir Putin would never have dared to invade Ukraine, he claimed. He would have been so tough that Hamas would never have dared attack Israel on October 7th. It's only Joe Biden's "weakness" that leads to these disastrous foreign policy outcomes.
But the world does not work that way. Decades of US sanctions placed on any country that fails to do what Washington demands have backfired and led to the emergence of a block of countries united in their resistance to American dictates. Being "tough" on less-powerful countries may work...until it doesn't. That's where we are today.
Neither candidate seems to realize that the world has changed.
I have always said that real strength in foreign policy comes from restraint. To prevent these bad outcomes everywhere, stop intervening everywhere. It is not "toughness" that would have prevented Russia from taking action against Ukraine. It is restraint. Not launching a coup in Ukraine in 2014 would have prevented the disastrous war in Ukraine. Just like not stirring up trouble in the South China Sea would prevent a war with China. Not continuing to occupy and intervene in the Middle East would prevent a major regional war which might include Iran and other big players in the region.
Restraint is the real toughness. Non-intervention is the only foreign policy that will keep us safe and free. We've tried it the other way and it does not work. Let's try something different..... |
qsdcfqsdfcxqfqs/Srinagar-Records-Seasons-Hottest-Day-Rains-Likely-3e-updated | qsdcfqsdfcxqfqs | "2024-07-02T17:38:01Z" | 0 | 0 | null | [
"en",
"region:us"
] | null | "2024-07-02T17:36:47Z" | ---
language:
- en
---
Srinagar- The arrival of the new month has commenced with the heat wave in three stations of Kashmir including Srinagar, the summer capital of Jammu and Kashmir where the hottest day of the season was recorded on Monday at 34.3 degree Celsius.
According to the details available, Srinagar, Kokernag and Qazigund, the gateway of Kashmir, have recorded the heat wave today.
Pertinently, Srinagar and Qazigund were hotter than Kolkata today where the mercury settled at 31 degree Celsius.
As per the details, Qazigund recorded a maximum temperature of 32.4 degree Celsius while Kokernag recorded a maximum temperature of 31.9 degree Celsius.
The temperature in Kupwara, however, settled close to the heat wave temperature as 31.9 degree Celsius was recorded today. Pahalgam, a famous tourist destination and Gulmarg, a famous ski-resort recorded 27.6 degree Celsius and 24.4 degree Celsius respectively.
Meanwhile, all the Jammu stations recorded above normal temperature. The details reveal that Jammu, a winter capital of the Union Territory, recorded 36.2 degree Celsius while Banihal recorded 31.0 degree Celsius.
Batote, Katra and Bhaderwah recorded a maximum temperature of 28.8 degree Celsius, 34.0 degree Celsius and 30.8 degree Celsius respectively.
Moreover, the Director of the Meteorological Department (MeT), Dr Mukhtar Ahmad, has predicted the possibility of rains and thundershowers at many places.
There is a possibility of a spell of rain and thundershower at many places of Jammu division towards late night and early morning, and a spell of rain and thundershower at a few places of Kashmir division till July 03.
He, however, stated that from July 4 to 5, there is a possibility of intermittent light to moderate rainfall at most places of J&K, with heavy showers and rainfall at a few places.
From July 6 to 7, intermittent light to moderate rainfall is expected at most places of J&K, with heavy showers and rainfall at a few places.
Besides, the weatherman has issued an advisory saying that flash floods, landslides, mudslides and shooting stones are expected at a few vulnerable places.
A few low-lying areas may experience temporary waterlogging, the advisory reads, adding that a few places may experience moderate thunderstorm and lightning.
J&K To Witness Intermittent Rains: Lotus
After a prolonged heatwave, Jammu Kashmir is expected to have 'moderate to heavy' rainfall from July 5 onwards. Weather experts however ruled out the possibility of floods but advised people especially those living in hilly areas of Chenab Valley to be vigilant.
Talking about the weather scenario in Jammu and Kashmir, Weather Expert Sonam Lotus told a Srinagar-based news agency that there is the possibility of rainfall from July 5 onwards. He said the rainfall may be intermittent in nature but due to heatwaves, glaciers have melted and water levels in water bodies are expected to rise.
"There are no chances of any floods but there may be flash floods, especially in Rajouri, Doda, Poonch, Kupwara and other hilly districts," Sonam Lotus said and advised the nomadic community to be alert.
Sonam Lotus also said that due to monsoon rains, the intensity of thunderstorms and cloud bursts may increase in Jammu and Kashmir, especially in hilly areas.
"In July-August, the possibility of landslides and mudslides always increases. People should not panic but there is a need to be cautious. In Srinagar or plains, there is no possibility of any floods as is being claimed. There will be no continuous rainfall. The water level may increase," he said.
Sonam Lotus said that different places en route to Amarnath Yatra may witness morning or evening showers on a daily or alternate basis..... |
WilAI/llama-2-7b-miniguanaco | WilAI | "2024-07-02T17:44:27Z" | 0 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T17:37:15Z" | Entry not found |
KYAGABA/wav2vec2-large-xls-r-300m-luo-googlefluers-5hr-v1 | KYAGABA | "2024-07-02T19:21:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:fleurs",
"base_model:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-07-02T17:37:47Z" | ---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- fleurs
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-luo-googlefluers-5hr-v1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: fleurs
type: fleurs
config: luo_ke
split: test
args: luo_ke
metrics:
- name: Wer
type: wer
value: 0.5508333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/asr-africa-research-team/ASR%20Africa/runs/3yjshy20)
# wav2vec2-large-xls-r-300m-luo-googlefluers-5hr-v1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7669
- Wer: 0.5508
- Cer: 0.1450
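The WER and CER figures above are edit-distance rates at the word and character level respectively. In practice these are computed with libraries such as `evaluate` or `jiwer`; the following pure-Python sketch is only to illustrate what the numbers mean:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences, O(len(hyp)) memory."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            # prev holds dp[i-1][j-1]; dp[j] holds dp[i-1][j]; dp[j-1] holds dp[i][j-1]
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (r != h))
    return dp[len(hyp)]

def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / number of reference words."""
    ref = reference.split()
    return edit_distance(ref, hypothesis.split()) / max(1, len(ref))

def cer(reference, hypothesis):
    """Character error rate: character-level edit distance / reference length."""
    return edit_distance(reference, hypothesis) / max(1, len(reference))
```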
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|:------:|
| 14.2071 | 2.6667 | 100 | 5.4183 | 1.0 | 1.0 |
| 4.5199 | 5.3333 | 200 | 3.5672 | 1.0 | 1.0 |
| 3.2401 | 8.0 | 300 | 2.9414 | 1.0 | 1.0 |
| 2.912 | 10.6667 | 400 | 2.8496 | 1.0 | 1.0 |
| 2.293 | 13.3333 | 500 | 1.0939 | 0.8385 | 0.2482 |
| 0.7468 | 16.0 | 600 | 0.6503 | 0.601 | 0.1549 |
| 0.4431 | 18.6667 | 700 | 0.6416 | 0.5808 | 0.1534 |
| 0.2886 | 21.3333 | 800 | 0.6753 | 0.5793 | 0.1535 |
| 0.2085 | 24.0 | 900 | 0.6925 | 0.562 | 0.1467 |
| 0.1715 | 26.6667 | 1000 | 0.7211 | 0.5673 | 0.1477 |
| 0.1394 | 29.3333 | 1100 | 0.7347 | 0.5532 | 0.1430 |
| 0.1249 | 32.0 | 1200 | 0.7424 | 0.5543 | 0.1449 |
| 0.1131 | 34.6667 | 1300 | 0.7561 | 0.5588 | 0.1471 |
| 0.1034 | 37.3333 | 1400 | 0.7595 | 0.553 | 0.1445 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
danielkosyra/polynomial_2000_9e-4_16b_w0.075 | danielkosyra | "2024-07-02T17:38:16Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T17:37:55Z" | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: polynomial_2000_9e-4_16b_w0.075
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# polynomial_2000_9e-4_16b_w0.075
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0009
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 250
- training_steps: 2000
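The `polynomial` scheduler here warms up over 250 steps and then decays the learning rate polynomially toward an end value (transformers' `get_polynomial_decay_schedule_with_warmup` defaults to `lr_end=1e-7` and `power=1.0`, which makes the decay effectively linear). A pure-Python sketch of that shape, with this run's hyperparameters as illustrative defaults:

```python
def polynomial_decay_with_warmup(step, base_lr=9e-4, warmup_steps=250,
                                 total_steps=2000, lr_end=1e-7, power=1.0):
    """Linear warmup to base_lr, then polynomial decay down to lr_end."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    if step >= total_steps:
        return lr_end
    remaining = (total_steps - step) / (total_steps - warmup_steps)
    return (base_lr - lr_end) * remaining ** power + lr_end
```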
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.5244 | 0.8234 | 250 | 3.4016 |
| 3.1742 | 1.6468 | 500 | 3.1288 |
| 2.9037 | 2.4702 | 750 | 2.9918 |
| 2.7072 | 3.2935 | 1000 | 2.9131 |
| 2.5479 | 4.1169 | 1250 | 2.8668 |
| 2.3946 | 4.9403 | 1500 | 2.8252 |
| 2.2317 | 5.7637 | 1750 | 2.8147 |
| 2.1172 | 6.5871 | 2000 | 2.8132 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
Cassiamendes/cassia | Cassiamendes | "2024-07-02T17:38:14Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T17:38:14Z" | Entry not found |
CoprolaliacPress/Thoth-3-Q6_K-GGUF | CoprolaliacPress | "2024-07-02T17:38:50Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:CoprolaliacPress/Thoth-3",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T17:38:26Z" | ---
base_model: CoprolaliacPress/Thoth-3
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# CoprolaliacPress/Thoth-3-Q6_K-GGUF
This model was converted to GGUF format from [`CoprolaliacPress/Thoth-3`](https://huggingface.co/CoprolaliacPress/Thoth-3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/CoprolaliacPress/Thoth-3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CoprolaliacPress/Thoth-3-Q6_K-GGUF --hf-file thoth-3-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CoprolaliacPress/Thoth-3-Q6_K-GGUF --hf-file thoth-3-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CoprolaliacPress/Thoth-3-Q6_K-GGUF --hf-file thoth-3-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CoprolaliacPress/Thoth-3-Q6_K-GGUF --hf-file thoth-3-q6_k.gguf -c 2048
```
|
qsdcfqsdfcxqfqs/Column-The-Supreme-Court-just-gave-itself-a-lot-more-power-ee-updated | qsdcfqsdfcxqfqs | "2024-07-02T17:39:00Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T17:39:00Z" | Entry not found |
qsdcfqsdfcxqfqs/SelfAssembling-Highly-Conductive-Sensors-Could-Improve-Wearable-Devices-25-updated | qsdcfqsdfcxqfqs | "2024-07-02T17:40:13Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T17:40:13Z" | Entry not found |
GGarri/whisper_finetuned_ver2 | GGarri | "2024-07-02T20:06:17Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-07-02T17:41:23Z" | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper_finetuned_ver2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper_finetuned_ver2
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0048
- Cer: 0.5262
- Wer: 0.4840
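The Cer and Wer figures above are edit-distance-based error rates. As a minimal sketch (a standard Levenshtein-distance implementation, not necessarily the exact metric code used during training), word and character error rates can be computed like this:

```python
def edit_distance(ref, hyp):
    # Classic dynamic-programming Levenshtein distance over token sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / max(len(ref), 1)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: character-level edit distance / reference length."""
    return edit_distance(list(reference), list(hypothesis)) / max(len(reference), 1)

print(wer("the cat sat", "the cat sit"))  # one substitution out of three words
```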
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
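For intuition, the linear schedule with warmup listed above ramps the learning rate from 0 to the peak over the warmup steps, then decays it linearly to 0 at the final step. The sketch below is a simplified reimplementation of that behavior using the hyperparameters from this card, not the trainer's internal code:

```python
PEAK_LR = 1e-5       # learning_rate from the card
WARMUP_STEPS = 500   # lr_scheduler_warmup_steps
TOTAL_STEPS = 5000   # training_steps

def linear_warmup_lr(step: int) -> float:
    """Learning rate at a given optimizer step under linear warmup then linear decay."""
    if step < WARMUP_STEPS:
        # Linear ramp from 0 up to the peak learning rate.
        return PEAK_LR * step / WARMUP_STEPS
    # Linear decay from the peak back down to 0 at TOTAL_STEPS.
    return PEAK_LR * max(0.0, (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS))

print(linear_warmup_lr(250))   # halfway through warmup -> 5e-06
print(linear_warmup_lr(500))   # peak learning rate -> 1e-05
print(linear_warmup_lr(5000))  # end of training -> 0.0
```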
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 0.0 | 35.71 | 1000 | 0.0047 | 0.5496 | 0.5227 |
| 0.0001 | 71.43 | 2000 | 0.0048 | 0.5262 | 0.4840 |
| 0.0 | 107.14 | 3000 | 0.0051 | 0.5964 | 0.5615 |
| 0.0 | 142.86 | 4000 | 0.0053 | 0.6080 | 0.5808 |
| 0.0 | 178.57 | 5000 | 0.0054 | 0.6080 | 0.5808 |
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Nutanix/Meta-Llama-3-8B-Instruct_KTO_lora_Anthropic_HH_Golden-processed | Nutanix | "2024-07-02T18:11:42Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"region:us"
] | null | "2024-07-02T17:42:40Z" | ---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
RichardErkhov/venkycs_-_Zyte-1B-gguf | RichardErkhov | "2024-07-02T17:53:40Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T17:42:43Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Zyte-1B - GGUF
- Model creator: https://huggingface.co/venkycs/
- Original model: https://huggingface.co/venkycs/Zyte-1B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Zyte-1B.Q2_K.gguf](https://huggingface.co/RichardErkhov/venkycs_-_Zyte-1B-gguf/blob/main/Zyte-1B.Q2_K.gguf) | Q2_K | 0.4GB |
| [Zyte-1B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/venkycs_-_Zyte-1B-gguf/blob/main/Zyte-1B.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [Zyte-1B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/venkycs_-_Zyte-1B-gguf/blob/main/Zyte-1B.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [Zyte-1B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/venkycs_-_Zyte-1B-gguf/blob/main/Zyte-1B.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [Zyte-1B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/venkycs_-_Zyte-1B-gguf/blob/main/Zyte-1B.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [Zyte-1B.Q3_K.gguf](https://huggingface.co/RichardErkhov/venkycs_-_Zyte-1B-gguf/blob/main/Zyte-1B.Q3_K.gguf) | Q3_K | 0.51GB |
| [Zyte-1B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/venkycs_-_Zyte-1B-gguf/blob/main/Zyte-1B.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [Zyte-1B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/venkycs_-_Zyte-1B-gguf/blob/main/Zyte-1B.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [Zyte-1B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/venkycs_-_Zyte-1B-gguf/blob/main/Zyte-1B.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [Zyte-1B.Q4_0.gguf](https://huggingface.co/RichardErkhov/venkycs_-_Zyte-1B-gguf/blob/main/Zyte-1B.Q4_0.gguf) | Q4_0 | 0.59GB |
| [Zyte-1B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/venkycs_-_Zyte-1B-gguf/blob/main/Zyte-1B.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [Zyte-1B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/venkycs_-_Zyte-1B-gguf/blob/main/Zyte-1B.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [Zyte-1B.Q4_K.gguf](https://huggingface.co/RichardErkhov/venkycs_-_Zyte-1B-gguf/blob/main/Zyte-1B.Q4_K.gguf) | Q4_K | 0.62GB |
| [Zyte-1B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/venkycs_-_Zyte-1B-gguf/blob/main/Zyte-1B.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [Zyte-1B.Q4_1.gguf](https://huggingface.co/RichardErkhov/venkycs_-_Zyte-1B-gguf/blob/main/Zyte-1B.Q4_1.gguf) | Q4_1 | 0.65GB |
| [Zyte-1B.Q5_0.gguf](https://huggingface.co/RichardErkhov/venkycs_-_Zyte-1B-gguf/blob/main/Zyte-1B.Q5_0.gguf) | Q5_0 | 0.71GB |
| [Zyte-1B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/venkycs_-_Zyte-1B-gguf/blob/main/Zyte-1B.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [Zyte-1B.Q5_K.gguf](https://huggingface.co/RichardErkhov/venkycs_-_Zyte-1B-gguf/blob/main/Zyte-1B.Q5_K.gguf) | Q5_K | 0.73GB |
| [Zyte-1B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/venkycs_-_Zyte-1B-gguf/blob/main/Zyte-1B.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [Zyte-1B.Q5_1.gguf](https://huggingface.co/RichardErkhov/venkycs_-_Zyte-1B-gguf/blob/main/Zyte-1B.Q5_1.gguf) | Q5_1 | 0.77GB |
| [Zyte-1B.Q6_K.gguf](https://huggingface.co/RichardErkhov/venkycs_-_Zyte-1B-gguf/blob/main/Zyte-1B.Q6_K.gguf) | Q6_K | 0.84GB |
| [Zyte-1B.Q8_0.gguf](https://huggingface.co/RichardErkhov/venkycs_-_Zyte-1B-gguf/blob/main/Zyte-1B.Q8_0.gguf) | Q8_0 | 1.09GB |
Original model description:
---
license: apache-2.0
language:
- en
metrics:
- accuracy
- bertscore
- bleu
tags:
- slm
- llama
- tiny
- tinyllama
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
---
# Zyte-1.1b: Tiny but Mighty
## Model Details
### Model Description
The **Zyte 1B** model is a cutting-edge advancement in AI language understanding and generation. It is a refinement of the **TinyLlama** model, fine-tuned with the Direct Preference Optimization (DPO) technique on state-of-the-art datasets for improved performance and accuracy.
- **Model type**: TinyLlama
- **Specialization**: AI Language Understanding and Generation
The aihub-app/zyte-1.1b model is a fine-tuned version of the tinyllama model, refined with Direct Preference Optimization (DPO) on state-of-the-art datasets by the team at AI Hub App.
The chat template is:
```
<|system|> You are a helpful AI assistant.</s><|user|>{prompt}</s><|assistant|>
```
Inference Code - https://huggingface.co/aihub-app/zyte-1B/blob/main/inference_zyte_1b.ipynb
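As a small illustration, the chat template above can be filled in with plain string formatting before being handed to any inference backend. The template string is taken from the card; the helper name is ours:

```python
# Chat template string as given in the Zyte-1B model card.
PROMPT_TEMPLATE = (
    "<|system|> You are a helpful AI assistant.</s>"
    "<|user|>{prompt}</s>"
    "<|assistant|>"
)

def build_prompt(user_message: str) -> str:
    """Wrap a user message in the Zyte-1B chat template."""
    return PROMPT_TEMPLATE.format(prompt=user_message)

print(build_prompt("What is GGUF?"))
```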
|
Jalex00/ddpm-custom-dataset-128 | Jalex00 | "2024-07-02T17:43:37Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T17:43:37Z" | Entry not found |
cyan2k/promptvieh_chat | cyan2k | "2024-07-02T18:30:36Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T17:43:54Z" | ---
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** cyan2k
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Shinyystarz/Furina_eng | Shinyystarz | "2024-07-02T17:44:43Z" | 0 | 0 | null | [
"license:bsl-1.0",
"region:us"
] | null | "2024-07-02T17:44:43Z" | ---
license: bsl-1.0
---
|
Niggendar/datassRev3Pony_rev3 | Niggendar | "2024-07-02T17:53:01Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-07-02T17:44:52Z" | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gglabs/Solar-kiosk-scenario-2-epoch | gglabs | "2024-07-02T17:45:04Z" | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"gguf",
"en",
"base_model:gglabs/Solar-kiosk-scenario-1-epoch",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T17:45:03Z" | ---
base_model: gglabs/Solar-kiosk-scenario-1-epoch
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** gglabs
- **License:** apache-2.0
- **Finetuned from model :** gglabs/Solar-kiosk-scenario-1-epoch
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|