Model Card for AKALI

AKALI (Aggressive Knowledge Augmenter and Language Interface) is a library for language model augmentation and interfaces, designed to enhance AI model capabilities through strategic data augmentation and efficient task management.

Model Details

Model Description

  • Developed by: Ali Eren Ak
  • Funded by: [More Information Needed]
  • Shared by: Ali Eren Ak
  • Model type: Language model trained with augmented data
  • Language(s) (NLP): Multiple (supports various language models)
  • License: Proprietary and confidential
  • Finetuned from model: google/gemma-2-2b-it (fine-tuned using the AKALI framework)

Uses

Direct Use

  1. Load and interact with various language models.
  2. Perform knowledge augmentation to improve model performance (a sketch follows this list).
  3. Manage different NLP tasks.
  4. Make predictions using loaded models.
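
AKALI's dedicated augmentation entry points are not documented in this card, so the sketch below approximates an augmentation pass using only the documented load_model, set_task, and predict calls: it runs the task over raw seed texts and keeps the outputs as silver labels for later fine-tuning. The seed texts are illustrative placeholders.

from akali import LanguageInterface

# Load the fine-tuned model and select the task shown later in this card.
li = LanguageInterface.load_model("alierenak/gemma-7b-akali")
li.set_task("EntitySentimentReasoner")

# Run the task over seed texts and keep the outputs as silver labels.
seed_texts = [
    "The battery life is great but the screen scratches easily.",
    "Shipping was slow, and support never answered my emails.",
]
silver_labels = [li.predict(system_text=None, user_message=t) for t in seed_texts]
for text, label in zip(seed_texts, silver_labels):
    print(text, "->", label)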

Downstream Use

AKALI can be integrated into larger AI systems or applications for:

  1. Enhancing existing language models through data augmentation.
  2. Creating custom NLP tasks and processors.
  3. Building more robust and accurate AI systems (an integration sketch follows this list).
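
As a minimal integration sketch, the snippet below exposes the documented predict call through a small FastAPI endpoint. The route name and payload shape are illustrative assumptions, not part of AKALI.

from fastapi import FastAPI
from pydantic import BaseModel

from akali import LanguageInterface

app = FastAPI()

# Load once at startup; the model and task names are taken from this card.
li = LanguageInterface.load_model("alierenak/gemma-7b-akali")
li.set_task("EntitySentimentReasoner")

class AnalyzeRequest(BaseModel):
    text: str

@app.post("/analyze")  # illustrative route, not defined by AKALI
def analyze(req: AnalyzeRequest):
    return {"result": li.predict(system_text=None, user_message=req.text)}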

Out-of-Scope Use

AKALI should not be used for:

  1. Generating or promoting harmful, biased, or misleading content.
  2. Unauthorized access to proprietary language models.
  3. Violating data privacy or intellectual property rights.

Bias, Risks, and Limitations

  1. AKALI's performance depends on the quality and biases of the underlying language models used.
  2. The effectiveness of augmentation strategies may vary depending on the specific task and dataset.
  3. Users should be aware of potential biases in the generated or augmented data.

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

How to Get Started with the Model

Use the code below to get started with the model.

from akali import LanguageInterface

# Load a model
li = LanguageInterface.load_model("alierenak/gemma-7b-akali")

# Set the task
li.set_task("EntitySentimentReasoner")

# Make a prediction (the sample input is Turkish: "Turkcell is not a line with
# good reception, so I prefer Vodafone, which is also cheaper")
result = li.predict(system_text=None, user_message="Turkcell hiç güzel çeken bir hat değil o yüzden Vodofone'u tercih ediyorum hem de daha ucuz")
print(result)

Training Details

AKALI itself is a framework for augmenting and interfacing with language models rather than a trained model; the training data depends on the specific models and tasks used with it. The model distributed with this card is a fine-tuned version of google/gemma-2-2b-it, trained on data augmented by Meta-Llama-3.1-70B-Instruct.
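
For orientation, here is a minimal sketch of how such a fine-tune of google/gemma-2-2b-it could be set up with the Hugging Face transformers Trainer. This is an illustrative reconstruction, not the authors' training code; the dataset path, "text" field, and hyperparameters are placeholders.

import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "google/gemma-2-2b-it"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Placeholder path: the augmented corpus from the GitHub repo would go here;
# each JSON record is assumed to carry a "text" field.
data = load_dataset("json", data_files="augmented_data.jsonl", split="train")
data = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
    batched=True,
    remove_columns=data.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gemma-2-2b-akali-sft", num_train_epochs=1),
    train_dataset=data,
    # mlm=False gives standard causal-LM labels (labels = input_ids).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()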

Training Data

The training data can be accessed from the project's GitHub repository.

Evaluation

Evaluation of AKALI would depend on the specific use case, models, and tasks it's applied to. Users are encouraged to perform task-specific evaluations.
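
As one possible pattern, the sketch below computes accuracy over a small labeled set. It assumes predict returns a string-like label directly comparable to the gold label; the actual output schema is not documented in this card.

from akali import LanguageInterface

li = LanguageInterface.load_model("alierenak/gemma-7b-akali")
li.set_task("EntitySentimentReasoner")

# Tiny illustrative test set; a real evaluation would load a held-out split.
test_set = [
    ("The camera is superb but the battery dies fast.", "mixed"),
]

correct = 0
for text, gold in test_set:
    pred = li.predict(system_text=None, user_message=text)
    correct += int(str(pred).strip().lower() == gold)  # assumes string-like output
print(f"accuracy: {correct / len(test_set):.2%}")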

Environmental Impact

The environmental impact of using AKALI would vary based on the specific models and compute resources used. Users are encouraged to use the Machine Learning Impact calculator to estimate the carbon emissions for their specific use case.
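
For example, the codecarbon package (one option among several, unaffiliated with AKALI) can wrap a prediction workload and log estimated emissions:

from codecarbon import EmissionsTracker

from akali import LanguageInterface

li = LanguageInterface.load_model("alierenak/gemma-7b-akali")
li.set_task("EntitySentimentReasoner")

tracker = EmissionsTracker(project_name="akali-inference")
tracker.start()
li.predict(system_text=None, user_message="Sample input for measurement.")
emissions_kg = tracker.stop()  # estimated kg CO2-eq for the tracked span
print(f"estimated emissions: {emissions_kg:.6f} kg CO2-eq")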

Model Card Authors

Ali Eren Ak

Model Card Contact

akali@sabanciuniv.edu
