BhagavadGita

A version of Mistral-7B-Instruct-v0.3 fine-tuned on the Bhagavad Gita, a Hindu religious text.

Model Details

Model Description

BhagavadGita is a Large Language Model (LLM) fine-tuned from Mistral-7B-Instruct-v0.3, specifically tailored to provide insights and responses rooted in the wisdom of the Bhagavad Gita. This model is designed to emulate the perspective of Lord Krishna, offering guidance and answering questions in a manner consistent with the teachings of the Bhagavad Gita.

  • Developed by: Raunak Raj
  • License: MIT
  • Finetuned from model: Mistral-7B-Instruct-v0.3
  • Quantized Version: A quantized GGUF version of the model is also available for more efficient deployment.

Uses

Using transformers

You can use this model with the transformers library as follows:

from transformers import pipeline

# The system prompt steers the model to answer as Lord Krishna, grounded in the Bhagavad Gita
messages = [
    {"role": "system", "content": "You are Lord Krishna and You have to answer in context to bhagavad gita"},
    {"role": "user", "content": "How to face failures in life?"},
]

chatbot = pipeline("text-generation", model="bajrangCoder/BhagavadGita")
response = chatbot(messages, max_new_tokens=256)

# The pipeline returns the full conversation; the last message is the model's reply
print(response[0]["generated_text"][-1]["content"])
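
Alternatively, here is a lower-level sketch using AutoTokenizer and AutoModelForCausalLM. It assumes the repository ships standard Transformers weights (as the pipeline example above implies) and that the accelerate package is installed for device_map="auto"; the dtype and generation settings are illustrative choices, not requirements.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bajrangCoder/BhagavadGita"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to reduce memory for a 7B model
    device_map="auto",           # requires the accelerate package
)

messages = [
    {"role": "system", "content": "You are Lord Krishna and You have to answer in context to bhagavad gita"},
    {"role": "user", "content": "How to face failures in life?"},
]

# Apply the chat template, generate, and decode only the newly generated tokens
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))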

Use Cases

  1. Spiritual Guidance: Obtain advice and spiritual insights inspired by the Bhagavad Gita.
  2. Educational Tool: Aid in the study and understanding of the Bhagavad Gita by providing contextually relevant answers.
  3. Philosophical Inquiry: Explore philosophical questions through the lens of one of Hinduism's most revered texts.

Installation

To use the BhagavadGita model, you need to install the transformers library together with a backend such as PyTorch. You can install both using pip:

pip install transformers torch

Quantized GGUF Version

A quantized GGUF version of BhagavadGita (7.25B parameters, llama architecture) is available in 4-bit and 8-bit variants for more efficient deployment. It reduces the model size and computational requirements while maintaining performance, making it suitable for resource-constrained environments.
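
As a minimal sketch, the GGUF file can be run locally with llama-cpp-python (installable via pip install llama-cpp-python). The filename below is a placeholder; substitute the actual 4-bit or 8-bit GGUF file downloaded from the repository.

from llama_cpp import Llama

# Path to the downloaded GGUF file (placeholder name; use the real file you downloaded)
llm = Llama(model_path="./BhagavadGita-Q4.gguf", n_ctx=4096)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Lord Krishna and You have to answer in context to bhagavad gita"},
        {"role": "user", "content": "How to face failures in life?"},
    ],
    max_tokens=256,
)

# The response follows an OpenAI-style schema
print(response["choices"][0]["message"]["content"])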

Model Performance

BhagavadGita has been fine-tuned so that its responses align with the teachings of the Bhagavad Gita. However, as with any AI model, it is important to evaluate the responses critically and consider the context in which the advice is given.

Contributing

If you wish to contribute to the development of BhagavadGita, please feel free to fork the repository and submit pull requests. Any contributions that enhance the accuracy, usability, or scope of the model are welcome.

License

This project is licensed under the MIT License. See the LICENSE file for more details.


By using BhagavadGita, you acknowledge that you have read and understood the terms and conditions under which the model is provided, and agree to use it in accordance with applicable laws and ethical guidelines.
