---
library_name: Llama3-8b-FlyingManual-Tutor
tags:
- llama3
- flying-manual
- ai-tutoring
- llama-factory
---

# Model Card for Llama3-8b-FlyingManual-Tutor

This model is a fine-tuned version of the Llama3-8b model, specifically trained on the FlyingManual dataset to serve as an AI tutor for aviation-related subjects. It is designed to provide guidance and nudge users when they answer questions incorrectly.

## Model Details

### Model Description

- **Developed by:** Canarie Teams
- **Model type:** Large Language Model (LLM) for AI Tutoring
- **Language(s) (NLP):** English (primary), potentially others depending on the FlyingManual dataset
- **Finetuned from model:** Llama3-8b by Meta AI

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "path/to/your/Llama3-8b-FlyingManual-Tutor"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage for tutoring
def tutor_interaction(question, user_answer):
    prompt = f"Question: {question}\nUser Answer: {user_answer}\nTutor Response:"
    inputs = tokenizer(prompt, return_tensors="pt")
    # max_new_tokens bounds the length of the generated feedback regardless
    # of prompt length; pad_token_id silences the missing-pad-token warning.
    outputs = model.generate(
        **inputs,
        max_new_tokens=200,
        pad_token_id=tokenizer.eos_token_id,
    )
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    # The decoded text echoes the prompt; keep only the tutor's reply.
    return response.split("Tutor Response:")[-1].strip()

# Example
question = "What are the primary flight controls of an aircraft?"
user_answer = "Steering wheel and gas pedal"
tutor_feedback = tutor_interaction(question, user_answer)
print(tutor_feedback)
```

## Training Details

### Training Data

The model was fine-tuned on the FlyingManual dataset, augmented with:
- Sample Q&A pairs related to aviation topics
- Examples of constructive feedback and explanations
- Scenarios demonstrating correct and incorrect responses to aviation-related questions
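Since the card's tags mention LLaMA-Factory, one plausible shape for an augmented record is the alpaca-style `{"instruction", "input", "output"}` schema that LLaMA-Factory accepts. The sketch below is illustrative only; the field contents and schema choice are assumptions, not the actual training data.

```python
import json

# Hypothetical augmented training record in the alpaca-style schema
# accepted by LLaMA-Factory; contents are illustrative assumptions.
record = {
    "instruction": (
        "Act as an aviation tutor. Review the student's answer and give "
        "constructive feedback without revealing the full solution outright."
    ),
    "input": (
        "Question: What are the primary flight controls of an aircraft?\n"
        "Student Answer: Steering wheel and gas pedal"
    ),
    "output": (
        "Not quite. Cars use a steering wheel and gas pedal, but aircraft "
        "are controlled about three axes. Which control surfaces manage "
        "pitch, roll, and yaw?"
    ),
}

print(json.dumps(record, indent=2))
```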

### Training Procedure

#### Preprocessing

- Conversion of training data into a dialogue format suitable for tutoring interactions
- Augmentation of data with tutoring-specific tokens or markers
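The two preprocessing steps above could look roughly like the following sketch; the role names and the marker token are assumptions for illustration, not the exact markers used in training.

```python
TUTOR_MARKER = "<TUTOR_FEEDBACK>"  # assumed tutoring-specific marker token

def to_dialogue(question, user_answer, tutor_feedback):
    """Convert one Q&A record into a chat-style training example,
    tagging the assistant turn with the tutoring marker."""
    return [
        {"role": "user",
         "content": f"Question: {question}\nMy Answer: {user_answer}"},
        {"role": "assistant",
         "content": f"{TUTOR_MARKER} {tutor_feedback}"},
    ]

example = to_dialogue(
    "What instrument indicates airspeed?",
    "The altimeter",
    "Close, but the altimeter shows altitude; airspeed is read from the "
    "airspeed indicator.",
)
print(example)
```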

#### Training Hyperparameters

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

A held-out portion of the FlyingManual dataset, supplemented with:
- A set of typical student questions and answers
- Scenarios designed to test the model's ability to provide constructive feedback

#### Metrics

- Human evaluation of tutoring quality (clarity, accuracy, helpfulness)
- Task-specific metrics (e.g., ability to correctly identify and address user mistakes)
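As a minimal sketch of one such task-specific metric, the snippet below computes the fraction of test cases where the tutor's response correctly flags an answer as right or wrong. The keyword-matching judge is a stand-in assumption; the actual evaluation used human raters as described above.

```python
def mistake_detection_accuracy(cases):
    """cases: list of (tutor_response, answer_was_correct) pairs.
    A response is counted as correct when it flags a wrong answer
    (or declines to flag a right one)."""
    hits = 0
    for response, was_correct in cases:
        # Naive stand-in judge: did the tutor signal a mistake?
        flagged_wrong = any(
            phrase in response.lower()
            for phrase in ("not quite", "incorrect", "not correct")
        )
        hits += (flagged_wrong != was_correct)
    return hits / len(cases)

cases = [
    ("Not quite: aircraft use ailerons, elevators, and a rudder.", False),
    ("Well done, that matches the manual.", True),
]
print(mistake_detection_accuracy(cases))  # 1.0
```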

### Results

[Provide the evaluation results here]

## Environmental Impact

- **Hardware Type:** 8 x NVIDIA A100 40GB GPUs


## Model Card Authors

Canarie Teams

## Model Card Contact

[Your contact information or a link to where people can reach out with questions]