---
language:
- pt
- en
metrics:
- accuracy
pipeline_tag: question-answering
tags:
- personal
license: mit
---

# Luna Model

This document describes the use and features of Luna, a personal model fine-tuned from the Phi-3 model and developed for the specific tasks detailed below.

## Table of Contents
- [Introduction](#introduction)
- [Requirements](#requirements)
- [Installation](#installation)
- [Usage](#usage)

## Introduction
The Luna Model is a customized version of the Phi-3 model, tailored for specific tasks such as text generation. It leverages the capabilities of the Phi-3 architecture to provide efficient and accurate results for various natural language processing tasks.

## Requirements
- Ollama

## Installation

### Install Ollama

```shell
curl -fsSL https://ollama.com/install.sh | sh
```

## Usage

### Create a Modelfile

```shell
touch Modelfile
```

### Modelfile content

```
FROM ./models/luna-4b-v0.5.gguf

PARAMETER temperature 1
```
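
Beyond `temperature`, the Modelfile format also supports directives such as `SYSTEM` and additional `PARAMETER` lines. A sketch of a richer Modelfile follows; the system prompt and the `num_ctx` value are illustrative assumptions, not settings shipped with Luna:

```
FROM ./models/luna-4b-v0.5.gguf

# Sampling temperature (higher values give more creative output)
PARAMETER temperature 1
# Context window size in tokens (illustrative value)
PARAMETER num_ctx 4096

# Optional system prompt baked into the model (hypothetical example)
SYSTEM """You are Luna, a helpful assistant that answers in Portuguese or English."""
```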

### Load the model

```bash
ollama create luna -f ./Modelfile
```

### Run the model

```bash
ollama run luna
```

## Usage from Python

Install the `ollama` Python package (`pip install ollama`), then stream a chat response from the model created above:

```python
import ollama

# Stream a chat response from the local "luna" model created above
stream = ollama.chat(
    model='luna',
    messages=[{'role': 'user', 'content': 'Who are you?'}],
    stream=True,
)

for chunk in stream:
    print(chunk['message']['content'], end='', flush=True)
```
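
Since the response arrives as a stream of chunks, it is sometimes handier to assemble it into a single string. A minimal helper for that is sketched below; it only assumes each chunk has the `{'message': {'content': ...}}` shape shown above, so it can be exercised with plain dictionaries and no running server:

```python
def collect_stream(stream):
    """Concatenate the 'content' field of every streamed chunk into one string."""
    return ''.join(chunk['message']['content'] for chunk in stream)

# Any iterable of chunk-shaped dicts works, e.g. for testing without a server:
fake_stream = [
    {'message': {'content': 'Hello'}},
    {'message': {'content': ', world!'}},
]
print(collect_stream(fake_stream))  # → Hello, world!
```

In real use, pass the `stream` object returned by `ollama.chat(..., stream=True)` directly to `collect_stream`.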