mrSoul7766 committed on
Commit 2ecdd17 • 1 Parent(s): 37062e1

Update README.md

Files changed (1)
  1. README.md +37 -1
README.md CHANGED
@@ -15,4 +15,40 @@ tags:
---

## Note
- Introducing AgriQBot 🌾🤖: Embarking on the journey to cultivate knowledge in agriculture! 🚜🌱 Currently in its early testing phase, AgriQBot is a multilingual small language model dedicated to agriculture. 🌍🌾 As we harvest insights, the data-generation phase is underway and continuous improvement is key. 🔄💡 The vision: a compact yet powerful model fueled by a high-quality dataset, with plans to fine-tune it for specific downstream tasks in the future.
+ Introducing AgriQBot 🌾🤖: Embarking on the journey to cultivate knowledge in agriculture! 🚜🌱 Currently in its early testing phase, AgriQBot is a multilingual small language model dedicated to agriculture. 🌍🌾 As we harvest insights, the data-generation phase is underway and continuous improvement is key. 🔄💡 The vision: a compact yet powerful model fueled by a high-quality dataset, with plans to fine-tune it for specific downstream tasks in the future.
+
+ ```python
+ # Use a pipeline as a high-level helper
+
+ from transformers import pipeline
+
+ pipe = pipeline("text2text-generation", model="mrSoul7766/AgriQBot")
+
+ # Example user query
+ user_query = "How can I increase the yield of my potato crop?"
+
+ # Generate response
+ answer = pipe(f"answer: {user_query}", max_length=512)
+
+ # Print the generated answer
+ print(answer[0]['generated_text'])
+ ```
+ ### or
+ ```python
+ # Load model directly
+ from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+
+ tokenizer = AutoTokenizer.from_pretrained("mrSoul7766/AgriQBot")
+ model = AutoModelForSeq2SeqLM.from_pretrained("mrSoul7766/AgriQBot")
+
+ # Set maximum generation length
+ max_length = 512
+
+ # Generate response with the question as input
+ input_ids = tokenizer.encode("answer: How can I increase the yield of my potato crop?", return_tensors="pt")
+ output_ids = model.generate(input_ids, max_length=max_length)
+
+ # Decode the response
+ response = tokenizer.decode(output_ids[0], skip_special_tokens=True)
+ print(response)
+ ```