Reference: R. Luu and M.J. Buehler, Adv. Science, 2023, DOI: https://doi.org/10.1002/advs.202306724
Abstract: The study of biological materials and bio-inspired materials science is well established; however, surprisingly little of this knowledge is systematically translated to engineering solutions. To accelerate discovery and guide insights, an open-source autoregressive transformer large language model (LLM), BioinspiredLLM, is reported. The model is finetuned on a corpus of over a thousand peer-reviewed articles in the field of structural biological and bio-inspired materials and can be prompted to recall information, assist with research tasks, and serve as an engine for creativity. The model accurately recalls information about biological materials and is further strengthened by enhanced reasoning ability, as well as by Retrieval-Augmented Generation (RAG), which incorporates new data during generation, traces responses back to their sources, updates the knowledge base, and connects knowledge domains. BioinspiredLLM has also been shown to develop sound hypotheses about biological materials design, remarkably even for materials that have never been explicitly studied before. Lastly, the model shows impressive promise in collaborating with other generative artificial intelligence models in a workflow that can reshape the traditional materials design process. This collaborative generative AI method can stimulate and enhance bio-inspired materials design workflows. Biological materials sit at a critical intersection of multiple scientific fields, and models like BioinspiredLLM help to connect knowledge domains.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/Xdp_nCYiF2IAPamG5ffIC.png)
```
model_name = 'lamm-mit/BioinspiredLLM'
model = PeftModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

...

output_text = generate_response(text_input=txt, eos_token_id=2)
print(output_text)
```
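The snippet above calls a `generate_response` helper whose definition is elided in this excerpt. A hypothetical sketch of such a helper is shown below; the function names, the chat-style wrapper, and the sampling settings are assumptions (mirroring the `generate_kwargs` used in the RAG setup later in this README), not the released implementation.

```python
def build_prompt(system_prompt, query_str):
    # Chat-style wrapper matching the query_wrapper_prompt template used below
    return ("<|im_start|>system\n" + system_prompt + "<|im_end|>\n"
            "<|im_start|>user\n" + query_str + "<|im_end|>\n<|im_start|>assistant")

def generate_response(text_input, eos_token_id=2, max_new_tokens=300):
    # Assumes `model` and `tokenizer` from the loading snippet above
    inputs = tokenizer(text_input, return_tensors="pt").to(model.device)
    output = model.generate(**inputs,
                            max_new_tokens=max_new_tokens,
                            eos_token_id=eos_token_id,
                            do_sample=True, temperature=0.1,
                            top_p=0.95, repetition_penalty=1.1)
    # Decode only the newly generated tokens, not the echoed prompt
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```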

Dataset: https://onlinelibrary.wiley.com/action/downloadSupplement?doi=10.1002%2Fadvs.202306724&file=advs7235-sup-0002-SuppMat.csv

Performance:

The figure below shows results from the knowledge-recall evaluation of BioinspiredLLM. a) Total scores of each model, Llama 13b-chat (grey), Orca-2 13b (blue), Llama-BioLLM (orange), BioinspiredLLM (light green), and BioinspiredLLM with Retrieval-Augmented Generation (RAG) (dark green), on the 100-question biological materials exam. b) Scores on the exam separated by question category: general, specific, numerical, and non-biological. c) Retrieval-Augmented Generation (RAG) framework and two examples of BioinspiredLLM's responses when supplemented using RAG, additionally showing the sources the retrieved content traces back to. This method allows tracing the origin of knowledge, ideas, or data used as BioinspiredLLM formulates its responses.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/V8g9f0lCKb2HMepf7lGbM.png)

### Retrieval Augmented Generation (RAG)

```
import chromadb
from llama_index import VectorStoreIndex, SimpleDirectoryReader
from chromadb.config import Settings
from llama_index.vector_stores import ChromaVectorStore
from llama_index.storage.storage_context import StorageContext

coll_name = "Bioinspired"
coll_path = './Bioinspired_Chroma'  ## PATH TO CHROMA DATABASE

client = chromadb.PersistentClient(path=coll_path)
collection = client.get_collection(name=coll_name)

db2 = chromadb.PersistentClient(path=coll_path)
chroma_collection = db2.get_or_create_collection(coll_name)
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)

chroma_collection.count()
```

Set up BioinspiredLLM as a custom LLM:

```
from llama_index.prompts.prompts import SimpleInputPrompt
from llama_index import (
    VectorStoreIndex,
    get_response_synthesizer,
)
from llama_index.retrievers import VectorIndexRetriever
from llama_index.query_engine import RetrieverQueryEngine
from llama_index.llms import HuggingFaceLLM

eos_token = 32000

system_prompt = "You are BioinspiredLLM. You are knowledgeable in biological and bio-inspired materials and provide accurate and qualitative insights about biological materials found in Nature. You are a cautious assistant. You think step by step. You carefully follow instructions."

query_wrapper_prompt = SimpleInputPrompt(
    "<|im_start|>system\n" + system_prompt + "<|im_end|>\n<|im_start|>user\n{query_str}<|im_end|>\n<|im_start|>assistant"
)

llm_custom = HuggingFaceLLM(
    context_window=2048,
    max_new_tokens=300,
    query_wrapper_prompt=query_wrapper_prompt,
    stopping_ids=[eos_token, 2],
    model=model,
    generate_kwargs={
        "temperature": 0.1,
        "do_sample": True,
        "repetition_penalty": 1.1,
        "top_p": 0.95,
        "top_k": 50,
        "eos_token_id": [eos_token, 2],
    },
    tokenizer=tokenizer,
)
llm_custom.model_name = 'BioinspiredLLM'
```

Set up the custom LLM service context and vector store index:

```
from llama_index import ServiceContext

service_context = ServiceContext.from_defaults(
    llm=llm_custom,
    chunk_size=1024,
    embed_model="local:BAAI/bge-large-en",
)
index = VectorStoreIndex.from_vector_store(
    vector_store,
    service_context=service_context,
)
```

Set up the query engine:

```
from IPython.display import Markdown, display

query_engine = index.as_query_engine(
    # response_mode="tree_summarize",
    # response_mode='compact',
    # response_mode='accumulate',
    # streaming=True,
    similarity_top_k=5,
)

question = "Which horn does not have tubules? A) big horn sheep B) pronghorn C) mountain goat"
response = query_engine.query(question)
display(Markdown(f"<b>{response}</b>"))
```