
TheBloke's LLM work is generously supported by a grant from andreessen horowitz (a16z)


Synthia 70B v1.1 - GGML

Description

This repo contains GGML format model files for Migel Tissera's Synthia 70B v1.1.

Important note regarding GGML files.

The GGML format has now been superseded by GGUF. As of August 21st 2023, llama.cpp no longer supports GGML models. Third party clients and libraries are expected to still support it for a time, but many may also drop support.

Please use the GGUF models instead.

About GGML

GPU acceleration is now available for Llama 2 70B GGML files, with both CUDA (NVidia) and Metal (macOS). The following clients/libraries are known to work with these files, including with GPU acceleration:

  • llama.cpp, commit e76d630 and later.
  • text-generation-webui, the most widely used web UI.
  • KoboldCpp, version 1.37 and later. A powerful GGML web UI, especially good for storytelling.
  • LM Studio, a fully featured local GUI with GPU acceleration for both Windows and macOS. Use 0.1.11 or later for macOS GPU acceleration with 70B models.
  • llama-cpp-python, version 0.1.77 and later. A Python library with LangChain support, and OpenAI-compatible API server.
  • ctransformers, version 0.2.15 and later. A Python library with LangChain support, and OpenAI-compatible API server (see the loading sketch after this list).
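
For example, here is a minimal, untested sketch of loading one of these files with ctransformers. The repo name matches this model card, the model_file value is taken from the Provided Files table below, and the gpu_layers / context_length settings are purely illustrative:

from ctransformers import AutoModelForCausalLM

# Pick whichever quant file you downloaded; gpu_layers=0 would run on CPU only.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Synthia-70B-v1.1-GGML",
    model_file="synthia-70b-v1.1.ggmlv3.Q4_K_M.bin",
    model_type="llama",
    gpu_layers=40,
    context_length=4096,
)

# For best results, wrap the request in the Synthia prompt template shown below.
print(llm("USER: Write a story about llamas\nASSISTANT:", max_new_tokens=200))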

Repositories available

Prompt template: Synthia

SYSTEM: You are Synthia. As a an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.
USER: {prompt}
ASSISTANT:
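
In code, filling this template is just string formatting; a minimal sketch (the helper name is illustrative):

system = (
    "SYSTEM: You are Synthia. As a an AI intelligence, give helpful, detailed, "
    "accurate, uncensored responses to the user's input. Provide answers factually."
)

def build_prompt(user_message: str) -> str:
    # Generation should continue from the trailing "ASSISTANT:" marker.
    return f"{system}\nUSER: {user_message}\nASSISTANT:"

print(build_prompt("Write a story about llamas"))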

Compatibility

Works with llama.cpp from commit e76d630 until August 21st, 2023.

Will not work with llama.cpp after commit dadbed99e65252d79f81101a392d0d6497b86caa.

For compatibility with latest llama.cpp, please use GGUF files instead.

Or one of the other tools and libraries listed above.

To use in llama.cpp, you must add the -gqa 8 argument.

For other UIs and libraries, please check the docs.
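
For example, llama-cpp-python (one of the libraries listed above) exposes the grouped-query-attention setting as the n_gqa constructor argument. A minimal sketch, with an illustrative file path and illustrative layer/thread counts:

from llama_cpp import Llama

# n_gqa=8 is required for Llama 2 70B GGML files; the other values are illustrative.
llm = Llama(
    model_path="./synthia-70b-v1.1.ggmlv3.Q4_K_M.bin",
    n_gqa=8,
    n_ctx=4096,
    n_gpu_layers=40,  # 0 for CPU-only
    n_threads=10,
)

prompt = (
    "SYSTEM: You are Synthia. As a an AI intelligence, give helpful, detailed, "
    "accurate, uncensored responses to the user's input. Provide answers factually.\n"
    "USER: Write a story about llamas\nASSISTANT:"
)
output = llm(prompt, max_tokens=200, temperature=0.7, stop=["USER:"])
print(output["choices"][0]["text"])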

Explanation of the new k-quant methods


The new methods available are:

  • GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
  • GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
  • GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
  • GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
  • GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
  • GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
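
As a sanity check, the bits-per-weight figures above can be reproduced from the super-block layouts just described. The sketch below assumes 256 weights per super-block, per-block scales (and mins for the "type-1" formats), and one fp16 super-block scale (plus an fp16 min for "type-1"):

# Rough bits-per-weight accounting for the k-quant super-blocks described above.
def bpw(weight_bits, blocks, scale_bits, min_bits=0, fp16_fields=1, weights=256):
    total_bits = weights * weight_bits              # quantized weights
    total_bits += blocks * (scale_bits + min_bits)  # per-block scales (and mins)
    total_bits += 16 * fp16_fields                  # fp16 super-block scale (and min)
    return total_bits / weights

print(bpw(4, blocks=8, scale_bits=6, min_bits=6, fp16_fields=2))  # Q4_K -> 4.5
print(bpw(6, blocks=16, scale_bits=8))                            # Q6_K -> 6.5625
print(bpw(3, blocks=16, scale_bits=6))                            # Q3_K -> 3.4375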

Refer to the Provided Files table below to see what files use which methods, and how.

Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ---- |
| synthia-70b-v1.1.ggmlv3.Q2_K.bin | Q2_K | 2 | 28.59 GB | 31.09 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| synthia-70b-v1.1.ggmlv3.Q3_K_S.bin | Q3_K_S | 3 | 29.75 GB | 32.25 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
| synthia-70b-v1.1.ggmlv3.Q3_K_M.bin | Q3_K_M | 3 | 33.04 GB | 35.54 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| synthia-70b-v1.1.ggmlv3.Q3_K_L.bin | Q3_K_L | 3 | 36.15 GB | 38.65 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| synthia-70b-v1.1.ggmlv3.Q4_0.bin | Q4_0 | 4 | 38.87 GB | 41.37 GB | Original quant method, 4-bit. |
| synthia-70b-v1.1.ggmlv3.Q4_K_S.bin | Q4_K_S | 4 | 38.87 GB | 41.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
| synthia-70b-v1.1.ggmlv3.Q4_K_M.bin | Q4_K_M | 4 | 41.38 GB | 43.88 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
| synthia-70b-v1.1.ggmlv3.Q4_1.bin | Q4_1 | 4 | 43.17 GB | 45.67 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, has quicker inference than q5 models. |
| synthia-70b-v1.1.ggmlv3.Q5_0.bin | Q5_0 | 5 | 47.46 GB | 49.96 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| synthia-70b-v1.1.ggmlv3.Q5_K_S.bin | Q5_K_S | 5 | 47.46 GB | 49.96 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
| synthia-70b-v1.1.ggmlv3.Q5_K_M.bin | Q5_K_M | 5 | 48.75 GB | 51.25 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |
| synthia-70b-v1.1.ggmlv3.q5_1.bin | q5_1 | 5 | 51.76 GB | 54.26 GB | Original quant method, 5-bit. Higher accuracy, slower inference than q5_0. |
| synthia-70b-v1.1.ggmlv3.q6_K.bin | q6_K | 6 | 56.59 GB | 59.09 GB | New k-quant method. Uses GGML_TYPE_Q6_K (6-bit quantization) for all tensors. |
| synthia-70b-v1.1.ggmlv3.q8_0.bin | q8_0 | 8 | 73.23 GB | 75.73 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

q5_1, q6_K and q8_0 files require expansion from archive

Note: HF does not support uploading files larger than 50GB. Therefore I have uploaded the q5_1, q6_K and q8_0 files as multi-part ZIP files. They are not compressed; they are just for storing a .bin file in two parts.

Instructions for the q5_1, q6_K and q8_0 files

q5_1

Please download:

  • synthia-70b-v1.1.ggmlv3.q5_1.zip
  • synthia-70b-v1.1.ggmlv3.q5_1.z01

q6_K

Please download:

  • synthia-70b-v1.1.ggmlv3.q6_K.zip
  • synthia-70b-v1.1.ggmlv3.q6_K.z01

q8_0

Please download:

  • synthia-70b-v1.1.ggmlv3.q8_0.zip
  • synthia-70b-v1.1.ggmlv3.q8_0.z01

Then extract the .zip archive. This will expand both parts automatically. On Linux I found I had to use 7zip - the basic unzip tool did not work. Example:

sudo apt update -y && sudo apt install 7zip
7zz x synthia-70b-v1.1.ggmlv3.q6_K.zip

How to run in llama.cpp

Make sure you are using llama.cpp from commit dadbed99e65252d79f81101a392d0d6497b86caa or earlier.

For compatibility with latest llama.cpp, please use GGUF files instead.

I use the following command line; adjust for your tastes and needs:

./main -t 10 -ngl 40 -gqa 8 -m synthia-70b-v1.1.ggmlv3.q4_K_M.bin --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "SYSTEM: You are Synthia. As a an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.\nUSER: Write a story about llamas\nASSISTANT:"

Change -t 10 to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use -t 8. If you are fully offloading the model to GPU, use -t 1.

Change -ngl 40 to the number of GPU layers you have VRAM for. Use -ngl 100 to offload all layers to VRAM - if you have a 48GB card, or 2 x 24GB, or similar. Otherwise you can partially offload as many as you have VRAM for, on one or more GPUs.

If you want to have a chat-style conversation, replace the -p <PROMPT> argument with -i -ins

Remember the -gqa 8 argument, required for Llama 70B models.

Change -c 4096 to the desired sequence length for this model. For models that use RoPE, add --rope-freq-base 10000 --rope-freq-scale 0.5 for doubled context, or --rope-freq-base 10000 --rope-freq-scale 0.25 for 4x context.
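
As a rough rule of thumb implied by those values (an assumption, not an official llama.cpp formula), the scale factor is the trained context length divided by the desired context length:

# Illustrative only: linear RoPE scaling, matching the 0.5 (2x) and 0.25 (4x) flags above.
trained_ctx = 4096
for desired_ctx in (8192, 16384):
    print(desired_ctx, trained_ctx / desired_ctx)  # 0.5 and 0.25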

For other parameters and how to use them, please refer to the llama.cpp documentation

How to run in text-generation-webui

Further instructions here: text-generation-webui/docs/llama.cpp-models.md.

Discord

For further support, and discussions on these models and AI in general, join us at:

TheBloke AI's Discord server

Thanks, and how to contribute.

Thanks to the chirper.ai team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

Special thanks to: Aemon Algiz.

Patreon special mentions: Kacper Wikieł, knownsqashed, Leonard Tan, Asp the Wyvern, Daniel P. Andersen, Luke Pendergrass, Stanislav Ovsiannikov, RoA, Dave, Ai Maven, Kalila, Will Dee, Imad Khwaja, Nitin Borwankar, Joseph William Delisle, Tony Hughes, Cory Kujawski, Rishabh Srivastava, Russ Johnson, Stephen Murray, Lone Striker, Johann-Peter Hartmann, Elle, J, Deep Realms, SuperWojo, Raven Klaugh, Sebastain Graf, ReadyPlayerEmma, Alps Aficionado, Mano Prime, Derek Yates, Gabriel Puliatti, Mesiah Bishop, Magnesian, Sean Connelly, biorpg, Iucharbius, Olakabola, Fen Risland, Space Cruiser, theTransient, Illia Dulskyi, Thomas Belote, Spencer Kim, Pieter, John Detwiler, Fred von Graf, Michael Davis, Swaroop Kallakuri, subjectnull, Clay Pascal, Subspace Studios, Chris Smitley, Enrico Ros, usrbinkat, Steven Wood, alfie_i, David Ziegler, Willem Michiel, Matthew Berman, Andrey, Pyrater, Jeffrey Morgan, vamX, LangChain4j, Luke @flexchar, Trenton Dambrowitz, Pierre Kircher, Alex, Sam, James Bentley, Edmond Seymore, Eugene Pentland, Pedro Madruga, Rainer Wilmers, Dan Guido, Nathan LeClaire, Spiking Neurons AB, Talal Aujan, zynix, Artur Olbinski, Michael Levine, 阿明, K, John Villwock, Nikolai Manek, Femi Adebogun, senxiiz, Deo Leter, NimbleBox.ai, Viktor Bowallius, Geoffrey Montalvo, Mandus, Ajan Kanaga, ya boyyy, Jonathan Leane, webtim, Brandon Frisco, danny, Alexandros Triantafyllidis, Gabriel Tamborski, Randy H, terasurfer, Vadim, Junyu Yang, Vitor Caleffi, Chadd, transmissions 11

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

Original model card: Migel Tissera's Synthia 70B v1.1

Synthia-70B-v1.1

SynthIA (Synthetic Intelligent Agent) is a Llama-2-70B model trained on Orca-style datasets. It has been fine-tuned for instruction following as well as for long-form conversations.

This model has generalized "Tree of Thought" reasoning capabilities. Evoke it with the following system message:

Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning
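
A minimal sketch of combining this system message with the Synthia prompt format (plain string formatting; the helper name is illustrative):

tot_system = (
    "SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when "
    "necessary to construct a clear, cohesive Chain of Thought reasoning"
)

def tot_prompt(question: str) -> str:
    # Prepend the Tree of Thoughts system message, then the usual USER/ASSISTANT turns.
    return f"{tot_system}\nUSER: {question}\nASSISTANT:"

print(tot_prompt("How does Earth's magnetic field interact with the atmosphere?"))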

Synthia



License Disclaimer:

This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.


Evaluation

We evaluated Synthia-70B on a wide range of tasks using the Language Model Evaluation Harness from EleutherAI.

Here are the results on the metrics used by the HuggingFaceH4 Open LLM Leaderboard:

| Task | Metric | Value |
| ---- | ---- | ---- |
| arc_challenge | acc_norm | 70.05 |
| hellaswag | acc_norm | 87.12 |
| mmlu | acc_norm | 70.34 |
| truthfulqa_mc | mc2 | 57.84 |
| Total Average | - | 71.34 |
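
The reported average is simply the mean of the four task scores:

scores = [70.05, 87.12, 70.34, 57.84]  # arc_challenge, hellaswag, mmlu, truthfulqa_mc
print(sum(scores) / len(scores))       # 71.3375, reported as 71.34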

Example Usage

Here is the prompt format:

SYSTEM: You are Synthia. As a an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.
USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?
ASSISTANT:

Below is a code example showing how to use this model:

import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Synthia-70B"
output_file_path = "./Synthia-70B-conversations.jsonl"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return f"{answer}"


conversation = f"SYSTEM: As a an AI superintelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually."


while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"
    json_data = {"prompt": user_input, "answer": answer}

    ## Save your conversation
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")

Limitations & Biases:

While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.

Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.

Exercise caution and cross-check information when necessary. This is an uncensored model.


Citation:

Please kindly cite using the following BibTeX:

@misc{Synthia-70B,
  author = {Migel Tissera},
  title = {Synthia-70B: Synthetic Intelligent Agent},
  year = {2023},
  publisher = {GitHub, HuggingFace},
  journal = {GitHub repository, HuggingFace repository},
  howpublished = {\url{https://huggingface.co/migtissera/Synthia-70B}},
}
@misc{mukherjee2023orca,
      title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, 
      author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
      year={2023},
      eprint={2306.02707},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
@software{touvron2023llama,
  title={LLaMA: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}

Example Output

You: What is Earth's atmosphere like? What is Earth's magnetic field like? What is the relationship between the two?

Earth's atmosphere is a layer of gases that surrounds the planet, held in place by Earth's gravity. It is primarily composed of nitrogen (78.1%), oxygen (20.9%), argon (0.9%), and other trace gases, such as carbon dioxide and water vapor. The atmosphere plays a crucial role in regulating Earth's temperature and climate by trapping heat and absorbing harmful radiation from the sun. It also provides the air we breathe, protects us from meteorites and the sun's ultraviolet radiation, and allows for the development of life on the planet.

Earth's magnetic field is a phenomenon generated by the convection of molten iron in the planet's outer core, which creates an electric current. This current, in turn, produces a magnetic field that extends from the Earth's interior to space, protecting the planet from the solar wind and cosmic radiation. The magnetic field also plays a role in the formation of Earth's Van Allen radiation belts, which trap high-energy particles from the sun and cosmic rays.

The relationship between Earth's atmosphere and magnetic field is complex and multifaceted. The magnetic field influences the distribution of charged particles in the atmosphere, affecting auroras, radio communication, and the ionosphere's behavior. The atmosphere, in turn, plays a role in the distribution of the magnetic field's energy, as well as in the generation and loss of atmospheric ions. The interactions between the atmosphere and the magnetic field also influence the dynamics of Earth's climate and the long-term evolution of the planet's environment.

You: Elaborate the the complex relationship between the two

The complex relationship between Earth's atmosphere and magnetic field can be elaborated upon in several ways. Firstly, the magnetic field influences the distribution of charged particles in the atmosphere. For example, the magnetic field deflects the solar wind particles that would otherwise strip away the atmosphere. This deflection creates a bow shock, where the solar wind particles slow down and dissipate their energy, producing a shock wave that propagates downstream from the Earth. The magnetic field also guides charged particles from the solar wind into the inner magnetosphere, where they become trapped and produce the Van Allen radiation belts.

Secondly, the atmosphere plays a role in the distribution of the magnetic field's energy. The ionosphere, which is the upper layer of the atmosphere that is ionized by solar radiation, is affected by the magnetic field. The energy from the magnetic field is transferred to the ionosphere, where it drives the motion of the charged particles and affects the propagation of radio waves.

Thirdly, the interactions between the atmosphere and the magnetic field influence the dynamics of Earth's climate. The magnetic field affects the distribution of charged particles in the atmosphere, which in turn affects the formation of clouds and precipitation. The magnetic field also affects the distribution of heat in the atmosphere, which can influence the patterns of atmospheric circulation and the distribution of temperatures around the planet.

Finally, the interactions between the atmosphere and the magnetic field play a role in the long-term evolution of the planet's environment. The magnetic field protects the atmosphere from the erosive effects of the solar wind, which would otherwise strip away the atmosphere over time. The magnetic field also affects the distribution of heat in the atmosphere, which can influence the patterns of atmospheric circulation and the distribution of temperatures around the planet. Over time, these interactions can shape the evolution of the planet's climate and environment.

You: Please explain the bow shock wave, Van Allen radiation belts in detail

The bow shock wave is a phenomenon that occurs when the solar wind, a stream of charged particles emitted by the sun, interacts with Earth's magnetic field. The solar wind particles are moving at high speeds, and when they encounter the Earth's magnetic field, they slow down and dissipate their energy. This creates a shock wave that propagates downstream from the Earth. The bow shock wave is located about 15 Earth radii (approximately 93,000 miles) upstream from Earth, and it marks the boundary between the solar wind and Earth's magnetosphere.

The Van Allen radiation belts are zones of high-energy charged particles, primarily protons and electrons, that are trapped by Earth's magnetic field. The belts are divided into two main regions: the inner belt, which extends from about 1,000 to 6,000 kilometers above Earth's surface, and the outer belt, which extends from about 13,000 to 65,000 kilometers above the surface. The particles in the belts are trapped by Earth's magnetic field, and they can remain in the belts for long periods of time.

The particles in the Van Allen belts are generated by a variety of processes, including the impact of solar wind particles on the Earth's atmosphere and the decay of radioactive elements in the planet's crust. The belts are important for understanding the behavior of charged particles in Earth's magnetosphere and for studying the effects of space weather on human activities in space. They can also pose a hazard to spacecraft and satellites, as the high-energy particles in the belts can damage sensitive electronics and cause other problems.

You: Does the Van Allen belt present any danger to Geo stationary satellites? Where are these belts? Are they near any of the orbits?

Yes, the Van Allen radiation belts can present a danger to geostationary satellites and other spacecraft. The high-energy charged particles in the belts can damage sensitive electronics and cause other problems, including radiation damage and single-event upsets. Geostationary satellites are located at an altitude of approximately 35,786 kilometers above Earth's surface, which is just outside the inner belt but still within the outer belt.

The Van Allen radiation belts are located in Earth's magnetosphere, which is the region of space around the planet where the magnetic field is dominant. The inner belt is located between about 1,000 and 6,000 kilometers above the surface, while the outer belt is located between about 13,000 and 65,000 kilometers above the surface. The belts are not located in any specific orbit, but they are present throughout the magnetosphere, which covers a large portion of the space around the planet.

To mitigate the effects of the Van Allen belts on spacecraft, engineers use a variety of techniques, including shielding and designing electronics to be resistant to radiation. Additionally, some spacecraft are designed to operate in orbits that avoid the Van Allen belts, while others are designed to operate within the belts and withstand the effects of the high-energy particles.

You: So when we send spacecraft to the moon, say the ones carrying humans, how do we make sure that the solar wind charged particles will not affect the mission? Are there very specific remedies that we can take?

When sending spacecraft to the moon, there are several steps that can be taken to protect the spacecraft and its occupants from the effects of solar wind charged particles. One approach is to design the spacecraft with a strong magnetic field, which can help deflect the charged particles away from the spacecraft. Another approach is to use radiation shielding, which can help protect the electronics and any human occupants from the effects of radiation.

In addition to these measures, space agencies can also use data from satellites and other spacecraft to monitor the level of solar wind activity and predict when there may be an increased risk of charged particle events. This information can be used to adjust the timing of missions and other activities to minimize exposure to high-energy particles.

It's also important to note that the moon is outside the magnetosphere and therefore not directly affected by the Van Allen belts. However, the moon does receive a significant amount of radiation from the sun and other sources, so measures to protect the spacecraft and its occupants from radiation exposure are still necessary.

Overall, protecting spacecraft and humans from the effects of solar wind charged particles requires a combination of engineering and operational measures, including strong magnetic fields, radiation shielding, and careful mission planning.

You: Does spacecraft actually create strong magnetic fields? Which Apollo spacecrafts did this? Or is this hypothetical?

Yes, spacecraft can create strong magnetic fields, although this is not typically a primary design consideration for most spacecraft. Some spacecraft, such as those designed for plasma physics research, may include a magnetic field generator to create controlled conditions for experiments. However, for most spacecraft, the focus is on using existing magnetic fields, such as those generated by the Earth's magnetic field, to protect the spacecraft and its occupants.

The Apollo spacecraft did not create strong magnetic fields, as their primary focus was on propulsion, navigation, and life support systems for the astronauts. However, some of the Apollo missions did include scientific experiments to study the effects of solar wind and other charged particles on the spacecraft and its occupants.

The use of magnetic fields to protect spacecraft and humans from the effects of radiation is an ongoing area of research and development, and there are a number of proposals and concepts for how this might be achieved. For example, some researchers have proposed using superconducting coils to create a magnetic field around the spacecraft, which could help deflect charged particles and other forms of radiation. However, these proposals are still in the early stages of development and are not yet ready for implementation on spacecraft.