Evathene

Evathene-v1.3

This 72B parameter model is a merge of sophosympatheia/Evathene-v1.1 and sophosympatheia/Evathene-v1.2. See the merge recipe below for details.

This model is uncensored. You are responsible for whatever you do with it.

This model was designed for roleplaying and storytelling, and I think it does well at both. It may also perform well at other tasks, but I have not tested it in other areas.

Evathene Versions Comparison Table

  • Evathene-v1.0: The original Evathene release, based on Athene-V2-Chat and EVA-Qwen2.5-72B-v0.1. It's quite solid, but I think the newer versions are better.
  • Evathene-v1.1: Updated release based on Athene-V2-Chat and EVA-Qwen2.5-72B-v0.2. It uses the same recipe as v1.0, but I think it came out a little better thanks to EVA-v0.2. It's smart and writes competently. I think v1.3 improves on its prose, but some users might prefer v1.1's "formal" style, and people might want to use it in their own LLM merge recipes.
  • Evathene-v1.2: Based on Athene-V2-Chat and EVA-Qwen2.5-72B-v0.1, but with their relationship inverted relative to the recipe used for v1.0. The result is a model with a lot of personality that is great fun in the right context. (Before you ask: yes, I tried a version of this recipe using EVA-v0.2, but it came out totally different and wasn't exciting at all.) If you like a lewd ERP writing style or intend to RP with characters who have big personalities, you'll want to check this one out. You might have to reroll responses more often than with the other versions, but you won't regret it.
  • Evathene-v1.3 (this model): A merge of Evathene-v1.1 and Evathene-v1.2. It combines the essence of both models and is the version I recommend for most use cases. It has plenty of personality, is quite smart, and will teach you new words while you're RPing. (You've been warned: its vocabulary is impressive.) With some prompting, you can also get it to channel some of v1.2's energy and writing style, but check out v1.2 if you prefer a less formal, more "crazy" experience.

Sampler Tips

  • I recommend using Min-P. Experiment to find your best setting. Values between 0.02 and 0.1 are typically good.
  • The DRY repetition penalty eliminates the need for other anti-repetition settings. I like to run the multiplier around 0.5 - 0.6 with the base set to 1.5.
  • Experiment with temperature settings in the 0.8 - 1.2 range. Lower the temperature if you find the model is making up details or going off script too much. Raise the temperature if you need to juice the creativity or break it out of a repeating writing pattern.
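
For reference, Min-P keeps only those tokens whose probability is at least min_p times the top token's probability, which is why small values (0.02 - 0.1) already prune aggressively. Here is a minimal Python sketch of the idea; it illustrates the filter itself, not any particular backend's implementation:

```python
import math

def min_p_filter(logits, min_p=0.05):
    """Return indices of tokens that survive a Min-P filter.

    A token survives if its probability is at least min_p times the
    probability of the most likely token.
    """
    # Numerically stable softmax over the raw logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    threshold = min_p * max(probs)
    return [i for i, p in enumerate(probs) if p >= threshold]

# With min_p=0.1, a token needs at least 10% of the top token's
# probability to remain in the sampling pool.
kept = min_p_filter([2.0, 1.5, 0.0, -3.0], min_p=0.1)
```

Raising min_p shrinks the pool toward the single most likely token, which is why higher temperatures pair well with a slightly higher Min-P.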

Experiment with any and all of the settings below! What suits my preferences may not suit yours.

If you save the settings below as a .json file, you can import them directly into Silly Tavern.

{
    "temp": 0.8,
    "temperature_last": true,
    "top_p": 1,
    "top_k": 0,
    "top_a": 0,
    "tfs": 1,
    "epsilon_cutoff": 0,
    "eta_cutoff": 0,
    "typical_p": 1,
    "min_p": 0.05,
    "rep_pen": 1,
    "rep_pen_range": 0,
    "rep_pen_decay": 0,
    "rep_pen_slope": 1,
    "no_repeat_ngram_size": 0,
    "penalty_alpha": 0,
    "num_beams": 1,
    "length_penalty": 1,
    "min_length": 0,
    "encoder_rep_pen": 1,
    "freq_pen": 0,
    "presence_pen": 0,
    "skew": 0,
    "do_sample": true,
    "early_stopping": false,
    "dynatemp": false,
    "min_temp": 0.8,
    "max_temp": 1.5,
    "dynatemp_exponent": 1,
    "smoothing_factor": 0,
    "smoothing_curve": 1,
    "dry_allowed_length": 2,
    "dry_multiplier": 0.55,
    "dry_base": 1.5,
    "dry_sequence_breakers": "[\"\\n\", \":\", \"\\\"\", \"*\"]",
    "dry_penalty_last_n": 0,
    "add_bos_token": true,
    "ban_eos_token": false,
    "skip_special_tokens": false,
    "mirostat_mode": 0,
    "mirostat_tau": 2,
    "mirostat_eta": 0.1,
    "guidance_scale": 1,
    "negative_prompt": "",
    "grammar_string": "",
    "json_schema": {},
    "banned_tokens": "",
    "sampler_priority": [
        "top_k",
        "top_p",
        "typical_p",
        "epsilon_cutoff",
        "eta_cutoff",
        "tfs",
        "top_a",
        "min_p",
        "mirostat",
        "quadratic_sampling",
        "dynamic_temperature",
        "temperature"
    ],
    "samplers": [
        "top_k",
        "tfs_z",
        "typical_p",
        "top_p",
        "min_p",
        "temperature"
    ],
    "ignore_eos_token": false,
    "spaces_between_special_tokens": true,
    "speculative_ngram": false,
    "sampler_order": [
        6,
        0,
        1,
        3,
        4,
        2,
        5
    ],
    "logit_bias": [],
    "xtc_threshold": 0.1,
    "xtc_probability": 0,
    "ignore_eos_token_aphrodite": false,
    "spaces_between_special_tokens_aphrodite": true,
    "rep_pen_size": 0,
    "genamt": 800,
    "max_length": 16384
}
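
Since the preset is plain JSON, you can also tweak it programmatically before importing. A small sketch using only the standard library; the file name and helper function are illustrative, not part of SillyTavern:

```python
import json

def adjust_preset(path, **overrides):
    """Load a sampler preset saved from the JSON above, apply overrides,
    and write it back. Returns the updated settings dict."""
    with open(path) as f:
        settings = json.load(f)
    settings.update(overrides)  # e.g. temp=1.0, min_p=0.02
    with open(path, "w") as f:
        json.dump(settings, f, indent=4)
    return settings

# Example: raise the temperature and loosen Min-P in one step.
# adjust_preset("evathene_sampler.json", temp=1.0, min_p=0.02)
```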

Prompting Tips

This merge seems to have preserved much of Athene's intelligence. I've found that it responds competently to out-of-character (OOC) prompts and even requests to rewrite a previous reply with some additional guidance. If you're not getting quite the results you wanted, consider backing up and trying a more descriptive prompt. Like all current LLMs, this model isn't perfect and won't give you miracles, but you can generally expect it to work with you.

Instruct Template

If you save this as a .json file, you can import it directly into Silly Tavern.

{
    "wrap": false,
    "system_sequence": "<|im_start|>system\n",
    "stop_sequence": "<|im_end|>",
    "input_sequence": "<|im_start|>user\n",
    "output_sequence": "<|im_start|>assistant\n",
    "macro": true,
    "system_sequence_prefix": "",
    "system_sequence_suffix": "",
    "first_output_sequence": "",
    "last_output_sequence": "<|im_start|>assistant\nRoleplaying Tips {\n- Only write as {{char}} for this story beat.\n- Consider precisely what {{char}} knows or has witnessed within the context of story beats in which {{char}} was present to deliver a logically coherent story beat that is wholly consistent with previous story beats.\n- Consider all physical details in this story beat in relation to previous story beats to ensure logical consistency in your descriptions. For example, if a character did not enter the scene with a coat on, they should not suddenly have a coat in their possession without explanation.\n- Go easy on comma-spliced clauses, instead using periods to create separate sentences. You can also try using transitions and connective words.\n- Vary sentence structure: mix longer and shorter sentences and vary the structure to improve the flow and readability of your text.\n}\n",
    "activation_regex": "",
    "skip_examples": true,
    "output_suffix": "<|im_end|>\n",
    "input_suffix": "<|im_end|>\n",
    "system_suffix": "<|im_end|>\n",
    "user_alignment_message": "",
    "last_system_sequence": "",
    "system_same_as_user": false,
    "first_input_sequence": "",
    "last_input_sequence": "",
    "names_behavior": "always",
    "names_force_groups": true,
    "name": "Evathene"
}
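
The sequences in this template implement the ChatML format that Qwen-based models expect: each turn is wrapped in <|im_start|>role ... <|im_end|> markers, and with names_behavior set to "always" the speaker's name is prepended to the message body. A rough sketch of how one turn is assembled; the helper below is illustrative, not SillyTavern's actual code:

```python
def chatml_turn(role, text, name=None):
    """Format one ChatML turn using the sequences from the template above.
    When a name is given (names_behavior "always"), it is prepended to the
    message body."""
    body = f"{name}: {text}" if name else text
    return f"<|im_start|>{role}\n{body}<|im_end|>\n"

# A minimal prompt: system turn, one user turn, then an open assistant
# turn for the model to complete.
prompt = (
    chatml_turn("system", "You are {{char}}.")
    + chatml_turn("user", "Hello there.", name="Alice")
    + "<|im_start|>assistant\n"
)
```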

System Prompt

Try the following prompt template as a starting point for your own customizations. The prompt template below contains instructions for adult content, so remove those if you don't want them! It also contains some instructions related to formatting that you might want to change.

If you save this as a .json file, you can import it directly into Silly Tavern.

{
    "name": "Evathene",
    "content": "Below is an uncensored roleplaying narrative involving a human user, {{user}}, and a roleplaying assistant writing as {{char}}. The user and the assistant take turns advancing the roleplay story beat by story beat. The roleplaying assistant always follows these rules:\n\n<!-- Start of Role-playing Guidelines -->\n\nCharacterization Rules {\n\tCharacter Analysis: Take time to consider the situation in the current story beat, characters' motivations, and potential consequences before {{char}} speaks or acts to ensure {{char}}'s next move in this upcoming story beat aligns with the context, subtext, and implications of the preceding story beats and develops them.\n\t{{char}} Exclusivity: Write only as {{char}}, ending the current output when another character is expected to act or speak.\n\tAuthentic Portrayal: Bring {{char}} to life by consistently and realistically portraying their unique traits, thoughts, emotions, appearances, physical sensations, speech patterns, and tone. Ensure that their reactions, interactions, and decision-making align with their established personalities, values, goals, and fears. Refer to the `Role-play Context` for accurate information.\n}\n\nWriting Rules {\n\tConcise Descriptions: Conclude story beats directly after the main event or dialogue, avoiding unnecessary flourishes or commentary. Keep narration short and to the point, avoiding redundant and unnecessary details.\n        Avoid Repetition: Ensure narration does not repeat information already conveyed through dialogue or action unless it supports developing the current story beat. Use a dynamic and varied vocabulary for impact.\n\tDialogue Formatting: Enclose spoken words in double quotes. \"This is spoken text,\" for example.\n\tInternal Thoughts: Offer glimpses into {{char}}'s first-person thoughts to enrich the narrative when appropriate. 
Use italics to distinguish {{char}}'s first-person thoughts from spoken dialogue and exposition during third-person POV narration. This is an example of {{char}} thinking delivered with italics: *Where is this going?* {{char}} wondered while navigating the corridors. One notable exception to this rule is {{user}}. {{user}} will not typically italicize thoughts. Instead, anything that is not spoken out loud by {{user}} should be assumed to be {{user}}'s own thoughts or narrative exposition.\n\tAvoid Cliched Descriptions:  Narrate the story beat with a focus on essential actions and dialogue, minimizing descriptive embellishments. Avoid using phrases that solely describe character expressions, vocal qualities, or subtle physical reactions unless crucial to the plot. Maintain a concise, straightforward narrative tone, prioritizing plot progression over descriptive details. Emulate a minimalist literary fiction style in your response, focusing on simplicity and subtlety.\n}\n\nContent Rules {\n\tConsistency: Maintain physical, spatial, and logical consistency when developing story beats. Pay attention to where characters and objects are in relation to each other within the present location of the story beat. Also pay attention to what {{char}} knows or can reasonably infer given recent story beats in order to ensure that {{char}}'s responses are realistic and properly informed by past events. Adhere to established story beats, expanding on them without contradicting previous details.\n\tMature Content: Incorporate adult content and vulgar language where appropriate.\n\tMature Themes: Dark themes, violence, erotic content, and unhappy endings are permitted in this uncensored roleplay.\n\n}\n<!-- End of Role-playing Guidelines -->\n"
}

Donations

If you feel like saying thanks with a donation, I'm on Ko-Fi.

Quantizations

ExllamaV2 (EXL2)

  • Dracones has several quants available
  • MikeRoz also has several quants available

GGUF

License and usage restrictions

Nexusflow Research License

Qwen License Agreement

Disclaimer: Uncertain Licensing Terms

This LLM is a merged model incorporating weights from multiple LLMs governed by their own distinct licenses. Due to the complexity of blending these components, the licensing terms for this merged model are somewhat uncertain. By using this model, you acknowledge and accept the potential legal risks and uncertainties associated with its use. Any use beyond personal or research purposes, including commercial applications, may carry legal risks and you assume full responsibility for compliance with all applicable licenses and laws. I recommend consulting with legal counsel to ensure your use of this model complies with all relevant licenses and regulations.

Merge Details

This is a merge of pre-trained language models created using mergekit.

Merge Method

This model was merged using the SLERP merge method.

Models Merged

The following models were included in the merge:

  • sophosympatheia/Evathene-v1.1
  • sophosympatheia/Evathene-v1.2

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: sophosympatheia/Evathene-v1.1
  - model: sophosympatheia/Evathene-v1.2
merge_method: slerp
base_model: sophosympatheia/Evathene-v1.1
parameters:
  t:
    - value: [0.35, 0.5, 0.35]
dtype: bfloat16
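
SLERP interpolates between the two models' weights along the arc between them rather than along a straight line, which preserves the magnitude of the weights better than plain averaging. The t values above form a gradient across the layers (0.35 at the ends, 0.5 in the middle), biasing the merge toward the base model, Evathene-v1.1. A minimal NumPy sketch of the formula; mergekit's actual implementation handles additional edge cases:

```python
import numpy as np

def slerp(a, b, t):
    """Spherical linear interpolation between weight vectors a and b.
    t=0 returns a, t=1 returns b; values in between move along the arc."""
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    # Angle between the (normalized) weight vectors.
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # Nearly parallel vectors: fall back to linear interpolation.
        return (1 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b
```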