PsyMedRP-v1-13B-p1:
[jondurbin/airoboros-l2-13b-3.0](0.85) x [ehartford/Samantha-1.11-13b](0.15)

PsyMedRP-v1-13B-p2:
[Xwin-LM/Xwin-LM-13B-V0.1](0.85) x [chaoyi-wu/MedLLaMA_13B](0.15)

PsyMedRP-v1-13B-p3:
[PsyMedRP-v1-13B-p1](0.55) x [PsyMedRP-v1-13B-p2](0.45)
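
Steps p1 through p3 are plain weighted merges: every parameter tensor of the result is a weighted average of the corresponding tensors from the two parents. The exact tooling used for these merges isn't stated on this card, so the sketch below (plain `transformers` + `torch`, full-precision blend, hypothetical output path) only illustrates the arithmetic, not the actual recipe.

```python
# Minimal sketch of a weighted linear merge, e.g. 0.85 * model A + 0.15 * model B.
# Assumes both parents share the Llama-2-13B architecture and identical tensor names.
import torch
from transformers import AutoModelForCausalLM

def linear_merge(model_a_path: str, model_b_path: str, out_path: str, weight_a: float) -> None:
    weight_b = 1.0 - weight_a
    model_a = AutoModelForCausalLM.from_pretrained(model_a_path, torch_dtype=torch.float16)
    model_b = AutoModelForCausalLM.from_pretrained(model_b_path, torch_dtype=torch.float16)

    state_a = model_a.state_dict()
    state_b = model_b.state_dict()

    # Blend every tensor in float32, then cast back to float16 for saving.
    merged = {
        name: (weight_a * state_a[name].float() + weight_b * state_b[name].float()).half()
        for name in state_a
    }
    model_a.load_state_dict(merged)
    model_a.save_pretrained(out_path)

# The p1 step with the weights listed above (output path is hypothetical):
# linear_merge("jondurbin/airoboros-l2-13b-3.0", "ehartford/Samantha-1.11-13b",
#              "./PsyMedRP-v1-13B-p1", weight_a=0.85)
```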

PsyMedRP-v1-13B-p4:
[The-Face-Of-Goonery/Huginn-13b-FP16 merged with PsyMedRP-v1-13B-p3 using a Gryphe-style gradient]
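
The p4 step is a gradient merge in the style Gryphe popularized: rather than one global ratio, the blend weight changes from layer to layer. The per-layer schedule actually used here isn't given, so the linear ramp below is purely an assumed example of how such a gradient can be applied to two state dicts.

```python
# Illustrative gradient merge: the share of model B grows linearly across the
# 40 transformer layers of a Llama-2-13B model. The real schedule used for p4
# is not stated on this card.
import re
import torch

def gradient_merge(state_a: dict, state_b: dict, num_layers: int = 40) -> dict:
    merged = {}
    for name, tensor_a in state_a.items():
        match = re.search(r"layers\.(\d+)\.", name)
        if match:
            layer = int(match.group(1))
            w_b = layer / max(num_layers - 1, 1)   # 0.0 at the first layer, 1.0 at the last
        else:
            w_b = 0.5                              # embeddings, final norm, lm_head: even split
        merged[name] = ((1.0 - w_b) * tensor_a.float() + w_b * state_b[name].float()).half()
    return merged
```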

PsyMedRP-v1-13B:
Apply Undi95/LimaRP-v3-120-Days at 0.3 weight to PsyMedRP-v1-13B-p4
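
Applying a LoRA at 0.3 weight just means scaling the low-rank update before folding it into the base weights: W' = W + 0.3 * (alpha / r) * (B @ A). The helper below is a hand-rolled sketch of that arithmetic; the tensor shapes follow common PEFT conventions and the function name is hypothetical, not the tooling actually used for this merge.

```python
# Sketch: fold a LoRA delta into a base weight at reduced strength.
import torch

def apply_lora_scaled(base_weight: torch.Tensor,
                      lora_a: torch.Tensor,      # shape (r, in_features)
                      lora_b: torch.Tensor,      # shape (out_features, r)
                      alpha: float,
                      scale: float = 0.3) -> torch.Tensor:
    r = lora_a.shape[0]
    delta = (lora_b.float() @ lora_a.float()) * (alpha / r)
    return (base_weight.float() + scale * delta).to(base_weight.dtype)
```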

Still in testing. A 20B version will follow!

If you want to support me, you can here.
