---
license: mit
language:
- ja
metrics:
- accuracy
pipeline_tag: audio-to-audio
tags:
- rvc
---
# RVC Genshin Impact Japanese Voice Model
![model-cover.png](https://huggingface.co/ArkanDash/rvc-genshin-impact/resolve/main/model-cover.png)
# About Retrieval-based Voice Conversion (RVC)
Learn more about Retrieval-based Voice Conversion at the link below:
[RVC WebUI](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)
# How to use?
Download the pre-zipped model and extract it into your RVC project.
Model test: [Google Colab](https://colab.research.google.com/drive/110kiMZTdP6Ri1lY9-NbQf17GVPPhHyeT?usp=sharing) / [RVC Models New](https://huggingface.co/spaces/ArkanDash/rvc-models-new) (the same models, hosted on Hugging Face Spaces)
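A minimal sketch of the install step, assuming a local RVC WebUI checkout that keeps `.pth` checkpoints in `weights/` and `.index` files in `logs/`; the zip name and directory paths below are placeholders, so adjust them to your setup:

```shell
# Placeholders -- replace with the zip you downloaded and your RVC checkout.
MODEL_ZIP="${MODEL_ZIP:-nahida-jp.zip}"
RVC_DIR="${RVC_DIR:-./Retrieval-based-Voice-Conversion-WebUI}"
TMP_DIR="$(mktemp -d)"

# Make sure the target folders exist (assumed RVC WebUI layout).
mkdir -p "$RVC_DIR/weights" "$RVC_DIR/logs"

if [ -f "$MODEL_ZIP" ]; then
    unzip -o "$MODEL_ZIP" -d "$TMP_DIR"
    # Move the checkpoint into weights/ and the retrieval index into logs/.
    find "$TMP_DIR" -name '*.pth'   -exec mv {} "$RVC_DIR/weights/" \;
    find "$TMP_DIR" -name '*.index' -exec mv {} "$RVC_DIR/logs/" \;
else
    echo "Model zip not found: $MODEL_ZIP -- download it from this repo first"
fi
```

After restarting the WebUI (or refreshing the model list), the character should appear in the inference model dropdown.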
## INFO
Model created by ArkanDash.
The voices used in these models belong to Hoyoverse.
The audio used to train these models was ripped from the game (version 3.7).
Total models: 37
- V1 models: 19
- V2 models: 18
Duplicates (available in both versions):
- Zhongli (v1 & v2)
- Nahida (v1 & v2)
Plans:
- Noelle
- Qiqi
Replaced:
- The Raiden Shogun model has been retrained on a newer dataset because the old model's voice quality was poor; the old model has been deleted.
Have a request?
I accept Genshin character requests.
HSR models are coming soon...
### V1 Model
Trained on the original RVC.
Pitch extraction uses Harvest.
These models were trained for 100 epochs with a batch size of 10 at a 40k sample rate (some models used a 48k sample rate).
Each V1 model was trained on roughly 30 minutes of character voice.
Some models were trained for more epochs to compensate for the low duration of the character's voice:
- Klee: 150 epochs
- Fischl: 150 epochs
### (New) V2 Model
Trained on the Mangio fork of RVC.
Pitch extraction uses Crepe.
These models were trained for 100 epochs with a batch size of 8 at a 48k sample rate (some models used a 40k sample rate).
Each V2 model was trained on roughly 60 minutes of character voice.
Other requests:
- Greater Lord Rukkhadevata: 750 epochs, batch size 16, 48k sample rate (10-minute dataset)
- Charlotte: 400 epochs, batch size 16, 48k sample rate (18-minute dataset)
Note:
- For Faruzan, the index file somehow came out smaller, and training logged:
`Converged (lack of improvement in inertia) at step 1152/48215`
I might retrain Faruzan soon.
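That log line matches scikit-learn's `MiniBatchKMeans` verbose output, which some RVC index-training paths use to cluster the extracted features before building the retrieval index (an assumption about this particular fork). A short dataset yields few feature frames, so inertia stops improving after a handful of mini-batch steps and clustering halts early; the same small feature count is also why the resulting index file is smaller. A minimal sketch reproducing the behavior (the feature array below is random stand-in data, not real HuBERT features):

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

# Stand-in for extracted voice features: 500 frames of 256-dim vectors.
# A real long dataset would have tens of thousands of frames.
feats = np.random.rand(500, 256).astype("float32")

# With few samples, inertia plateaus quickly; verbose=True can print
# "Converged (lack of improvement in inertia) at step ..." as in the log above.
km = MiniBatchKMeans(n_clusters=64, max_iter=10000, n_init=3, verbose=True)
km.fit(feats)
```

Early convergence here is expected behavior on a small dataset, not necessarily a training failure, though retraining with more audio would give the index more material to work with.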