---
license: gpl-3.0
datasets:
- NobodyExistsOnTheInternet/ToxicQAFinal
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Orion-zhen/dpo-toxic-zh
- unalignment/toxic-dpo-v0.2
- Crystalcareai/Intel-DPO-Pairs-Norefusals
language:
- zh
- en
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
tags:
- qwen
- uncensored
---

# Qwen2.5-7B-Instruct-Uncensored

This model is an uncensored fine-tune of Qwen2.5-7B-Instruct. Even so, the model still fails to generate detailed descriptions of certain extreme scenarios, which may be due to the removal of related data during Qwen's pretraining stage.

## Training details

I used SFT followed by DPO to remove refusals while trying to preserve the original model's capabilities.

- SFT:
  - NobodyExistsOnTheInternet/ToxicQAFinal
  - anthracite-org/kalo-opus-instruct-22k-no-refusal
- DPO:
  - Orion-zhen/dpo-toxic-zh
  - unalignment/toxic-dpo-v0.2
  - Crystalcareai/Intel-DPO-Pairs-Norefusals
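
The DPO stage optimizes a preference loss over chosen/rejected response pairs. Below is a minimal sketch of that loss for a single pair in plain Python; the function name, arguments, and example values are illustrative only and are not taken from the actual training code:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair.

    Each argument is the summed log-probability of a full response under
    the trainable policy or the frozen reference model. (Illustrative
    sketch, not the model's actual training code.)
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_ratio - rejected_ratio)
    # Loss is -log(sigmoid(margin)); computed via the numerically
    # stable softplus form to avoid overflow for large |margin|.
    if margin > 0:
        return math.log1p(math.exp(-margin))
    return -margin + math.log1p(math.exp(margin))

# If the policy favors the chosen response more than the reference
# does, the margin is positive and the loss shrinks toward zero.
loss = dpo_loss(-10.0, -14.0, -12.0, -13.0, beta=0.1)
```

Minimizing this loss pushes the policy to assign relatively higher likelihood to the preferred (here, non-refusing) response than the reference model does, which is how the DPO datasets above steer the model away from refusals.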