A Flexible Large Language Models Guardrail Development Methodology Applied to Off-Topic Prompt Detection
Abstract
Large Language Models are prone to off-topic misuse, where users may prompt these models to perform tasks beyond their intended scope. Current guardrails, which often rely on curated examples or custom classifiers, suffer from high false-positive rates, limited adaptability, and the impracticality of requiring real-world data that is not available in pre-production. In this paper, we introduce a flexible, data-free guardrail development methodology that addresses these challenges. By thoroughly defining the problem space qualitatively and passing this definition to an LLM to generate diverse prompts, we construct a synthetic dataset to benchmark and train off-topic guardrails that outperform heuristic approaches. Additionally, by framing the task as classifying whether the user prompt is relevant to the system prompt, our guardrails effectively generalize to other misuse categories, including jailbreak and harmful prompts. Lastly, we contribute to the field by open-sourcing both the synthetic dataset and the off-topic guardrail models, providing valuable resources for developing guardrails in pre-production environments and supporting future research and development in LLM safety.
Community
Large Language Models (LLMs) are powerful, but they're prone to off-topic misuse, where users push them beyond their intended scope. Think harmful prompts, jailbreaks, or requests the system was never designed to handle. So how do we build better guardrails?
Traditional guardrails rely on curated examples or classifiers. The problem?
⚠️ High false-positive rates
⚠️ Poor adaptability to new misuse types
⚠️ Require real-world data, which is often unavailable during pre-production
Our method skips the need for real-world misuse examples. Instead, we:
1️⃣ Define the problem space qualitatively
2️⃣ Use an LLM to generate synthetic misuse prompts (see the sketch after this list)
3️⃣ Train and test guardrails on this dataset
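To make step 2️⃣ concrete, here's a minimal sketch of how synthetic on-topic/off-topic user prompts could be generated for a given system prompt. The model name, generator instructions, and JSON schema are illustrative assumptions, not the exact setup from the paper.

```python
# Sketch of the synthetic data generation step.
# Assumptions: model name, instructions, and output schema are illustrative only.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

GENERATOR_INSTRUCTIONS = """\
You write training data for an off-topic guardrail.
Given a system prompt, return JSON with two lists:
  "on_topic": 5 user prompts a legitimate user of this system might send,
  "off_topic": 5 user prompts that fall outside the system's intended scope.
"""

def generate_pairs(system_prompt: str) -> dict:
    """Ask an LLM for synthetic on-/off-topic user prompts for one system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed generator model
        messages=[
            {"role": "system", "content": GENERATOR_INSTRUCTIONS},
            {"role": "user", "content": f"System prompt:\n{system_prompt}"},
        ],
        response_format={"type": "json_object"},
        temperature=1.0,  # higher temperature for more diverse prompts
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    pairs = generate_pairs("You are a customer-support assistant for a telecom provider.")
    print(pairs["on_topic"][0], "|", pairs["off_topic"][0])
```

Looping this over many system prompts (and misuse categories) is what builds up the synthetic benchmark and training set.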
We apply this to the off-topic prompt detection problem, and fine-tune simple bi- and cross-encoder classifiers that outperform heuristics based on cosine similarity or prompt engineering.
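For a rough sense of what the cross-encoder guardrail looks like at inference time, the sketch below scores a (system prompt, user prompt) pair and contrasts it with the cosine-similarity heuristic. The checkpoint names are placeholders, not the released models.

```python
# Relevance scoring sketch: fine-tuned cross-encoder vs. cosine-similarity heuristic.
# Checkpoint names are placeholders; swap in the released off-topic guardrail models.
from sentence_transformers import CrossEncoder, SentenceTransformer, util

system_prompt = "You are a customer-support assistant for a telecom provider."
user_prompt = "Write me a poem about the French Revolution."

# Cross-encoder: jointly encodes the pair and outputs a relevance score.
cross_encoder = CrossEncoder("your-org/off-topic-cross-encoder")  # hypothetical repo ID
relevance = cross_encoder.predict([(system_prompt, user_prompt)])[0]

# Heuristic baseline: bi-encoder embeddings + cosine similarity with a fixed threshold.
bi_encoder = SentenceTransformer("all-MiniLM-L6-v2")
emb = bi_encoder.encode([system_prompt, user_prompt], convert_to_tensor=True)
cosine = util.cos_sim(emb[0], emb[1]).item()

print(f"cross-encoder relevance: {relevance:.3f}, cosine baseline: {cosine:.3f}")
# Flag the user prompt as off-topic when the guardrail score falls below a chosen threshold.
```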
Additionally, framing the problem as prompt relevance allows these fine-tuned classifiers to generalise to other risk categories (e.g., jailbreak, toxic prompts).
Through this work, we also open-source our dataset (2M examples, 50M+ tokens) and models.
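If you want to build on the released artefacts, loading the dataset from the Hugging Face Hub would look roughly like this; the repo ID and column names below are placeholders, so check the paper for the actual identifiers.

```python
# Loading the open-sourced synthetic dataset from the Hugging Face Hub.
# The repo ID is a placeholder; see the paper for the actual dataset name.
from datasets import load_dataset

dataset = load_dataset("your-org/off-topic-synthetic-prompts")  # hypothetical repo ID
print(dataset)              # splits, features, and example counts
print(dataset["train"][0])  # e.g. {"system_prompt": ..., "user_prompt": ..., "label": ...}
```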
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- Fine-tuned Large Language Models (LLMs): Improved Prompt Injection Attacks Detection (2024)
- No more hard prompts: SoftSRV prompting for synthetic data generation (2024)
- DROJ: A Prompt-Driven Attack against Large Language Models (2024)
- SAG: Style-Aligned Article Generation via Model Collaboration (2024)
- Prompting and Fine-tuning Large Language Models for Automated Code Review Comment Generation (2024)
- In-Context Code-Text Learning for Bimodal Software Engineering (2024)
- A Recipe For Building a Compliant Real Estate Chatbot (2024)