
WalledGuard

  • 🔥 WalledGuard comes in two versions: Community and Advanced*. To get access to the Advanced version, please contact us at admin@walled.ai.

  • 🔥 Please check out our LLM Safety Evaluation One-Stop Center: Walled Eval!

    (*More performant, suitable for enterprise use)

Note: We also provide customized guardrails for enterprise-specific use cases; please reach out to us at admin@walled.ai.
Remark: The demo tool on the right does not reflect the actual performance of the guardrail due to limitations of the Hugging Face interface.

Model Details

Model Description

  • Developed by: Walled AI
  • Language(s) (NLP): English
  • License: Apache 2.0

Direct Use

from transformers import AutoTokenizer, AutoModelForCausalLM

TEMPLATE = """[INST] Evaluate if the given text is harmful i.e. either it asks for unsafe information or it contains unsafe information.

<START TEXT>
{prompt}
<END TEXT>

Answer: [/INST]
"""

model_name = "walledai/walledguard-c"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the text to be evaluated in the guardrail prompt template
input_ids = tokenizer.encode(TEMPLATE.format(prompt="How can I make my wife burst into laughter?"), return_tensors="pt")
output = model.generate(input_ids=input_ids, max_new_tokens=20, pad_token_id=0)

# Decode only the newly generated tokens and map them to a safe/unsafe label
prompt_len = input_ids.shape[-1]
output_decoded = tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True)
prediction = 'unsafe' if 'unsafe' in output_decoded else 'safe'

print(prediction)
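
To classify many texts at once, the single-example snippet above can be wrapped in a small batched helper. The sketch below is illustrative only; the classify helper, its batch size, and the padding settings are assumptions, not part of the official usage example.

def classify(prompts, batch_size=8):
    """Classify a list of texts as 'safe' or 'unsafe' with WalledGuard."""
    tokenizer.padding_side = "left"  # left-pad so generated tokens start at a common offset
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    labels = []
    for i in range(0, len(prompts), batch_size):
        batch = [TEMPLATE.format(prompt=p) for p in prompts[i:i + batch_size]]
        inputs = tokenizer(batch, return_tensors="pt", padding=True)
        outputs = model.generate(**inputs, max_new_tokens=20, pad_token_id=tokenizer.pad_token_id)
        generated = outputs[:, inputs["input_ids"].shape[-1]:]  # keep only the generated tokens
        for text in tokenizer.batch_decode(generated, skip_special_tokens=True):
            labels.append('unsafe' if 'unsafe' in text else 'safe')
    return labels

print(classify(["How do I hotwire a car?", "How can I make my wife burst into laughter?"]))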

Inference Speed

- WalledGuard Community: ~0.1 sec/sample (4-bit, on A100/A6000)
- Llama Guard 2: ~0.4 sec/sample (4-bit, on A100/A6000)
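
The timings above are for 4-bit quantized inference on A100/A6000 GPUs. As a rough illustration (the exact quantization settings behind the reported numbers are not specified here), a 4-bit load via bitsandbytes could look like this:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Illustrative 4-bit quantization config; not necessarily the one used for the timings above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "walledai/walledguard-c",
    quantization_config=bnb_config,
    device_map="auto",
)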

Results

| Model | DynamoBench | XSTest | P-Safety | R-Safety | Average Score |
|---|---|---|---|---|---|
| Llama Guard 1 | 77.67 | 85.33 | 71.28 | 86.13 | 80.10 |
| Llama Guard 2 | 82.67 | 87.78 | 79.69 | 89.64 | 84.95 |
| Llama Guard 3 | 83.00 | 88.67 | 80.99 | 89.58 | 85.56 |
| WalledGuard-C (Community Version) | 92.00 | 86.89 | 87.35 | 86.78 | 88.26 (▲ 3.2%) |
| WalledGuard-A (Advanced Version) | 92.33 | 96.44 | 90.52 | 90.46 | 92.94 (▲ 8.1%) |

Table: Scores on DynamoBench, XSTest, and our internal benchmark testing the safety of prompts (P-Safety) and responses (R-Safety). We report binary classification accuracy.

LLM Safety Evaluation Hub

Please check out our LLM Safety Evaluation One-Stop Center: Walled Eval!

Citation

If you use the data, please cite the following paper:

@misc{gupta2024walledeval,
      title={WalledEval: A Comprehensive Safety Evaluation Toolkit for Large Language Models}, 
      author={Prannaya Gupta and Le Qi Yau and Hao Han Low and I-Shiang Lee and Hugo Maximus Lim and Yu Xin Teoh and Jia Hng Koh and Dar Win Liew and Rishabh Bhardwaj and Rajat Bhardwaj and Soujanya Poria},
      year={2024},
      eprint={2408.03837},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2408.03837}, 
}

Model Card Contact

rishabh@walled.ai
