```toml
[guardrails.input.llama_guard]
api_url = "http://localhost:11434/v1/chat/completions"
model = "llama-guard-3"
enabled_categories = ["S1", "S2", "S3", "S4", "S10", "S11"]  # Focus on critical categories
```
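For reference, the six codes enabled above map to these Llama Guard 3 hazard categories (names taken from the MLCommons hazard taxonomy the model is trained on; shown here as a plain lookup table):

```python
# Llama Guard 3 hazard categories selected by `enabled_categories` above.
# Category names follow the MLCommons hazard taxonomy.
ENABLED_CATEGORIES = {
    "S1": "Violent Crimes",
    "S2": "Non-Violent Crimes",
    "S3": "Sex-Related Crimes",
    "S4": "Child Sexual Exploitation",
    "S10": "Hate",
    "S11": "Suicide & Self-Harm",
}
```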
Prerequisites
Install the Llama Guard model:

```shell
ollama pull llama-guard-3
```
Usage
```shell
fortified-llm-client -c config.toml --user-text "How do I hack into a system?"
# Result: ValidationError (S2: Non-Violent Crimes detected)
```
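Under the hood, a validation like the one above amounts to posting the user text to the configured chat-completions endpoint and reading the model's verdict. Llama Guard replies with `safe`, or `unsafe` followed by the violated category codes on the next line. The sketch below illustrates this flow with hypothetical helper names (`classify`, `parse_verdict`); it is an assumption about the client's internals, not its actual implementation:

```python
import json
import urllib.request

def parse_verdict(text: str):
    """Parse Llama Guard output: 'safe', or 'unsafe\\n<comma-separated codes>'."""
    lines = text.strip().splitlines()
    if lines and lines[0].strip().lower() == "safe":
        return True, []
    codes = lines[1].split(",") if len(lines) > 1 else []
    return False, [c.strip() for c in codes]

def classify(user_text: str,
             api_url: str = "http://localhost:11434/v1/chat/completions",
             model: str = "llama-guard-3"):
    """Send the user message to the Llama Guard endpoint and parse its verdict."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
    }).encode()
    req = urllib.request.Request(
        api_url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The verdict is plain text in the assistant message.
    return parse_verdict(body["choices"][0]["message"]["content"])
```

For the prompt in the usage example, the model would return `unsafe` with `S2`, which the client surfaces as the `ValidationError` shown above.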
Performance
Latency: 1-3 seconds per validation
Cost: Free with Ollama, or API cost with hosted models
Accuracy: High (trained specifically for safety classification)
Best Practices
Use in hybrid mode with fast pattern checks first, then Llama Guard (sequential)
Select relevant categories - not every application needs all 13
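The hybrid-mode recommendation above can be sketched as a two-stage pipeline: cheap regex patterns reject obvious violations immediately, and only text that passes them pays the 1-3 second Llama Guard call. The pattern list and function names here are hypothetical illustrations, not part of the actual client:

```python
import re

# Hypothetical fast-path patterns; a real deployment would tune these.
BLOCK_PATTERNS = [
    re.compile(r"\bhack\s+into\b", re.IGNORECASE),
    re.compile(r"\bcredit\s+card\s+numbers?\b", re.IGNORECASE),
]

def pattern_check(text: str) -> bool:
    """Return True if the fast pattern stage flags the text."""
    return any(p.search(text) for p in BLOCK_PATTERNS)

def validate(text: str, llm_check) -> bool:
    """Sequential hybrid validation: patterns first, then the LLM guardrail.

    `llm_check` is a callable returning True when the text is unsafe
    (e.g. a wrapper around the Llama Guard endpoint). Returns True if
    the text is allowed, False if it is blocked.
    """
    if pattern_check(text):   # fast path: blocked without an LLM call
        return False
    return not llm_check(text)
```

Running the patterns first means most malicious inputs never reach the model, keeping average latency well below the per-call cost listed under Performance.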