# Fortified LLM Client
A Rust library and CLI tool for interacting with Large Language Model (LLM) providers, fortified by multi-layered security guardrails, PDF extraction, and multi-provider support.
> **Active Development:** This project is currently under active development. The library API may change between versions. Not recommended for production use without thorough testing.
## Key Features
### Multi-Provider Support
- OpenAI-compatible APIs
- Ollama local models
- Automatic provider detection from API URL
- Unified interface via the `LlmProvider` trait
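The unified interface can be pictured as a single trait that each backend implements, with the provider inferred from the API URL. This is a minimal sketch: the trait name `LlmProvider` comes from this README, but the method names, the `CompletionRequest` type, and the detection heuristics below are illustrative assumptions, not the crate's actual API.

```rust
/// Illustrative request type; fields are assumptions for this sketch.
pub struct CompletionRequest {
    pub prompt: String,
    pub max_tokens: u32,
}

/// Unified interface implemented by each backend (OpenAI-compatible, Ollama, ...).
pub trait LlmProvider {
    fn name(&self) -> &str;
    fn complete(&self, req: &CompletionRequest) -> Result<String, String>;
}

/// Naive provider detection from the API URL, as the README describes.
/// Real detection logic in the crate may differ; these heuristics are examples.
pub fn detect_provider(api_url: &str) -> &'static str {
    if api_url.contains(":11434") || api_url.contains("ollama") {
        "ollama" // Ollama's default local port
    } else {
        "openai-compatible"
    }
}
```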
### Security Guardrails
- Input validation (pattern matching, PII detection, prompt injection)
- Output validation with quality scoring
- LLM-based guardrails (Llama Guard, Llama Prompt Guard, GPT OSS Safeguard)
- Hybrid guardrails with configurable execution modes
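The pattern-matching layer of input validation can be sketched as a simple phrase check that runs before any request reaches the model. The patterns and function name below are illustrative assumptions; the library's actual rule set (and its PII and LLM-based layers) is more involved.

```rust
/// Illustrative input guardrail: flags common prompt-injection phrases.
/// The pattern list is an example, not the library's actual rules.
pub fn check_prompt_injection(input: &str) -> Result<(), String> {
    const PATTERNS: [&str; 3] = [
        "ignore previous instructions",
        "disregard the system prompt",
        "you are now",
    ];
    // Case-insensitive substring match against each known pattern.
    let lowered = input.to_lowercase();
    for p in PATTERNS {
        if lowered.contains(p) {
            return Err(format!("possible prompt injection: {p:?}"));
        }
    }
    Ok(())
}
```

In a hybrid setup, a cheap check like this would run first, with LLM-based guardrails (e.g. Llama Guard) invoked only on inputs that pass it, or in parallel, depending on the configured execution mode.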
### PDF Processing
- Extract text from PDFs using Docling CLI
- File size validation for resource protection
- Direct integration into LLM prompts
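The file-size validation step can be sketched as a metadata check performed before the PDF is handed to the Docling CLI. The size limit and error handling here are assumptions for illustration, not the crate's actual configuration.

```rust
use std::fs;
use std::path::Path;

/// Illustrative cap; the library's actual limit may be configurable.
pub const MAX_PDF_BYTES: u64 = 50 * 1024 * 1024; // 50 MiB

/// Reject oversized (or unreadable) PDFs before invoking the extractor,
/// protecting the process from resource exhaustion.
pub fn validate_pdf_size(path: &Path) -> Result<u64, String> {
    let size = fs::metadata(path).map_err(|e| e.to_string())?.len();
    if size > MAX_PDF_BYTES {
        Err(format!("PDF is {size} bytes, exceeds limit of {MAX_PDF_BYTES}"))
    } else {
        Ok(size)
    }
}
```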
### Token Management
- Model-specific token estimation
- Context limit validation
- Per-request token budget control
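The three token-management features above compose naturally: estimate the prompt's token count, look up the model's context limit, and check that the prompt plus the per-request completion budget fits. This sketch uses a crude characters-per-token heuristic and placeholder context sizes; the library's model-specific estimators and limits will differ.

```rust
/// Rough token estimate (~4 characters per token for English text).
/// Purely illustrative; real estimators are model-specific.
pub fn estimate_tokens(text: &str) -> usize {
    text.chars().count().div_ceil(4)
}

/// Placeholder context limits keyed by model-name prefix (assumed values).
pub fn context_limit(model: &str) -> usize {
    match model {
        m if m.starts_with("gpt-4") => 128_000,
        m if m.starts_with("llama") => 8_192,
        _ => 4_096,
    }
}

/// Context-limit validation: does the prompt plus the requested
/// completion budget fit within the model's window?
pub fn fits_in_context(model: &str, prompt: &str, completion_budget: usize) -> bool {
    estimate_tokens(prompt) + completion_budget <= context_limit(model)
}
```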