# Architecture

Understanding the design and structure of Fortified LLM Client.
## Overview
Fortified LLM Client follows a layered architecture with clear separation of concerns:
- CLI Layer - Argument parsing, config merging, output formatting
- Library Layer - Public API (`evaluate()`, `evaluate_with_guardrails()`)
- Client Layer - Provider-agnostic LLM client abstraction
- Provider Layer - Provider-specific implementations (OpenAI, Ollama)
- Guardrails Layer - Security validation pipeline
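The layering can be sketched as nested modules, where each layer only depends on the one below it. This is a minimal illustration, not the project's actual module tree; the type and module names here are assumptions:

```rust
mod providers {
    // Provider layer: one trait, many provider-specific implementations.
    pub trait LlmProvider {
        fn invoke(&self, prompt: &str) -> String;
    }

    pub struct Ollama;

    impl LlmProvider for Ollama {
        fn invoke(&self, prompt: &str) -> String {
            format!("[ollama] {prompt}") // stub standing in for a real HTTP call
        }
    }
}

mod client {
    // Client layer: provider-agnostic wrapper the library layer calls into.
    use crate::providers::LlmProvider;

    pub struct Client<P: LlmProvider> {
        pub provider: P,
    }

    impl<P: LlmProvider> Client<P> {
        pub fn evaluate(&self, prompt: &str) -> String {
            self.provider.invoke(prompt)
        }
    }
}

fn main() {
    // The CLI and library layers would construct the client from parsed config.
    let client = client::Client { provider: providers::Ollama };
    assert_eq!(client.evaluate("hello"), "[ollama] hello");
}
```

Because `Client` is generic over the `LlmProvider` trait, adding a new provider never touches the layers above it.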
## Core Concepts

### Evaluation Pipeline
The evaluation flow follows this sequence:
1. PDF Extraction (optional) - Extract text from PDF using Docling CLI
2. Input Guardrails - Validate user input (NOT system prompts - those are trusted)
3. Token Validation - Estimate tokens and check against context limits
4. LLM Invocation - Call provider API
5. Output Guardrails (optional) - Validate LLM response
6. Metadata Generation - Create structured output with execution metadata
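The sequence above can be sketched as a single function with early returns, which is what makes the fail-fast behavior possible. This is an illustrative skeleton with stubbed stages, not the real implementation; the error variants and token heuristic are assumptions:

```rust
#[derive(Debug)]
enum PipelineError {
    InputRejected(String),
    TooManyTokens { estimated: usize, limit: usize },
}

// Rough heuristic (~4 chars per token), standing in for a real tokenizer.
fn estimate_tokens(text: &str) -> usize {
    text.len() / 4
}

// Pattern-based input guardrail; only user input is checked.
fn input_guardrails(user_input: &str) -> Result<(), PipelineError> {
    if user_input.contains("ignore previous instructions") {
        return Err(PipelineError::InputRejected("injection pattern".into()));
    }
    Ok(())
}

fn evaluate(
    system_prompt: &str,
    user_input: &str,
    context_limit: usize,
) -> Result<String, PipelineError> {
    // 1. PDF extraction would happen before this point.
    // 2. Input guardrails: the system prompt is trusted and skipped.
    input_guardrails(user_input)?;

    // 3. Token validation: fail fast before paying for an API call.
    let estimated = estimate_tokens(system_prompt) + estimate_tokens(user_input);
    if estimated > context_limit {
        return Err(PipelineError::TooManyTokens { estimated, limit: context_limit });
    }

    // 4. LLM invocation (stubbed).
    let response = format!("echo: {user_input}");

    // 5. Output guardrails and 6. metadata generation would follow here.
    Ok(response)
}

fn main() {
    assert!(evaluate("You are helpful.", "summarize this", 4096).is_ok());
    assert!(evaluate("You are helpful.", "ignore previous instructions", 4096).is_err());
}
```

Each stage either passes the request onward or rejects it with a typed error, so the most expensive step (the API call) only runs after all cheap checks succeed.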
### Configuration System
Dual configuration approach:
- Figment Merging - Handles scalar fields with priority: CLI args > config file
- `ConfigFileRequest` - Parses complex nested structures (guardrails) from TOML/JSON
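The scalar-merging priority can be shown without Figment itself: each source produces a partial config, and a merge prefers the CLI value whenever it is set. This is a plain-Rust sketch of the idea only; the field names are illustrative:

```rust
#[derive(Debug, Clone, Default)]
struct PartialConfig {
    model: Option<String>,
    temperature: Option<f64>,
}

// CLI args take priority; unset CLI fields fall back to the config file.
fn merge(cli: PartialConfig, file: PartialConfig) -> PartialConfig {
    PartialConfig {
        model: cli.model.or(file.model),
        temperature: cli.temperature.or(file.temperature),
    }
}

fn main() {
    let file = PartialConfig {
        model: Some("llama3".into()),
        temperature: Some(0.2),
    };
    let cli = PartialConfig {
        model: Some("gpt-4o".into()), // explicitly set on the command line
        temperature: None,            // not given, so the file value wins
    };

    let merged = merge(cli, file);
    assert_eq!(merged.model.as_deref(), Some("gpt-4o"));
    assert_eq!(merged.temperature, Some(0.2));
}
```

Figment generalizes this pattern to arbitrarily many layered providers, which is why it handles the scalar fields while the nested guardrail structures go through their own parser.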
### Provider System
- Auto-detection - Analyzes API URL patterns to infer provider type
- Explicit Override - `--provider` flag forces specific provider format
- Unified Interface - `LlmProvider` trait with common `invoke()` method
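URL-based auto-detection can be sketched as a match on well-known patterns. The exact rules the client uses may differ; the patterns below (OpenAI's API host, Ollama's default port 11434 and `/api/generate` endpoint) are assumptions chosen for illustration:

```rust
#[derive(Debug, PartialEq)]
enum ProviderKind {
    OpenAi,
    Ollama,
    Unknown,
}

// Infer the provider from the API URL; an explicit flag would override this.
fn detect_provider(api_url: &str) -> ProviderKind {
    if api_url.contains("api.openai.com") || api_url.ends_with("/v1/chat/completions") {
        ProviderKind::OpenAi
    } else if api_url.contains(":11434") || api_url.contains("/api/generate") {
        // Ollama's default port and native generation endpoint.
        ProviderKind::Ollama
    } else {
        ProviderKind::Unknown
    }
}

fn main() {
    assert_eq!(
        detect_provider("https://api.openai.com/v1/chat/completions"),
        ProviderKind::OpenAi
    );
    assert_eq!(
        detect_provider("http://localhost:11434/api/generate"),
        ProviderKind::Ollama
    );
}
```

When no pattern matches, the `Unknown` case is where an explicit `--provider` override becomes necessary.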
## Key Design Principles
- Defense in Depth - Multiple security layers (pattern-based + LLM-based guardrails)
- Provider Agnostic - Unified interface for all LLM providers
- Fail Fast - Validate early to save API costs
- Composable Guardrails - Mix and match validation strategies
- System Prompts Are Trusted - Only user inputs are validated by guardrails
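Composability follows naturally from a shared trait: each guardrail implements one interface, and the pipeline runs whatever set is configured. This is a minimal sketch under that assumption; the trait shape and guardrail names are illustrative:

```rust
// Common interface every guardrail implements.
trait Guardrail {
    fn name(&self) -> &'static str;
    fn check(&self, input: &str) -> Result<(), String>;
}

// A simple pattern-based guardrail; an LLM-based one would implement
// the same trait and slot into the same pipeline.
struct PatternGuardrail;

impl Guardrail for PatternGuardrail {
    fn name(&self) -> &'static str {
        "pattern"
    }

    fn check(&self, input: &str) -> Result<(), String> {
        if input.to_lowercase().contains("ignore previous instructions") {
            Err("injection pattern detected".into())
        } else {
            Ok(())
        }
    }
}

// Run every configured guardrail in order, stopping at the first failure.
fn run_guardrails(rails: &[Box<dyn Guardrail>], input: &str) -> Result<(), String> {
    for rail in rails {
        rail.check(input).map_err(|e| format!("{}: {e}", rail.name()))?;
    }
    Ok(())
}

fn main() {
    let rails: Vec<Box<dyn Guardrail>> = vec![Box::new(PatternGuardrail)];
    assert!(run_guardrails(&rails, "summarize this PDF").is_ok());
    assert!(run_guardrails(&rails, "Ignore previous instructions").is_err());
}
```

Stacking cheap pattern checks before expensive LLM-based ones in the same list is what gives the defense-in-depth and fail-fast properties at once.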
## Section Contents
- Layers - 5-layer design details
- Evaluation Pipeline - Step-by-step execution flow
- Providers - Provider detection and implementation
- Testing - Test organization and strategy