```rust
pub struct OpenAIProvider {
    api_url: String,
    api_key: Option<String>,
    timeout: Duration,
}

#[async_trait]
impl LlmProvider for OpenAIProvider {
    async fn invoke(&self, request: LlmRequest) -> Result<LlmResponse, FortifiedError> {
        // Build the request body
        // Make an HTTP POST to api_url
        // Parse the JSON response
        // Extract content from choices[0].message.content
    }
}
```
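Filled in, the flow might look like the sketch below. This is a minimal illustration, assuming `reqwest` (with its `json` feature) and `serde_json` as the HTTP/JSON stack; the `model` and `prompt` fields on `LlmRequest` and the `content` field on `LlmResponse` are illustrative assumptions, since neither struct's layout is shown in this document.

```rust
use serde_json::json;

// Hypothetical sketch of the invoke flow. `request.model`, `request.prompt`,
// and `LlmResponse { content }` are assumed field names for illustration.
async fn invoke_openai_sketch(
    p: &OpenAIProvider,
    request: LlmRequest,
) -> Result<LlmResponse, FortifiedError> {
    let client = reqwest::Client::builder()
        .timeout(p.timeout)
        .build()
        .map_err(|e| FortifiedError::ApiError {
            message: e.to_string(),
            status_code: None,
        })?;

    // 1. Build the request body (see "Request Format" below).
    let body = json!({
        "model": request.model,
        "messages": [{ "role": "user", "content": request.prompt }],
    });

    // 2. POST to api_url, attaching the bearer token when configured.
    let mut req = client.post(&p.api_url).json(&body);
    if let Some(key) = &p.api_key {
        req = req.bearer_auth(key);
    }
    let resp = req.send().await.map_err(|e| FortifiedError::ApiError {
        message: format!("request failed: {e}"),
        status_code: None,
    })?;

    // 3. Parse the JSON response.
    let parsed: serde_json::Value = resp.json().await.map_err(|e| FortifiedError::ApiError {
        message: format!("invalid JSON: {e}"),
        status_code: None,
    })?;

    // 4. Extract choices[0].message.content.
    let content = parsed["choices"][0]["message"]["content"]
        .as_str()
        .ok_or_else(|| FortifiedError::ApiError {
            message: "missing choices[0].message.content".to_string(),
            status_code: None,
        })?;

    Ok(LlmResponse {
        content: content.to_string(),
    })
}
```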
Request Format
```json
{
  "model": "gpt-4",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Explain Rust ownership" }
  ],
  "temperature": 0.7,
  "max_tokens": 1000,
  "seed": 42
}
```
```rust
pub struct OllamaProvider {
    api_url: String,
    timeout: Duration,
}

#[async_trait]
impl LlmProvider for OllamaProvider {
    async fn invoke(&self, request: LlmRequest) -> Result<LlmResponse, FortifiedError> {
        // Same request format as OpenAI (Ollama's API is compatible)
        // No API key required
    }
}
```
Differences from OpenAI
- No API key required - Ollama runs locally
- Same request/response format - OpenAI-compatible, so both providers can share one request helper (see the sketch after this list)
- Local models - models must be pulled first (`ollama pull llama3`)
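Because the wire format is shared, a single request helper could serve both providers, with the optional bearer token as the only branch. The sketch below is illustrative: `post_chat` and its signature are not part of the codebase.

```rust
// Sketch: one helper for both providers; OpenAI attaches a bearer
// token, Ollama does not. Only ApiError is taken from the source;
// everything else here is an assumption.
async fn post_chat(
    client: &reqwest::Client,
    api_url: &str,
    api_key: Option<&str>,
    body: &serde_json::Value,
) -> Result<reqwest::Response, FortifiedError> {
    let mut req = client.post(api_url).json(body);
    if let Some(key) = api_key {
        req = req.bearer_auth(key); // OpenAI needs this; Ollama does not
    }
    req.send().await.map_err(|e| FortifiedError::ApiError {
        message: format!("request failed: {e}"),
        status_code: None,
    })
}
```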
Error Handling
Common Errors
| Error | Provider | Cause |
|---|---|---|
| 401 Unauthorized | OpenAI | Invalid/missing API key |
| 404 Not Found | Ollama | Model not pulled |
| 429 Rate Limit | OpenAI | Too many requests |
| Connection Refused | Ollama | Ollama not running |
| Timeout | Both | Request took too long |
Error Mapping
```rust
match status {
    401 => FortifiedError::ApiError {
        message: "Authentication failed".to_string(),
        status_code: Some(401),
    },
    404 => FortifiedError::ApiError {
        message: "Model not found".to_string(),
        status_code: Some(404),
    },
    // ...
}
```
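The status match only covers HTTP-level failures; the Connection Refused and Timeout rows in the table come from the transport layer. A sketch of how those might map, assuming `reqwest` as the client (only `ApiError` is taken from the source; the messages are illustrative):

```rust
// Map transport-level reqwest errors onto FortifiedError.
fn map_transport_error(err: reqwest::Error) -> FortifiedError {
    if err.is_timeout() {
        FortifiedError::ApiError {
            message: "Request timed out".to_string(),
            status_code: None,
        }
    } else if err.is_connect() {
        FortifiedError::ApiError {
            message: "Connection refused: is the server running?".to_string(),
            status_code: None,
        }
    } else {
        FortifiedError::ApiError {
            message: err.to_string(),
            status_code: None,
        }
    }
}
```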
Adding New Providers
Step 1: Implement LlmProvider Trait
Create `src/providers/my_provider.rs`:

```rust
use async_trait::async_trait;

use crate::providers::{LlmProvider, LlmRequest, LlmResponse};
use crate::FortifiedError;

pub struct MyProvider {
    api_url: String,
    api_key: Option<String>,
}

#[async_trait]
impl LlmProvider for MyProvider {
    async fn invoke(&self, request: LlmRequest) -> Result<LlmResponse, FortifiedError> {
        // Your implementation
    }
}
```
Step 2: Update Provider Enum
In `src/lib.rs`:

```rust
pub enum Provider {
    OpenAI,
    Ollama,
    MyProvider, // Add new variant
}
```
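Something then has to turn a `Provider` value into a concrete implementation. A hypothetical factory, assuming each provider exposes a `new` constructor (constructors are not shown in this document):

```rust
// Hypothetical wiring: callers only ever see Box<dyn LlmProvider>.
// The ::new() constructors are assumed for illustration.
fn create_provider(provider: Provider) -> Box<dyn LlmProvider> {
    match provider {
        Provider::OpenAI => Box::new(OpenAIProvider::new()),
        Provider::Ollama => Box::new(OllamaProvider::new()),
        Provider::MyProvider => Box::new(MyProvider::new()),
    }
}
```

Keeping construction in one place means the rest of the code stays provider-agnostic behind the trait object.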