Quick Start

This tutorial walks you through your first LLM interactions using both the CLI and library API.

Prerequisites

  • Fortified LLM Client installed
  • Access to an LLM provider (Ollama or OpenAI)

CLI Quick Start

Example 1: Basic LLM Call

Using Ollama (local):

```shell
fortified-llm-client --api-url http://localhost:11434/v1/chat/completions \
  --model llama3 \
  --user-text "Explain Rust ownership in one sentence"
```

Using OpenAI:

```shell
export OPENAI_API_KEY=sk-...
fortified-llm-client --api-url https://api.openai.com/v1/chat/completions \
  --model gpt-4 \
  --api-key-name OPENAI_API_KEY \
  --user-text "Explain Rust ownership in one sentence"
```

Expected output:

```json
{
  "status": "success",
  "content": "Rust ownership ensures memory safety by enforcing that each value has a single owner, automatically deallocating when the owner goes out of scope.",
  "metadata": {
    "model": "llama3",
    "tokens_estimated": 45,
    "latency_ms": 1234,
    "timestamp": "2025-01-30T12:00:00Z"
  }
}
```

Example 2: With System Prompt

Add context to guide the LLM:

```shell
fortified-llm-client --api-url http://localhost:11434/v1/chat/completions \
  --model llama3 \
  --system-text "You are a Rust expert. Explain concepts clearly and concisely." \
  --user-text "What is the borrow checker?"
```

Example 3: Save Output to File

```shell
fortified-llm-client --api-url http://localhost:11434/v1/chat/completions \
  --model llama3 \
  --user-text "Write a haiku about Rust" \
  --output response.json
```

Check the file:

```shell
jq '.content' response.json
```

Example 4: Using a Config File

Create config.toml:

```toml
api_url = "http://localhost:11434/v1/chat/completions"
model = "llama3"
temperature = 0.7
max_tokens = 500
```

Run with config:

```shell
fortified-llm-client --config config.toml \
  --user-text "What are the benefits of Rust?"
```

CLI arguments override config file values. This allows you to reuse configs while customizing specific parameters.
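For example, you can reuse the config.toml above and swap only the model at the command line (the model name here is just an illustration; use one you have pulled):

```shell
fortified-llm-client --config config.toml \
  --model mistral \
  --user-text "What are the benefits of Rust?"
```

The request still uses the api_url, temperature, and max_tokens from config.toml; only --model is replaced.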

Library Quick Start

Example 1: Basic Usage

Create examples/basic.rs:

```rust
use fortified_llm_client::{evaluate, EvaluationConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = EvaluationConfig {
        api_url: "http://localhost:11434/v1/chat/completions".to_string(),
        model: "llama3".to_string(),
        user_prompt: "Explain Rust ownership".to_string(),
        ..Default::default()
    };

    let result = evaluate(config).await?;
    println!("Response: {}", result.content);
    println!("Tokens: {}", result.metadata.tokens_estimated);

    Ok(())
}
```

Run it:

```shell
cargo run --example basic
```
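If the build fails on missing dependencies, your Cargo.toml needs the client crate plus an async runtime. The version numbers below are assumptions for illustration; check the crate's published versions:

```toml
[dependencies]
fortified-llm-client = "0.1"
tokio = { version = "1", features = ["full"] }
```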

Example 2: With System Prompt and Temperature

```rust
use fortified_llm_client::{evaluate, EvaluationConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = EvaluationConfig {
        api_url: "http://localhost:11434/v1/chat/completions".to_string(),
        model: "llama3".to_string(),
        system_prompt: Some("You are a Rust expert.".to_string()),
        user_prompt: "What is the borrow checker?".to_string(),
        temperature: Some(0.7),
        max_tokens: Some(500),
        ..Default::default()
    };

    let result = evaluate(config).await?;
    println!("{}", result.content);

    Ok(())
}
```

Example 3: With Error Handling

```rust
use fortified_llm_client::{evaluate, EvaluationConfig, FortifiedError};

#[tokio::main]
async fn main() {
    let config = EvaluationConfig {
        api_url: "http://localhost:11434/v1/chat/completions".to_string(),
        model: "llama3".to_string(),
        user_prompt: "Hello!".to_string(),
        ..Default::default()
    };

    match evaluate(config).await {
        Ok(result) => {
            println!("Success: {}", result.content);
        }
        Err(FortifiedError::ApiError { message, .. }) => {
            eprintln!("API error: {}", message);
        }
        Err(FortifiedError::ValidationError { message, .. }) => {
            eprintln!("Validation error: {}", message);
        }
        Err(e) => {
            eprintln!("Error: {:?}", e);
        }
    }
}
```

Testing Your Setup

Verify Ollama is Running

```shell
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

If this fails, start Ollama: ollama serve
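You can also list the models available locally; this uses Ollama's native tags endpoint, which is separate from the OpenAI-compatible route used above:

```shell
curl http://localhost:11434/api/tags
```

If the model you want is missing from the list, pull it with ollama pull.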

Verify OpenAI API Key

```shell
echo $OPENAI_API_KEY
# Should print: sk-...
```

If empty, set it:

```shell
export OPENAI_API_KEY=sk-your-key-here
```

Common Issues

“Error: Model not found”

Ollama: Pull the model first:

```shell
ollama pull llama3
```

OpenAI: Check model name spelling (e.g., gpt-4, not gpt4).

“Connection refused”

Ollama not running: Start it with ollama serve

Wrong API URL: Verify the URL matches your provider’s endpoint.

“API key not found”

Set the environment variable:

```shell
export OPENAI_API_KEY=sk-...
```

Or use --api-key-name flag to specify a different variable name.
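For instance, to read the key from a differently named variable (MY_PROVIDER_KEY is only an illustration):

```shell
export MY_PROVIDER_KEY=sk-...
fortified-llm-client --api-url https://api.openai.com/v1/chat/completions \
  --model gpt-4 \
  --api-key-name MY_PROVIDER_KEY \
  --user-text "Hello"
```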

Next Steps

Now that you’ve completed the quick start:

Learn More