Rust Examples

This page demonstrates how to call the uncloseai. API endpoints from Rust using the community-maintained async-openai crate. All examples use the same OpenAI-compatible API interface, so switching between models and endpoints only requires changing the base URL and model name.

Available Endpoints:

- Hermes (general purpose): https://hermes.ai.unturf.com/v1
- Qwen 3 Coder (specialized for coding): https://qwen.ai.unturf.com/v1
- Text-to-Speech (TTS): https://speech.ai.unturf.com/v1

Rust Client Installation

Add the async-openai crate to your Cargo.toml, together with tokio for the async runtime and futures for the streaming examples:

[dependencies]
async-openai = "0.24"
tokio = { version = "1", features = ["full"] }
futures = "0.3"
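
With these dependencies in place, each example below is a complete program: paste it into src/main.rs of a binary crate created with cargo new and run it with cargo run.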

Non-Streaming Examples

Non-streaming mode waits for the complete response before returning. This is simpler to use but provides no intermediate feedback during generation.

Using Hermes (General Purpose)

use async_openai::{
    config::OpenAIConfig,
    types::{ChatCompletionRequestUserMessageArgs, CreateChatCompletionRequestArgs},
    Client,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = OpenAIConfig::new()
        .with_api_key("choose-any-value")
        .with_api_base("https://hermes.ai.unturf.com/v1");

    let client = Client::with_config(config);

    let request = CreateChatCompletionRequestArgs::default()
        .model("adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic")
        .messages([ChatCompletionRequestUserMessageArgs::default()
            .content("Give a Python Fizzbuzz solution in one line of code?")
            .build()?
            .into()])
        .temperature(0.5)
        .max_tokens(150u32)
        .build()?;

    let response = client.chat().create(request).await?;

    println!("{}", response.choices[0].message.content.as_ref().unwrap());

    Ok(())
}

Using Qwen 3 Coder (Specialized for Coding)

use async_openai::{
    config::OpenAIConfig,
    types::{ChatCompletionRequestUserMessageArgs, CreateChatCompletionRequestArgs},
    Client,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = OpenAIConfig::new()
        .with_api_key("choose-any-value")
        .with_api_base("https://qwen.ai.unturf.com/v1");

    let client = Client::with_config(config);

    let request = CreateChatCompletionRequestArgs::default()
        .model("hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M")
        .messages([ChatCompletionRequestUserMessageArgs::default()
            .content("Give a Python Fizzbuzz solution in one line of code?")
            .build()?
            .into()])
        .temperature(0.5)
        .max_tokens(150u32)
        .build()?;

    let response = client.chat().create(request).await?;

    println!("{}", response.choices[0].message.content.as_ref().unwrap());

    Ok(())
}
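
Because both endpoints expose the same OpenAI-compatible interface, the two programs above differ only in the base URL and the model name. The sketch below factors that difference into a small helper; the client_for function is purely illustrative and not part of async-openai, and the endpoint URLs are the ones used elsewhere on this page.

use async_openai::{config::OpenAIConfig, Client};

// Illustrative helper: build a client for any of the endpoints listed above.
// The API key can be any value; only the base URL matters.
fn client_for(base_url: &str) -> Client<OpenAIConfig> {
    let config = OpenAIConfig::new()
        .with_api_key("choose-any-value")
        .with_api_base(base_url);
    Client::with_config(config)
}

// Example usage:
// let hermes = client_for("https://hermes.ai.unturf.com/v1");
// let qwen = client_for("https://qwen.ai.unturf.com/v1");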

Streaming Examples

Streaming mode returns chunks of the response as they are generated, providing real-time feedback. This is ideal for interactive applications and long responses.

Using Hermes (General Purpose)

use async_openai::{
    config::OpenAIConfig,
    types::{ChatCompletionRequestUserMessageArgs, CreateChatCompletionRequestArgs},
    Client,
};
use futures::StreamExt;
use std::io::Write;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = OpenAIConfig::new()
        .with_api_key("choose-any-value")
        .with_api_base("https://hermes.ai.unturf.com/v1");

    let client = Client::with_config(config);

    let request = CreateChatCompletionRequestArgs::default()
        .model("adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic")
        .messages([ChatCompletionRequestUserMessageArgs::default()
            .content("Give a Python Fizzbuzz solution in one line of code?")
            .build()?
            .into()])
        .temperature(0.5)
        .max_tokens(150u32)
        .build()?;

    let mut stream = client.chat().create_stream(request).await?;

    while let Some(result) = stream.next().await {
        match result {
            Ok(response) => {
                for choice in response.choices {
                    if let Some(content) = &choice.delta.content {
                        print!("{}", content);
                    }
                }
            }
            Err(err) => eprintln!("Error: {}", err),
        }
        // Flush stdout so each token appears as soon as it arrives.
        std::io::stdout().flush()?;
    }

    Ok(())
}

Using Qwen 3 Coder (Specialized for Coding)

use async_openai::{
    config::OpenAIConfig,
    types::{ChatCompletionRequestUserMessageArgs, CreateChatCompletionRequestArgs},
    Client,
};
use futures::StreamExt;
use std::io::Write;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = OpenAIConfig::new()
        .with_api_key("choose-any-value")
        .with_api_base("https://qwen.ai.unturf.com/v1");

    let client = Client::with_config(config);

    let request = CreateChatCompletionRequestArgs::default()
        .model("hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M")
        .messages([ChatCompletionRequestUserMessageArgs::default()
            .content("Give a Python Fizzbuzz solution in one line of code?")
            .build()?
            .into()])
        .temperature(0.5)
        .max_tokens(150u32)
        .build()?;

    let mut stream = client.chat().create_stream(request).await?;

    while let Some(result) = stream.next().await {
        match result {
            Ok(response) => {
                for choice in response.choices {
                    if let Some(content) = &choice.delta.content {
                        print!("{}", content);
                    }
                }
            }
            Err(err) => eprintln!("Error: {}", err),
        }
        // Flush so the output streams in real time.
        std::io::stdout().flush()?;
    }

    Ok(())
}
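
If an application needs the full response text in addition to the live output, the streamed deltas can be accumulated while they are printed. The following is a minimal sketch built on the same streaming API as the examples above; the stream_and_collect helper is illustrative and not part of async-openai.

use async_openai::{
    config::OpenAIConfig,
    types::CreateChatCompletionRequest,
    Client,
};
use futures::StreamExt;
use std::io::Write;

// Illustrative helper: stream a chat completion, print each token as it
// arrives, and return the assembled text once the stream ends.
async fn stream_and_collect(
    client: &Client<OpenAIConfig>,
    request: CreateChatCompletionRequest,
) -> Result<String, Box<dyn std::error::Error>> {
    let mut stream = client.chat().create_stream(request).await?;
    let mut full_text = String::new();

    while let Some(result) = stream.next().await {
        let response = result?;
        for choice in response.choices {
            if let Some(content) = &choice.delta.content {
                print!("{}", content);
                full_text.push_str(content);
            }
        }
        // Flush so the output streams in real time.
        std::io::stdout().flush()?;
    }
    println!();

    Ok(full_text)
}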

Text-to-Speech Example

Generate audio speech from text using the TTS endpoint. The audio is saved as an MP3 file.

use async_openai::{
    config::OpenAIConfig,
    types::{CreateSpeechRequestArgs, SpeechModel, Voice},
    Client,
};
use std::fs::File;
use std::io::Write;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = OpenAIConfig::new()
        .with_api_key("YOLO")
        .with_api_base("https://speech.ai.unturf.com/v1");

    let client = Client::with_config(config);

    let request = CreateSpeechRequestArgs::default()
        .model(SpeechModel::Tts1)
        .voice(Voice::Alloy)
        .input("I think so therefore, Today is a wonderful day to grow something people love!")
        .speed(0.9)
        .build()?;

    let response = client.audio().speech(request).await?;

    let mut file = File::create("speech.mp3")?;
    file.write_all(&response.bytes)?;

    Ok(())
}
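
Depending on the async-openai version, the speech response also exposes a convenience save method; where available, the manual File handling above can be replaced with response.save("speech.mp3").await?.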