C# Examples

This page demonstrates how to use the uncloseai. API endpoints with C# using the official OpenAI .NET library. All examples use the same OpenAI-compatible API interface, making it easy to switch between different models and endpoints.

Available Endpoints:

Hermes 3 (general purpose): https://hermes.ai.unturf.com/v1
Qwen 3 Coder (coding): https://qwen.ai.unturf.com/v1
Text-to-Speech: https://speech.ai.unturf.com/v1

C# Client Installation

Install the OpenAI NuGet package using the .NET CLI:

dotnet add package OpenAI --version 2.5.0

Or via Package Manager Console:

Install-Package OpenAI -Version 2.5.0
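
The examples on this page are written as top-level statements. A minimal console project that could host them might look like the following sketch (assuming .NET 8 with implicit usings; the package version matches the install command above):

<!-- Illustrative project file for the examples on this page (assumes .NET 8). -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="OpenAI" Version="2.5.0" />
  </ItemGroup>
</Project>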

Non-Streaming Examples

Non-streaming mode waits for the complete response before returning. This is simpler to use but provides no intermediate feedback during generation.

Using Hermes (General Purpose)

using System.ClientModel;
using OpenAI;
using OpenAI.Chat;

// Point the client at the Hermes endpoint; the service accepts any API key value.
var client = new ChatClient(
    model: "adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic",
    new ApiKeyCredential("choose-any-value"),
    new OpenAIClientOptions
    {
        Endpoint = new Uri("https://hermes.ai.unturf.com/v1")
    }
);

// Send a single user message and wait for the complete response.
ChatCompletion completion = await client.CompleteChatAsync(
    new List<ChatMessage>
    {
        new UserChatMessage("Give a Python Fizzbuzz solution in one line of code?")
    },
    new ChatCompletionOptions
    {
        Temperature = 0.5f,
        MaxOutputTokenCount = 150
    }
);

Console.WriteLine(completion.Content[0].Text);
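
The example above sends a single user message, but CompleteChatAsync accepts a full conversation history, so you can carry context across turns. A rough sketch reusing the Hermes client configured above (the system prompt and follow-up question are illustrative, not part of the service docs):

// Sketch: multi-turn chat against the same Hermes client configured above.
var messages = new List<ChatMessage>
{
    new SystemChatMessage("You are a concise coding assistant."),
    new UserChatMessage("Give a Python Fizzbuzz solution in one line of code?")
};

ChatCompletion first = await client.CompleteChatAsync(messages);

// Append the assistant's reply and ask a follow-up in the same conversation.
messages.Add(new AssistantChatMessage(first.Content[0].Text));
messages.Add(new UserChatMessage("Now explain how that one-liner works."));

ChatCompletion followUp = await client.CompleteChatAsync(messages);
Console.WriteLine(followUp.Content[0].Text);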

Using Qwen 3 Coder (Specialized for Coding)

using System.ClientModel;
using OpenAI;
using OpenAI.Chat;

// Point the client at the Qwen endpoint; the service accepts any API key value.
var client = new ChatClient(
    model: "hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M",
    new ApiKeyCredential("choose-any-value"),
    new OpenAIClientOptions
    {
        Endpoint = new Uri("https://qwen.ai.unturf.com/v1")
    }
);

// Send a single user message and wait for the complete response.
ChatCompletion completion = await client.CompleteChatAsync(
    new List<ChatMessage>
    {
        new UserChatMessage("Give a Python Fizzbuzz solution in one line of code?")
    },
    new ChatCompletionOptions
    {
        Temperature = 0.5f,
        MaxOutputTokenCount = 150
    }
);

Console.WriteLine(completion.Content[0].Text);
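
Both non-streaming examples assume the request succeeds. To handle failures, one approach is to catch ClientResultException (from System.ClientModel), which the OpenAI .NET client generally throws for non-success responses; a hedged sketch around the Qwen client configured above:

// Sketch: basic error handling around a completion call.
try
{
    ChatCompletion completion = await client.CompleteChatAsync(
        new List<ChatMessage>
        {
            new UserChatMessage("Give a Python Fizzbuzz solution in one line of code?")
        }
    );
    Console.WriteLine(completion.Content[0].Text);
}
catch (ClientResultException ex)
{
    // The endpoint returned an error status (e.g. unknown model or malformed request).
    Console.Error.WriteLine($"API error ({ex.Status}): {ex.Message}");
}
catch (Exception ex)
{
    // Network failures, timeouts, and other transport-level problems.
    Console.Error.WriteLine($"Request failed: {ex.Message}");
}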

Streaming Examples

Streaming mode returns chunks of the response as they are generated, providing real-time feedback. This is ideal for interactive applications and long responses.

Using Hermes (General Purpose)

using System.ClientModel;
using OpenAI;
using OpenAI.Chat;

// Point the client at the Hermes endpoint; the service accepts any API key value.
var client = new ChatClient(
    model: "adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic",
    new ApiKeyCredential("choose-any-value"),
    new OpenAIClientOptions
    {
        Endpoint = new Uri("https://hermes.ai.unturf.com/v1")
    }
);

// Request a streamed completion; chunks arrive as the model generates them.
var streamingUpdates = client.CompleteChatStreamingAsync(
    new List<ChatMessage>
    {
        new UserChatMessage("Give a Python Fizzbuzz solution in one line of code?")
    },
    new ChatCompletionOptions
    {
        Temperature = 0.5f,
        MaxOutputTokenCount = 150
    }
);

// Print each content chunk as soon as it arrives.
await foreach (var update in streamingUpdates)
{
    foreach (var contentPart in update.ContentUpdate)
    {
        Console.Write(contentPart.Text);
    }
}

Using Qwen 3 Coder (Specialized for Coding)

using System.ClientModel;
using OpenAI;
using OpenAI.Chat;

// Point the client at the Qwen endpoint; the service accepts any API key value.
var client = new ChatClient(
    model: "hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M",
    new ApiKeyCredential("choose-any-value"),
    new OpenAIClientOptions
    {
        Endpoint = new Uri("https://qwen.ai.unturf.com/v1")
    }
);

// Request a streamed completion; chunks arrive as the model generates them.
var streamingUpdates = client.CompleteChatStreamingAsync(
    new List<ChatMessage>
    {
        new UserChatMessage("Give a Python Fizzbuzz solution in one line of code?")
    },
    new ChatCompletionOptions
    {
        Temperature = 0.5f,
        MaxOutputTokenCount = 150
    }
);

// Print each content chunk as soon as it arrives.
await foreach (var update in streamingUpdates)
{
    foreach (var contentPart in update.ContentUpdate)
    {
        Console.Write(contentPart.Text);
    }
}
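
If you also need the complete text once streaming finishes (for logging or further processing), you can accumulate the chunks while printing them. A minimal sketch that replaces the printing loop in either streaming example above:

// Sketch: print chunks as they arrive while also collecting the full response.
var fullText = new System.Text.StringBuilder();

await foreach (var update in streamingUpdates)
{
    foreach (var contentPart in update.ContentUpdate)
    {
        Console.Write(contentPart.Text);
        fullText.Append(contentPart.Text);
    }
}

Console.WriteLine();
Console.WriteLine($"Received {fullText.Length} characters in total.");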

Text-to-Speech Example

Generate audio speech from text using the TTS endpoint. The audio is saved as an MP3 file.

using System.ClientModel;
using OpenAI;
using OpenAI.Audio;

// Point the client at the speech endpoint; the service accepts any API key value.
var client = new AudioClient(
    model: "tts-1",
    new ApiKeyCredential("YOLO"),
    new OpenAIClientOptions
    {
        Endpoint = new Uri("https://speech.ai.unturf.com/v1")
    }
);

// Generate speech with the Alloy voice at slightly reduced speed.
BinaryData speech = await client.GenerateSpeechAsync(
    "I think so therefore, Today is a wonderful day to grow something people love!",
    GeneratedSpeechVoice.Alloy,
    new SpeechGenerationOptions
    {
        SpeedRatio = 0.9f
    }
);

// Save the MP3 bytes to disk.
await File.WriteAllBytesAsync("speech.mp3", speech.ToArray());
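
The voice and playback speed are adjustable per request. As a small illustrative sketch (the voice choices and file names are arbitrary), the same client can render one file per voice:

// Sketch: generate the same text with different voices, one MP3 per voice.
var voices = new[] { GeneratedSpeechVoice.Alloy, GeneratedSpeechVoice.Nova };

foreach (var voice in voices)
{
    BinaryData audio = await client.GenerateSpeechAsync(
        "Today is a wonderful day to grow something people love!",
        voice
    );

    await File.WriteAllBytesAsync($"speech-{voice}.mp3", audio.ToArray());
}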