uncloseai.

Ruby Examples - Free LLM & TTS AI Service

This page demonstrates how to call the uncloseai. API endpoints from Ruby using the ruby-openai gem. Every endpoint speaks the same OpenAI-compatible interface, so switching between models and endpoints only means changing the uri_base and model values.

Available Endpoints:

- Hermes (general purpose): https://hermes.ai.unturf.com/v1
- Qwen 3 Coder (coding): https://qwen.ai.unturf.com/v1
- Speech (text-to-speech): https://speech.ai.unturf.com/v1
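Because every endpoint is OpenAI-compatible, the endpoint/model pairs used on this page can be kept in a single lookup table. This is a sketch, not part of any gem; only the URLs and model names are taken from the examples below:

```ruby
# Endpoint/model pairs used in the examples on this page.
ENDPOINTS = {
  hermes: {
    uri_base: "https://hermes.ai.unturf.com/v1",
    model: "adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic"
  },
  qwen: {
    uri_base: "https://qwen.ai.unturf.com/v1",
    model: "hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M"
  },
  speech: {
    uri_base: "https://speech.ai.unturf.com/v1",
    model: "tts-1"
  }
}.freeze

# Look up the settings for a given service.
puts ENDPOINTS[:qwen][:uri_base]  # => https://qwen.ai.unturf.com/v1
```

A table like this keeps the client-construction code identical across services; only the looked-up values change.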

Ruby Client Installation

The examples below use the community ruby-openai gem, whose client takes the access_token and uri_base options shown here. Add it to your Gemfile:

gem "ruby-openai"

or install it directly:

gem install ruby-openai

Non-Streaming Examples

Non-streaming mode waits for the complete response before returning. This is simpler to use but provides no intermediate feedback during generation.

Using Hermes (General Purpose)

require "openai"

client = OpenAI::Client.new(
  access_token: "choose-any-value",
  uri_base: "https://hermes.ai.unturf.com/v1"
)

response = client.chat(
  parameters: {
    model: "adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic",
    messages: [
      { role: "user", content: "Give a Python FizzBuzz solution in one line of code." }
    ],
    temperature: 0.5,
    max_tokens: 150
  }
)

puts response.dig("choices", 0, "message", "content")
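The ruby-openai gem returns the response as a plain hash, which is why dig works above. A small helper (a sketch, not part of the gem) makes the extraction reusable and tolerant of empty responses:

```ruby
# Extract the assistant's text from a chat-completion response hash;
# returns "" if the response carries no choices.
def completion_text(response)
  response.dig("choices", 0, "message", "content").to_s
end

# A response hash in the shape the API returns (trimmed to the
# fields the helper actually reads).
sample = {
  "choices" => [
    { "message" => { "role" => "assistant", "content" => "print(1)" } }
  ]
}
puts completion_text(sample)  # => print(1)
```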

Using Qwen 3 Coder (Specialized for Coding)

require "openai"

client = OpenAI::Client.new(
  access_token: "choose-any-value",
  uri_base: "https://qwen.ai.unturf.com/v1"
)

response = client.chat(
  parameters: {
    model: "hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M",
    messages: [
      { role: "user", content: "Give a Python FizzBuzz solution in one line of code." }
    ],
    temperature: 0.5,
    max_tokens: 150
  }
)

puts response.dig("choices", 0, "message", "content")

Streaming Examples

Streaming mode returns chunks of the response as they are generated, providing real-time feedback. This is ideal for interactive applications and long responses.
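With the ruby-openai gem, streaming is driven by a proc passed as the stream parameter: the gem calls it once per chunk, and each chunk carries its text under choices[0].delta.content. The handler below is the same shape as in the examples that follow, run here against simulated chunks (an assumption: real chunks carry more fields, but only "delta" matters for display):

```ruby
# Accumulate streamed delta chunks into one string, exactly as the
# stream: proc in the examples below receives them.
buffer = +""
handler = proc do |chunk, _bytesize|
  content = chunk.dig("choices", 0, "delta", "content")
  buffer << content if content
end

# Simulated chunks in the shape the API streams back; the final
# chunk has an empty delta, as at end of stream.
[
  { "choices" => [{ "delta" => { "content" => "Fizz" } }] },
  { "choices" => [{ "delta" => { "content" => "Buzz" } }] },
  { "choices" => [{ "delta" => {} }] }
].each { |c| handler.call(c, 0) }

puts buffer  # => FizzBuzz
```

In the real examples the proc prints each piece immediately; buffering as above is useful when you also need the full text afterwards.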

Using Hermes (General Purpose)

require "openai"

client = OpenAI::Client.new(
  access_token: "choose-any-value",
  uri_base: "https://hermes.ai.unturf.com/v1"
)

client.chat(
  parameters: {
    model: "adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic",
    messages: [
      { role: "user", content: "Give a Python FizzBuzz solution in one line of code." }
    ],
    temperature: 0.5,
    max_tokens: 150,
    stream: proc do |chunk, _bytesize|
      content = chunk.dig("choices", 0, "delta", "content")
      print content if content
    end
  }
)

Using Qwen 3 Coder (Specialized for Coding)

require "openai"

client = OpenAI::Client.new(
  access_token: "choose-any-value",
  uri_base: "https://qwen.ai.unturf.com/v1"
)

client.chat(
  parameters: {
    model: "hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M",
    messages: [
      { role: "user", content: "Give a Python FizzBuzz solution in one line of code." }
    ],
    temperature: 0.5,
    max_tokens: 150,
    stream: proc do |chunk, _bytesize|
      content = chunk.dig("choices", 0, "delta", "content")
      print content if content
    end
  }
)

Text-to-Speech Example

Generate audio speech from text using the TTS endpoint. The audio is saved as an MP3 file.

require "openai"

client = OpenAI::Client.new(
  access_token: "YOLO",
  uri_base: "https://speech.ai.unturf.com/v1"
)

response = client.audio.speech(
  parameters: {
    model: "tts-1",
    voice: "alloy",
    input: "Today is a wonderful day to grow something people love!",
    speed: 0.9
  }
)

File.binwrite("speech.mp3", response)
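To sanity-check the result without playing it, you can inspect the file header. This is a sketch based on the MP3 format (files typically begin with an "ID3" tag or an MPEG frame sync: 0xFF followed by a byte whose top three bits are set):

```ruby
# Returns true if the file starts with an ID3 tag or an MPEG frame sync.
def looks_like_mp3?(path)
  head = File.binread(path, 3)
  return false if head.nil? || head.bytesize < 2
  head.start_with?("ID3") ||
    (head.getbyte(0) == 0xFF && (head.getbyte(1) & 0xE0) == 0xE0)
end

# looks_like_mp3?("speech.mp3")  # expect true after the request above
```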