Elixir Examples

This page demonstrates how to call the uncloseai. API endpoints from Elixir using the openai_ex community library. All examples share the same OpenAI-compatible interface, making it easy to switch between models and endpoints.

Available Endpoints:

- Hermes 3 (general purpose): https://hermes.ai.unturf.com/v1
- Qwen 3 Coder (coding): https://qwen.ai.unturf.com/v1
- Speech (text-to-speech): https://speech.ai.unturf.com/v1

Elixir Client Installation

Add openai_ex to your mix.exs dependencies:

def deps do
  [
    {:openai_ex, "~> 0.9.18"}
  ]
end
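
Then run mix deps.get to fetch the package.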

Non-Streaming Examples

Non-streaming mode waits for the complete response before returning. This is simpler to use but provides no intermediate feedback during generation.

Using Hermes (General Purpose)

# The service accepts any API key value.
openai =
  OpenaiEx.new("choose-any-value")
  |> OpenaiEx.with_base_url("https://hermes.ai.unturf.com/v1")

chat_req =
  OpenaiEx.Chat.Completions.new(
    model: "adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic",
    messages: [
      OpenaiEx.ChatMessage.user("Give a Python FizzBuzz solution in one line of code.")
    ],
    temperature: 0.5,
    max_tokens: 150
  )

{:ok, response} = OpenaiEx.Chat.Completions.create(openai, chat_req)

response["choices"]
|> List.first()
|> get_in(["message", "content"])
|> IO.puts()
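
Because the non-bang create/2 returns a result tuple, a failed request can be handled without crashing the caller; a minimal sketch of defensive handling:

case OpenaiEx.Chat.Completions.create(openai, chat_req) do
  {:ok, response} ->
    response["choices"]
    |> List.first()
    |> get_in(["message", "content"])
    |> IO.puts()

  {:error, reason} ->
    # reason carries the HTTP or transport error details.
    IO.puts("Request failed: #{inspect(reason)}")
end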

Using Qwen 3 Coder (Specialized for Coding)

openai =
  OpenaiEx.new("choose-any-value")
  |> OpenaiEx.with_base_url("https://qwen.ai.unturf.com/v1")

chat_req =
  OpenaiEx.Chat.Completions.new(
    model: "hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M",
    messages: [
      OpenaiEx.ChatMessage.user("Give a Python FizzBuzz solution in one line of code.")
    ],
    temperature: 0.5,
    max_tokens: 150
  )

{:ok, response} = OpenaiEx.Chat.Completions.create(openai, chat_req)

response["choices"]
|> List.first()
|> get_in(["message", "content"])
|> IO.puts()
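
The two examples above differ only in the base URL and the model name; that is the point of the OpenAI-compatible interface. A hypothetical helper (Uncloseai.ask/3 is illustrative, not part of openai_ex) can factor out the boilerplate:

defmodule Uncloseai do
  # Illustrative wrapper: one code path serves every uncloseai. endpoint.
  def ask(base_url, model, prompt) do
    openai =
      OpenaiEx.new("choose-any-value")
      |> OpenaiEx.with_base_url(base_url)

    req =
      OpenaiEx.Chat.Completions.new(
        model: model,
        messages: [OpenaiEx.ChatMessage.user(prompt)]
      )

    {:ok, response} = OpenaiEx.Chat.Completions.create(openai, req)

    response["choices"] |> List.first() |> get_in(["message", "content"])
  end
end

# Same call shape, different service:
Uncloseai.ask("https://hermes.ai.unturf.com/v1", "adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic", "Hello!")
|> IO.puts()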

Streaming Examples

Streaming mode returns chunks of the response as they are generated, providing real-time feedback. This is ideal for interactive applications and long responses.

Using Hermes (General Purpose)

openai =
  OpenaiEx.new("choose-any-value")
  |> OpenaiEx.with_base_url("https://hermes.ai.unturf.com/v1")

chat_req =
  OpenaiEx.Chat.Completions.new(
    model: "adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic",
    messages: [
      OpenaiEx.ChatMessage.user("Give a Python FizzBuzz solution in one line of code.")
    ],
    temperature: 0.5,
    max_tokens: 150
  )

# With stream: true, create/3 returns a handle whose body_stream yields
# batches of server-sent events as they arrive.
{:ok, chat_stream} = OpenaiEx.Chat.Completions.create(openai, chat_req, stream: true)

chat_stream.body_stream
|> Stream.flat_map(& &1)
|> Stream.each(fn event ->
  content = get_in(event, [:data, "choices", Access.at(0), "delta", "content"])
  if content, do: IO.write(content)
end)
|> Stream.run()
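
If you also need the complete text once streaming finishes, collect the deltas instead of printing them. The body_stream is consumed as it arrives, so use this in place of the printing pipeline rather than after it; a sketch assuming the same event shape as above:

full_text =
  chat_stream.body_stream
  |> Stream.flat_map(& &1)
  |> Stream.map(&get_in(&1, [:data, "choices", Access.at(0), "delta", "content"]))
  |> Stream.reject(&is_nil/1)
  |> Enum.join()

IO.puts(full_text)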

Using Qwen 3 Coder (Specialized for Coding)

openai =
  OpenaiEx.new("choose-any-value")
  |> OpenaiEx.with_base_url("https://qwen.ai.unturf.com/v1")

chat_req =
  OpenaiEx.Chat.Completions.new(
    model: "hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M",
    messages: [
      OpenaiEx.ChatMessage.user("Give a Python FizzBuzz solution in one line of code.")
    ],
    temperature: 0.5,
    max_tokens: 150
  )

{:ok, chat_stream} = OpenaiEx.Chat.Completions.create(openai, chat_req, stream: true)

chat_stream.body_stream
|> Stream.flat_map(& &1)
|> Stream.each(fn event ->
  content = get_in(event, [:data, "choices", Access.at(0), "delta", "content"])
  if content, do: IO.write(content)
end)
|> Stream.run()

Text-to-Speech Example

Generate spoken audio from text using the TTS endpoint. The audio is saved as an MP3 file.

openai =
  OpenaiEx.new("YOLO")
  |> OpenaiEx.with_base_url("https://speech.ai.unturf.com/v1")

speech_req =
  OpenaiEx.Audio.Speech.new(
    model: "tts-1",
    voice: "alloy",
    input: "I think so therefore, Today is a wonderful day to grow something people love!",
    response_format: "mp3",
    speed: 0.9
  )

# On success, create/2 returns the raw audio bytes.
{:ok, audio_data} = OpenaiEx.Audio.Speech.create(openai, speech_req)

File.write!("speech.mp3", audio_data)
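
In the upstream OpenAI API, speed ranges from 0.25 to 4.0 with 1.0 as normal pace, so the 0.9 above yields slightly slower speech; whether values outside that range work here depends on the TTS backend.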