uncloseai. - Free LLM & TTS AI Service
Python Examples
This page demonstrates how to use the uncloseai. API endpoints with Python using the OpenAI client library. All examples use the same OpenAI-compatible API interface, making it easy to switch between different models and endpoints.
Available Endpoints:
- Hermes: https://hermes.ai.unturf.com/v1 (general-purpose conversational AI)
- Qwen 3 Coder: https://qwen.ai.unturf.com/v1 (specialized coding model)
- TTS: https://speech.ai.unturf.com/v1 (text-to-speech generation)
Python Client Installation
To install the OpenAI package for Python, use pip:
pip install openai==2.3.0
Non-Streaming Examples
Non-streaming mode waits for the complete response before returning. This is simpler to use but provides no intermediate feedback during generation.
Using Hermes (General Purpose)
# Python Fizzbuzz Example with Hermes
from openai import OpenAI
client = OpenAI(base_url="https://hermes.ai.unturf.com/v1", api_key="choose-any-value")
MODEL = "adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic"
messages = [{"role": "user", "content": "Give a Python Fizzbuzz solution in one line of code?"}]
response = client.chat.completions.create(
    model=MODEL,
    messages=messages,
    temperature=0.5,
    max_tokens=150,
)
print(response.choices[0].message.content)
Using Qwen 3 Coder (Specialized for Coding)
# Python Fizzbuzz Example with Qwen 3 Coder
from openai import OpenAI
client = OpenAI(base_url="https://qwen.ai.unturf.com/v1", api_key="choose-any-value")
MODEL = "hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M"
messages = [{"role": "user", "content": "Give a Python Fizzbuzz solution in one line of code?"}]
response = client.chat.completions.create(
    model=MODEL,
    messages=messages,
    temperature=0.5,
    max_tokens=150,
)
print(response.choices[0].message.content)
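Because both chat endpoints expose the same OpenAI-compatible interface, the two examples above differ only in the base URL and model name. As a sketch, those differences can be factored into a small lookup table; `ENDPOINTS` and `build_request` are illustrative names of our own, not part of the API.

```python
# Hypothetical helper: the two chat endpoints differ only in base_url and
# model, so a request can be assembled from a lookup table. ENDPOINTS and
# build_request are illustrative names, not part of the OpenAI client.
ENDPOINTS = {
    "hermes": ("https://hermes.ai.unturf.com/v1",
               "adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic"),
    "qwen": ("https://qwen.ai.unturf.com/v1",
             "hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M"),
}

def build_request(name, prompt, temperature=0.5, max_tokens=150):
    """Return (base_url, kwargs) for chat.completions.create()."""
    base_url, model = ENDPOINTS[name]
    kwargs = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
    return base_url, kwargs

base_url, kwargs = build_request(
    "qwen", "Give a Python Fizzbuzz solution in one line of code?"
)
print(base_url)         # https://qwen.ai.unturf.com/v1
print(kwargs["model"])  # hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M
```

With this in place, `client = OpenAI(base_url=base_url, api_key="choose-any-value")` followed by `client.chat.completions.create(**kwargs)` works against either endpoint.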
Streaming Examples
Streaming mode returns chunks of the response as they are generated, providing real-time feedback. This is ideal for interactive applications and long responses.
Using Hermes (General Purpose)
# Streaming response in Python with Hermes
from openai import OpenAI
client = OpenAI(base_url="https://hermes.ai.unturf.com/v1", api_key="choose-any-value")
MODEL = "adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic"
messages = [
{"role": "user", "content": "Give a Python Fizzbuzz solution in one line of code?"}
]
response = client.chat.completions.create(
    model=MODEL,
    messages=messages,
    temperature=0.5,
    max_tokens=150,
    stream=True,  # Enable streaming
)
for chunk in response:
    content = chunk.choices[0].delta.content
    if content:  # delta.content is None on the final chunk
        print(content, end="", flush=True)
Using Qwen 3 Coder (Specialized for Coding)
# Streaming response in Python with Qwen 3 Coder
from openai import OpenAI
client = OpenAI(base_url="https://qwen.ai.unturf.com/v1", api_key="choose-any-value")
MODEL = "hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M"
messages = [
{"role": "user", "content": "Give a Python Fizzbuzz solution in one line of code?"}
]
response = client.chat.completions.create(
    model=MODEL,
    messages=messages,
    temperature=0.5,
    max_tokens=150,
    stream=True,  # Enable streaming
)
for chunk in response:
    content = chunk.choices[0].delta.content
    if content:  # delta.content is None on the final chunk
        print(content, end="", flush=True)
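If you need the complete reply as one string after streaming it (for logging, say), the deltas can be accumulated as they arrive. This is a sketch; `collect_stream` and `fake_chunk` are our own names, and the stand-in objects only mimic the shape of the SDK's stream chunks for an offline demonstration.

```python
from types import SimpleNamespace

def collect_stream(chunks):
    """Join the text deltas of a streamed chat response into one string.

    The final chunk of a stream typically carries delta.content = None,
    so empty/None deltas are skipped.
    """
    parts = []
    for chunk in chunks:
        content = chunk.choices[0].delta.content
        if content:
            parts.append(content)
    return "".join(parts)

# Offline check with stand-in objects shaped like the SDK's stream chunks:
def fake_chunk(text):
    return SimpleNamespace(
        choices=[SimpleNamespace(delta=SimpleNamespace(content=text))]
    )

reply = collect_stream([fake_chunk("Fizz"), fake_chunk("Buzz"), fake_chunk(None)])
print(reply)  # FizzBuzz
```

In the streaming examples above, passing the `response` iterator to `collect_stream(response)` would return the full text once the stream ends.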
Text-to-Speech Example
Generate audio speech from text using the TTS endpoint. The audio is saved as an MP3 file.
# TTS Speech Example in Python
import openai
client = openai.OpenAI(
    api_key="YOLO",
    base_url="https://speech.ai.unturf.com/v1",
)

with client.audio.speech.with_streaming_response.create(
    model="tts-1",
    voice="alloy",
    speed=0.9,
    input="I think so therefore, Today is a wonderful day to grow something people love!",
) as response:
    response.stream_to_file("speech.mp3")