Swift Examples
This page demonstrates how to use the uncloseai. API endpoints from Swift with the SwiftOpenAI community library. All endpoints expose the same OpenAI-compatible interface, so switching between models is just a matter of changing the base URL and model name. The service is free, so the API key in the examples below can be any placeholder string.
Available Endpoints:
- Hermes: https://hermes.ai.unturf.com/v1 - General purpose conversational AI
- Qwen 3 Coder: https://qwen.ai.unturf.com/v1 - Specialized coding model
- TTS: https://speech.ai.unturf.com/v1 - Text-to-speech generation
Swift Client Installation
Add SwiftOpenAI to the dependencies of your Package.swift:
dependencies: [
    .package(url: "https://github.com/jamesrochabrun/SwiftOpenAI", from: "3.8.5")
]
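If you are building a Swift package, the library also has to be listed in the target's dependencies. The following is a minimal sketch of a complete Package.swift, assuming an executable target named UncloseAIDemo (the target name and platform version are illustrative, not required by the service):

// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "UncloseAIDemo",
    platforms: [.macOS(.v13)],
    dependencies: [
        .package(url: "https://github.com/jamesrochabrun/SwiftOpenAI", from: "3.8.5")
    ],
    targets: [
        .executableTarget(
            name: "UncloseAIDemo",
            dependencies: [.product(name: "SwiftOpenAI", package: "SwiftOpenAI")]
        )
    ]
)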
Non-Streaming Examples
Non-streaming mode waits for the complete response before returning. This is simpler to use but provides no intermediate feedback during generation.
Using Hermes (General Purpose)
import SwiftOpenAI

// Point the client at the Hermes endpoint; the API key can be any placeholder value.
let service = OpenAIServiceFactory.service(
    apiKey: "choose-any-value",
    baseURL: "https://hermes.ai.unturf.com/v1"
)

let parameters = ChatCompletionParameters(
    messages: [.init(role: .user, content: .text("Give a Python Fizzbuzz solution in one line of code?"))],
    model: .custom("adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic"),
    temperature: 0.5,
    maxTokens: 150
)

do {
    // Wait for the complete response, then print the assistant's reply.
    let completion = try await service.startChat(parameters: parameters)
    if let content = completion.choices.first?.message.content {
        print(content)
    }
} catch {
    print("Error: \(error)")
}
Using Qwen 3 Coder (Specialized for Coding)
import SwiftOpenAI

// Same call shape as the Hermes example; only the base URL and model name change.
let service = OpenAIServiceFactory.service(
    apiKey: "choose-any-value",
    baseURL: "https://qwen.ai.unturf.com/v1"
)

let parameters = ChatCompletionParameters(
    messages: [.init(role: .user, content: .text("Give a Python Fizzbuzz solution in one line of code?"))],
    model: .custom("hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M"),
    temperature: 0.5,
    maxTokens: 150
)

do {
    // Wait for the complete response, then print the assistant's reply.
    let completion = try await service.startChat(parameters: parameters)
    if let content = completion.choices.first?.message.content {
        print(content)
    }
} catch {
    print("Error: \(error)")
}
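The examples on this page call try await at the top level, so they need an async context to run. Below is a minimal sketch of a runnable entry point for an executable target, wrapping the Hermes call above; the HermesDemo type name is illustrative.

import SwiftOpenAI

// Place this in a file other than main.swift; @main and main.swift cannot be combined.
@main
struct HermesDemo {
    static func main() async {
        let service = OpenAIServiceFactory.service(
            apiKey: "choose-any-value",
            baseURL: "https://hermes.ai.unturf.com/v1"
        )
        let parameters = ChatCompletionParameters(
            messages: [.init(role: .user, content: .text("Give a Python Fizzbuzz solution in one line of code?"))],
            model: .custom("adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic"),
            temperature: 0.5,
            maxTokens: 150
        )
        do {
            let completion = try await service.startChat(parameters: parameters)
            print(completion.choices.first?.message.content ?? "")
        } catch {
            print("Error: \(error)")
        }
    }
}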
Streaming Examples
Streaming mode returns chunks of the response as they are generated, providing real-time feedback. This is ideal for interactive applications and long responses.
Using Hermes (General Purpose)
import SwiftOpenAI

// Point the client at the Hermes endpoint; the API key can be any placeholder value.
let service = OpenAIServiceFactory.service(
    apiKey: "choose-any-value",
    baseURL: "https://hermes.ai.unturf.com/v1"
)

let parameters = ChatCompletionParameters(
    messages: [.init(role: .user, content: .text("Give a Python Fizzbuzz solution in one line of code?"))],
    model: .custom("adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic"),
    temperature: 0.5,
    maxTokens: 150
)

do {
    // Print each delta as it arrives instead of waiting for the full response.
    let stream = try await service.startStreamedChat(parameters: parameters)
    for try await chunk in stream {
        if let content = chunk.choices.first?.delta.content {
            print(content, terminator: "")
        }
    }
} catch {
    print("Error: \(error)")
}
Using Qwen 3 Coder (Specialized for Coding)
import SwiftOpenAI

// Same call shape as the Hermes example; only the base URL and model name change.
let service = OpenAIServiceFactory.service(
    apiKey: "choose-any-value",
    baseURL: "https://qwen.ai.unturf.com/v1"
)

let parameters = ChatCompletionParameters(
    messages: [.init(role: .user, content: .text("Give a Python Fizzbuzz solution in one line of code?"))],
    model: .custom("hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M"),
    temperature: 0.5,
    maxTokens: 150
)

do {
    // Print each delta as it arrives instead of waiting for the full response.
    let stream = try await service.startStreamedChat(parameters: parameters)
    for try await chunk in stream {
        if let content = chunk.choices.first?.delta.content {
            print(content, terminator: "")
        }
    }
} catch {
    print("Error: \(error)")
}
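Because both chat endpoints share the same OpenAI-compatible interface, the endpoint and model can be treated as plain parameters. The helper below is a sketch of that idea; the complete(prompt:baseURL:model:) function is this page's own convenience wrapper, not part of SwiftOpenAI.

import SwiftOpenAI

// Streams a completion from any uncloseai. chat endpoint and returns the accumulated text.
func complete(prompt: String, baseURL: String, model: String) async throws -> String {
    let service = OpenAIServiceFactory.service(
        apiKey: "choose-any-value",
        baseURL: baseURL
    )
    let parameters = ChatCompletionParameters(
        messages: [.init(role: .user, content: .text(prompt))],
        model: .custom(model),
        temperature: 0.5,
        maxTokens: 150
    )
    var text = ""
    let stream = try await service.startStreamedChat(parameters: parameters)
    for try await chunk in stream {
        text += chunk.choices.first?.delta.content ?? ""
    }
    return text
}

Switching from Hermes to Qwen 3 Coder is then a one-argument change, for example:

let answer = try await complete(
    prompt: "Give a Python Fizzbuzz solution in one line of code?",
    baseURL: "https://qwen.ai.unturf.com/v1",
    model: "hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M"
)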
Text-to-Speech Example
Generate audio speech from text using the TTS endpoint. The audio is saved as an MP3 file.
import SwiftOpenAI
import Foundation

// Point the client at the TTS endpoint; the API key can be any placeholder value.
let service = OpenAIServiceFactory.service(
    apiKey: "YOLO",
    baseURL: "https://speech.ai.unturf.com/v1"
)

let parameters = AudioSpeechParameters(
    model: .tts1,
    input: "I think so therefore, Today is a wonderful day to grow something people love!",
    voice: .alloy,
    speed: 0.9
)

do {
    // Request the speech audio and write it to a temporary MP3 file.
    let audioData = try await service.createSpeech(parameters: parameters)
    let fileURL = FileManager.default.temporaryDirectory.appendingPathComponent("speech.mp3")
    try audioData.write(to: fileURL)
    print("Audio saved to: \(fileURL)")
} catch {
    print("Error: \(error)")
}
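To hear the result right away on an Apple platform, the saved file can be played back with AVFoundation. This is a minimal playback sketch and is unrelated to SwiftOpenAI itself; it assumes it runs inside the do block above, where fileURL is in scope.

import AVFoundation

// Play the MP3 written above; keep the process alive for the duration in a command-line context.
let player = try AVAudioPlayer(contentsOf: fileURL)
player.play()
Thread.sleep(forTimeInterval: player.duration)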