Kotlin Examples
This page demonstrates how to call the uncloseai. API endpoints from Kotlin using the openai-kotlin community library. All examples share the same OpenAI-compatible interface, so switching between models and endpoints only requires changing the host URL and model ID.
Available Endpoints:
- Hermes: https://hermes.ai.unturf.com/v1 - General purpose conversational AI
- Qwen 3 Coder: https://qwen.ai.unturf.com/v1 - Specialized coding model
- TTS: https://speech.ai.unturf.com/v1 - Text-to-speech generation
Kotlin Client Installation
Add openai-kotlin to your build.gradle.kts:
dependencies {
    implementation("com.aallam.openai:openai-client:3.8.2")
    implementation("io.ktor:ktor-client-okhttp:2.3.12")
}
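The openai-client artifact needs a Ktor HTTP engine on the classpath; ktor-client-okhttp above is one common choice on the JVM. If you prefer a pure-Kotlin engine, swapping in CIO should work the same way (a sketch, mirroring the versions above):
dependencies {
    implementation("com.aallam.openai:openai-client:3.8.2")
    implementation("io.ktor:ktor-client-cio:2.3.12")
}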
Non-Streaming Examples
Non-streaming mode waits for the complete response before returning. This is simpler to use but provides no intermediate feedback during generation.
Using Hermes (General Purpose)
import com.aallam.openai.api.chat.*
import com.aallam.openai.api.model.ModelId
import com.aallam.openai.client.OpenAI
import com.aallam.openai.client.OpenAIConfig
import com.aallam.openai.client.OpenAIHost

suspend fun main() {
    // Any token value works; the service does not require a real API key.
    val openAI = OpenAI(
        OpenAIConfig(
            token = "choose-any-value",
            host = OpenAIHost("https://hermes.ai.unturf.com/v1")
        )
    )

    val chatCompletionRequest = ChatCompletionRequest(
        model = ModelId("adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic"),
        messages = listOf(
            ChatMessage(
                role = ChatRole.User,
                content = "Give a Python FizzBuzz solution in one line of code."
            )
        ),
        temperature = 0.5,
        maxTokens = 150
    )

    // Wait for the complete response, then print it.
    val completion = openAI.chatCompletion(chatCompletionRequest)
    println(completion.choices[0].message.content)
}
Using Qwen 3 Coder (Specialized for Coding)
import com.aallam.openai.api.chat.*
import com.aallam.openai.api.model.ModelId
import com.aallam.openai.client.OpenAI
import com.aallam.openai.client.OpenAIConfig
import com.aallam.openai.client.OpenAIHost

suspend fun main() {
    // Any token value works; the service does not require a real API key.
    val openAI = OpenAI(
        OpenAIConfig(
            token = "choose-any-value",
            host = OpenAIHost("https://qwen.ai.unturf.com/v1")
        )
    )

    val chatCompletionRequest = ChatCompletionRequest(
        model = ModelId("hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M"),
        messages = listOf(
            ChatMessage(
                role = ChatRole.User,
                content = "Give a Python FizzBuzz solution in one line of code."
            )
        ),
        temperature = 0.5,
        maxTokens = 150
    )

    // Wait for the complete response, then print it.
    val completion = openAI.chatCompletion(chatCompletionRequest)
    println(completion.choices[0].message.content)
}
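Because both chat endpoints speak the same protocol, the two examples above differ only in host URL and model ID. Factoring the client setup into a helper makes that switch a one-argument change (a minimal sketch; the chatClient helper is ours, not part of the library):
import com.aallam.openai.client.OpenAI
import com.aallam.openai.client.OpenAIConfig
import com.aallam.openai.client.OpenAIHost

// Hypothetical helper: build a client for any uncloseai. endpoint.
fun chatClient(baseUrl: String): OpenAI = OpenAI(
    OpenAIConfig(
        token = "choose-any-value", // no real API key is required
        host = OpenAIHost(baseUrl)
    )
)

// Swap endpoints by changing a single argument.
val hermes = chatClient("https://hermes.ai.unturf.com/v1")
val qwen = chatClient("https://qwen.ai.unturf.com/v1")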
Streaming Examples
Streaming mode returns chunks of the response as they are generated, providing real-time feedback. This is ideal for interactive applications and long responses.
Using Hermes (General Purpose)
import com.aallam.openai.api.chat.*
import com.aallam.openai.api.model.ModelId
import com.aallam.openai.client.OpenAI
import com.aallam.openai.client.OpenAIConfig
import com.aallam.openai.client.OpenAIHost
import kotlinx.coroutines.flow.collect

suspend fun main() {
    // Any token value works; the service does not require a real API key.
    val openAI = OpenAI(
        OpenAIConfig(
            token = "choose-any-value",
            host = OpenAIHost("https://hermes.ai.unturf.com/v1")
        )
    )

    val chatCompletionRequest = ChatCompletionRequest(
        model = ModelId("adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic"),
        messages = listOf(
            ChatMessage(
                role = ChatRole.User,
                content = "Give a Python FizzBuzz solution in one line of code."
            )
        ),
        temperature = 0.5,
        maxTokens = 150
    )

    // Print each content delta as it arrives.
    openAI.chatCompletions(chatCompletionRequest).collect { chunk ->
        chunk.choices.forEach { choice ->
            choice.delta.content?.let { print(it) }
        }
    }
}
Using Qwen 3 Coder (Specialized for Coding)
import com.aallam.openai.api.chat.*
import com.aallam.openai.api.model.ModelId
import com.aallam.openai.client.OpenAI
import com.aallam.openai.client.OpenAIConfig
import com.aallam.openai.client.OpenAIHost
import kotlinx.coroutines.flow.collect

suspend fun main() {
    // Any token value works; the service does not require a real API key.
    val openAI = OpenAI(
        OpenAIConfig(
            token = "choose-any-value",
            host = OpenAIHost("https://qwen.ai.unturf.com/v1")
        )
    )

    val chatCompletionRequest = ChatCompletionRequest(
        model = ModelId("hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M"),
        messages = listOf(
            ChatMessage(
                role = ChatRole.User,
                content = "Give a Python FizzBuzz solution in one line of code."
            )
        ),
        temperature = 0.5,
        maxTokens = 150
    )

    // Print each content delta as it arrives.
    openAI.chatCompletions(chatCompletionRequest).collect { chunk ->
        chunk.choices.forEach { choice ->
            choice.delta.content?.let { print(it) }
        }
    }
}
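If you also need the complete text after streaming, for logging or further processing, you can accumulate the deltas while printing them. A minimal sketch that reuses the openAI client and chatCompletionRequest from the example above, replacing its final collect block:
// Print each delta live and collect the full reply as one string.
val reply = buildString {
    openAI.chatCompletions(chatCompletionRequest).collect { chunk ->
        chunk.choices.forEach { choice ->
            choice.delta.content?.let {
                print(it)   // live output
                append(it)  // accumulate for later use
            }
        }
    }
}
println()
println("Received ${reply.length} characters.")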
Text-to-Speech Example
Generate speech audio from text using the TTS endpoint. The response is raw audio data, saved here as an MP3 file.
import com.aallam.openai.api.audio.SpeechRequest
import com.aallam.openai.api.audio.Voice
import com.aallam.openai.api.model.ModelId
import com.aallam.openai.client.OpenAI
import com.aallam.openai.client.OpenAIConfig
import com.aallam.openai.client.OpenAIHost
import java.io.File

suspend fun main() {
    // Any token value works; the service does not require a real API key.
    val openAI = OpenAI(
        OpenAIConfig(
            token = "choose-any-value",
            host = OpenAIHost("https://speech.ai.unturf.com/v1")
        )
    )

    val speechRequest = SpeechRequest(
        model = ModelId("tts-1"),
        input = "I think so therefore, today is a wonderful day to grow something people love!",
        voice = Voice.Alloy,
        speed = 0.9 // slightly slower than the default 1.0
    )

    // The endpoint returns raw audio bytes; save them as an MP3.
    val speech = openAI.speech(speechRequest)
    File("speech.mp3").writeBytes(speech)
}
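The library also defines the other OpenAI voice constants (Voice.Echo, Voice.Nova, Voice.Shimmer, and so on). Whether this endpoint supports each of them is an assumption here, but generating one file per voice is a small loop appended inside the same main (a sketch):
// Hypothetical batch: one MP3 per voice. Voice support on this
// endpoint is an assumption; trim the list to what actually works.
listOf(
    "alloy" to Voice.Alloy,
    "nova" to Voice.Nova,
    "shimmer" to Voice.Shimmer
).forEach { (name, voice) ->
    val audio = openAI.speech(
        SpeechRequest(
            model = ModelId("tts-1"),
            input = "Today is a wonderful day to grow something people love!",
            voice = voice
        )
    )
    File("speech-$name.mp3").writeBytes(audio)
}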