uncloseai.
Dart Examples - Free LLM & TTS AI Service
This page demonstrates how to use the uncloseai. API endpoints with Dart/Flutter using the dart_openai community library. All examples use the same OpenAI-compatible API interface, making it easy to switch between different models and endpoints.
Available Endpoints:
- Hermes: https://hermes.ai.unturf.com/v1 - general-purpose conversational AI
- Qwen 3 Coder: https://qwen.ai.unturf.com/v1 - specialized coding model
- TTS: https://speech.ai.unturf.com/v1 - text-to-speech generation
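Because each endpoint speaks the standard OpenAI wire protocol, any OpenAI-compatible client can talk to them, not just Dart. As a quick sketch (assuming the standard /chat/completions route; the model name is taken from the Dart examples below), you can probe an endpoint directly with curl:

```shell
# Sketch: call the Hermes endpoint over the standard OpenAI chat route.
# These endpoints accept any placeholder API key.
curl https://hermes.ai.unturf.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer choose-any-value" \
  -d '{
    "model": "adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic",
    "messages": [{"role": "user", "content": "Say hello in one word."}]
  }'
```

Switching models is just a matter of changing the base URL and model name; the request shape stays the same.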
Dart Client Installation
Add dart_openai to your pubspec.yaml:

dependencies:
  dart_openai: ^5.1.0
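Alternatively, you can add the dependency from the command line with the standard pub tooling, which also picks a compatible version for you:

```shell
# Adds dart_openai to pubspec.yaml and fetches it
dart pub add dart_openai

# In a Flutter project, use the flutter wrapper instead
flutter pub add dart_openai
```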
Non-Streaming Examples
Non-streaming mode waits for the complete response before returning. This is simpler to use but provides no intermediate feedback during generation.
Using Hermes (General Purpose)
import 'package:dart_openai/dart_openai.dart';

void main() async {
  OpenAI.apiKey = "choose-any-value";
  OpenAI.baseUrl = "https://hermes.ai.unturf.com/v1";

  final chatCompletion = await OpenAI.instance.chat.create(
    model: "adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic",
    messages: [
      OpenAIChatCompletionChoiceMessageModel(
        role: OpenAIChatMessageRole.user,
        content: [
          OpenAIChatCompletionChoiceMessageContentItemModel.text(
            "Give a Python Fizzbuzz solution in one line of code?",
          ),
        ],
      ),
    ],
    temperature: 0.5,
    maxTokens: 150,
  );

  print(chatCompletion.choices.first.message.content?.first.text);
}
Using Qwen 3 Coder (Specialized for Coding)
import 'package:dart_openai/dart_openai.dart';

void main() async {
  OpenAI.apiKey = "choose-any-value";
  OpenAI.baseUrl = "https://qwen.ai.unturf.com/v1";

  final chatCompletion = await OpenAI.instance.chat.create(
    model: "hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M",
    messages: [
      OpenAIChatCompletionChoiceMessageModel(
        role: OpenAIChatMessageRole.user,
        content: [
          OpenAIChatCompletionChoiceMessageContentItemModel.text(
            "Give a Python Fizzbuzz solution in one line of code?",
          ),
        ],
      ),
    ],
    temperature: 0.5,
    maxTokens: 150,
  );

  print(chatCompletion.choices.first.message.content?.first.text);
}
Streaming Examples
Streaming mode returns chunks of the response as they are generated, providing real-time feedback. This is ideal for interactive applications and long responses.
Using Hermes (General Purpose)
import 'package:dart_openai/dart_openai.dart';
import 'dart:io';

void main() async {
  OpenAI.apiKey = "choose-any-value";
  OpenAI.baseUrl = "https://hermes.ai.unturf.com/v1";

  final chatStream = OpenAI.instance.chat.createStream(
    model: "adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic",
    messages: [
      OpenAIChatCompletionChoiceMessageModel(
        role: OpenAIChatMessageRole.user,
        content: [
          OpenAIChatCompletionChoiceMessageContentItemModel.text(
            "Give a Python Fizzbuzz solution in one line of code?",
          ),
        ],
      ),
    ],
    temperature: 0.5,
    maxTokens: 150,
  );

  await for (final chunk in chatStream) {
    final content = chunk.choices.first.delta.content;
    if (content != null && content.isNotEmpty) {
      // Delta content items can be null in the stream model, so guard the access.
      stdout.write(content.first?.text ?? "");
    }
  }
}
Using Qwen 3 Coder (Specialized for Coding)
import 'package:dart_openai/dart_openai.dart';
import 'dart:io';

void main() async {
  OpenAI.apiKey = "choose-any-value";
  OpenAI.baseUrl = "https://qwen.ai.unturf.com/v1";

  final chatStream = OpenAI.instance.chat.createStream(
    model: "hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M",
    messages: [
      OpenAIChatCompletionChoiceMessageModel(
        role: OpenAIChatMessageRole.user,
        content: [
          OpenAIChatCompletionChoiceMessageContentItemModel.text(
            "Give a Python Fizzbuzz solution in one line of code?",
          ),
        ],
      ),
    ],
    temperature: 0.5,
    maxTokens: 150,
  );

  await for (final chunk in chatStream) {
    final content = chunk.choices.first.delta.content;
    if (content != null && content.isNotEmpty) {
      // Delta content items can be null in the stream model, so guard the access.
      stdout.write(content.first?.text ?? "");
    }
  }
}
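In an interactive app you usually want both the incremental fragments (to render as they arrive) and the final accumulated text (to store as the reply). A minimal offline sketch of that accumulation pattern, using a simulated stream of strings in place of real createStream chunks (no network or dart_openai types involved; the names here are illustrative):

```dart
import 'dart:async';

// Simulated delta stream standing in for the text fragments
// extracted from streaming chunks.
Stream<String> fakeDeltas() async* {
  for (final piece in ["fizz", "buzz", "!"]) {
    yield piece;
  }
}

// Accumulate streamed fragments into the complete reply.
Future<String> collectReply(Stream<String> deltas) async {
  final buffer = StringBuffer();
  await for (final delta in deltas) {
    // In a real app you would also render each delta here as it arrives.
    buffer.write(delta);
  }
  return buffer.toString();
}

void main() async {
  final reply = await collectReply(fakeDeltas());
  print(reply); // fizzbuzz!
}
```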
Text-to-Speech Example
Generate audio speech from text using the TTS endpoint. The audio is saved as an MP3 file.
import 'package:dart_openai/dart_openai.dart';
import 'dart:io';

void main() async {
  OpenAI.apiKey = "choose-any-value";
  OpenAI.baseUrl = "https://speech.ai.unturf.com/v1";

  // In dart_openai 5.x, createSpeech saves the audio itself and returns the
  // resulting File, so there is no need to write the bytes manually.
  final File speechFile = await OpenAI.instance.audio.createSpeech(
    model: "tts-1",
    input: "I think so therefore, Today is a wonderful day to grow something people love!",
    voice: "alloy",
    speed: 0.9,
    outputFileName: "speech",
  );

  print("Audio saved to: ${speechFile.path}");
}