uncloseai.

PHP Examples - Free LLM & TTS AI Service


This page demonstrates how to use the uncloseai. API endpoints with PHP using the openai-php/client community library. All examples use the same OpenAI-compatible API interface, making it easy to switch between different models and endpoints.

Available Endpoints:

  - Hermes (general purpose): https://hermes.ai.unturf.com/v1
  - Qwen 3 Coder (coding-focused): https://qwen.ai.unturf.com/v1
  - Speech (text-to-speech): https://speech.ai.unturf.com/v1

PHP Client Installation

Install the OpenAI PHP client via Composer:

composer require openai-php/client

The client does not bundle an HTTP transport, so if your project does not already include a PSR-18 HTTP client, install one as well (Guzzle is a common choice):

composer require guzzlehttp/guzzle
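Equivalently, the dependency can be declared directly in composer.json; the version constraints below are illustrative, not pinned requirements of this service:

```json
{
    "require": {
        "openai-php/client": "^0.10",
        "guzzlehttp/guzzle": "^7.0"
    }
}
```

Run composer install (or composer update) afterwards to fetch the packages.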

Non-Streaming Examples

Non-streaming mode waits for the complete response before returning. This is simpler to use but provides no intermediate feedback during generation.

Using Hermes (General Purpose)

<?php

require 'vendor/autoload.php';

use OpenAI;

// The service accepts any placeholder API key, so no real credentials are needed.
$client = OpenAI::factory()
    ->withApiKey('choose-any-value')
    ->withBaseUri('https://hermes.ai.unturf.com/v1')
    ->make();

$response = $client->chat()->create([
    'model' => 'adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic',
    'messages' => [
        ['role' => 'user', 'content' => 'Give a Python FizzBuzz solution in one line of code.'],
    ],
    'temperature' => 0.5,
    'max_tokens' => 150,
]);

echo $response->choices[0]->message->content;

Using Qwen 3 Coder (Specialized for Coding)

<?php

require 'vendor/autoload.php';

use OpenAI;

$client = OpenAI::factory()
    ->withApiKey('choose-any-value')
    ->withBaseUri('https://qwen.ai.unturf.com/v1')
    ->make();

$response = $client->chat()->create([
    'model' => 'hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M',
    'messages' => [
        ['role' => 'user', 'content' => 'Give a Python FizzBuzz solution in one line of code.'],
    ],
    'temperature' => 0.5,
    'max_tokens' => 150,
]);

echo $response->choices[0]->message->content;

Streaming Examples

Streaming mode returns chunks of the response as they are generated, providing real-time feedback. This is ideal for interactive applications and long responses.

Using Hermes (General Purpose)

<?php

require 'vendor/autoload.php';

use OpenAI;

$client = OpenAI::factory()
    ->withApiKey('choose-any-value')
    ->withBaseUri('https://hermes.ai.unturf.com/v1')
    ->make();

$stream = $client->chat()->createStreamed([
    'model' => 'adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic',
    'messages' => [
        ['role' => 'user', 'content' => 'Give a Python FizzBuzz solution in one line of code.'],
    ],
    'temperature' => 0.5,
    'max_tokens' => 150,
]);

foreach ($stream as $response) {
    echo $response->choices[0]->delta->content ?? '';
}

Using Qwen 3 Coder (Specialized for Coding)

<?php

require 'vendor/autoload.php';

use OpenAI;

$client = OpenAI::factory()
    ->withApiKey('choose-any-value')
    ->withBaseUri('https://qwen.ai.unturf.com/v1')
    ->make();

$stream = $client->chat()->createStreamed([
    'model' => 'hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M',
    'messages' => [
        ['role' => 'user', 'content' => 'Give a Python FizzBuzz solution in one line of code.'],
    ],
    'temperature' => 0.5,
    'max_tokens' => 150,
]);

foreach ($stream as $response) {
    echo $response->choices[0]->delta->content ?? '';
}
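The streaming loops above print each chunk and then discard it. If you also need the complete reply (for logging or follow-up messages), accumulate the deltas as they arrive. A minimal local sketch of the pattern, using a plain array of strings in place of the API's chunk objects:

```php
<?php
// Hypothetical chunk payloads standing in for the successive
// $response->choices[0]->delta->content values; some chunks carry
// null content, hence the null coalescing below.
$deltas = ['Hello', ', ', 'world', null, '!'];

$full = '';
foreach ($deltas as $delta) {
    $chunk = $delta ?? '';
    echo $chunk;      // stream to the user in real time
    $full .= $chunk;  // keep the complete reply for later use
}

// $full now holds "Hello, world!"
```

In the real streaming loop, replace the array with the $stream iterator and `$delta` with `$response->choices[0]->delta->content`.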

Text-to-Speech Example

Generate spoken audio from text using the TTS endpoint. The audio is saved as an MP3 file.

<?php

require 'vendor/autoload.php';

use OpenAI;

$client = OpenAI::factory()
    ->withApiKey('YOLO')
    ->withBaseUri('https://speech.ai.unturf.com/v1')
    ->make();

$response = $client->audio()->speech([
    'model' => 'tts-1',
    'voice' => 'alloy',
    'input' => 'I think, therefore, today is a wonderful day to grow something people love!',
    'speed' => 0.9,
]);

file_put_contents('speech.mp3', $response);
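file_put_contents returns false on failure, which the example above silently ignores. A minimal sketch of checked saving, shown with placeholder bytes standing in for the string returned by $client->audio()->speech(...):

```php
<?php
// Placeholder bytes; in the example above this is the raw MP3 audio
// returned by the TTS endpoint.
$audio = "ID3\x03\x00";

$bytes = file_put_contents('speech.mp3', $audio);
if ($bytes === false) {
    fwrite(STDERR, "Failed to write speech.mp3\n");
    exit(1);
}
printf("Saved %d bytes to speech.mp3\n", $bytes);
```

The same check applies unchanged when $audio is the real API response.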