
Releases: MaestroError/LarAgent

v0.5 - OpenAI‑compatible API endpoints, Multimodal inputs & more

27 Jul 13:53
34ccb1f

LarAgent v0.5 Release Notes 🎉

Welcome to LarAgent v0.5! This release makes it easier to turn your Laravel agents into OpenAI‑compatible APIs and gives you more control over how agents behave. It also adds multimodal inputs (images & audio), new drivers, flexible tool usage and better management of chat histories and structured output. Read on for the highlights.

⚠️ Important: read the upgrade guide before updating: https://blog.laragent.ai/laragent-v0-4-to-v0-5-upgrade-guide/

New features ✨

Expose agents via API -- The new LarAgent\API\Completions class lets you serve an OpenAI‑style /v1/chat/completions endpoint from your Laravel app. Simply call the static make() method with a Request and your agent class to get a response:

use LarAgent\API\Completions;

public function completion(Request $request)
{
    $response = Completions::make($request, MyAgent::class);
    // Your custom code
}

Laravel controllers ready to use -- Completions::make is useful for custom implementations, but for common use cases the new SingleAgentController and MultiAgentController let you expose one or many agents through REST endpoints without writing boilerplate. In your controller, set $agentClass (for a single agent) or $agents (for multiple agents), and optionally $models to restrict the available models:

use LarAgent\API\Completions\Controllers\SingleAgentController;

class MyAgentApiController extends SingleAgentController
{
    protected ?string $agentClass = \App\AiAgents\MyAgent::class;
    protected ?array $models    = ['gpt-4o-mini', 'gpt-4.1-mini'];
}

For multiple agents:

use LarAgent\API\Completions\Controllers\MultiAgentController;

class AgentsController extends MultiAgentController
{
    protected ?array $agents = [
        \App\AiAgents\ChatAgent::class,
        \App\AiAgents\SupportAgent::class,
    ];
    protected ?array $models = [
        'chatAgent/gpt-4.1-mini',
        'chatAgent/gpt-4.1-nano',
        'supportAgent', // will use default model defined in agent class
    ];
}

Both controllers expose completion and models endpoints, making them compatible with any OpenAI client, such as OpenWebUI. Example route registration:

Route::post('/v1/chat/completions', [MyAgentApiController::class, 'completion']);
Route::get('/v1/models', [MyAgentApiController::class, 'models']);

For more details, check the documentation

Server‑Sent Events (SSE) streaming -- When the client sets "stream": true, the API returns text/event-stream responses. Each event contains a JSON chunk that mirrors OpenAI's streaming format. Usage data appears only in the final chunk.
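Assuming routes like the ones registered above, a streaming request could look like this (host, model name, and payload are illustrative):

```shell
# Each response line arrives as "data: {json-chunk}", terminated by
# "data: [DONE]"; -N disables curl's buffering so chunks print as they arrive.
curl -N http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": true
  }'
```
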

Custom controllers -- For full control, call Completions::make() directly in your own controller and stream the chunks manually.

Multimodal inputs

Image input -- Agents can now accept publicly‑accessible image URLs using the chainable withImages() method:

$images = [
    'https://example.com/image1.jpg',
    'https://example.com/image2.jpg',
];
$response = WeatherAgent::for('test_chat')
    ->withImages($images)
    ->respond();

Audio input -- Send base64‑encoded audio clips via withAudios(). Supported formats include wav, mp3, ogg, flac, m4a and webm:

$audios = [
    [
        'format' => 'mp3',
        'data'   => $base64Audio,
    ],
];
echo WeatherAgent::for('test_chat')->withAudios($audios)->respond();

Agent usage enrichment

Custom UserMessage instances -- Instead of passing a plain string, you can build a UserMessage with metadata (e.g. user ID or request ID). When using a UserMessage, the agent skips the prompt() method:

$userMessage = Message::user($finalPrompt, ['userRequest' => $userRequestId]);
$response    = WeatherAgent::for('test_chat')
    ->message($userMessage)
    ->respond();

Return message objects -- Call returnMessage() or set the $returnMessage property to true to receive a MessageInterface instance instead of a plain string. This is useful when you need the model's raw assistant message.
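For example, a minimal sketch (the agent, chat session name, and the getContent() accessor on the returned message are illustrative):

```php
use App\AiAgents\WeatherAgent;

// Ask for the raw assistant message instead of a plain string
$message = WeatherAgent::for('test_chat')
    ->returnMessage()
    ->respond('What is the weather like?');

// $message is a MessageInterface instance; inspect it as needed
echo $message->getContent();
```
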

Check "Using an Agent" section

Groq driver

A new GroqDriver works with the Groq Platform API. Add GROQ_API_KEY to your .env file and set the provider to groq to use it. The configuration example in the quick‑start has been updated accordingly.

Contributed by @john-ltc
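Wiring it up follows the same pattern as other providers; a sketch of the relevant pieces (field names follow the package's published config stub, but check yours for the exact shape):

```php
// .env
// GROQ_API_KEY=your-key-here

// config/laragent.php -- illustrative provider entry
'providers' => [
    'groq' => [
        'label'   => 'groq',
        'api_key' => env('GROQ_API_KEY'),
    ],
],

// In your agent class, point at the provider:
// protected $provider = 'groq';
```
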

Tool handling enhancements 🔧

  • Tool selection methods -- You can now control whether tools are used on a per‑call basis:

    • toolNone() disables tools for the current call.

    • toolRequired() makes at least one tool call mandatory.

    • forceTool('toolName') forces the agent to call a specific tool. After the first call, the choice automatically resets to avoid infinite loops.

  • Dynamic tool management -- Tools can be added or removed at runtime using withTool() and removeTool().

  • Parallel tool calls -- The new parallelToolCalls(true) method enables or disables parallel tool execution. You can also set the tool choice manually via setToolChoice('none') or similar methods.

  • Phantom tools -- You can define Phantom Tools that are registered with the agent but not executed by LarAgent; instead they return a ToolCallMessage, allowing you to handle execution externally. Phantom tools are useful for dynamic integration with external services or when tool execution happens elsewhere.

use LarAgent\PhantomTool;

$phantomTool = PhantomTool::create('phantom_tool', 'Get the current weather in a location')
    ->addProperty('location', 'string', 'City and state, e.g. San Francisco, CA')
    ->setRequired('location')
    ->setCallback('PhantomTool');

// Register with the agent
$agent->withTool($phantomTool);
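The per‑call selection methods above chain like any other agent method; a sketch using the WeatherAgent from the earlier examples (chat session and tool names are illustrative):

```php
use App\AiAgents\WeatherAgent;

// No tools for this call
$smallTalk = WeatherAgent::for('chat')
    ->toolNone()
    ->respond('Just say hello');

// At least one tool call is mandatory
$forecast = WeatherAgent::for('chat')
    ->toolRequired()
    ->respond('What is the weather in Boston?');

// Force a named tool; the forced choice resets after the first call
$current = WeatherAgent::for('chat')
    ->forceTool('get_current_weather')
    ->respond('And in New York?');
```
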

Changes 🛠️

  • Chat session IDs -- Model names are no longer included in the chat session ID by default. To retain the old AgentName_ModelName_UserId format, set $includeModelInChatSessionId to true.

  • Usage metadata keys changed -- Keys in usage data are now snake‑case (prompt_tokens, completion_tokens, etc.) instead of camel‑case (promptTokens). Update any custom code that reads these keys.

  • Gemini streaming limitation -- The Gemini driver currently does not support streaming responses.

Check the upgrade guide


Full Changelog: 0.4.1...0.5.0

0.4.1 - Bug fix

27 May 11:31
4a92bc6

What's Changed

  • Ensures the current message is cleared after processing tool calls by @drjamesj in #32


Full Changelog: 0.4.0...0.4.1

v0.4 - Gemini driver, streaming, fallback provider and more!

25 May 12:27
06bb20b

Highlights:

  • Docs and Workflows: Documentation and GitHub workflows were updated for clarity and improved automation.
  • Dependency Update: Upgraded the peter-evans/create-pull-request GitHub Action from version 5 to 7.
  • IDE Helper Fix: Fixed an issue related to the IDE helper and added notes to the documentation.
  • Documentation Overhaul: The README and other docs now point to a new official documentation site (https://docs.laragent.ai/), with improved structure and clearer guides for both Laravel and standalone PHP usage.
  • Configuration Improvements:
    • Config files now support more flexible provider and driver settings, with better separation between OpenAI, Gemini, and custom providers.
    • New fallback provider logic for handling API failures.
    • Expanded support for additional configuration options and improved defaults.
  • Streaming & Structured Output:
    • Added support for streaming AI responses in both core code and documentation/examples.
    • New agent methods for streaming responses and more flexible output handling.
  • Tooling Enhancements:
    • Tools can now be added/removed using class references or objects.
    • Improved schema handling for structured output and tool calls.
  • Internal Refactoring:
    • Major refactoring for better extensibility and maintainability.
    • New base driver classes, cleaner separation of driver logic.
    • Improved event hooks and error handling.
  • New Example Files: Several new example scripts covering streaming, structured output, and advanced agent patterns.
  • Tests: Added and updated tests for streaming, tool handling, and agent configuration.

See the full changelog and code diff


Full Changelog: 0.3.1...0.4.0

Fixes

18 Apr 10:13
997974d


Full Changelog: 0.3.0...0.3.1

v0.3 - More providers, reasoning model support and structured output in console

26 Mar 13:10
956e515

Release 0.3.0 changes

  • OpenAiCompatible driver: allows use of any provider compatible with the OpenAI API, including Ollama, vLLM, OpenRouter and many more
  • Support for reasoning models like o1 & o3: New contributor @yannelli added a developer message type that allows us to use reasoning models in the Agents! More Thinking = Smarter agents 💪
  • Complete chat removal: New command agent:chat:remove provides a way to completely remove chat histories and their associated keys for a specific agent.
  • Structured output in console for agent:chat command: Now you can test your agent with structured output
  • Updated docs & Refactored agent initialization process: Minor updates for better clarity and smoother processes

Check examples below 👇

Community server

We will:

  • Help developers implement LarAgent in their projects
  • Shape LarAgent's future by planning new features and making decisions
  • Share the latest news
  • Discuss ideas and collaborate with contributors

Join the fresh new LarAgent community server on Discord: https://discord.gg/NAczq2T9F8

Examples

OpenAiCompatible driver


Support for reasoning models like o1 & o3


Structured output in console for agent:chat command



Full Changelog: 0.2.2...0.3.0

0.2.2

08 Mar 16:02
66048a0
Merge pull request #10 from MaestroError/chat-keys-management

Fix chat clear command

0.2.1

08 Mar 15:59
70175b2


0.2.0

08 Mar 14:54
bfa658a

What's new in LarAgent?

  • Support for Laravel 12
  • Dynamic model setting via chainable withModel and overridable model methods
  • New command for batch cleaning of chat histories: php artisan agent:chat:clear AgentName
  • New $saveChatKeys property to control whether chat keys are stored for future management
  • New getChatKeys command for Agent
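The chainable withModel method lets you pick a model per call; a quick sketch (agent, chat session, and model names are illustrative):

```php
use App\AiAgents\MyAgent;

// Override the agent's default model for this call only
$response = MyAgent::for('user-1-chat')
    ->withModel('gpt-4o-mini')
    ->respond('Hello!');
```
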



Full Changelog: 0.1.1...0.2.0

v0.1.1

16 Feb 13:23

LarAgent v0.1.1 is LIVE! 🚀

Bring the power of AI Agents to your Laravel projects with unparalleled ease! 🎉

We're thrilled to announce the first release of LarAgent, the easiest way to create and maintain AI agents in your Laravel applications.

Imagine building an AI assistant with the same elegance as creating an Eloquent model!

With LarAgent, you can:

  • ✨ Create Agents with Artisan: php artisan make:agent YourAgentName

  • 🛠️ Define Agent Behavior: Customize instructions, models, and more directly in your Agent class.

  • 🧰 Add Tools Effortlessly: Use the #[Tool] attribute to turn methods into powerful agent tools.

  • 🗣️ Manage Chat History: Built-in support for in-memory, cache, session, file, and JSON storage.
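Putting those pieces together, a generated agent might look roughly like this (the class body is illustrative; the Tool attribute namespace follows the package docs):

```php
use LarAgent\Agent;
use LarAgent\Attributes\Tool;

class YourAgentName extends Agent
{
    protected $model = 'gpt-4o-mini';
    protected $history = 'in_memory';

    public function instructions()
    {
        return 'You are a helpful weather assistant.';
    }

    // The Tool attribute turns this method into a callable agent tool
    #[Tool('Get the current weather for a city')]
    public function currentWeather(string $city)
    {
        return "The weather in {$city} is sunny.";
    }
}
```
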


Check out the documentation

Initial release v0.1.0

15 Feb 18:48