Releases: MaestroError/LarAgent
v0.5 - OpenAI‑compatible API endpoints, Multimodal inputs & more
LarAgent v0.5 Release Notes 🎉
Welcome to LarAgent v0.5! This release makes it easier to turn your Laravel agents into OpenAI‑compatible APIs and gives you more control over how agents behave. It also adds multimodal inputs (images & audio), new drivers, flexible tool usage and better management of chat histories and structured output. Read on for the highlights.
!! Important: Check the upgrade guide: https://blog.laragent.ai/laragent-v0-4-to-v0-5-upgrade-guide/
New features ✨
Expose agents via API -- A new LarAgent\API\Completions class lets you serve an OpenAI-style /v1/chat/completions endpoint from your Laravel app. Simply call the static make() method with a Request and your agent class to get a response:
use Illuminate\Http\Request;
use LarAgent\API\Completions;

public function completion(Request $request)
{
    $response = Completions::make($request, MyAgent::class);

    // Your custom code
}
Laravel controllers ready to use -- Completions::make is useful for custom implementations, but for common use cases the new SingleAgentController and MultiAgentController let you expose one or many agents through REST endpoints without writing boilerplate. In your controller, set $agentClass (for a single agent) or $agents (for multiple agents), and optionally $models to restrict the available models:
use LarAgent\API\Completions\Controllers\SingleAgentController;

class MyAgentApiController extends SingleAgentController
{
    protected ?string $agentClass = \App\AiAgents\MyAgent::class;
    protected ?array $models = ['gpt-4o-mini', 'gpt-4.1-mini'];
}
For multiple agents:
use LarAgent\API\Completions\Controllers\MultiAgentController;

class AgentsController extends MultiAgentController
{
    protected ?array $agents = [
        \App\AiAgents\ChatAgent::class,
        \App\AiAgents\SupportAgent::class,
    ];
    protected ?array $models = [
        'chatAgent/gpt-4.1-mini',
        'chatAgent/gpt-4.1-nano',
        'supportAgent', // will use the default model defined in the agent class
    ];
}
Both controllers provide completion and models endpoints, making them compatible with any OpenAI client, such as OpenWebUI. Example of route registration:
Route::post('/v1/chat/completions', [MyAgentApiController::class, 'completion']);
Route::get('/v1/models', [MyAgentApiController::class, 'models']);
For more details, check the documentation
Server-Sent Events (SSE) streaming -- When the client sets "stream": true, the API returns text/event-stream responses. Each event contains a JSON chunk that mirrors OpenAI's streaming format. Usage data appears only in the final chunk.
Custom controllers -- For full control, call Completions::make() directly in your own controller and stream the chunks manually.
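For reference, here is a rough sketch of such a custom controller. It assumes Completions::make() returns an iterable of chunk arrays when the request sets "stream": true; the exact return type may differ, so treat this as an illustration rather than the canonical implementation.

use Illuminate\Http\Request;
use LarAgent\API\Completions;
use Symfony\Component\HttpFoundation\StreamedResponse;

public function completion(Request $request)
{
    // Assumption: with "stream": true, make() yields chunks mirroring OpenAI's streaming format
    return new StreamedResponse(function () use ($request) {
        foreach (Completions::make($request, \App\AiAgents\MyAgent::class) as $chunk) {
            echo 'data: ' . json_encode($chunk) . "\n\n";
            if (ob_get_level() > 0) {
                ob_flush();
            }
            flush();
        }
        echo "data: [DONE]\n\n";
    }, 200, [
        'Content-Type' => 'text/event-stream',
        'Cache-Control' => 'no-cache',
        'X-Accel-Buffering' => 'no',
    ]);
}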
Multimodal inputs
Image input -- Agents can now accept publicly accessible image URLs using the chainable withImages() method:
$images = [
    'https://example.com/image1.jpg',
    'https://example.com/image2.jpg',
];

$response = WeatherAgent::for('test_chat')
    ->withImages($images)
    ->respond();
Audio input -- Send base64-encoded audio clips via withAudios(). Supported formats include wav, mp3, ogg, flac, m4a and webm:
$audios = [
    [
        'format' => 'mp3',
        'data' => $base64Audio,
    ],
];

echo WeatherAgent::for('test_chat')->withAudios($audios)->respond();
Agent usage enrichment
Custom UserMessage instances -- Instead of passing a plain string, you can build a UserMessage with metadata (e.g. user ID or request ID). When a UserMessage is passed, the agent skips the prompt() method:
$userMessage = Message::user($finalPrompt, ['userRequest' => $userRequestId]);

$response = WeatherAgent::for('test_chat')
    ->message($userMessage)
    ->respond();
Return message objects -- Call returnMessage() or set the $returnMessage property to true to receive a MessageInterface instance instead of a plain string. This is useful when you need the model's raw assistant message. Check the "Using an Agent" section of the documentation for details.
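A quick sketch of the message-object flow, assuming returnMessage() chains like the other agent methods shown above (the chat key and prompt are placeholders):

// respond() now yields a MessageInterface instead of a plain string
$message = WeatherAgent::for('test_chat')
    ->returnMessage()
    ->respond('What is the weather like today?');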
Groq driver
A new GroqDriver works with the Groq Platform API. Add GROQ_API_KEY to your .env file and set the provider to groq to use it. The configuration example in the quick‑start has been updated accordingly.
Contributed by @john-ltc
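A provider entry might look like the sketch below; the exact config keys and the GroqDriver class namespace are assumptions, so check the config/laragent.php published by the package for the authoritative structure.

// config/laragent.php (fragment, illustrative only)
'providers' => [
    'groq' => [
        'label' => 'groq',
        'api_key' => env('GROQ_API_KEY'),
        'driver' => \LarAgent\Drivers\Groq\GroqDriver::class, // namespace assumed
    ],
],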
Tool handling enhancements 🔧
- Tool selection methods -- You can now control whether tools are used on a per-call basis (see the usage sketch after the Phantom tool example below):
  - toolNone() disables tools for the current call.
  - toolRequired() makes at least one tool call mandatory.
  - forceTool('toolName') forces the agent to call a specific tool. After the first call, the choice automatically resets to avoid infinite loops.
- Dynamic tool management -- Tools can be added or removed at runtime using withTool() and removeTool().
- Parallel tool calls -- The new parallelToolCalls(true) method enables or disables parallel tool execution. You can also set the tool choice manually via setToolChoice('none') or similar methods.
- Phantom tools -- You can define Phantom Tools that are registered with the agent but not executed by LarAgent; instead they return a ToolCallMessage, allowing you to handle the execution externally. Phantom tools are useful for dynamic integration with external services or when tool execution happens elsewhere:
use LarAgent\PhantomTool;

$phantomTool = PhantomTool::create('phantom_tool', 'Get the current weather in a location')
    ->addProperty('location', 'string', 'City and state, e.g. San Francisco, CA')
    ->setRequired('location')
    ->setCallback('PhantomTool');

// Register with the agent
$agent->withTool($phantomTool);
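For illustration, the per-call tool selection methods above can be chained like the other agent calls in these notes; the tool name and chat key below are placeholders.

// Disable tools for this call only
WeatherAgent::for('test_chat')->toolNone()->respond('Just chat, no tools please.');

// Require at least one tool call
WeatherAgent::for('test_chat')->toolRequired()->respond('What is the weather in Boston?');

// Force a specific tool (the choice resets after the first call to avoid loops)
WeatherAgent::for('test_chat')->forceTool('get_current_weather')->respond('Weather in Boston?');

// Allow multiple tool calls to run in parallel
WeatherAgent::for('test_chat')->parallelToolCalls(true)->respond('Compare weather in Boston and Miami.');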
Changes 🛠️
- Chat session IDs -- Model names are no longer included in the chat session ID by default. To retain the old AgentName_ModelName_UserId format, set $includeModelInChatSessionId to true (see the sketch below).
- Usage metadata keys changed -- Keys in usage data are now snake-case (prompt_tokens, completion_tokens, etc.) instead of camel-case (promptTokens). Update any custom code that reads these keys.
- Gemini streaming limitation -- The Gemini driver currently does not support streaming responses.
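For example, a minimal sketch of keeping the old session ID format inside an agent class (the property name comes from the note above; the rest is illustrative):

use LarAgent\Agent;

class MyAgent extends Agent
{
    // Restore the old AgentName_ModelName_UserId chat session ID format
    protected $includeModelInChatSessionId = true;
}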
Check the upgrade guide
What's Changed
- Bump dependabot/fetch-metadata from 2.3.0 to 2.4.0 by @dependabot[bot] in #33
- Fix toDTO to use agent model by @MaestroError in #40
- Fix docblock typo in Agent.php by @MaestroError in #42
- Fix reinject instructions methods typo by @MaestroError in #43
- Fix SSE stream accumulation by @MaestroError in #41
- Improve streaming response tests by @MaestroError in #44
- Add streaming fallback provider support by @MaestroError in #45
- Fix/store usage while streaming by @MaestroError in #46
- Add additional OpenAI configuration options by @MaestroError in #47
- Bump stefanzweifel/git-auto-commit-action from 5 to 6 by @dependabot[bot] in #53
- Update openai-php/client requirement from ^0.13.0 to ^0.14.0 by @dependabot[bot] in #52
- Extend API request validation rules by @MaestroError in #48
- Add tool choice configuration in Agent by @MaestroError in #54
- Add PseudoTool tests and finalize API tool call handling by @MaestroError in #57
- New setter methods for API and first basic response by @MaestroError in #56
- Add audio modality support for agents by @MaestroError in #55
- [Feature]: Groq driver by @john-ltc in #66
- Manual tests created, readme update by @MaestroError in #67
- Codex/add OpenAI compatible api layer by @MaestroError in #70
- Add comprehensive Completions controller tests by @MaestroError in #71
- Expose any Agent(s) as API endpoint compatible to OpenAI API by @MaestroError in #72
New Contributors
- @john-ltc made their first contribution in #66
Full Changelog: 0.4.1...0.5.0
0.4.1 - Bug fix
v0.4 - Gemini driver, streaming, fallback provider and more!
Highlights:
- Docs and Workflows: Documentation and GitHub workflows were updated for clarity and improved automation.
- Dependency Update: Upgraded the peter-evans/create-pull-request GitHub Action from version 5 to 7.
- IDE Helper Fix: Fixed an issue related to the IDE helper and added notes to the documentation.
- Documentation Overhaul: The README and other docs now point to a new official documentation site (https://docs.laragent.ai/), with improved structure and clearer guides for both Laravel and standalone PHP usage.
- Configuration Improvements:
- Config files now support more flexible provider and driver settings, with better separation between OpenAI, Gemini, and custom providers.
- New fallback provider logic for handling API failures.
- Expanded support for additional configuration options and improved defaults.
- Streaming & Structured Output:
- Added support for streaming AI responses in both core code and documentation/examples.
- New agent methods for streaming responses and more flexible output handling.
- Tooling Enhancements:
- Tools can now be added/removed using class references or objects.
- Improved schema handling for structured output and tool calls.
- Internal Refactoring:
- Major refactoring for better extensibility and maintainability.
- New base driver classes, cleaner separation of driver logic.
- Improved event hooks and error handling.
- New Example Files: Several new example scripts covering streaming, structured output, and advanced agent patterns.
- Tests: Added and updated tests for streaming, tool handling, and agent configuration.
See the full changelog and code diff
What's Changed
- Streaming support by @MaestroError in #21
- Documentation links added by @MaestroError in #23
- TOC update by @MaestroError in #24
- Fix: Tools with no properties enabled, removed workaround by @MaestroError in #25
- Made withTool and removeTool methods more flexible by @MaestroError in #28
- feat: add configurable agent namespaces by @johalternate in #26
- Added Gemini driver, API key & URL properties to Agent class by @MaestroError in #29
New Contributors
- @johalternate made their first contribution in #26
Full Changelog: 0.3.1...0.4.0
Fixes
What's Changed
- Docs and workflows updated by @MaestroError in #17
- Bump peter-evans/create-pull-request from 5 to 7 by @dependabot in #18
- Fix for ide helper issue +note in docs by @MaestroError in #20
Full Changelog: 0.3.0...0.3.1
v0.3 - More providers, reasoning models support and structured output in console
Release 0.3.0 changes
- OpenAiCompatible driver: allows use of any provider compatible with OpenAI API, including Ollama, vLLM, OpenRouter and many more
- Support for reasoning models like o1 & o3: New contributor @yannelli added a developer message type that allows us to use reasoning models in the Agents! More Thinking = Smarter agents 💪
- Complete chat removal: New command agent:chat:remove provides a way to completely remove chat histories and their associated keys for a specific agent.
- Structured output in console for agent:chat command: Now you can test your agent with structured output.
- Updated docs & Refactored agent initialization process: Minor updates for better clarity and smoother processes.
Check examples below 👇
Community server
We will:
- Help developers implement LarAgent in their projects
- Shape LarAgent's future by planning new features and making decisions
- Share the latest news
- Discuss ideas and collaborate with contributors
Join the fresh new LarAgent community server on Discord: https://discord.gg/NAczq2T9F8
Examples
OpenAiCompatible driver
Support for reasoning models like o1 & o3
Structured output in console for agent:chat command
What's Changed
- OpenAI compatible driver added by @MaestroError in #11
- Add support for developer message creation and testing by @yannelli in #12
- Developer message by property, addMessage method by @MaestroError in #13
- Docs updat by @MaestroError in #14
- Complete chat removal feature by @MaestroError in #15
- Structured output in console for agent:chat command by @MaestroError in #16
New Contributors
- @yannelli made their first contribution in #12
Full Changelog: 0.2.2...0.3.0
0.2.2
Merge pull request #10 from MaestroError/chat-keys-management Fix chat clear command
0.2.1
What's Changed
- Fix chat clear command by @MaestroError in #9
0.2.0
What's new in LarAgent?
- Support for Laravel 12
- Dynamic model setting via chainable withModel and overridable model methods
- New command for batch cleaning of chat histories: php artisan agent:chat:clear AgentName
- New $saveChatKeys property to control whether or not to store chat keys for future management (see the sketch after this list)
- New getChatKeys command for the Agent
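As a rough sketch, the new options can be combined like this (property and method names come from the list above; the agent class, chat key and model are placeholders):

use LarAgent\Agent;

class ChatAgent extends Agent
{
    // Skip storing chat keys if you don't need batch management later
    protected $saveChatKeys = false;
}

// Override the model for a single interaction
echo ChatAgent::for('demo_chat')->withModel('gpt-4o-mini')->respond('Hello!');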
Release notes 👇
What's Changed
- Event docs by @MaestroError in #3
- Bump aglipanci/laravel-pint-action from 2.4 to 2.5 by @dependabot in #5
- Support for L12 by @MaestroError in #6
- Dynamic model setting for agents by @MaestroError in #7
- Chat keys management by @MaestroError in #8
Full Changelog: 0.1.1...0.2.0
v0.1.1
LarAgent v0.1.1 is LIVE! 🚀
Bring the power of AI Agents to your Laravel projects with unparalleled ease! 🎉
We're thrilled to announce the first release of LarAgent, the easiest way to create and maintain AI agents in your Laravel applications.
Imagine building an AI assistant with the same elegance as creating an Eloquent model!
With LarAgent, you can:
- ✨ Create Agents with Artisan: php artisan make:agent YourAgentName
- 🛠️ Define Agent Behavior: Customize instructions, models, and more directly in your Agent class.
- 🧰 Add Tools Effortlessly: Use the #[Tool] attribute to turn methods into powerful agent tools (see the sketch below).
- 🗣️ Manage Chat History: Built-in support for in-memory, cache, session, file, and JSON storage.
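A minimal sketch of what such an agent might look like; the Tool attribute namespace, method names and method bodies here are illustrative assumptions, not copied from the package:

use LarAgent\Agent;
use LarAgent\Attributes\Tool;

class YourAgentName extends Agent
{
    protected $model = 'gpt-4o-mini'; // placeholder model

    public function instructions()
    {
        return 'You are a concise, helpful assistant.';
    }

    #[Tool('Get the current weather for a city')]
    public function currentWeather(string $city): string
    {
        // Replace with a real lookup; the stub keeps the sketch self-contained
        return "The weather in {$city} is sunny.";
    }
}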
Check out the documentation
Initial release v0.1.0
docs fix