Releases · redis/agent-memory-server
0.9.4
What's Changed
New Features
- Make OpenAI and Anthropic API base URLs configurable by @abrookins in #43
Fixes
- Fix hash-based deduplication FT.AGGREGATE query execution by @abrookins in #41
Full Changelog: server/v0.9.3...server/v0.9.4
0.9.3
What's Changed
- Add remaining context percentage until auto-summarization to working memory endpoints by @abrookins in #38
- Fix authentication event loop corruption by converting get_current_user to async by @abrookins in #40
Full Changelog: server/v0.9.2...server/v0.9.3
0.9.2
What's Changed
- Token-based authentication by @abrookins in #35
Full Changelog: server/v0.9.1...server/v0.9.2
0.9.1
What's Changed
- Fix docker-compose file by @abrookins in #34
Full Changelog: server/v0.9.0...server/v0.9.1
0.9.0
Memory Evolution
- Working Memory (formerly Short-term Memory):
  - Renamed from "short-term memory" to "working memory" to better reflect its purpose
  - Enhanced with an automatic promotion system that moves structured memories to long-term storage in the background
  - Added support for arbitrary JSON data storage alongside memory structures
  - Improved automatic conversation summarization in working memory, based on token limits
- Long-term Memory Promotion:
  - Implemented a seamless flow from working memory to long-term memory via background task processing
  - The agent only has to think about working memory; long-term memory is managed automatically, but can also be managed manually (see the sketch after this list)
  - Use any LangChain VectorStore subclass for long-term storage; defaults to RedisVectorStore
  - Structured memories are automatically promoted with vector embeddings and metadata indexing
  - Deduplication and compaction systems for long-term memory management
  - Background task worker system for reliable, scalable memory processing
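A minimal sketch of this flow over HTTP, assuming a locally running server. The base URL, endpoint path, and payload field names below are illustrative assumptions, not the documented API; the point is that the client only writes working memory and the background worker handles promotion.

```python
"""Hypothetical sketch: write structured memories to working memory and let
the background task worker promote them to long-term storage.

Assumed (not from the release notes): base URL, endpoint path, and the
payload field names (memories, data, memory_type)."""
import httpx

BASE_URL = "http://localhost:8000"  # assumed local dev server
SESSION_ID = "session-123"          # arbitrary example session

payload = {
    "memories": [
        {"text": "User prefers dark mode", "memory_type": "semantic"},  # structured memory
    ],
    "data": {"cart_id": "abc-42"},  # arbitrary JSON stored alongside memory structures
}

# Write working memory for the session; promotion to long-term storage
# happens in the background, so no additional call is required here.
resp = httpx.put(f"{BASE_URL}/v1/working-memory/{SESSION_ID}", json=payload)
resp.raise_for_status()
print(resp.json())
```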
Client SDK and Tooling
- Working and long-term memory available as tools for LLM integration (the LLM can choose to persist a long-term memory, search for long-term memories, etc.)
- Higher-level tools support sending in a user's input and getting back a context-enriched prompt via the /v1/memory/prompt endpoint (see the sketch below)
- Support for namespace isolation, user separation, and session management
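A minimal sketch of calling the /v1/memory/prompt endpoint directly, assuming a local server. The request fields (text, session_id, namespace) and the response shape are illustrative assumptions rather than the documented contract.

```python
"""Hypothetical sketch: get a context-enriched prompt for a user's input
via the /v1/memory/prompt endpoint named above.

Assumed: base URL, request fields (text, session_id, namespace), and the
shape of the response."""
import httpx

resp = httpx.post(
    "http://localhost:8000/v1/memory/prompt",
    json={
        "text": "What did I say about my travel plans?",  # raw user input
        "session_id": "session-123",                      # assumed field
        "namespace": "demo-app",                          # assumed field
    },
)
resp.raise_for_status()

# The response bundles conversation context and relevant long-term
# memories into a prompt that can be passed directly to an LLM.
print(resp.json())
```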
Search and Retrieval
- Vector-based similarity search using OpenAI embeddings
- Rich filtering system by session, namespace, topics, entities, timestamps
- Hybrid search combining semantic similarity with metadata filtering (see the sketch after this list)
- RedisVL integration for high-performance vector operations with Redis
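A minimal sketch of a filtered search request, assuming a local server. The endpoint path, filter syntax, and response fields shown are illustrative assumptions.

```python
"""Hypothetical sketch: hybrid search over long-term memory, combining a
semantic query with metadata filters.

Assumed: base URL, endpoint path, filter syntax, and response fields."""
import httpx

resp = httpx.post(
    "http://localhost:8000/v1/long-term-memory/search",  # assumed path
    json={
        "text": "favorite restaurants",         # semantic similarity query
        "namespace": {"eq": "demo-app"},        # assumed filter syntax
        "topics": {"any": ["food", "travel"]},  # assumed filter syntax
        "limit": 5,
    },
)
resp.raise_for_status()
for memory in resp.json().get("memories", []):  # assumed response field
    print(memory)
```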
Enhanced Memory Classification
- Semantic memories for facts and preferences
- Episodic memories for time-bound events, which require an event date or timeframe (see the example records after this list)
- Message memories for long-term conversation records (optional)
- Automatic topic modeling and entity recognition, using either BERTopic or a configured LLM
- Rich metadata extraction and indexing
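For illustration, sketches of what semantic and episodic memory records might look like. The field names (memory_type, topics, entities, event_date) are assumptions inferred from the descriptions above, not the exact schema.

```python
"""Hypothetical sketch: example semantic and episodic memory records.
Field names (memory_type, topics, entities, event_date) are assumptions
inferred from the feature list above, not the exact schema."""

semantic_memory = {
    "text": "User is a vegetarian and prefers Italian food",
    "memory_type": "semantic",          # facts and preferences
    "topics": ["food", "preferences"],  # topic modeling via BERTopic or an LLM
    "entities": ["Italian food"],       # entity recognition output
}

episodic_memory = {
    "text": "User booked a trip to Lisbon",
    "memory_type": "episodic",             # time-bound event
    "event_date": "2025-06-01T00:00:00Z",  # episodic memories require a timeframe
    "topics": ["travel"],
    "entities": ["Lisbon"],
}
```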
Authentication and Security
- OAuth2/JWT Bearer token authentication with JWKS validation (see the sketch after this list)
- Multi-provider support (Auth0, AWS Cognito, Okta, Azure AD)
- Role-based access control using JWT claims
- Development mode with configurable auth bypass
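A minimal sketch of an authenticated request, assuming a token already issued by your identity provider. The base URL and request payload are assumptions; the endpoint reuses the /v1/memory/prompt path mentioned above.

```python
"""Hypothetical sketch: calling the API with an OAuth2/JWT Bearer token
issued by an identity provider (Auth0, AWS Cognito, Okta, Azure AD).

Assumed: base URL and request payload; the endpoint is the
/v1/memory/prompt endpoint mentioned above."""
import httpx

ACCESS_TOKEN = "<jwt-from-your-identity-provider>"  # placeholder token

resp = httpx.post(
    "http://localhost:8000/v1/memory/prompt",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},  # standard Bearer scheme
    json={"text": "Summarize what you know about me"},
)
resp.raise_for_status()
print(resp.json())
```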
Operational Features
- Comprehensive CLI Interface:
  - Commands for server management (api, mcp, task-worker)
  - Database operations (rebuild-index)
  - Background task scheduling and management
  - Health monitoring and diagnostics