AI is evolving fast, and so are the tools to build intelligent, responsive applications. In our recent Microsoft Reactor session, Catherine Wang (Principal Product Manager at Microsoft) and Roberto Perez (Microsoft MVP and Senior Global Solutions Architect at Redis) shared how Azure Managed Redis helps you create Retrieval-Augmented Generation (RAG) AI agents with exceptional speed and consistency.
Why RAG agents?
RAG applications combine the power of large language models (LLMs) with your own data to answer questions accurately. For example, a customer support chatbot can deliver precise, pre-approved answers instead of inventing them on the fly. This ensures consistency, reduces risk, and improves customer experience.
Where Azure Managed Redis fits with agentic scenarios
In this project, Azure Managed Redis serves as a high-performance, in-memory vector database for Agentic Retrieval-Augmented Generation (RAG): it runs fast similarity searches over embeddings to retrieve the most relevant known answers and ground the LLM with them.
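To make that retrieval path concrete, here is a minimal Python sketch using the redis-py search commands. The index name, key prefix, field names, vector dimensions, and the `embed()` stub are illustrative assumptions, not the session's code; in a real app `embed()` would call your embedding model.

```python
import numpy as np
import redis
from redis.commands.search.field import TextField, VectorField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType
from redis.commands.search.query import Query

r = redis.Redis(host="localhost", port=6379)  # point this at your Azure Managed Redis endpoint


def embed(text: str) -> np.ndarray:
    # Stand-in for a real embedding model (e.g. an Azure OpenAI embeddings call).
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(1536, dtype=np.float32)


# Index hash keys prefixed "faq:" with a text field and an HNSW vector field.
schema = (
    TextField("answer"),
    VectorField("embedding", "HNSW",
                {"TYPE": "FLOAT32", "DIM": 1536, "DISTANCE_METRIC": "COSINE"}),
)
r.ft("faq-idx").create_index(
    schema, definition=IndexDefinition(prefix=["faq:"], index_type=IndexType.HASH))

# Store a pre-approved answer together with the embedding of the question it covers.
r.hset("faq:1", mapping={
    "answer": "You can reset your password from the account settings page.",
    "embedding": embed("How do I reset my password?").tobytes(),
})

# KNN query: retrieve the stored answers closest to the user's question.
q = (Query("*=>[KNN 3 @embedding $vec AS score]")
     .sort_by("score")
     .return_fields("answer", "score")
     .dialect(2))
hits = r.ft("faq-idx").search(q, query_params={"vec": embed("I forgot my password").tobytes()})
for doc in hits.docs:
    print(doc.score, doc.answer)
```

HNSW is used here for fast approximate search; a FLAT index is an equally valid choice for smaller knowledge bases where exact results matter more than latency.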
Beyond this, Azure Managed Redis is a versatile platform that supports a range of AI-native use cases, including:
- Semantic Cache – Cache and reuse previous LLM responses based on semantic similarity to reduce latency and improve reliability (sketched in code after this list).
- LLM Memory – Persist recent interactions and context to maintain coherent, multi-turn conversations.
- Agentic Memory – Store long-term agent knowledge, actions, and plans to enable more intelligent and autonomous behavior over time.
- Feature Store – Serve real-time features to machine learning models during inference for personalization and decision-making.
These capabilities make Azure Managed Redis a foundational building block for fast, stateful, and intelligent AI applications.
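The semantic cache item above boils down to "search before you generate": embed the incoming question, look for a previously answered question that is close enough, and only call the LLM on a miss. The sketch below reuses the hypothetical `embed()` helper and index layout from the earlier example; the `cache-idx` index, field names, distance threshold, and `call_llm()` helper are assumptions for illustration.

```python
from redis.commands.search.query import Query

CACHE_THRESHOLD = 0.1  # max cosine distance to treat a cached entry as "the same question"


def answer_with_semantic_cache(r, question: str) -> str:
    vec = embed(question).tobytes()
    # Look for the single closest cached question; assumes a "cache-idx" vector index
    # over hashes with "response" and "embedding" fields, built like the FAQ index above.
    q = (Query("*=>[KNN 1 @embedding $vec AS score]")
         .sort_by("score")
         .return_fields("response", "score")
         .dialect(2))
    hits = r.ft("cache-idx").search(q, query_params={"vec": vec}).docs
    if hits and float(hits[0].score) <= CACHE_THRESHOLD:
        return hits[0].response                       # cache hit: reuse the stored LLM answer

    response = call_llm(question)                     # cache miss: call the model once...
    r.hset(f"cache:{abs(hash(question))}", mapping={  # ...and store the result for next time
        "response": response,
        "embedding": vec,
    })
    return response
```

The threshold is the main tuning knob: too loose and unrelated questions share answers, too tight and the cache never hits.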
Demo highlights
In the session, the team demonstrates how to:
- Deploy a RAG AI agent using .NET Aspire and Azure Container Apps.
- Secure your Redis instance with Microsoft Entra ID, removing the need for connection strings (see the token-based sketch after this list).
- Use Semantic Kernel to orchestrate agents and retrieve knowledge base content via vector search.
- Monitor and debug microservices with built-in observability tools.
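For the Entra ID step above, the general pattern is to obtain an access token with `DefaultAzureCredential` and hand it to the Redis client in place of an access key. Here is a minimal Python sketch; the token scope, host name, port, and user name shown are assumptions based on how Entra ID authentication works for Azure's Redis offerings, so check your resource's connection details, and remember that tokens expire and must be refreshed periodically.

```python
import redis
from azure.identity import DefaultAzureCredential

# Acquire a Microsoft Entra ID access token instead of storing a connection string or access key.
credential = DefaultAzureCredential()
token = credential.get_token("https://redis.azure.com/.default")  # assumed Redis token scope

r = redis.Redis(
    host="<your-instance>.<region>.redis.azure.net",  # your Azure Managed Redis endpoint
    port=10000,                                       # assumed port; use the one shown in the portal
    ssl=True,
    username="<object-id-of-your-identity>",          # the Entra ID principal's object ID (assumption)
    password=token.token,                             # the access token is presented as the password
)
print(r.ping())
```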
Finally, the session walks through code examples in C# and Python, demonstrating how you can integrate Redis search, vector similarity, and prompt orchestration into your own apps.
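As a flavor of the prompt-orchestration side, the sketch below grounds a chat model with the answers retrieved from Redis. It uses the plain OpenAI Python client rather than Semantic Kernel, and the client setup, model name, and prompt wording are illustrative assumptions rather than the sample's code.

```python
from openai import OpenAI

client = OpenAI()  # or AzureOpenAI(...) pointed at your Azure OpenAI resource


def grounded_answer(question: str, retrieved_answers: list[str]) -> str:
    # Put the pre-approved answers retrieved from Redis into the system prompt so the
    # model replies from known content instead of inventing an answer.
    context = "\n".join(f"- {a}" for a in retrieved_answers)
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer the user's question using only the approved answers below. "
                        "If none apply, say you don't know.\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return completion.choices[0].message.content
```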
Get Started
Ready to explore?
✅ Watch the full session replay: Building a RAG AI Agent Using Azure Redis
✅ Try the sample code: Azure Managed Redis RAG AI Sample