An overview of Redis for AI and search documentation
Redis stores and indexes vector embeddings that semantically represent unstructured data such as text passages, images, video, or audio. Store vectors and their associated metadata within hashes or JSON documents for indexing and querying.
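As a minimal sketch of what storing a vector in a hash looks like: Redis vector fields expect the raw little-endian float32 bytes of the embedding, which can be produced with the standard library. The key name, field names, and 4-dimensional embedding below are illustrative, not part of any fixed schema.

```python
import struct

# Hypothetical 4-dimensional embedding; real embeddings typically have
# hundreds or thousands of dimensions.
embedding = [0.1, 0.2, 0.3, 0.4]

# Serialize to little-endian float32 bytes, the format Redis vector
# fields expect for FLOAT32 vectors.
vector_bytes = struct.pack(f"<{len(embedding)}f", *embedding)

# With a redis-py client (assumed here as `r = redis.Redis()`), the
# vector and its metadata would be stored together in one hash:
# r.hset("doc:1", mapping={
#     "content": "a text passage",
#     "genre": "news",
#     "embedding": vector_bytes,
# })

print(len(vector_bytes))  # 4 floats x 4 bytes each = 16 bytes
```

Storing the vector alongside its metadata in the same hash is what lets a later query filter on metadata and rank by vector similarity in a single operation.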
This page is organized into a few sections depending on what you're trying to do:
How to's - The comprehensive reference section for every feature, API, and setting. It's your source for detailed, technical information to support any level of development.
Concepts - Explanations of foundational ideas and core principles to help you understand the reason behind the product's features and design.
Quickstarts - Short, focused guides to get you started with key features or workflows in minutes.
Tutorials - In-depth walkthroughs that dive deeper into specific use cases or processes. These step-by-step guides help you master essential tasks and workflows.
Integrations - Guides and resources to help you connect and use the product with popular tools, frameworks, or platforms.
Video tutorials - Watch our AI video collection featuring practical tutorials and demonstrations.
Benchmarks - Performance comparisons and metrics to demonstrate how the product performs under various scenarios. This helps you understand its efficiency and capabilities.
Best practices - Recommendations and guidelines for maximizing effectiveness and avoiding common pitfalls. This section equips you to use the product effectively and efficiently.
How to's
Create a vector index: Redis maintains a secondary index over your data based on a schema you define, which can include vector fields and metadata fields. Two vector index types are supported: FLAT (exact, brute-force search) and HNSW (approximate search over a graph structure, faster at scale).
Quickstarts or recipes are useful when you are trying to build specific functionality. For example, you might want to do RAG with LangChain or set up LLM memory for your AI agent.
Retrieval-augmented generation (RAG) is a technique that enhances an LLM's ability to respond to user queries. The retrieval step is backed by a vector database, which returns results semantically relevant to the user's query; those results serve as context that augments the LLM's generative output.
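The retrieval step described above can be sketched as a K-nearest-neighbors query against the vector index. The index name, field names, and `K=3` below are illustrative assumptions; the query string follows the Redis query syntax for vector search.

```python
import struct

# The user's query is embedded with the same model used at indexing
# time (4 dimensions here only for illustration).
query_embedding = [0.1, 0.2, 0.3, 0.4]
query_bytes = struct.pack(f"<{len(query_embedding)}f", *query_embedding)

# KNN query: return the 3 documents whose vectors are closest to the
# query embedding, exposing the distance as "score".
knn_query = "*=>[KNN 3 @embedding $vec AS score]"

# With redis-py, this would be executed roughly as:
# r.execute_command(
#     "FT.SEARCH", "idx:docs", knn_query,
#     "PARAMS", "2", "vec", query_bytes,
#     "SORTBY", "score",
#     "DIALECT", "2",
# )
# The returned passages are then prepended to the LLM prompt as context.

print(knn_query)
```

The leading `*` means "match all documents"; replacing it with a filter expression (for example on the `genre` tag field) restricts the KNN search to a metadata-filtered subset, a common pattern in RAG pipelines.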