Turn any LLM into a self-extending knowledge agent powered by a graph-structured memory - complete with PDF-to-graph ingestion, budget-aware optimisation, and dual-engine orchestration.
Enterprise-grade AI voice assistant with RAG-powered customer support, real-time phone integration, and advanced conversation management.
🧮 PINN Enterprise Platform - AI-Powered Physics Simulations with CopilotKit-style Research Canvas UI. Complete serverless architecture with RAG-powered code generation, 3D visualization, and global edge deployment.
Terminal-based platform where specialized AI experts (Legal, Tech, Business) engage in real-time debates and collaborative problem-solving to provide multi-perspective analysis for complex decisions.
An AI-powered crypto analytics platform integrating forecasting, sentiment, and on-chain intelligence, built with FastAPI, MCP protocol, and MLflow in a monolithic architecture.
RAG-Chat-Assistant is a complete Retrieval-Augmented Generation (RAG) system packaged as a sleek web app. Built with a Flask backend, it lets users drag and drop documents (PDF, Word, TXT) and chat with an AI assistant that understands their content. It processes files by chunking and embedding them with Google Gemini models and stores the resulting embeddings for retrieval.
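A minimal sketch of the chunk-and-embed step that description implies, assuming the google-generativeai package; the model name, chunk sizes, and helper functions are illustrative assumptions, not the app's actual code:

```python
# Illustrative sketch only - not RAG-Chat-Assistant's actual implementation.
# Assumes the google-generativeai package; model name and chunk sizes are guesses.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key


def chunk_text(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    """Split extracted document text into overlapping chunks."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks


def embed_chunks(chunks: list[str]) -> list[list[float]]:
    """Embed each chunk with a Gemini embedding model for later retrieval."""
    vectors = []
    for chunk in chunks:
        result = genai.embed_content(
            model="models/text-embedding-004",
            content=chunk,
            task_type="retrieval_document",
        )
        vectors.append(result["embedding"])
    return vectors
```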
A multilingual Retrieval-Augmented Generation (RAG) system built for an assessment. It features text processing, intelligent document chunking, semantic search with multilingual embeddings, and conversation memory management. Leverages FastAPI, LangChain, and ChromaDB for efficient knowledge base querying.
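As a rough illustration of the chunk, embed, and query flow such a system describes (not the repository's own code), here is a minimal LangChain + ChromaDB sketch; the splitter settings, embedding model, and collection name are assumptions:

```python
# Illustrative sketch of a multilingual chunk/embed/search pipeline - assumptions,
# not the repository's actual implementation.
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Chroma

# Split raw document text into overlapping chunks for retrieval.
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
with open("document.txt", encoding="utf-8") as f:
    chunks = splitter.split_text(f.read())

# Multilingual sentence-transformer embeddings (placeholder model choice).
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
)

# Persist the chunks in a local Chroma collection and run a semantic search.
store = Chroma.from_texts(chunks, embeddings, collection_name="knowledge_base")
for doc in store.similarity_search("¿Qué dice el documento sobre los precios?", k=3):
    print(doc.page_content)
```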