A Go-based system for syncing your notes to a vector database and querying them with AI assistance. The system consists of two main components:
- vector-sync: Monitors your notes directory and syncs changes to Pinecone vector database
- note-gpt: AI-powered query interface for your vectorized notes using Gemini
- Real-time sync: Automatically detects file changes and updates the vector database
- Semantic search: Query your notes using natural language
- AI assistance: Get contextual answers from your notes using Google's Gemini
- File watching: Monitors `.md` files in your notes directory
- Tree structure: Maintains directory structure for efficient syncing
- Incremental updates: Only processes changed files
- Go 1.21 or higher
- Pinecone account and API key
- Google AI Studio API key for Gemini
- Local embedding server (Ollama recommended)
```bash
git clone https://github.com/ashmaster/vector-notes.git

# Install vector-sync dependencies
cd vector-sync
go mod tidy

# Install note-gpt dependencies
cd ../note-gpt
go mod tidy
```
- Create a Pinecone account
- Create a new index with the following specifications:
- Dimension: 768 (for nomic-embed-text model)
- Metric: Cosine
- Index Name: `joyful-elm` (or update the code to match your index name)
Install and run Ollama with the nomic-embed-text model:
```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull the embedding model
ollama pull nomic-embed-text

# Start Ollama server (runs on localhost:11434 by default)
ollama serve
```
Create `.env` files in both `vector-sync/` and `note-gpt/` directories:
vector-sync/.env:
```
PINECONE_API_KEY=your_pinecone_api_key
PINECONE_HOST=https://your-index-host.pinecone.io
NOTES_DIR=/path/to/your/notes/directory
EMBEDDING_URL=http://localhost:11434/api/embed
```
note-gpt/.env:
```
PINECONE_API_KEY=your_pinecone_api_key
PINECONE_HOST=https://your-index-host.pinecone.io
NOTES_DIR=/path/to/your/notes/directory
EMBEDDING_URL=http://localhost:11434/api/embed
GEMINI_API_KEY=your_gemini_api_key
```
Replace the placeholder values:
- `your_pinecone_api_key`: Your Pinecone API key
- `https://your-index-host.pinecone.io`: Your Pinecone index host URL
- `/path/to/your/notes/directory`: Absolute path to your notes directory
- `your_gemini_api_key`: Your Google AI Studio API key
The vector-sync service monitors your notes directory and keeps the vector database synchronized:
```bash
cd vector-sync
go run main.go
```

This will:
- Build an initial tree structure of your notes
- Start watching for file changes
- Sync changes to Pinecone every 5 seconds
- Log all sync operations
The note-gpt service provides an interactive query interface:
```bash
cd note-gpt
go run cmd/main.go
```

This starts an interactive session where you can:
- Ask questions about your notes
- Get AI-generated answers with file citations
- Maintain conversation context across queries
Example interaction:
```
> What are the cake ingredients?
Based on your notes from "Cake ingredients.md", you'll need flour, sugar, eggs, butter, and baking powder.

> How much flour?
According to the same file, you need 2 cups of all-purpose flour.
```
The repository includes VS Code launch configurations. You can:
- Open the workspace in VS Code
- Go to Run and Debug (Ctrl+Shift+D)
- Select either "Launch Vector Sync" or "Launch note-gpt"
- Press F5 to start debugging
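For a Go workspace with two entry points, those launch configurations are roughly of this shape (the exact file is `.vscode/launch.json` in the repository; the `program` and `envFile` paths below are an assumed layout, not copied from it):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Launch Vector Sync",
      "type": "go",
      "request": "launch",
      "mode": "auto",
      "program": "${workspaceFolder}/vector-sync",
      "envFile": "${workspaceFolder}/vector-sync/.env"
    },
    {
      "name": "Launch note-gpt",
      "type": "go",
      "request": "launch",
      "mode": "auto",
      "program": "${workspaceFolder}/note-gpt/cmd",
      "envFile": "${workspaceFolder}/note-gpt/.env"
    }
  ]
}
```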
- File Watcher: Monitors `.md` files using `fsnotify`
- Tree Structure: Maintains file hierarchy in `Tree`
- Diff Detection: Compares client and server trees to find changes
- Vector Upsert: Embeds content and stores in Pinecone via `Vector`
- Query Processing: Takes user input and vectorizes it
- Semantic Search: Finds top 2 relevant notes from Pinecone
- Context Building: Reads file contents and builds LLM context
- AI Response: Generates response using Gemini with conversation history
```
├── vector-sync/          # Sync service
│   ├── internal/
│   │   ├── config.go     # Configuration management
│   │   ├── sync.go       # Main synchronization logic
│   │   ├── tree.go       # File tree operations
│   │   ├── node.go       # Tree node structure
│   │   ├── watcher.go    # File system watcher
│   │   └── utils.go      # Utility functions
│   ├── pkg/
│   │   ├── vector.go     # Pinecone integration
│   │   └── embedding.go  # Embedding API client
│   └── main.go           # Entry point
└── note-gpt/             # Query service
    ├── cmd/
    │   └── main.go       # CLI interface
    ├── internal/
    │   ├── app.go        # Main application logic
    │   └── config.go     # Configuration management
    └── pkg/
        ├── vector.go     # Pinecone query operations
        ├── embedding.go  # Embedding API client
        └── gemini.go     # Gemini AI client
```
Both services provide detailed logging:
- File operations and sync status
- Vector database operations
- API calls and responses
- Error details with context
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
This project is licensed under the MIT License.