This repository is no longer actively maintained.
Development has moved to menloresearch/llama.cpp.
Please contribute directly to llama.cpp moving forward.
Docs • API Reference • Changelog • Issues • Community
Under Active Development - Expect rapid improvements!
Cortex is the open-source brain for robots: vision, speech, language, tabular, and action -- the cloud is optional.
| Platform | Installer |
|---|---|
| Windows | cortex.exe |
| macOS | cortex.pkg |
| Linux (Debian) | cortex.deb |
All other Linux distributions:

```bash
curl -s https://raw.githubusercontent.com/menloresearch/cortex/main/engine/templates/linux/install.sh | sudo bash
```

Then start the server:

```bash
cortex start
```

```
Set log level to INFO
Host: 127.0.0.1 Port: 39281
Server started
API Documentation available at: http://127.0.0.1:39281
```
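Once the server is running, you can confirm it is reachable from code before making real API calls. A minimal sketch, assuming only the default host and port printed by `cortex start` (no specific route path is assumed, just that the root URL answers HTTP):

```python
import urllib.request
import urllib.error

# Default address printed by `cortex start` (assumption: checking the root
# URL is enough to confirm the server answers HTTP requests).
BASE_URL = "http://127.0.0.1:39281"

def server_is_up(base_url: str = BASE_URL, timeout: float = 2.0) -> bool:
    """Return True if the Cortex server answers HTTP requests at base_url."""
    try:
        urllib.request.urlopen(base_url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server responded, even if with an error status, so it is up.
        return True
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print("Cortex server reachable:", server_is_up())
```

If this prints `False`, run `cortex start` first and check that nothing else is bound to port 39281.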
You can download models from the Hugging Face model hub using the `cortex pull` command:
```bash
cortex pull llama3.2
```

```
Downloaded models:
    llama3.1:8b-gguf-q4-km
    llama3.2:3b-gguf-q2-k

Available to download:
    1. llama3:8b-gguf
    2. llama3:8b-gguf-q2-k
    3. llama3:8b-gguf-q3-kl
    4. ...

Select a model (1-21):
```
```bash
cortex run llama3.2
```

```
In order to exit, type `exit()`
>
```
You can also run a model in detached mode, i.e. in the background, and interact with it via the API:
```bash
cortex run -d llama3.2:3b-gguf-q2-k
cortex ps    # View active models
cortex stop  # Shutdown server
```

Local AI platform for running AI models with:
- Multi-Engine Support - Start with llama.cpp or add your own
- Hardware Optimized - Automatic GPU detection (NVIDIA/AMD/Intel)
- OpenAI-Compatible API - Tools, Runs, and Multi-modal coming soon
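Because the API aims to be OpenAI-compatible, a chat request should follow the familiar OpenAI shape. A hedged sketch, assuming the conventional `/v1/chat/completions` route on the default port and a model already started with `cortex run -d` (check the API reference for the actual routes):

```python
import json
import urllib.request

# Assumptions: default host/port from `cortex start`, an OpenAI-style
# /v1/chat/completions route, and a model already running in detached mode.
BASE_URL = "http://127.0.0.1:39281"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for the local server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_chat_request("llama3.2:3b-gguf-q2-k", "Hello!")
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    # OpenAI-compatible responses put the answer under choices[0].message.
    print(reply["choices"][0]["message"]["content"])
```

Any OpenAI client library pointed at `BASE_URL` should work the same way, since only the base URL differs from the hosted API.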
| Model | Command | Min RAM |
|---|---|---|
| Llama 3 8B | cortex run llama3.1 | 8GB |
| Phi-4 | cortex run phi-4 | 8GB |
| Mistral | cortex run mistral | 4GB |
| Gemma 2B | cortex run gemma2 | 6GB |
See the table below for nightly build binaries.
```bash
# Multiple quantizations
cortex-nightly pull llama3.2   # Choose from several quantization options

# Engine management (nightly)
cortex-nightly engines install llama-cpp -m

# Hardware control
cortex-nightly hardware detect
cortex-nightly hardware activate
```

- Quick troubleshooting: `cortex --help`
- Documentation
- Community Discord
- Report Issues
| Version | Windows | macOS | Linux |
|---|---|---|---|
| Stable | exe | pkg | deb |
| Beta | exe | pkg | deb |
| Nightly | exe | pkg | deb |
See BUILDING.md
- Open the Windows Control Panel.
- Navigate to Add or Remove Programs.
- Search for `cortexcpp` and double-click to uninstall. (For beta and nightly builds, search for `cortexcpp-beta` and `cortexcpp-nightly` respectively.)
Run the uninstaller script:

```bash
sudo cortex-uninstall.sh
```
The uninstall script ships with the binary and is installed to the /usr/local/bin/ directory. It is named cortex-uninstall.sh for stable builds, cortex-beta-uninstall.sh for beta builds, and cortex-nightly-uninstall.sh for nightly builds.
- For support, please file a GitHub ticket.
- For questions, join our Discord here.
- For long-form inquiries, please email hello@jan.ai.