This repository was archived by the owner on Jul 4, 2025. It is now read-only.

Cortex.cpp: CLI #1109

freelerobot started this conversation in Architecture Specs
Sep 4, 2024 · 2 comments · 6 replies

Goal

  • Consistent syntax
  • Command chains that make usage easier to infer (better DX), e.g. cortex run

CLI Syntax

  1. Method names
  • Follows Docker whenever possible: cortex models pull
  • Otherwise follows OpenAI API & CRUD naming: GET DELETE LIST UPDATE
  • cortex models delete
  • cortex models remove
  2. The order of method vs required parameters
  • cortex engines uninstall <engine_id>
  • cortex engines <engine_id> uninstall
  3. Required vs optional variables
  • Required variables don't have a flag
  • cortex engines uninstall <engine_id>
  • cortex engines uninstall -e <engine_id>
  • cortex engines uninstall <engine_id> -x <OPTIONAL_VAR>

Suggestion:

cortex METHOD CMD <REQUIRED_VARS> -F <OPTIONAL_VARS>
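A minimal sketch of that pattern, assuming a CLI11-style declaration (the help output later in this thread looks CLI11-generated; the -x,--extra flag and the descriptions are purely illustrative):

// Sketch only: the proposed pattern, assuming a CLI11-style library.
// cortex METHOD CMD <REQUIRED_VARS> -F <OPTIONAL_VARS>
#include <CLI/CLI.hpp>
#include <string>

int main(int argc, char** argv) {
  CLI::App app{"cortex"};

  auto* engines = app.add_subcommand("engines", "Manage inference engines");
  auto* uninstall = engines->add_subcommand("uninstall", "Uninstall an engine");

  std::string engine_id;  // required variable: bare positional, no flag
  uninstall->add_option("engine_id", engine_id, "Engine to uninstall")->required();

  std::string extra;      // optional variable: behind a flag (name is illustrative)
  uninstall->add_option("-x,--extra", extra, "Optional variable");

  CLI11_PARSE(app, argc, argv);
  return 0;
}

Invocation then reads cortex engines uninstall <engine_id> -x <value>, matching the rule that required variables are bare positionals.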

Command Chains

  1. cortex run <model_id>: what sequence of calls does it make?
  2. What other chaining do we have?
  3. Do we automatically start engines based on the model?

Implementation


Replies: 2 comments 6 replies


What is the syntax convention for our CLI?

CLI Syntax

  1. Method names
  • Follows OpenAI API: cortex models get
  • Follows Docker: cortex models pull
  2. The order of method vs required parameters
  • cortex models set <param>
  • cortex models <param> set

Command Chains

  1. cortex run <model_id>: what sequence of calls does it make?
  2. What other chaining do we have?
  3. Do we automatically start engines based on the model?

CLI Syntax

  1. Method names
  • Follows Docker: cortex models pull
  • We also support cortex models get to get local model information that we have already pulled
  2. The order of method vs required parameters
  • cortex models set <param>

Command Chains

  • cortex run <model_id>: what sequence of calls does it make? (see the sketch after this list)
    If the model does not exist, pull it: cortex models pull <model_id>
    If the engine does not exist, pull it: cortex engines <engine_name> install
    Start the model: cortex models start <model_id>
    Start the chat: cortex chat <model_id> -m <msg>
  • What other chaining do we have?
    We only have the run command
  • Do we automatically start engines based on the model?
    Yes, we do
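A rough sketch of that chain in code, with the four steps as stub helpers (the helper names are hypothetical, not the actual Cortex internals):

// Illustrative only: the `cortex run <model_id>` chain described above.
// The helpers are stubs standing in for the real commands.
#include <iostream>
#include <string>

bool ModelExists(const std::string&)  { return false; }  // stub
bool EngineExists(const std::string&) { return false; }  // stub
void PullModel(const std::string& m)     { std::cout << "cortex models pull " << m << "\n"; }
void InstallEngine(const std::string& e) { std::cout << "cortex engines " << e << " install\n"; }
void StartModel(const std::string& m)    { std::cout << "cortex models start " << m << "\n"; }
void StartChat(const std::string& m)     { std::cout << "cortex chat " << m << " -m <msg>\n"; }

void Run(const std::string& model_id, const std::string& engine_name) {
  if (!ModelExists(model_id))     PullModel(model_id);         // 1. pull model if missing
  if (!EngineExists(engine_name)) InstallEngine(engine_name);  // 2. install engine if missing
  StartModel(model_id);                                        // 3. start model
  StartChat(model_id);                                         // 4. start chat
}

int main() { Run("<model_id>", "<engine_name>"); }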
5 replies

freelerobot Sep 6, 2024
Maintainer Author

@vansangpfiev

  1. Can we make this syntax a bit more consistent:
  • not: cortex engines <engine_name> install
  • correct: cortex models start <model_id>
  2. Can we decide on syntax around required vs optional flags, e.g. when do we expect users to use a flag, vs not?
  3. I see this getting conflated as well: get vs pull vs install. Was the decision:
  • pull: just for models, Docker-like
  • get: available for most methods, similar to a local API GET request
  • install: for when the user is installing something from remote/internet?

  1. Yes - I think we should follow the syntax that @dan-homebrew suggested (sketched below):
cortex <feature> <command> <subject>
cortex engines install <engine>

cc: @namchuai
  2. All flags are optional for now (except cortex -v), I think:
  • --verbose for logging
  • --version to specify a version to download
  • --message to append a message to the chat command
  3. It is correct.
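A sketch of that cortex <feature> <command> <subject> shape together with the flags listed above, again assuming a CLI11-style library (the version string and descriptions are placeholders):

// Sketch only: nested subcommands (feature -> command -> subject) plus the flags above.
#include <CLI/CLI.hpp>
#include <string>

int main(int argc, char** argv) {
  CLI::App app{"cortex"};
  bool verbose = false;
  app.add_flag("--verbose", verbose, "Verbose logging");
  app.set_version_flag("-v,--version", "cortex <version>");  // placeholder version string

  auto* engines = app.add_subcommand("engines", "Manage inference engines");
  auto* install = engines->add_subcommand("install", "Install an engine");
  std::string engine;
  install->add_option("engine", engine, "Engine to install")->required();
  std::string engine_version;
  install->add_option("--version", engine_version, "Engine version to download");  // optional

  auto* chat = app.add_subcommand("chat", "Send a chat completion request");
  std::string model_id, message;
  chat->add_option("model_id", model_id, "Model to chat with");
  chat->add_option("-m,--message", message, "Message to append to the chat command");

  CLI11_PARSE(app, argc, argv);
  return 0;
}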

freelerobot Sep 13, 2024
Maintainer Author

Maybe we can use a specific example:

Usage: cortex-nightly chat [OPTIONS] [model_id]
Positionals:
 model_id TEXT
Options:
 -h,--help Print this help message and exit
 -m,--message TEXT Message to chat with model

Confusing

  1. model_id is required
  2. message is not
  3. The request fails if the message is not provided

OAI indicates both params are required.

So should the proper syntax actually be:

  • cortex chat <model_id> <message> [-options] OR
  • cortex chat [-options] <model_id> <message>

cc @dan-homebrew
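Assuming the help above is CLI11-generated, the brackets around [model_id] suggest the positional simply was not declared as required; marking it required should change the generated help, independent of whether -m stays optional. A minimal sketch:

// Sketch only: declare model_id as a required positional so the generated
// help no longer presents it as optional (assumes a CLI11-style library).
#include <CLI/CLI.hpp>
#include <string>

int main(int argc, char** argv) {
  CLI::App app{"cortex"};
  auto* chat = app.add_subcommand("chat", "Send a chat completion request");

  std::string model_id;
  chat->add_option("model_id", model_id, "Model to chat with")
      ->required();  // annotated as required in the generated help

  std::string message;
  chat->add_option("-m,--message", message, "Message to chat with model");
  // whether this one is also ->required() is exactly the open question above

  CLI11_PARSE(app, argc, argv);
  return 0;
}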


@vansangpfiev Quick check: does the CLI library we use auto-generate the "UI" (e.g. with helper flags, etc)?

@0xSage These are standard CLI library features, I wouldn't over-litigate this - there are best practices out there already


@dan-homebrew Yes - The "UI" is autogenerated by the CLI library
@0xSage The message is mandatory in the request body. But we are talking about the CLI, so if the message is not specified, can we just let the user input it later (sketched in code below)? Something like:

.\cortex.exe models chat tinyllama
In order to exit, type `exit()`
> // input message here

The syntax should be:

  • cortex chat <model_id> -m <message> OR
  • cortex chat -m <message> <model_id>
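A minimal sketch of that fallback, assuming the message flag stays optional: if -m was not given, drop into a read loop until the user types exit() (SendChat is a stub standing in for the real chat completion request; the prompt text is illustrative):

// Sketch only: interactive fallback when no -m/--message was supplied.
#include <iostream>
#include <string>

// Placeholder for the real chat completion call.
void SendChat(const std::string& model_id, const std::string& message) {
  std::cout << "[" << model_id << "] echo: " << message << "\n";
}

void Chat(const std::string& model_id, const std::string& message) {
  if (!message.empty()) {  // -m <message> was provided: one-shot request
    SendChat(model_id, message);
    return;
  }
  std::cout << "In order to exit, type `exit()`\n";
  std::string line;
  while (std::cout << "> " && std::getline(std::cin, line)) {
    if (line == "exit()") break;  // leave the interactive shell
    SendChat(model_id, line);
  }
}

int main() { Chat("tinyllama", ""); }  // no message -> interactive mode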

  1. Can we make cortex == cortex -h, instead of starting a server?
    Running docker is equivalent to docker --help.
    Otherwise it's too jarring, and a bad onboarding experience for new devs.
    @dan-homebrew

  2. This means most subcommands, e.g. cortex models [no command], can also bring up the help. Nice DX.

  3. Can I recommend we separate & group the command list when users run cortex [-h]?
    It would really help readability (see the sketch after the example below).

Example:

> cortex
Options:
 -h,--help Print this help message and exit
 --verbose Verbose logging
 -v, --version Cortex version
Shortcuts:
 pull Download a model by URL (or HuggingFace Repo ID)
 run Start a model and interactive chat shell
Commands:
 chat Sends a chat completion request
 models Subcommands for managing models
 embeddings Subcommands for generating embeddings
 engines Subcommands for managing inference engines
API Server Commands:
 ps Show running models and their status
 start Start the API Server
 stop Stop the API server
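If the CLI library is CLI11 (an assumption based on the help format), this kind of grouping is supported directly via a per-subcommand group name. A minimal sketch of the proposed layout, including bare cortex printing the help instead of starting a server:

// Sketch only: grouped subcommands in the generated help, mirroring the
// layout proposed above (assumes a CLI11-style library).
#include <CLI/CLI.hpp>
#include <iostream>

int main(int argc, char** argv) {
  CLI::App app{"cortex"};
  bool verbose = false;
  app.add_flag("--verbose", verbose, "Verbose logging");
  app.set_version_flag("-v,--version", "cortex <version>");  // placeholder string

  app.add_subcommand("pull", "Download a model by URL (or HuggingFace Repo ID)")->group("Shortcuts");
  app.add_subcommand("run", "Start a model and interactive chat shell")->group("Shortcuts");

  app.add_subcommand("chat", "Sends a chat completion request")->group("Commands");
  app.add_subcommand("models", "Subcommands for managing models")->group("Commands");
  app.add_subcommand("embeddings", "Subcommands for generating embeddings")->group("Commands");
  app.add_subcommand("engines", "Subcommands for managing inference engines")->group("Commands");

  app.add_subcommand("ps", "Show running models and their status")->group("API Server Commands");
  app.add_subcommand("start", "Start the API Server")->group("API Server Commands");
  app.add_subcommand("stop", "Stop the API server")->group("API Server Commands");

  if (argc < 2) {  // bare `cortex` behaves like `cortex -h`
    std::cout << app.help();
    return 0;
  }
  CLI11_PARSE(app, argc, argv);
  return 0;
}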
1 reply

Yes, this is straightforward and can be implemented in the final days as finishing touches.
We can also put in the "Cortex" ASCII text.

I am adding this in as a tail-end task for Cortex v0.1.

Labels: category: app shell (Installer, updaters, distributions)

This discussion was converted from issue #1093 on September 05, 2024 14:08.
