
About

llm-shell is an extensible, developer-oriented command-line console that can interact with multiple Large Language Models (LLMs). It serves as both a demo of the llmrb/llm library and a tool to help improve the library through real-world usage and feedback.

Demo


Features

General

  • 🌟 Unified interface for multiple Large Language Models (LLMs)
  • 🤝 Supports Gemini, OpenAI, Anthropic, xAI (Grok), DeepSeek, LlamaCpp and Ollama

Customize

  • 📤 Attach local files as conversation context
  • 🔧 Extend with your own functions and tool calls
  • 🚀 Extend with your own console commands

Shell

  • 🤖 Built-in auto-complete powered by Readline
  • 🎨 Built-in syntax highlighting powered by CodeRay
  • 📄 Pipes long outputs through the less pager
  • 📝 Advanced Markdown formatting and output

Customization

Tools

For security and safety reasons, a user must confirm the execution of all function calls before they happen.

Tools are loaded at boot time, and custom tools can be added to the ${HOME}/.local/share/llm-shell/tools/ directory. Tools are shared with the LLM, which can request their execution, and the LLM is also made aware of a tool's return value after it has been called. See the tools/ directory for more examples:

class System < LLM::Tool
  name "system"
  description "Run a system command"
  param :command, String, "The command to execute", required: true

  def call(command:)
    ro, wo = IO.pipe
    re, we = IO.pipe
    Process.wait Process.spawn(command, out: wo, err: we)
    [wo, we].each(&:close)
    {stderr: re.read, stdout: ro.read}
  end
end
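For comparison, the body of the tool above can also be written with Ruby's stdlib Open3 instead of manual pipes. This sketch drops the LLM::Tool scaffolding (name, description, param) so it runs standalone; inside llm-shell it would subclass LLM::Tool as shown above, and the class name SystemViaOpen3 is just an illustration:

```ruby
require "open3"

# Standalone sketch: capture3 spawns the command, waits for it
# to exit, and returns its stdout, stderr, and process status.
class SystemViaOpen3
  def call(command:)
    stdout, stderr, _status = Open3.capture3(command)
    {stderr: stderr, stdout: stdout}
  end
end

result = SystemViaOpen3.new.call(command: "echo hello")
```

Open3.capture3 avoids the deadlock risk of reading a pipe only after the child exits, since it drains both streams while the command runs.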

Commands

llm-shell can be extended with your own console commands, which take precedence over messages sent to the LLM. Custom commands can be added to the ${HOME}/.local/share/llm-shell/commands/ directory. See the commands/ directory for more examples:

class SayHello < LLM::Command
  name "say-hello"
  description "Say hello to somebody"

  def call(name)
    io.rewind.print "Hello, #{name}!"
  end
end

Prompts

It is recommended that custom prompts instruct the LLM to emit markdown; otherwise you might see unexpected results, because llm-shell assumes the LLM will emit markdown.

The prompt can be changed by adding a file to the ${HOME}/.local/share/llm-shell/prompts/ directory, then choosing it at boot time with the -r PROMPT, --prompt PROMPT option. Generally you will want to fork default.txt to preserve the original prompt rules around markdown and files, then modify the copy to suit your own needs and preferences.
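A minimal sketch of the steps above. The file name my-prompt.txt and its one-line contents are placeholders; in practice you would start from a copy of default.txt rather than a single rule:

```shell
# Create the prompts directory and drop a custom prompt into it.
mkdir -p "${HOME}/.local/share/llm-shell/prompts"
printf 'Always format responses as markdown.\n' \
  > "${HOME}/.local/share/llm-shell/prompts/my-prompt.txt"
# Then select it at boot time:
#   llm-shell -p openai -r my-prompt.txt
```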

Settings

The console client can be configured at the command line through option switches, or through a TOML file that accepts the same options. For cloud providers the key option is the only required parameter; everything else has defaults. The TOML file is read from ${HOME}/.config/llm-shell.toml and has the following format:

# ~/.config/llm-shell.toml
[openai]
key = "YOURKEY"
[gemini]
key = "YOURKEY"
[anthropic]
key = "YOURKEY"
[xai]
key = "YOURKEY"
[deepseek]
key = "YOURKEY"
[ollama]
host = "localhost"
model = "deepseek-coder:6.7b"
[llamacpp]
host = "localhost"
model = "qwen3"

Usage

CLI

Usage: llm-shell [OPTIONS]
 -p, --provider NAME Required. Options: gemini, openai, anthropic, ollama or llamacpp.
 -k, --key [KEY] Optional. Required by gemini, openai, and anthropic.
 -m, --model [MODEL] Optional. The name of a model.
 -h, --host [HOST] Optional. Sometimes required by ollama.
 -o, --port [PORT] Optional. Sometimes required by ollama.
 -r, --prompt [PROMPT] Optional. The prompt to use.
 -v, --version Optional. Print the version and exit.
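Some hypothetical invocations that combine the switches above. The model names are examples, not defaults shipped by llm-shell:

```shell
# Cloud provider: a key is required; the model name is an assumption.
# llm-shell -p openai -k "$OPENAI_API_KEY" -m gpt-4o-mini
#
# Local Ollama: host (and sometimes port) may be required;
# pick a model that is actually pulled on your machine.
# llm-shell -p ollama -h localhost -m deepseek-coder:6.7b
```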

Install

llm-shell can be installed via rubygems.org:

gem install llm-shell

License

BSD Zero Clause
See LICENSE