Unified Agent CLI + reusable Go agent core.
- Why Mister Morph
- Quickstart
- Supported Models
- Daemon mode
- Console mode
- Telegram bot mode
- Slack bot mode
- Embedding
- Built-in Tools
- Skills
- Security
- Troubleshooting
- Debug
- Configuration
What makes this project worth looking at:
- 🧩 Reusable Go core: Run the agent as a CLI, or embed it as a library/subprocess in other apps.
- 🤝 Mesh Agent Exchange Protocol (MAEP): You and your amigos run multiple agents and want them to message each other: use the MAEP, a p2p protocol with trust-state and audit trails. (see docs/maep.md, WIP).
- 🔒 Serious secure defaults: Profile-based credential injection, Guard redaction, outbound policy controls, and async approvals with audit trails (see docs/security.md).
- 🧰 Practical Skills system: Discover + inject `SKILL.md` from `file_state_dir/skills`, with simple on/off control (see docs/skills.md).
- 📚 Beginner-friendly: Built as a learning-first agent project, with detailed design docs in `docs/` and practical debugging tools like `--inspect-prompt` and `--inspect-request`.
Option A: download a prebuilt binary from GitHub Releases (recommended for production use):
```bash
curl -fsSL -o /tmp/install-mistermorph.sh https://raw.githubusercontent.com/quailyquaily/mistermorph/refs/heads/master/scripts/install-release.sh
sudo bash /tmp/install-mistermorph.sh
```
The installer supports:

```bash
# install a specific version
bash install-release.sh <version-tag>

# install into a custom directory
INSTALL_DIR=$HOME/.local/bin bash install-release.sh <version-tag>
```
Option B: install from source with Go:
```bash
go install github.com/quailyquaily/mistermorph@latest
mistermorph install # or: mistermorph install <dir>
```
The `install` command installs required files and built-in skills under `~/.morph/skills/` (or a specified directory via `<dir>`).
When config.yaml does not already exist in the install target, `install` first tries to find a readable config in this order:

1. the `--config` path
2. `<dir>/config.yaml`
3. `~/.morph/config.yaml`
If none is found, install runs an interactive setup wizard (TTY only) before writing config.yaml:
- select the LLM provider (`openai` | `gemini` | `cloudflare`)
- fill provider-specific required fields (`api_key` for `openai`/`gemini`; `account_id` + `api_token` for `cloudflare`)
- set the model
- set the Telegram `bot_token` + `group_trigger_mode`
- optionally set the Slack `bot_token` + `app_token` + `group_trigger_mode`
Use `mistermorph install --yes` to skip interactive prompts.
You can run without a config.yaml by using environment variables:
```bash
export MISTER_MORPH_LLM_API_KEY="YOUR_OPENAI_API_KEY_HERE"

# Optional explicit defaults:
export MISTER_MORPH_LLM_PROVIDER="openai"
export MISTER_MORPH_LLM_MODEL="gpt-5.2"
```
Mister Morph also supports Azure OpenAI, Anthropic Claude, AWS Bedrock, and others (see assets/config/config.example.yaml for more options). If you prefer file-based config, use ~/.morph/config.yaml.
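For the file-based route, a minimal `~/.morph/config.yaml` might look like this (a sketch; key names are inferred from the env-var mapping above, and assets/config/config.example.yaml is the canonical schema):

```yaml
llm:
  provider: "openai"
  model: "gpt-5.2"
  api_key: "YOUR_OPENAI_API_KEY_HERE"
```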
```bash
mistermorph run --task "Hello!"
```

Model support may vary by specific model ID, provider endpoint capability, and tool-calling behavior.
| Model family | Model range | Status |
|---|---|---|
| GPT | gpt-5* | ✅ Full |
| GPT-OSS | gpt-oss-120b | ✅ Full |
| Grok | grok-4+ | ✅ Full |
| Claude | claude-3.5+ | ✅ Full |
| DeepSeek | deepseek-3* | ✅ Full |
| Gemini | gemini-2.5+ | ✅ Full |
| Kimi | kimi-2.5+ | ✅ Full |
| MiniMax | minimax* / minimax-m2.5+ | ✅ Full |
| GLM | glm-4.6+ | ✅ Full |
| Cloudflare Workers AI | Workers AI model IDs | |
Run a Telegram bot (long polling) so you can chat with the agent from Telegram:
Edit the config file ~/.morph/config.yaml and set your Telegram bot token:
```yaml
telegram:
  bot_token: "YOUR_TELEGRAM_BOT_TOKEN_HERE"
  allowed_chat_ids: [] # add allowed chat ids here
```

```bash
mistermorph telegram --log-level info
```
Notes:
- Use `/id` to get the current chat id and add it to `allowed_chat_ids` for allowlisting.
- Use `/ask <task>` in groups.
- In groups, the bot also responds when you reply to it, or mention `@BotUsername`.
- You can send a file; it will be downloaded under `file_cache_dir/telegram/` and the agent can process it. The agent can also send cached files back via `telegram_send_file`, and send voice messages via `telegram_send_voice` from local voice files in `file_cache_dir`.
- The last loaded skill(s) stay "sticky" per chat (so follow-up messages won't forget SKILL.md); `/reset` clears this.
- `telegram.group_trigger_mode=smart` runs the addressing LLM on every group message; acceptance requires `addressed=true`, `confidence >= telegram.addressing_confidence_threshold`, and `interject > telegram.addressing_interject_threshold`.
- `telegram.group_trigger_mode=talkative` also runs the addressing LLM on every group message, but does not require `addressed=true` (it still uses the same confidence/interject thresholds).
- Use `/reset` in chat to clear conversation history.
- By default it runs multiple chats concurrently, but processes each chat serially (config: `telegram.max_concurrency`).
Run a Slack bot with Socket Mode so you can chat with the agent in Slack:
Edit the config file ~/.morph/config.yaml and set your Slack tokens:
```yaml
slack:
  bot_token: "YOUR_SLACK_BOT_TOKEN_HERE" # xoxb-...
  app_token: "YOUR_SLACK_APP_TOKEN_HERE" # xapp-...
  allowed_team_ids: []    # optional allowlist
  allowed_channel_ids: [] # optional allowlist
```

```bash
mistermorph slack --log-level info
```
Notes:
- Requires both an `xoxb` bot token and an `xapp` app token.
- Group trigger and addressing controls mirror the Telegram style (`strict|smart|talkative` + confidence/interject thresholds).
- By default it runs multiple conversations concurrently, but processes each `team_id:channel_id` conversation serially (`slack.max_concurrency` controls global concurrency).
- See docs/slack.md for setup and thread behavior details.
- See docs/bus.md for bus routing and ordering semantics.
Run a local HTTP daemon that accepts tasks sequentially (one-by-one), so you don’t need to restart the process for each task.
Start the daemon:
```bash
export MISTER_MORPH_SERVER_AUTH_TOKEN="change-me"
mistermorph serve --server-port 8787 --log-level info
```
Submit a task:
```bash
mistermorph submit --server-url http://127.0.0.1:8787 --auth-token "$MISTER_MORPH_SERVER_AUTH_TOKEN" --wait \
  --task "Summarize this repo and write to ./summary.md"
```
Run a local Console web UI for runtime inspection and file management.
Console currently includes:
- task list + task detail
- TODO files editor (`TODO.md`, `TODO.DONE.md`)
- contacts files editor (`ACTIVE.md`, `INACTIVE.md`)
- persona files editor (`IDENTITY.md`, `SOUL.md`)
- system diagnostics/config view
Build frontend:
```bash
cd web/console
pnpm install
pnpm build
```

Start the daemon (task API source):

```bash
MISTER_MORPH_SERVER_AUTH_TOKEN=dev-token \
  mistermorph serve --server-auth-token dev-token
```
Start Console backend + static hosting:
```bash
MISTER_MORPH_CONSOLE_PASSWORD=secret \
  MISTER_MORPH_SERVER_AUTH_TOKEN=dev-token \
  mistermorph console serve --console-static-dir ./web/console/dist
```
Open:
http://127.0.0.1:9080/console
More details: web/console/README.md.
Two common integration options:
- As a Go library: see `demo/embed-go/`.
- As a subprocess CLI: see `demo/embed-cli/`.
For Go-library embedding with built-in wiring, use the `integration` package:

```go
cfg := integration.DefaultConfig()
cfg.BuiltinToolNames = []string{"read_file", "url_fetch", "todo_update"} // optional; empty = all built-ins
cfg.Inspect.Prompt = true  // optional
cfg.Inspect.Request = true // optional
cfg.Set("llm.api_key", os.Getenv("OPENAI_API_KEY"))

rt := integration.New(cfg)
reg := rt.NewRegistry() // built-in tools wiring

prepared, err := rt.NewRunEngineWithRegistry(ctx, task, reg)
if err != nil { /* ... */ }
defer prepared.Cleanup()

final, runCtx, err := prepared.Engine.Run(ctx, task, agent.RunOptions{Model: prepared.Model})
_ = final
_ = runCtx
```
Core tools available to the agent:
- `read_file`: read local text files.
- `write_file`: write local text files under `file_cache_dir` or `file_state_dir`.
- `bash`: run a shell command (disabled by default).
- `url_fetch`: HTTP fetch with optional auth profiles.
- `web_search`: web search (DuckDuckGo HTML).
- `plan_create`: generate a structured plan.
Tools only available in Telegram mode:
- `telegram_send_file`: send a file in Telegram.
- `telegram_send_voice`: send a voice message in Telegram.
- `telegram_react`: add an emoji reaction in Telegram.
Please see docs/tools.md for detailed tool documentation.
mistermorph discovers skills under `file_state_dir/skills` (recursively), and injects selected SKILL.md content into the system prompt.
By default, `run` uses `skills.mode=on`, which loads skills from `skills.load` and optional `$SkillName` references (`skills.auto=true`).
Docs: docs/skills.md.
```bash
# list available skills
mistermorph skills list

# use a specific skill in the run command
mistermorph run --task "..." --skills-mode on --skill skill-name

# install remote skills
mistermorph skills install <remote-skill-url>
```
- Install audit: when installing remote skills, Mister Morph previews the skill content and performs a basic security audit (e.g., looking for dangerous commands in scripts) before asking for user confirmation.
- Auth profiles: skills can declare required auth profiles in the `auth_profiles` field. The agent only uses skills whose auth profiles are configured on the host, preventing accidental secret leaks (see `assets/skills/moltbook` and the `secrets`/`auth_profiles` sections in the config file).
Recommended systemd hardening and secret handling: docs/security.md.
Known issues and workarounds: docs/troubleshoots.md.
Use the `--log-level` flag to set the logging level (and `--log-format` for the format):

```bash
mistermorph run --log-level debug --task "..."
```

Two flags, `--inspect-prompt` and `--inspect-request`, dump internal state for debugging:

```bash
mistermorph run --inspect-prompt --inspect-request --task "..."
```

These flags dump the final system/user/tool prompts and the full LLM request/response JSON as plain text files to the `./dump` directory.
mistermorph uses Viper, so you can configure it via flags, env vars, or a config file.
- Config file: `--config /path/to/config.yaml` (supports `.yaml`/`.yml`/`.json`/`.toml`/`.ini`)
- Env var prefix: `MISTER_MORPH_`
- Nested keys: replace `.` and `-` with `_` (e.g. `tools.bash.enabled` → `MISTER_MORPH_TOOLS_BASH_ENABLED=true`)
Global (all commands)
`--config`, `--log-level`, `--log-format`, `--log-add-source`, `--log-include-thoughts`, `--log-include-tool-params`, `--log-include-skill-contents`, `--log-max-thought-chars`, `--log-max-json-bytes`, `--log-max-string-value-chars`, `--log-max-skill-content-chars`, `--log-redact-key` (repeatable)
run
`--task`, `--provider`, `--endpoint`, `--model`, `--api-key`, `--llm-request-timeout`, `--interactive`, `--skills-dir` (repeatable), `--skill` (repeatable), `--skills-auto`, `--skills-mode` (off|on), `--max-steps`, `--parse-retries`, `--max-token-budget`, `--timeout`, `--inspect-prompt`, `--inspect-request`
serve
`--server-bind`, `--server-port`, `--server-auth-token`, `--server-max-queue`
submit
`--task`, `--server-url`, `--auth-token`, `--model`, `--submit-timeout`, `--wait`, `--poll-interval`
console serve
`--console-listen`, `--console-base-path`, `--console-static-dir`, `--console-session-ttl`
telegram
`--telegram-bot-token`, `--telegram-allowed-chat-id` (repeatable), `--telegram-group-trigger-mode` (strict|smart|talkative), `--telegram-addressing-confidence-threshold`, `--telegram-addressing-interject-threshold`, `--telegram-poll-timeout`, `--telegram-task-timeout`, `--telegram-max-concurrency`
slack
`--slack-bot-token`, `--slack-app-token`, `--slack-allowed-team-id` (repeatable), `--slack-allowed-channel-id` (repeatable), `--slack-group-trigger-mode` (strict|smart|talkative), `--slack-addressing-confidence-threshold`, `--slack-addressing-interject-threshold`, `--slack-task-timeout`, `--slack-max-concurrency`
skills
- `skills list --skills-dir` (repeatable)
- `skills install --dest --dry-run --clean --skip-existing --timeout --max-bytes --yes`
install
`install [dir]`, `--yes`
Common env vars (these map to config keys):
`MISTER_MORPH_CONFIG`, `MISTER_MORPH_LLM_PROVIDER`, `MISTER_MORPH_LLM_ENDPOINT`, `MISTER_MORPH_LLM_MODEL`, `MISTER_MORPH_LLM_API_KEY`, `MISTER_MORPH_LLM_REQUEST_TIMEOUT`, `MISTER_MORPH_LOGGING_LEVEL`, `MISTER_MORPH_LOGGING_FORMAT`, `MISTER_MORPH_SERVER_AUTH_TOKEN`, `MISTER_MORPH_CONSOLE_PASSWORD`, `MISTER_MORPH_CONSOLE_PASSWORD_HASH`, `MISTER_MORPH_TELEGRAM_BOT_TOKEN`, `MISTER_MORPH_SLACK_BOT_TOKEN`, `MISTER_MORPH_SLACK_APP_TOKEN`, `MISTER_MORPH_FILE_CACHE_DIR`
Provider-specific settings use the same mapping, for example:
- `llm.azure.deployment` → `MISTER_MORPH_LLM_AZURE_DEPLOYMENT`
- `llm.bedrock.model_arn` → `MISTER_MORPH_LLM_BEDROCK_MODEL_ARN`
Tool toggles and limits also map to env vars, for example:
- `MISTER_MORPH_TOOLS_BASH_ENABLED`
- `MISTER_MORPH_TOOLS_URL_FETCH_ENABLED`
- `MISTER_MORPH_TOOLS_URL_FETCH_MAX_BYTES`
Secret values referenced by `auth_profiles.*.credential.secret_ref` are regular env vars too (example: `JSONBILL_API_KEY`).
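For illustration, a config fragment wiring a profile to that env var might look like this (the profile name and surrounding layout are hypothetical; see the `secrets`/`auth_profiles` sections in assets/config/config.example.yaml for the real shape):

```yaml
auth_profiles:
  jsonbill:                          # hypothetical profile name
    credential:
      secret_ref: "JSONBILL_API_KEY" # resolved from the JSONBILL_API_KEY env var
```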
Key meanings (see assets/config/config.example.yaml for the canonical list):
- Core: `llm.provider` selects the backend. Most providers use `llm.endpoint`/`llm.api_key`/`llm.model`. Azure uses `llm.azure.deployment` for the deployment name, while the endpoint/key are still read from `llm.endpoint` and `llm.api_key`. Bedrock uses `llm.bedrock.*`. `llm.tools_emulation_mode` controls tool-call emulation for models without native tool calling (off|fallback|force).
- Logging: `logging.level` (`info` shows progress; `debug` adds thoughts), `logging.format` (text|json), plus `logging.include_thoughts` and `logging.include_tool_params` (redacted).
- Loop: `max_steps` limits tool-call rounds; `parse_retries` retries invalid JSON; `max_token_budget` is a cumulative token cap (0 disables); `timeout` is the overall run timeout.
- Skills: `skills.mode` controls whether skills are used (off|on; legacy `explicit`/`smart` map to `on`); `file_state_dir` + `skills.dir_name` define the default skills root; `skills.load` always loads specific skills; `skills.auto` additionally loads `$SkillName` references.
- Tools: all tool toggles live under `tools.*` (e.g. `tools.bash.enabled`, `tools.url_fetch.enabled`) with per-tool limits and timeouts.