
# 3phase/offline-ext-llm

A bundler to run local LLMs, with the models hosted on a remote device (such as an external drive).


## Local LLM bundler with external models

1. Set an environment variable pointing to the models directory on the offline host:

   ```shell
   export HOST_OFFLINE_PATH="/Volumes/Offline LLM/.ollama/models" # or wherever the external drive is mounted
   ```

2. Build and start the containers:

   ```shell
   docker compose up -d --build
   ```

3. Open http://localhost:8080, go to the settings, and set the API connection to `http://localhost:11434/v1`.
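For orientation, the setup above is consistent with a compose file along these lines. This is only a sketch, not the repository's actual file: the service names, image tags, the Open WebUI pairing, and the container-side mount path are all assumptions inferred from the ports and the `HOST_OFFLINE_PATH` variable.

```yaml
# Hypothetical docker-compose.yml sketch (assumed, not the repo's file):
# an Ollama server whose model store is bind-mounted from the external
# drive, plus a web UI served on port 8080.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      # HOST_OFFLINE_PATH points at the models directory on the external drive
      - "${HOST_OFFLINE_PATH}:/root/.ollama/models"

  webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "8080:8080"
    depends_on:
      - ollama
```

With a layout like this, the models never need to be copied onto the local disk; Ollama reads them directly from the mounted external volume.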
