How to run offline now that backends are downloaded at runtime? #5918

etlweather started this conversation in General

I used LocalAI (great project!) to run LLMs offline. And I mean offline: not just on my machine, but on a computer that isn't connected to the internet. So I download the container image on an internet-connected computer, save it to a file, and move it to the other machine on a USB drive. That worked great. But with the latest images there are no backends... I thought the AIO images would include the backends, but apparently not, since LocalAI tried to download llama-cpp on first model load.

Is there a standard way to solve this? I suppose I could create a new image starting from the default LocalAI image, run the backend download command, and save that new image, but I wanted to check first.
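(For reference, the custom-image approach mentioned above could look roughly like the sketch below. It is only a sketch under assumptions: the base image tag and the llama-cpp backend name are examples, and running local-ai backends install <name> at build time is inferred from the backend-gallery install command described in the replies, not confirmed here.)

# Sketch of the bake-backends-into-an-image approach (assumptions above).
cat > Dockerfile <<'EOF'
FROM localai/localai:latest-aio-cpu
# Assumed: downloads the backend into the image while network is available.
RUN local-ai backends install llama-cpp
EOF
docker build -t localai-offline:latest .
docker save -o localai-offline.tar localai-offline:latest
# Move the tar over USB, then on the offline machine:
docker load -i localai-offline.tar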


Replies: 2 comments


I think the AIO images download the models for you and set everything up on first start.

You can pull the backend images with Docker and save them as OCI tar files to install later, using docker save -o output-file.tar image-name:tag. Then, inside the container on the target host, run local-ai backends install ocifile://<path-to-tar>.
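A concrete version of the save step might look like the sketch below. The backend image name is a placeholder (the real names come from LocalAI's backend gallery), and llama-cpp-backend.tar is just an example filename.

# On the internet-connected machine: pull the backend image and save it.
docker pull <backend-image>:<tag>
docker save -o llama-cpp-backend.tar <backend-image>:<tag>
# Move llama-cpp-backend.tar to the offline machine (USB, etc.) and
# install it there with the ocifile:// command quoted above.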



The easiest way is to back up your backends folder; you can literally copy it from one installation to another and it will just work.
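For example, if the backends directory is bind-mounted into the container, the copy can be a plain archive-and-restore. The /srv/localai/backends path below is purely an assumption; use whatever directory your own setup mounts as the backends folder.

# On the working installation: archive the backends directory.
tar -czf backends.tar.gz -C /srv/localai/backends .   # path is an assumption
# On the offline machine: restore it to the equivalent mounted directory.
mkdir -p /srv/localai/backends
tar -xzf backends.tar.gz -C /srv/localai/backends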

Another way is to install backends from OCI files manually, which is ideal for air-gapped setups:

For instance, you can pull the backend images (even with docker) and save them as standard image archives with docker save -o <PATH_TO_TAR> <image>. At that point, you can install them in LocalAI with local-ai backends install ocifile://<PATH_TO_TAR>.
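If LocalAI itself runs in a container, the tar has to be visible inside that container before the install command can read it. A minimal sketch, assuming the container is named localai and using /tmp as a scratch path:

# Copy the saved backend tar into the running LocalAI container.
docker cp llama-cpp-backend.tar localai:/tmp/backend.tar
# Install it from the local file; no network access is required.
docker exec localai local-ai backends install ocifile:///tmp/backend.tar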
