How to run offline now that backends are downloaded at runtime? #5918
I used LocalAI (great project!) to run LLMs offline. And I mean offline: not just on my machine, but on a computer that isn't connected to the internet. So I download the container image on an internet-connected computer, save it to a file, and move it to the other machine with a USB drive. That worked great. But with the latest images there are no backends... I thought the AIO images would include the backends, but apparently not, since LocalAI tried to download llama-cpp on the first model load.
Is there a standard way to solve this? I suppose I could create a new image starting from the default LocalAI image, run the backend download command, and save that new image, but I wanted to check first.
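For reference, here is a rough sketch of that idea using docker commit. The image tag, the container name localai-prep, and the by-name install command local-ai backends install llama-cpp are assumptions; check the backends subcommand's help output for the exact syntax in your version.
# on an internet-connected machine: start the stock image, install the backend, snapshot it
docker run -d --name localai-prep localai/localai:latest
docker exec localai-prep local-ai backends install llama-cpp   # assumed by-name install; verify the CLI syntax
docker commit localai-prep localai-offline:latest
docker save -o localai-offline.tar localai-offline:latest
# move localai-offline.tar to the air-gapped machine and load it with: docker load -i localai-offline.tar
# note: if the backends path is a Docker volume in your setup, docker commit will not capture it;
# copy the backends folder separately in that case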
Replies: 2 comments
-
I think the AIO images download the models for you and set everything up on first start.
You can pull the backends with Docker and save them as an OCI file to install later, using docker save -o output-file.tar image-name:tag. Then, on the target host, run local-ai backends install ocifile://<path-to-tar> inside the container.
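A minimal sketch of the export step on the internet-connected machine; the backend image reference is a placeholder, since the exact registry path depends on your LocalAI version and hardware:
# pull the backend image and export it as a tar
docker pull <backend-image>:<tag>
docker save -o llama-cpp-backend.tar <backend-image>:<tag>
# move llama-cpp-backend.tar to the offline machine (e.g. on a USB drive)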
-
The easiest way is to back up your backends folder: you can literally copy it from one installation to another and it will just work.
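For example, assuming the backends directory sits next to your models directory on disk (adjust the paths to wherever your installation keeps it):
# on the source machine: archive the backends folder
tar -czf backends.tar.gz -C /path/to/localai backends
# copy backends.tar.gz to the offline machine, then restore it there
tar -xzf backends.tar.gz -C /path/to/localai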
Another way is to install backends from OCI files manually, which is ideal for air-gapped setups:
For instance, you can pull the backend images (even with Docker) and save them as standard images with docker save <image>. At that point, you can install them in LocalAI with local-ai backends install ocifile://<PATH_TO_TAR>.
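Putting the install step together on the offline side, a rough sketch; the container name local-ai and the /tmp path are assumptions:
# make the saved tar visible to the running LocalAI container, then install it
docker cp llama-cpp-backend.tar local-ai:/tmp/llama-cpp-backend.tar
docker exec -it local-ai local-ai backends install ocifile:///tmp/llama-cpp-backend.tar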