
Local Deployment

This guide walks through the complete process of installing Docker and running the OmniBox cloud service with Docker Compose on a freshly installed Debian 12 system, starting from the command line as the root user.

Deployment Requirements

  1. A Linux server with root access and a command line
  2. Basic computer knowledge
  3. Access to GitHub and ghcr.io
  4. Ports 8080, 8025, and 9000 publicly accessible by default (or change them in the .env file)
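Before starting, you can quickly check whether any of these ports is already taken. A small sketch using bash's `/dev/tcp` (any tool such as `ss -ltn` works just as well):

```shell
#!/usr/bin/env bash
# Report whether anything is already listening on the ports OmniBox uses.
check_port() {
  if (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null; then
    echo "port $1: in use"
  else
    echo "port $1: free"
  fi
}
for p in 8080 8025 9000; do check_port "$p"; done
```

A port reported as "in use" means another service must be stopped, or the corresponding port changed in .env.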

Environment Overview

| Environment | Description   |
| ----------- | ------------- |
| CPU         | i5-12600KF    |
| RAM         | 64GB          |
| Storage     | 1T NVMe       |
| OS          | Debian 12     |
| Server IP   | 192.168.0.100 |

Install Docker

Install Docker using the Tsinghua mirror:

```shell
export DOWNLOAD_URL="https://mirrors.tuna.tsinghua.edu.cn/docker-ce"
wget -O- https://raw.githubusercontent.com/docker/docker-install/master/install.sh | sh
```
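Once the script finishes, it is worth sanity-checking the installation before continuing. A quick sketch (`hello-world` pulls a tiny test image and exits immediately):

```shell
# Confirm the Docker CLI and the Compose v2 plugin are installed,
# then run a throwaway container to verify the daemon works.
docker --version
docker compose version
docker run --rm hello-world
```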

Clone the Project

```shell
GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/import-ai/omnibox.git
cd omnibox
cp example.env .env
```

Run the Project

```shell
docker compose -f compose.yaml -f compose/deps.yaml up -d
```

If there are no errors, the project has started successfully.
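You can confirm this with Compose itself. A quick sketch (the exact service names depend on the compose files):

```shell
# List service status; every service should show "running" (or "healthy").
docker compose -f compose.yaml -f compose/deps.yaml ps
# If something looks wrong, tail its recent logs
# (replace SERVICE with a name from the ps output).
docker compose -f compose.yaml -f compose/deps.yaml logs --tail=50 SERVICE
```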

Register the First Account

Visit http://192.168.0.100:8080 and register with any email, e.g., omnibox@qq.com, then go to http://192.168.0.100:8025 to view the email verification code.

After successful registration and login, you can start using it.

Configure Environment Variables

After the project is running, you still need to configure some environment variables; otherwise the AI features and file upload/download will fail.

There are three main groups:

  1. OBW_VECTOR_*: vector search
  2. OBW_GRIMOIRE_*: web clipping and LLM Q&A
  3. OBB_S3_PUBLIC_ENDPOINT: file upload/download

Edit .env:

```shell
# >>> AI-related configuration >>>
OBW_VECTOR_EMBEDDING_API_KEY="sk-***"
OBW_VECTOR_EMBEDDING_BASE_URL="https://api.openai.com/v1"
OBW_VECTOR_EMBEDDING_MODEL="text-embedding-3-small"

OBW_GRIMOIRE_OPENAI_DEFAULT_API_KEY="***"
OBW_GRIMOIRE_OPENAI_DEFAULT_BASE_URL="https://api.openai.com/v1"
OBW_GRIMOIRE_OPENAI_DEFAULT_MODEL="gpt-4o"
OBW_GRIMOIRE_OPENAI_MINI_MODEL="gpt-4o-mini"
OBW_GRIMOIRE_OPENAI_LARGE_MODEL="gpt-4.1"
OBW_GRIMOIRE_OPENAI_LARGE_THINKING_MODEL="o3"
# <<< AI-related configuration <<<

# Our server IP here is 192.168.0.100; replace it with your server's external
# address, and make sure it can be reached directly from the browser
OBB_S3_PUBLIC_ENDPOINT="http://192.168.0.100:9000"
```

After editing, run again:

```shell
docker compose -f compose.yaml -f compose/deps.yaml up -d
```

FAQ

What is local deployment?

Local deployment refers to "deploying the OmniBox cloud service on a local server or private cloud". This is completely different from the client-side application and is intended only for users with professional computer knowledge.

What are the differences between local deployment and cloud service?

  • Local deployment currently does not support PDF, audio/video, or image parsing
    • These features involve 30+ models, require significant GPU memory, and have complex business logic, making them difficult to run locally
    • We plan to provide a simplified version for local deployment in the future
  • Local deployment does not support the WeChat Assistant or the QQ Assistant "小黑"; these assistants can only connect to the cloud service

What should I do if file upload fails after local deployment?

Please ensure that the browser you are using can reach OBB_S3_PUBLIC_ENDPOINT directly: file uploads bypass the backend and go straight to S3 storage.
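One way to check reachability is to query the storage service's health endpoint from the machine where the browser runs. A sketch, assuming the bundled S3-compatible store on port 9000 is MinIO (whose liveness endpoint is `/minio/health/live`):

```shell
# Expect "200" if the endpoint is reachable from this machine.
curl -sS -o /dev/null -w "%{http_code}\n" "http://192.168.0.100:9000/minio/health/live"
```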

Can OmniBox be used offline?

With local deployment, you can use OmniBox completely offline by pairing it with Ollama.
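Since Ollama exposes an OpenAI-compatible API, one possible approach is to point the AI-related variables in .env at it. A sketch, assuming Ollama runs on the same host; the model names are examples you would first fetch with `ollama pull`, not defaults:

```shell
# Sketch: route OmniBox's AI configuration to a local Ollama instance.
# Ollama ignores the API key, but the field must be non-empty.
OBW_VECTOR_EMBEDDING_API_KEY="ollama"
OBW_VECTOR_EMBEDDING_BASE_URL="http://192.168.0.100:11434/v1"
OBW_VECTOR_EMBEDDING_MODEL="nomic-embed-text"    # example embedding model

OBW_GRIMOIRE_OPENAI_DEFAULT_API_KEY="ollama"
OBW_GRIMOIRE_OPENAI_DEFAULT_BASE_URL="http://192.168.0.100:11434/v1"
OBW_GRIMOIRE_OPENAI_DEFAULT_MODEL="qwen2.5:14b"  # example chat model
```

After editing, re-run the `docker compose ... up -d` command as in the configuration section above.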