Local Deployment
This guide walks through the complete process of installing Docker and running the OmniBox cloud service with Docker Compose on a freshly installed Debian 12 system, starting from the command line as the root user.
Deployment Requirements
- Have a Linux server with access to the root user and the command line
- Possess basic computer knowledge
- Be able to access GitHub and ghcr.io
- By default, ports 8080, 8025, and 9000 are publicly accessible (or modify in the .env file)
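To confirm those ports are not already taken on the server, a quick check with ss (from iproute2, installed on Debian 12 by default) looks like this. This is a convenience sketch, not part of the official setup:

```shell
# List anything already listening on the ports OmniBox expects to use.
# If nothing matches, the ports are free for OmniBox to bind.
ss -tln | grep -E ':(8080|8025|9000)\b' || echo "ports 8080/8025/9000 are free"
```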
Environment Overview
| Environment | Description |
|---|---|
| CPU | i5-12600KF |
| RAM | 64GB |
| Storage | 1T NVMe |
| OS | Debian 12 |
| Server IP | 192.168.0.100 |
Install Docker
Install Docker using the Tsinghua mirror:

```shell
export DOWNLOAD_URL="https://mirrors.tuna.tsinghua.edu.cn/docker-ce"
wget -O- https://raw.githubusercontent.com/docker/docker-install/master/install.sh | sh
```

Clone the Project
```shell
GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/import-ai/omnibox.git
cd omnibox
cp example.env .env
```

Run the Project

```shell
docker compose -f compose.yaml -f compose/deps.yaml up -d
```

If there are no errors, the project has started successfully.
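To see what actually started, you can list the containers and tail their logs (a sketch; the exact service names depend on the compose files):

```shell
# Show the state of every service defined by the two compose files.
docker compose -f compose.yaml -f compose/deps.yaml ps

# Tail recent logs if any container is restarting or unhealthy.
docker compose -f compose.yaml -f compose/deps.yaml logs --tail=100
```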
Register the First Account
Visit http://192.168.0.100:8080 and register with any email, e.g., omnibox@qq.com, then go to http://192.168.0.100:8025 to view the email verification code.
After successful registration and login, you can start using it.
Configure Environment Variables
After the project is running, you still need to configure some environment variables; otherwise the AI features and file upload/download will fail with errors.
There are three main parts:
- OBW_VECTOR: related to vector search
- OBW_GRIMOIRE: related to web collection and LLM Q&A
- OBB_S3_PUBLIC_ENDPOINT: related to file upload/download
Edit .env:

```shell
# >>> AI-related configuration >>>
OBW_VECTOR_EMBEDDING_API_KEY="sk-***"
OBW_VECTOR_EMBEDDING_BASE_URL="https://api.openai.com/v1"
OBW_VECTOR_EMBEDDING_MODEL="text-embedding-3-small"
OBW_GRIMOIRE_OPENAI_DEFAULT_API_KEY="***"
OBW_GRIMOIRE_OPENAI_DEFAULT_BASE_URL="https://api.openai.com/v1"
OBW_GRIMOIRE_OPENAI_DEFAULT_MODEL="gpt-4o"
OBW_GRIMOIRE_OPENAI_MINI_MODEL="gpt-4o-mini"
OBW_GRIMOIRE_OPENAI_LARGE_MODEL="gpt-4.1"
OBW_GRIMOIRE_OPENAI_LARGE_THINKING_MODEL="o3"
# <<< AI-related configuration <<<

# Here our server IP is 192.168.0.100; replace it with your server's
# external address and make sure it can be reached directly from the browser.
OBB_S3_PUBLIC_ENDPOINT="http://192.168.0.100:9000"
```

After editing, run again:
```shell
docker compose -f compose.yaml -f compose/deps.yaml up -d
```

FAQ
What is local deployment?
Local deployment refers to "deploying OmniBox cloud service on a local server or private cloud". This is completely different from the client-side application and is only suitable for professionals with computer knowledge.
What are the differences between local deployment and cloud service?
- Local deployment currently does not support PDF, audio/video, or image parsing
  - These features involve 30+ models, require significant GPU memory, and have complex business logic, making them difficult to run locally
  - We plan to provide a simplified version for local deployment in the future
- Local deployment does not support the WeChat Assistant and QQ Assistant "小黑"; that account can only connect to the cloud service
  - The WeChat Assistant is essentially a plugin built on the Open API; see the Open API Documentation for interface details
  - There are currently no plans to open-source the WeChat Assistant and QQ Assistant code
What should I do if file upload fails after local deployment?
Please ensure that the browser you are using can directly reach the address configured in OBB_S3_PUBLIC_ENDPOINT: when uploading files, the browser bypasses the backend and uploads directly to S3 storage.
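A quick way to check this is to run curl from the machine where the browser runs (a sketch; 192.168.0.100:9000 is the example endpoint from this guide — substitute your own OBB_S3_PUBLIC_ENDPOINT value):

```shell
# Any HTTP status code printed (even 403) means the endpoint is reachable;
# a timeout or "connection refused" means the browser cannot reach it either.
curl -sS -o /dev/null -w "%{http_code}\n" "http://192.168.0.100:9000"
```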
Can OmniBox be used offline?
With local deployment, you can use OmniBox completely offline by pairing it with Ollama.
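As an illustration only (not from the official docs): Ollama exposes an OpenAI-compatible API, so it can be plugged into the same .env variables shown earlier. The base URL below uses Ollama's default port; the model names are hypothetical examples you would replace with models you have actually pulled:

```shell
# Hypothetical offline setup: point OmniBox at a local Ollama instance.
# Ollama serves an OpenAI-compatible API at port 11434 by default.
OBW_VECTOR_EMBEDDING_BASE_URL="http://192.168.0.100:11434/v1"
OBW_VECTOR_EMBEDDING_API_KEY="ollama"             # any non-empty string works
OBW_VECTOR_EMBEDDING_MODEL="nomic-embed-text"     # an embedding model you pulled
OBW_GRIMOIRE_OPENAI_DEFAULT_BASE_URL="http://192.168.0.100:11434/v1"
OBW_GRIMOIRE_OPENAI_DEFAULT_API_KEY="ollama"
OBW_GRIMOIRE_OPENAI_DEFAULT_MODEL="qwen2.5:14b"   # any chat model you pulled
```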