# Running Ollama with a User-Friendly Web Interface
This guide describes how to set up a web-based interface for Ollama using Docker. The interface, Open WebUI, provides a user-friendly front-end that simplifies managing and using local language models.
## Open WebUI
Open WebUI is a user-friendly web interface for Ollama that simplifies interacting with LLMs. It offers several key features:
- Intuitive Interface: Provides a clean and easy-to-navigate interface for managing and interacting with LLMs.
- Model Management: Allows you to easily pull, list, and manage Ollama models.
- Chat Interface: Offers a chat-like interface for interacting with the loaded LLM.
- RAG (Retrieval Augmented Generation): Supports uploading documents and querying them using the LLM, enabling powerful knowledge-based interactions.
- Customizable Settings: Allows you to configure various settings, such as system prompts and model parameters.
- Cross-Platform Compatibility: Accessible from any device with a web browser.
## Docker Compose
Using Docker Compose simplifies the deployment and management of Ollama and Open WebUI. The following `compose.yaml` file defines the necessary services:
```yaml
services:
  ollama:
    image: docker.io/ollama/ollama
    volumes:
      - ollama:/root/.ollama
  open-webui:
    image: ghcr.io/open-webui/open-webui
    environment:
      OLLAMA_BASE_URL: http://ollama:11434
      WEBUI_AUTH: "False" # Disable authentication for simplicity in this example
    depends_on:
      - ollama # Ensure Ollama is running before starting Open WebUI
    ports:
      - 8080:8080 # Expose port 8080 for accessing the Web UI
    volumes:
      - open-webui:/app/backend/data
  cloudflared:
    image: docker.io/cloudflare/cloudflared
    command: tunnel --no-autoupdate --url http://open-webui:8080
volumes:
  ollama:
  open-webui:
```
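The `cloudflared` service is optional: it opens a Cloudflare quick tunnel so the Web UI can be reached from outside your machine. Assuming the default quick-tunnel behavior, `cloudflared` prints a temporary public URL (a `trycloudflare.com` address) to its logs, which you can read with:

```sh
# Print the cloudflared logs and look for the generated *.trycloudflare.com URL
docker compose logs cloudflared
```

If you do not need remote access, you can remove the `cloudflared` service from the file.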
## Environment variables
Open WebUI uses environment variables for configuration. Here are descriptions of the ones used in the `compose.yaml` file:
### WEBUI_AUTH

- Default: `True`
- Description: Enables or disables authentication for the Web UI. Setting it to `False` disables authentication, making the UI easier to access in a local development environment. Use with caution in production environments.

### OLLAMA_BASE_URL

- Default: `http://localhost:11434`
- Description: Specifies the URL of the Ollama server. In this Docker setup, it points to the `ollama` service within the Docker network.
For more environment variables and advanced configurations, refer to the official Open WebUI documentation: https://docs.openwebui.com/getting-started/advanced-topics/env-configuration
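If you prefer to keep settings out of the compose file, Docker Compose can also read values from a `.env` file placed next to `compose.yaml`. A minimal sketch, assuming you want `WEBUI_AUTH` to be configurable per machine (the `.env` file mentioned here is a hypothetical example):

```yaml
# In compose.yaml: reference the variable with a fallback default.
# Compose substitutes ${WEBUI_AUTH} from the shell environment or a .env file
# (e.g. a line "WEBUI_AUTH=True" in .env) before starting the service.
services:
  open-webui:
    environment:
      WEBUI_AUTH: ${WEBUI_AUTH:-False} # falls back to "False" if unset
```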
## Start containers
To start the containers, navigate to the directory containing the `compose.yaml` file and run:
```sh
docker compose up -d
```
The `-d` flag runs the containers in detached mode (in the background).
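To confirm that everything started cleanly, you can check the service status and follow the logs:

```sh
# Show the status of the ollama, open-webui, and cloudflared containers
docker compose ps

# Follow the Open WebUI logs; press Ctrl+C to stop following
docker compose logs -f open-webui
```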
## Pull models
After the containers are running, you can pull models using the following command:
```sh
docker compose exec ollama ollama pull llama3.2:1b
```
This command executes `ollama pull llama3.2:1b` inside the `ollama` container, downloading the `llama3.2:1b` model. You can replace it with any other available Ollama model.
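To verify which models are available inside the container, you can list them:

```sh
# List the models that have been pulled into the ollama container
docker compose exec ollama ollama list
```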
Once the model is downloaded, you can access Open WebUI by navigating to `http://localhost:8080` in your web browser.
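As a quick sanity check outside the browser, you can also prompt the model directly from the container's CLI; a minimal example using the model pulled above:

```sh
# Send a one-off prompt to llama3.2:1b inside the ollama container
docker compose exec ollama ollama run llama3.2:1b "Say hello in one sentence."
```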