How to set up OpenWebUI in Windows
This comprehensive guide walks you through installing and configuring OpenWebUI on Windows using Anaconda. OpenWebUI is a powerful web-based interface for interacting with Large Language Models (LLMs). By following this tutorial, you'll learn how to create a dedicated Python environment, install OpenWebUI using pip, configure it to connect to your company's LLM API, and start having AI-powered conversations.
What is OpenWebUI?
OpenWebUI is an extensible, feature-rich, and user-friendly self-hosted web interface designed to work with various LLM providers. It offers a modern chat interface that supports multiple AI models, conversation history, user management, and API integrations. With OpenWebUI, you can connect to OpenAI-compatible APIs, including custom LLM endpoints, making it a versatile solution for both development and production environments.
How does OpenWebUI work?
OpenWebUI typically connects to local LLMs served by vLLM, SGLang, llama.cpp, or another local AI inference engine. It is good practice to route AI traffic through an AI gateway, such as Ozeki AI Gateway, for centralized control. A local AI gateway lets you create individual API keys for your users, gives you access to multiple local AI servers without configuring each server in OpenWebUI, and provides usage statistics so you can see which users consume the most credits.
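As a rough sketch of how this works (the host, port, key, and model name below are placeholders, not values from this guide), the gateway exposes a single OpenAI-compatible endpoint, and OpenWebUI sends all of its requests there:

```shell
# Placeholder values - replace with your own Ozeki AI Gateway details
GATEWAY_URL="http://your-gateway-host:9000/v1"   # hypothetical gateway endpoint
API_KEY="sk-alice-example"                       # hypothetical per-user key created in the gateway

# OpenWebUI sends standard OpenAI-style requests to this single endpoint;
# the gateway authenticates the key and routes the request to one of the
# local inference servers behind it, e.g.:
# curl "$GATEWAY_URL/chat/completions" \
#   -H "Authorization: Bearer $API_KEY" \
#   -H "Content-Type: application/json" \
#   -d '{"model":"local-llama","messages":[{"role":"user","content":"Hello"}]}'

echo "OpenWebUI only needs this one base URL: $GATEWAY_URL"
```

Because every user gets an individual key, the gateway can attribute each request to a user when it builds usage statistics.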
Steps to follow
We assume Ozeki AI Gateway is already installed on your system. You can install it on Linux, Windows or Mac.
- Create Anaconda environment
- Install OpenWebUI with pip
- Start OpenWebUI server
- Access interface and create admin account
- Open Admin Panel settings
- Configure API connection
- Save connection and start new chat
- Test LLM connection
How to set up OpenWebUI video
The following video shows how to set up OpenWebUI in Windows with Anaconda step-by-step. The video covers creating a Python environment, installing OpenWebUI, configuring API connections, and testing your first AI conversation.
Step 1 - Create Anaconda environment
In Anaconda Navigator, click on "Environments" in the left sidebar. This section allows you to create and manage isolated Python environments for different projects (Figure 2).
If you don't have Anaconda installed, please check out our How to install Anaconda on Windows guide.
In the Environments section, locate and click the "Create" button. This will open a dialog where you can configure your new Python environment specifically for OpenWebUI (Figure 3).
Name the new environment "OpenWebUI". After entering the environment name, proceed to the Packages section (Figure 4).
Select Python as the package type and choose version 3.11 from the dropdown menu (Figure 5).
Click "Create" to initialize the environment. The system will install Python 3.11 and essential base packages required for the environment to function (Figure 6).
Wait as the system completes the setup. Once the environment is successfully created, it will be available for use and you can proceed with the installation (Figure 7).
Locate your newly created "OpenWebUI" environment and click the button next to it, then select "Open Terminal" from the dropdown menu (Figure 8).
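If you prefer the command line to Anaconda Navigator, the same environment can be created with conda directly. A minimal sketch, to be run in an Anaconda Prompt:

```shell
conda create -n OpenWebUI python=3.11 -y   # create the isolated environment
conda activate OpenWebUI                   # switch the current shell into it
python --version                           # should report Python 3.11.x
```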
Step 2 - Install OpenWebUI with pip
In the PowerShell terminal, type the command pip install open-webui and press Enter. This command uses Python's package installer to download and install OpenWebUI along with all of its required dependencies (Figure 9).
The pip installation process will install the necessary dependencies for OpenWebUI. You'll see output in the terminal showing which packages are being downloaded and installed. Wait until the installation is fully completed (Figure 10).
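Before moving on, you can confirm the installation succeeded by querying pip for the package:

```shell
pip show open-webui   # prints the installed package version and location
open-webui --help     # lists the available commands, including "serve"
```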
Step 3 - Start OpenWebUI server
After installing, start OpenWebUI by typing open-webui serve in the terminal and pressing Enter. This command launches the OpenWebUI web server on your local machine. You'll see startup messages indicating that the server is initializing, loading configurations, and starting the web interface (Figure 11).
You'll see messages in the terminal confirming that the web server has started successfully. The terminal will also show any incoming requests and system logs. Leave this terminal window open - closing it will stop the OpenWebUI server (Figure 12).
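By default the server listens on port 8080. If that port is already in use on your machine, the serve command accepts host and port options (flag names assumed from the open-webui CLI; verify them with open-webui serve --help):

```shell
open-webui serve --host 127.0.0.1 --port 3000   # serve on a different port
```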
Step 4 - Access interface and create admin account
Open your preferred web browser; you'll need it to connect to the OpenWebUI interface (Figure 13).
Type http://localhost:8080 into your browser's address bar and press Enter to access the OpenWebUI interface (Figure 14).
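If the page doesn't load, you can first check from a second terminal that the server is reachable (the /health route is assumed from OpenWebUI's API; a plain request to the root URL also works):

```shell
curl -s http://localhost:8080/health   # a small JSON reply means the server is up
```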
On the welcome page, you'll see a "Get Started" button. Click this button to begin the initial setup process (Figure 15).
Fill in the registration form to create your administrator account. You'll need to provide a name, email address, and password. After entering all required information, click the "Create Admin Account" button (Figure 16).
After creating your account, you'll be logged in and redirected to the OpenWebUI dashboard. This is the main chat interface where you'll interact with AI models (Figure 17).
Step 5 - Open Admin Panel settings
To configure the LLM connection, you need to access the admin panel. Look for your profile username in the bottom-left corner of the interface. Click on it to open a dropdown menu, then select "Admin Panel" (Figure 18).
In the admin panel, locate and click on "Settings". This will open the settings page where you can configure various options for your OpenWebUI installation, including the LLM API URLs (Figure 19).
Step 6 - Configure API connection
Find and click on "Connections" in the sidebar menu. This section allows you to configure connections to external LLM providers and APIs (Figure 20).
In the Connections section, click the plus (+) icon button to open the Add Connection form (Figure 21).
Enter your LLM API endpoint URL and the API key provided by your administrator. The API key authenticates your requests to the LLM service (Figure 22).
Replace http://video.ozeki.hu with the API URL of your own Ozeki AI Gateway installation, and enter the API key you have created in Ozeki AI Gateway.
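Before saving, you can sanity-check the URL and key from a terminal. The host and key below are placeholders for your own gateway values; an OpenAI-compatible server should answer the models route with a JSON list of the models the key can access:

```shell
curl http://your-gateway-host/v1/models -H "Authorization: Bearer YOUR_API_KEY"
```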
Step 7 - Save connection and start new chat
Click the "Save" button at the bottom of the form. OpenWebUI will validate the connection details and attempt to establish a connection to your LLM API (Figure 23).
Return to the main chat interface by clicking the "New Chat" button. This will create a fresh conversation thread where you can begin interacting with the connected LLM model (Figure 24).
Step 8 - Test LLM connection
To test the LLM connection, type a question in the chat textbox and press Enter or click the send button (Figure 25).
You should see the AI's response appear in the chat window, confirming that OpenWebUI is successfully connected to your LLM API and working as expected (Figure 26).
Final thoughts
Now that OpenWebUI is set up, you're ready to start using AI-powered conversations through your own self-hosted interface. With this guide, you've completed the essential steps to create a Python environment, install OpenWebUI, and connect it to your company's LLM service. As you explore the platform, OpenWebUI will support your AI interaction needs, helping you leverage large language models for various tasks including content generation, code assistance, data analysis, and more.