# Open WebUI

[Open WebUI](https://docs.openwebui.com/) is an extensible, feature-rich, and user-friendly self-hosted AI platform designed to operate entirely offline. It supports various LLM runners like Ollama and OpenAI-compatible APIs, with a built-in inference engine for RAG, making it a powerful AI deployment solution.

### Open WebUI and MCP

Open WebUI supports the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/docs/getting-started/intro), Anthropic's popular open standard for connecting AI agents to external tools. The application therefore offers a user-friendly chat interface much like the web-accessible versions of ChatGPT, Gemini, and similar services, but with two major advantages. First, the user can access models from any provider (OpenAI, Google, Anthropic, and others) within the same interface. Second, the agent can be connected directly to the external sources the user wants it to see.

### Setting Up Agents in Open WebUI

To add model providers, open Open WebUI, select your user menu in the lower-left corner, and open the Admin Panel. Then go to Settings > Connections and add the API endpoint and API key for the provider you want to use. Open WebUI supports OpenAI-compatible API connections, so OpenAI models can be added directly, while other providers such as Anthropic or Google may require OpenAI-compatible endpoints and explicit Model IDs. For OpenAI models, specifying Model IDs is not required.
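As an illustration, a connection for a non-OpenAI provider might look like the following sketch. The endpoint and Model ID shown are examples based on Anthropic's OpenAI-compatibility layer; verify both against the provider's current documentation before use:

```text
API Base URL: https://api.anthropic.com/v1    (OpenAI-compatible endpoint; check provider docs)
API Key:      <your Anthropic API key>
Model IDs:    claude-sonnet-4-20250514        (explicit Model ID, required for non-OpenAI providers)
```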

The specific models can be selected from the LLM providers' sites.\
**Anthropic:**\
<https://docs.anthropic.com/en/docs/about-claude/models/overview>

**Google Gemini:**\
<https://ai.google.dev/gemini-api/docs/models>

### Setting Up MCP Servers in Open WebUI

Adding MCP servers to Open WebUI has two steps:

1. Create the JSON configuration file that defines the MCP servers, including their source code paths, required packages, and optional port values. When Open WebUI starts, a startup script reads this JSON file and configures the servers from it. If you switch to a different JSON file, update the script accordingly.
2. In Open WebUI, open Settings > Tools and add each MCP server as a tool. For each one:
   * set the URL to the server address and port, for example `http://0.0.0.0:8002`
   * enter a clear Name
   * set Visibility to Public
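The configuration file from step 1 might look like the following sketch. The server name, package, path, and port here are hypothetical examples, and the exact schema depends on the startup script your deployment uses to read the file:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "uvx",
      "args": ["mcp-server-filesystem", "/data"],
      "port": 8002
    }
  }
}
```

With a configuration like this, the server would then be added under Settings > Tools with the URL `http://0.0.0.0:8002`, matching the port defined above.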

### Using Open WebUI

#### Turning on Tools

Tools can be enabled or disabled separately for each chat. To choose which MCP servers are available in a conversation, select the + button in the lower-left corner of the chat window and choose the tools you want to use. This is especially useful when multiple tools provide overlapping functionality. For best results, open Chat Controls near the profile icon in the upper-right corner of the chat and set *Function Calling* to *Native*.

#### System Prompts

System prompts provide persistent instructions that influence agent behavior. In Open WebUI, chats can be organized into folders, and each folder can have its own system prompt. Any chat placed in that folder inherits the folder’s system prompt behavior.
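For example, a folder-level system prompt for a hypothetical code-review workspace might read:

```text
You are a code reviewer. Before answering, consult the project files
exposed through the available tools. Always cite the file and line
you are referring to, and flag any change that lacks tests.
```

Every chat placed in that folder would then start with these instructions applied automatically.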

### Error Handling and Debugging

If Open WebUI cannot connect to MCP servers or fails to configure agents correctly, review the application logs. Open the Sessions view and select SEE LOGS to inspect startup and runtime messages and identify the root cause.
