How to set up Kimi CLI with Ozeki AI Gateway
This comprehensive guide demonstrates how to install and configure Kimi CLI to work with Ozeki AI Gateway on Windows systems. By connecting Kimi CLI to Ozeki AI Gateway, you can route requests through your local gateway and use local AI models. This tutorial covers the complete setup process, from installing Kimi CLI using PowerShell to configuring the TOML configuration file.
What is Kimi CLI?
Kimi CLI is a command-line interface tool that enables developers to interact with Kimi AI directly from their terminal. It provides an agentic coding experience where you can delegate programming tasks, file operations, and code generation without leaving your command-line environment.
Steps to follow
We assume Ozeki AI Gateway is already installed on your system. You can install it on Linux, Windows or Mac.
- Install Kimi CLI via PowerShell
- Launch Kimi CLI to initialize configuration
- Open and modify config.toml
- Re-launch Kimi CLI
- Send a test prompt
PowerShell install command:
Invoke-RestMethod https://code.kimi.com/install.ps1 | Invoke-Expression
Config file C:\Users\{User}\.kimi\config.toml:
default_model = "local-model"
default_thinking = false
default_yolo = false
[providers.local-llm]
type = "openai_legacy"
base_url = "http://localhost/v1"
api_key = "api-key"
[models.local-model]
provider = "local-llm"
model = "Kimi-K2.5"
max_context_size = 256000
[loop_control]
max_steps_per_turn = 100
max_retries_per_step = 3
max_ralph_iterations = 0
reserved_context_size = 50000
[services]
[mcp.client]
tool_call_timeout_ms = 60000
How to set up Kimi CLI with Ozeki AI Gateway video
The following video shows how to install and configure Kimi CLI to work with Ozeki AI Gateway step-by-step. The video covers the PowerShell installation, config.toml file setup, and testing the integration with a sample prompt.
Step 1 - Install Kimi CLI via PowerShell
Open PowerShell on your Windows system by searching for "PowerShell" in the Start menu. You'll use PowerShell to run the Kimi CLI installation command (Figure 1).
Execute the Kimi CLI installation command in PowerShell. This command downloads and installs Kimi CLI on your system (Figure 2).
Invoke-RestMethod https://code.kimi.com/install.ps1 | Invoke-Expression
Step 2 - Launch Kimi CLI to initialize configuration
In a PowerShell or Command Prompt window, type "kimi" to start Kimi CLI for the first time. This initial startup creates the configuration directory at C:\Users\{User}\.kimi\ where you'll edit the config.toml file in the following steps (Figure 3).
kimi
Kimi CLI displays its welcome page in the terminal window. The configuration folder has now been created. You can proceed to exit Kimi CLI so you can configure it to work with Ozeki AI Gateway (Figure 4).
Exit Kimi CLI by typing the /exit command or pressing Ctrl+C to close the interface (Figure 5).
/exit
Step 3 - Open and modify the config.toml file
Navigate to the Kimi CLI configuration directory at C:\Users\{User}\.kimi\ and open the config.toml file with Notepad or any text editor. This file controls which AI provider and model Kimi CLI connects to (Figure 6).
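If you prefer to locate the file programmatically rather than browsing to it, the path can be built from the user profile directory. A small sketch, assuming the default .kimi location described above:

```python
from pathlib import Path

# Kimi CLI keeps its configuration under the user profile
# (the directory is created on first launch).
config_path = Path.home() / ".kimi" / "config.toml"
print("Config file:", config_path)
print("Exists:", config_path.exists())
```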
Replace the contents of config.toml with the Ozeki AI Gateway configuration below. Set the base_url to point to your Ozeki AI Gateway endpoint and update the api_key to your gateway API key. The model field should match the AI model name available in your Ozeki AI Gateway installation (Figure 7).
default_model = "local-model"
default_thinking = false
default_yolo = false

[providers.local-llm]
type = "openai_legacy"
base_url = "http://localhost/v1"
api_key = "api-key"

[models.local-model]
provider = "local-llm"
model = "Kimi-K2.5"
max_context_size = 256000

[loop_control]
max_steps_per_turn = 100
max_retries_per_step = 3
max_ralph_iterations = 0
reserved_context_size = 50000

[services]

[mcp.client]
tool_call_timeout_ms = 60000
Save the changes to config.toml. You can usually do this by pressing Ctrl+S or by selecting File > Save. Close the text editor once the file has been saved (Figure 8).
Step 4 - Launch Kimi CLI again
Open a new terminal window, navigate to the directory where you want to work, and type "kimi" to start Kimi CLI with the updated configuration (Figure 9).
kimi
Step 5 - Send a test prompt
Test your Kimi CLI installation by entering a simple prompt. For example, ask Kimi to create a basic HTML file or write a simple script. This verifies that Kimi CLI is properly connected to Ozeki AI Gateway and can process requests (Figure 10).
Kimi CLI processes the prompt and displays the response in the terminal window (Figure 11).
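If you want to verify the gateway side independently of Kimi CLI, you can post a completion request directly to Ozeki AI Gateway. The sketch below assumes an OpenAI-compatible /completions endpoint under the base_url from config.toml (the "openai_legacy" provider type suggests the legacy completions format); the prompt and API key are placeholders:

```python
import json
import urllib.request

BASE_URL = "http://localhost/v1"   # the base_url from config.toml
API_KEY = "api-key"                # your Ozeki AI Gateway API key

# Legacy OpenAI-style completion payload.
payload = {
    "model": "Kimi-K2.5",
    "prompt": "Write a one-line hello world in Python.",
    "max_tokens": 128,
}

request = urllib.request.Request(
    f"{BASE_URL}/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)
print("POST", request.full_url)

# Uncomment to actually send the request to a running gateway:
# with urllib.request.urlopen(request, timeout=30) as response:
#     print(json.load(response))
```

A successful response here confirms the URL, API key, and model name are correct before involving Kimi CLI at all.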
Troubleshooting
Kimi command not found
If you receive a "command not found" error when trying to run Kimi CLI, ensure that the Kimi CLI installation directory has been added to your PATH environment variable, and open a new terminal window after making the change so the updated PATH takes effect.
Connection errors
If Kimi CLI cannot connect to Ozeki AI Gateway, verify that:
- Ozeki AI Gateway is running and accessible at the configured URL
- The base_url in config.toml is correct
- The api_key matches your gateway API key
- The configured AI model is available in your Ozeki AI Gateway installation
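As a quick first check, you can verify that anything is listening at the configured base_url at all. The sketch below (Python, with hypothetical helper names) only parses the URL and attempts a plain TCP connection; it does not validate the API key or model:

```python
import socket
from urllib.parse import urlparse

def gateway_address(base_url: str) -> tuple[str, int]:
    """Extract the host and port Kimi CLI will try to reach from base_url."""
    parsed = urlparse(base_url)
    port = parsed.port or (443 if parsed.scheme == "https" else 80)
    return parsed.hostname, port

def gateway_reachable(base_url: str, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection; False usually means the gateway is not running."""
    host, port = gateway_address(base_url)
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

host, port = gateway_address("http://localhost/v1")
print(f"Kimi CLI will connect to {host}:{port}")
```

If gateway_reachable returns False, start Ozeki AI Gateway (or fix the host/port in base_url) before retrying Kimi CLI.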
Configuration file issues
If Kimi CLI does not pick up your settings, ensure that:
- The config.toml file is saved in the correct location: C:\Users\{User}\.kimi\
- The TOML syntax is valid (correct section headers, no missing quotes or equals signs)
- You have restarted Kimi CLI after saving the file
Final thoughts
You have successfully installed and configured Kimi CLI to work with Ozeki AI Gateway. You can now use Kimi CLI from the command line to delegate coding tasks, automate file operations, and leverage AI assistance in your development workflow. All requests will be routed through your Ozeki AI Gateway, allowing you to use alternative AI models and maintain control over your API infrastructure.