Testing LLM Services
Explore three ways to verify large‑language‑model integrations with Ozeki AI Gateway: a web GUI, a Python command‑line script, and Postman. Each method lets you send test prompts, view responses, and inspect logs to confirm your providers are correctly configured.
How to Test LLM Services Using the Ozeki AI Gateway GUI
The GUI‑based testing tool lets you pick any configured provider, send a prompt, and instantly see the AI’s reply. It also shows transaction logs so you can confirm request details, token usage and response times—all without leaving the web interface.
How to Test LLM Services Using a Python Command‑Line Script
The llm‑tester Python script provides a quick command‑line way to call the gateway's API. By passing the endpoint, API key, and model, you can script repeated prompts, view the JSON responses, and verify that the gateway routes requests correctly to your provider.
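As a rough sketch of what such a script does, the snippet below sends a chat prompt to the gateway over HTTP. The endpoint path, payload shape, and response format here assume an OpenAI‑compatible chat‑completions API; the actual llm‑tester script and your gateway URL may differ, so treat the names below as placeholders.

```python
import json
import urllib.request


def build_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion request body (assumed format)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def query_gateway(endpoint: str, api_key: str, model: str, prompt: str,
                  timeout: int = 30) -> dict:
    """POST a test prompt to the gateway and return the parsed JSON reply."""
    req = urllib.request.Request(
        endpoint,  # e.g. "http://your-gateway-host/api/v1/chat/completions" (placeholder)
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Wrapping the call in a function like this makes it easy to loop over a list of prompts or models and compare the JSON responses side by side.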
How to Test LLM Services Using Postman
Postman offers a visual environment for constructing the exact HTTP request the gateway expects. Set a Bearer token, add the JSON payload, send the call, and inspect the full response and headers—perfect for debugging and sharing reusable request collections.
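For orientation, a request built in Postman might look like the raw HTTP below. The path, host, and model name are illustrative placeholders assuming an OpenAI‑compatible endpoint; substitute the values your gateway actually exposes.

```
POST /api/v1/chat/completions HTTP/1.1
Host: your-gateway-host
Authorization: Bearer <your-api-key>
Content-Type: application/json

{
  "model": "gpt-4",
  "messages": [
    { "role": "user", "content": "Hello, can you hear me?" }
  ]
}
```

In Postman, the Authorization header is set on the Authorization tab (type: Bearer Token) and the JSON body on the Body tab (raw, JSON), so the saved request can be shared as part of a collection.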