How to Test LLM Services Using Postman

This guide demonstrates how to test LLM services through Postman by sending API requests to Ozeki AI Gateway. You'll learn how to configure Postman with the correct endpoint, authentication, and payload format to send test prompts and verify AI service responses.

What is Postman?

Postman is a popular API development and testing platform that allows developers to send HTTP requests to web services and APIs. When working with Ozeki AI Gateway, Postman provides a convenient way to test your gateway's API endpoints, verify authentication tokens, and examine request and response data in detail.

Request format

To test LLM services through Ozeki AI Gateway using Postman, you need to send a POST request to the chat completions endpoint with proper authentication and a JSON payload. Here's the basic request structure:

POST http://localhost/v1/chat/completions

Authorization: Bearer sk-123   (in Postman, select the "Bearer Token" auth type and enter the key)

Payload:
{
  "model": "ai-model",
  "messages": [
    {
      "role": "user",
      "content": "Where is Budapest?"
    }
  ],
  "stream": false,
  "max_tokens": 500
}
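The same request can be reproduced outside Postman, which is useful for automating the test later. The sketch below builds an identical request with Python's standard library; the endpoint URL and the sk-123 key are the placeholder values from this guide, so substitute your own gateway address and API key. The actual send is commented out so the snippet runs without a live gateway.

```python
import json
import urllib.request

API_URL = "http://localhost/v1/chat/completions"
API_KEY = "sk-123"  # placeholder key from this guide

# Same JSON payload as the Postman request body above
payload = {
    "model": "ai-model",
    "messages": [{"role": "user", "content": "Where is Budapest?"}],
    "stream": False,
    "max_tokens": 500,
}

request = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)

# Uncomment to send the request to a running gateway:
# with urllib.request.urlopen(request) as response:
#     print(json.loads(response.read()))
```

Running this with the send line uncommented should return the same JSON response you see in Postman's response pane.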

Steps to follow

We assume Ozeki AI Gateway is already installed on your system. You can install it on Linux, Windows, or macOS.

  1. Download and install Postman
  2. Create new POST request
  3. Configure authentication
  4. Set request body
  5. Send test request
  6. View response
  7. Check gateway logs

How to test LLM services with Postman video

The following video shows how to test LLM services using Postman step-by-step. The video covers downloading Postman, configuring the request with proper authentication, sending test prompts to Ozeki AI Gateway, and reviewing the responses.

Step 0 - Configure Ozeki AI Gateway prerequisites

Before you can test LLM services with Postman, you need to have Ozeki AI Gateway properly configured with the required components: an AI user account (which provides the API key you will use for authentication) and a connected AI model.

These components work together to enable authentication and routing of AI requests through your gateway. If you haven't set up these prerequisites yet, follow the linked guides to complete the configuration before proceeding with the Postman test.

Step 1 - Download and install Postman

Navigate to the Postman website in your web browser. The Postman platform provides both desktop and web-based versions for API testing and development (Figure 1).

Open Postman website
Figure 1 - Open Postman website

Download the Postman application for your operating system. Postman is available for Windows, macOS, and Linux. The installer will automatically detect your operating system and provide the appropriate download (Figure 2).

Download Postman
Figure 2 - Download Postman

After the download completes, locate Postman in your Downloads folder and launch the application (Figure 3).

Run Postman application
Figure 3 - Launch Postman application

Step 2 - Create new POST request

In the Postman interface, locate the request method dropdown menu which defaults to GET. Click on this dropdown and select POST as the request type. POST requests are used to send data to an API endpoint, which is necessary for submitting chat completion requests to Ozeki AI Gateway (Figure 4).

Select POST request type
Figure 4 - Select POST request type

In the URL field, enter the Ozeki AI Gateway endpoint URL. If you're running the gateway locally, use http://localhost/v1/chat/completions (Figure 5).

Enter Ozeki AI Gateway URL
Figure 5 - Enter the gateway endpoint URL

Step 3 - Configure authentication

Navigate to the Authorization tab below the URL field. From the Type dropdown menu, select Bearer Token. This authentication method is used by Ozeki AI Gateway to verify that requests come from authorized users (Figure 6).

Choose Bearer Token authorization
Figure 6 - Select Bearer Token authentication

In the Token field that appears, enter your API key that you obtained when creating a user account in Ozeki AI Gateway. This token authenticates your request and associates it with your user account in the gateway (Figure 7).

Enter API key
Figure 7 - Enter your API key
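Note that "Bearer Token" is the name of the authentication type in Postman's dropdown; on the wire, Postman turns your key into a standard HTTP Authorization header. A quick sketch of what gets sent (sk-123 is the placeholder key used throughout this guide):

```python
# Postman's "Bearer Token" auth type produces this HTTP header
api_key = "sk-123"  # placeholder key
auth_header = ("Authorization", f"Bearer {api_key}")
print(auth_header)  # ('Authorization', 'Bearer sk-123')
```

If the gateway rejects your request with a 401 status, this header is the first thing to check.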

Step 4 - Set request body

Switch to the Body tab in the request configuration area. Select the raw radio button to enable sending raw JSON data in the request body. From the format dropdown that appears on the right, select JSON to ensure proper content type headers are set (Figure 8).

Select raw JSON body type
Figure 8 - Select raw JSON request body

Step 5 - Send test request

In the request body text area, enter the JSON payload for your chat completion request. The payload should include the model you want to use, the messages array with your prompt, and optional parameters like stream and max_tokens. After entering the payload, click the blue Send button to submit your request to Ozeki AI Gateway (Figure 9).

{
  "model": "ai-model",
  "messages": [
    {
      "role": "user",
      "content": "Where is Budapest?"
    }
  ],
  "stream": false,
  "max_tokens": 500
}

Enter payload and send request
Figure 9 - Enter JSON payload and send request
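A malformed request body is a common cause of error responses from chat-completions endpoints, so it can help to validate the JSON before clicking Send. This sketch parses the exact payload from this step and checks the two fields the request needs; json.loads raises an error immediately if the JSON is invalid (for example, a trailing comma or a single-quoted string).

```python
import json

# The payload from this step, verbatim
payload_text = """
{
  "model": "ai-model",
  "messages": [
    {"role": "user", "content": "Where is Budapest?"}
  ],
  "stream": false,
  "max_tokens": 500
}
"""

# Raises json.JSONDecodeError if the text is not valid JSON
payload = json.loads(payload_text)

# Check the fields a chat completion request requires
assert "model" in payload
assert isinstance(payload["messages"], list) and payload["messages"]
```

Postman's raw body editor also highlights JSON syntax errors inline, which catches most of these problems before the request is sent.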

Step 6 - View response

After sending the request, the gateway processes your prompt through the configured AI provider and returns the response. The response appears in the lower section of the Postman interface, displaying the complete JSON response. A successful response with status code 200 indicates that your gateway is correctly configured (Figure 10).

View LLM response
Figure 10 - Review the AI service response
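If you want to extract just the AI's answer from the response, the sketch below shows how, assuming the gateway returns an OpenAI-style chat completion body where the reply lives under choices[0].message.content. The response text here is a hypothetical example for illustration; the exact fields may differ in your gateway version.

```python
import json

# Hypothetical response body illustrating the OpenAI-compatible shape
response_text = """
{
  "choices": [
    {"message": {"role": "assistant",
                 "content": "Budapest is the capital of Hungary."}}
  ],
  "usage": {"total_tokens": 42}
}
"""

data = json.loads(response_text)
# The assistant's reply is nested inside the first choice
answer = data["choices"][0]["message"]["content"]
print(answer)
```

In Postman, the same structure is visible in the Pretty view of the response pane, so you can confirm the path to the content field before scripting against it.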

Step 7 - Check gateway logs

To view detailed transaction information about your test request, open the Ozeki AI Gateway web interface and navigate to the Logs page (Figure 11).

Open logs page in Ozeki AI Gateway
Figure 11 - Navigate to logs section

In the Logs section, navigate to the date folder corresponding to when you sent your Postman request (Figure 12).

Select log file
Figure 12 - Open the log file

Select the .jlog file that belongs to the API key you used to access the AI model. The log file displays all transactions in a table format, showing detailed information including timestamps, models used, token consumption, and response times. You can click on individual transactions to view the complete request and response data in JSON format (Figure 13).

View transactions in log
Figure 13 - Review transaction details

Final thoughts

You have successfully tested LLM services using Postman with Ozeki AI Gateway. This method provides developers with a powerful way to understand the API request format, test authentication, debug issues, and validate gateway configurations. Postman's interface makes it easy to experiment with different models, parameters, and prompts while examining detailed request and response data.


More information