How to set up Kimi K2.5 on a single NVIDIA RTX 6000 Pro 96GB GPU

This guide shows how to set up and run Kimi K2.5 on a server with a single NVIDIA RTX 6000 Pro 96GB GPU, reaching roughly 20 tokens per second.

Server configuration

Motherboard: Gigabyte MZ33-AR1
CPU: AMD EPYC 9755 (Zen 5 "Turin"), 128 cores / 256 threads, 1.9-2.7 GHz, 512MB L3 cache
RAM: 768GB DDR5-4800 (12x64GB)
GPU: NVIDIA RTX 6000 Pro Workstation 96GB GPU
SSD: WD_BLACK SN850X 8TB M.2 2280 PCIe Gen4 SSD
OS: Ubuntu 24.04.2 LTS
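
To confirm the host matches this configuration before installing anything, the standard Linux and NVIDIA tools are enough:

# Verify CPU, RAM, GPU, and OS
lscpu | grep -E "Model name|Core|Thread"
free -h                                                  # should report ~768GB
nvidia-smi --query-gpu=name,memory.total --format=csv    # RTX 6000 Pro, ~96GB VRAM
lsb_release -d                                           # Ubuntu 24.04 LTS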

Prerequisites

Before starting, ensure you have:

  1. KT-Kernel installed:

    Note: KTransformers' latest EPLB feature for Kimi-K2.5 will be supported soon.

    git clone https://github.com/kvcache-ai/ktransformers.git
    cd ktransformers
    git checkout kimi_k2.5
    git submodule update --init --recursive
    cd kt-kernel && ./install.sh
    
  2. SGLang installed - Follow SGLang integration steps

    Note: Currently, a custom fork of SGLang is required:

    git clone https://github.com/kvcache-ai/sglang.git
    cd sglang
    git checkout kimi_k2.5
    pip install -e "python[all]"
    # You may need to reinstall cuDNN if SGLang reports a cuDNN error at launch
    pip install nvidia-cudnn-cu12==9.16.0.29
    
  3. CUDA toolkit - Compatible with your GPU (CUDA 12.8+ recommended)

  4. Hugging Face CLI - For downloading models:

    pip install huggingface-hub
    

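Before moving on, it is worth a quick sanity check that the toolchain is in place. This is a minimal sketch; the kt_kernel module name is an assumption and may differ depending on the KT-Kernel version you installed.

# Confirm CUDA, SGLang, and the Hugging Face CLI are available
nvcc --version
python -c "import sglang; print(sglang.__version__)"
python -c "import kt_kernel"   # module name assumed; check the kt-kernel install output
huggingface-cli --help
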
Step 1: Download Model Weights

# Create a directory for models
mkdir -p /home/user/models
cd /home/user/models

# Download Kimi-K2.5 (RAW-INT4 for both CPU and GPU)
huggingface-cli download moonshotai/Kimi-K2.5 \
  --local-dir /home/user/models/Kimi-K2.5

Note: Replace /home/user/models/Kimi-K2.5 with your actual storage path throughout this tutorial.
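
Before launching the server, confirm the download completed; a minimal check on the path from the note above:

# The directory should contain config files plus the weight shards
ls -lh /home/user/models/Kimi-K2.5 | head
du -sh /home/user/models/Kimi-K2.5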

Step 2: Launch SGLang Server

Start the SGLang server with KT-Kernel integration for CPU-GPU heterogeneous inference.

Launch Command (Single RTX 6000 Pro 96GB)

python -m sglang.launch_server \
--host 0.0.0.0 \
--port 10002 \
--model /home/user/models/Kimi-K2.5 \
--kt-weight-path /home/user/models/Kimi-K2.5 \
--kt-cpuinfer 120 \
--kt-threadpool-count 1 \
--kt-num-gpu-experts 30 \
--kt-method RAWINT4 \
--kt-gpu-prefill-token-threshold 400 \
--reasoning-parser kimi_k2 \
--tool-call-parser kimi_k2 \
--trust-remote-code \
--mem-fraction-static 0.94 \
--served-model-name Kimi-K2.5 \
--enable-mixed-chunk \
--tensor-parallel-size 1 \
--enable-p2p-check \
--disable-shared-experts-fusion \
--context-length 131072 \
--chunked-prefill-size 131072 \
--max-total-tokens 150000 \
--attention-backend flashinfer

The server takes about 2-3 minutes to start.

See KT-Kernel Parameters for detailed parameter tuning guidelines.
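
Because startup takes a few minutes, it can be convenient to poll the server before sending real traffic. The loop below is a small sketch against the OpenAI-compatible /v1/models endpoint, using the port from the launch command above:

# Wait until the server answers, then list the served model
until curl -sf http://localhost:10002/v1/models > /dev/null; do
  echo "waiting for SGLang to come up..."
  sleep 10
done
curl -s http://localhost:10002/v1/models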

Step 3: Send Inference Requests

Once the server is running, you can send inference requests using the OpenAI-compatible API.

Basic Chat Completion Request

curl -s http://localhost:10002/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Kimi-K2.5",
    "stream": false,
    "messages": [
      {"role": "user", "content": "hi, who are you?"}
    ]
  }'

Example Response

{
    "id": "2a4e83f8a79b4b57b103b0f298fbaa7d",
    "object": "chat.completion",
    "created": 1769333912,
    "model": "Kimi-K2.5",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": " The user is asking \"hi, who are you?\" which is...",
                "reasoning_content": null,
                "tool_calls": null
            },
            "logprobs": null,
            "finish_reason": "stop",
            "matched_stop": 163586
        }
    ],
    "usage": {
        "prompt_tokens": 32,
        "total_tokens": 317,
        "completion_tokens": 285,
        "prompt_tokens_details": null,
        "reasoning_tokens": 0
    },
    "metadata": {
        "weight_version": "default"
    }
}
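
Streaming Chat Completion Request

For interactive use you will usually want token-by-token output. The request is the same apart from "stream": true; this sketch uses standard OpenAI-style streaming, which returns server-sent "data:" chunks:

curl -N http://localhost:10002/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Kimi-K2.5",
    "stream": true,
    "messages": [
      {"role": "user", "content": "Write a haiku about GPUs."}
    ]
  }'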

Create a service file

If everything runs properly, you can create a service file to run the server as a systemd service on Ubuntu.

/etc/systemd/system/kimi25.service:
[Unit] 
Description=Kimi 2.5 Server
After=network.target

[Service]
User=user
WorkingDirectory=/home/user/kimi2.5
Environment="CUDA_HOME=/usr/local/cuda-12.9" 
Environment=LD_LIBRARY_PATH="/usr/local/cuda-12.9/lib64:${LD_LIBRARY_PATH:-}"
ExecStart=bash -c 'source /home/user/miniconda3/bin/activate kimi25; \
python -m sglang.launch_server \
--host 0.0.0.0 \
--port 10002 \
--model /home/user/models/Kimi-K2.5 \
--kt-weight-path /home/user/models/Kimi-K2.5 \
--kt-cpuinfer 120 \
--kt-threadpool-count 1 \
--kt-num-gpu-experts 30 \
--kt-method RAWINT4 \
--kt-gpu-prefill-token-threshold 400 \
--reasoning-parser kimi_k2 \
--tool-call-parser kimi_k2 \
--trust-remote-code \
--mem-fraction-static 0.94 \
--served-model-name Kimi-K2.5 \
--enable-mixed-chunk \
--tensor-parallel-size 1 \
--enable-p2p-check \
--disable-shared-experts-fusion \
--context-length 131072 \
--chunked-prefill-size 131072 \
--max-total-tokens 150000 \
--attention-backend flashinfer'
Restart=on-failure 
TimeoutStartSec=600

[Install] 
WantedBy=multi-user.target
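
After saving the unit file, register and start it with the standard systemd commands:

sudo systemctl daemon-reload
sudo systemctl enable --now kimi25.service
# Follow the startup logs (model loading takes a few minutes)
sudo journalctl -u kimi25.service -f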

References

KTransformers Kimi-K2.5 setup instructions
KT-Kernel Parameters

Conclusion

You have now successfully configured Kimi K2.5 to run on a single NVIDIA RTX 6000 Pro 96GB GPU, achieving approximately 20 tokens per second inference speed. By leveraging the KT-Kernel's CPU-GPU heterogeneous inference capabilities combined with SGLang, you can efficiently serve this large language model on a single workstation-class GPU. The systemd service configuration ensures your deployment remains stable and automatically restarts on failure. For production deployments, consider monitoring GPU and CPU utilization to optimize the --kt-cpuinfer and --kt-num-gpu-experts parameters based on your specific workload patterns.
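
As a starting point for that tuning, the stock tools are enough to see whether the GPU or the CPU expert pool is the bottleneck while a request is running:

# GPU utilization and memory, refreshed continuously
nvidia-smi dmon -s um
# Per-CPU load (mpstat is part of the sysstat package; htop also works)
mpstat 2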
