OpenAI-compatible setup guide

Connect LighterHub to your coding agent.

Use the same three values in Cline, Hermes Agent, OpenClaw, Roo Code, and other OpenAI-compatible clients: base URL, model ID, and your LighterHub API key. LighterHub is currently a single-model setup.

Before you open the app.

Use this flow for direct LighterHub API access. RapidAPI is useful for marketplace testing, but coding agents usually need the direct OpenAI-compatible base URL and key.

1. Start a request

Buy prepaid credits, request trial tokens, or contact sales for reserved capacity.

2. Wait for review

LighterHub checks payment, region, workload fit, and Qwen3.6 capacity before sending credentials.

3. Paste three values

Use the base URL, model ID, and API key in your OpenAI-compatible client.

4. Run a small task

Start with one file edit or repo question before launching a long autonomous loop.
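Both example configs later in this guide read the key from the LIGHTERHUB_API_KEY environment variable instead of hard-coding it. A minimal sketch of that pattern in Python (the variable name matches this guide's configs; the helper name is illustrative):

```python
import os

def lighterhub_key() -> str:
    """Return the LighterHub API key from the environment, failing loudly if unset."""
    key = os.environ.get("LIGHTERHUB_API_KEY", "").strip()
    if not key:
        raise RuntimeError(
            "LIGHTERHUB_API_KEY is not set; export it before configuring a client"
        )
    return key
```

Exporting the key once keeps it out of client config files, screenshots, and repos.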

Choose your app.

Select the client you are setting up. The matching guide appears here, and the shared LighterHub fields are repeated in each guide so you don't have to scroll back.

Cline setup

OpenAI Compatible in Cline settings.
Provider: OpenAI Compatible
Base URL: https://api.lighterhub.app/v1
Model ID: lighterhub-qwen

Steps

  1. Open Cline in VS Code and click the settings icon.
  2. Set API Provider to OpenAI Compatible.
  3. Paste the LighterHub base URL and your API key.
  4. Enter the model ID lighterhub-qwen.
  5. Verify the connection, then start with a small repo task.

Hermes Agent setup

Custom provider endpoint.
Provider: custom
Base URL: https://api.lighterhub.app/v1
Model ID: lighterhub-qwen

Steps

  1. Open Hermes model settings or run hermes model.
  2. Set the provider to custom and point the endpoint at the LighterHub OpenAI-compatible base URL.
  3. Paste your LighterHub API key.
  4. Set the model ID to lighterhub-qwen.
  5. Restart the gateway or open a new session so the model change applies.

Example config

model:
  provider: custom
  model: lighterhub-qwen
  base_url: https://api.lighterhub.app/v1
  api_key: ${LIGHTERHUB_API_KEY}
  api_mode: chat_completions

OpenClaw setup

Custom OpenAI provider config.
Adapter: openai-completions
Base URL: https://api.lighterhub.app/v1
Model ref: lighterhub/lighterhub-qwen

Steps

  1. Add LighterHub as a custom provider under models.providers.
  2. Use the openai-completions adapter for the LighterHub /v1 endpoint.
  3. Set the default agent model to lighterhub/lighterhub-qwen.

Example config

{
  "models": {
    "mode": "merge",
    "providers": {
      "lighterhub": {
        "baseUrl": "https://api.lighterhub.app/v1",
        "apiKey": "${LIGHTERHUB_API_KEY}",
        "api": "openai-completions",
        "models": [
          {
            "id": "lighterhub-qwen",
            "name": "LighterHub Qwen3.6 35B-A3B",
            "reasoning": false,
            "input": ["text"],
            "contextWindow": 262000,
            "maxTokens": 8192
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": { "primary": "lighterhub/lighterhub-qwen" }
    }
  }
}

Roo Code setup

Existing OpenAI-compatible installs.
Provider: OpenAI Compatible
Base URL: https://api.lighterhub.app/v1
Model ID: lighterhub-qwen

Steps

  1. Open Roo Code settings in VS Code.
  2. Set API Provider to OpenAI Compatible.
  3. Paste the LighterHub base URL, API key, and model ID.
  4. Run a small task first. Roo's docs state that agent workflows need native tool calling, so treat LighterHub chat completions as best-effort.

Roo's official docs say Roo Code products shut down on May 15, 2026. Use this only for existing installs after confirming compatibility, or choose Cline, Hermes Agent, OpenClaw, or another OpenAI-compatible client.

Quick connection test.

If an app fails, first test the same key and model with a small OpenAI-compatible request.

curl https://api.lighterhub.app/v1/chat/completions \
  -H "Authorization: Bearer $LIGHTERHUB_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "lighterhub-qwen",
    "messages": [
      {
        "role": "user",
        "content": "Reply with one sentence confirming the connection."
      }
    ],
    "max_tokens": 64
  }'
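The same test can be reproduced in Python with only the standard library. The base URL and model ID are the values from this guide; the helper name is illustrative:

```python
import json
import os
import urllib.request

BASE_URL = "https://api.lighterhub.app/v1"  # from this guide
MODEL_ID = "lighterhub-qwen"                # from this guide

def build_chat_request(api_key: str, prompt: str, max_tokens: int = 64) -> urllib.request.Request:
    """Build the same chat-completions request as the curl test above."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return urllib.request.Request(
        url=f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send it (requires a valid key and network access):
# req = build_chat_request(os.environ["LIGHTERHUB_API_KEY"],
#                          "Reply with one sentence confirming the connection.")
# with urllib.request.urlopen(req, timeout=30) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

If this request succeeds but a client app still fails, the problem is in the app's provider fields, not the key.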

Check balance and usage.

Use the same API key to view your current balance, recent usage summary, and a CSV export for your own records.

Balance

Shows remaining prepaid credit, lifetime spend, free-trial status, and pass status when an API pass is active.

Usage report

Returns request IDs, timestamps, model, status, latency, token counts, billable flag, balance metadata, and cost metadata.

Privacy boundary

Usage exports contain metadata only. Prompts and model responses are not stored in the customer export.

curl https://api.lighterhub.app/billing/balance \
  -H "Authorization: Bearer $LIGHTERHUB_API_KEY"
curl "https://api.lighterhub.app/billing/usage?hours=24&limit=200" \
  -H "Authorization: Bearer $LIGHTERHUB_API_KEY"
curl "https://api.lighterhub.app/billing/usage?hours=24&format=csv&limit=1000" \
  -H "Authorization: Bearer $LIGHTERHUB_API_KEY" \
  -o lighterhub-usage-24h.csv

Options: hours can be 1-168, limit can be 1-1000, and format can be json or csv.
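One way to keep those ranges straight in scripts is to validate them before building the query string. A sketch using the documented parameter limits (the endpoint and ranges are from this guide; the helper name is illustrative):

```python
from urllib.parse import urlencode

BILLING_BASE = "https://api.lighterhub.app/billing"  # from this guide

def usage_url(hours: int = 24, limit: int = 200, fmt: str = "json") -> str:
    """Build a /billing/usage URL, enforcing the documented parameter ranges."""
    if not 1 <= hours <= 168:
        raise ValueError("hours must be 1-168")
    if not 1 <= limit <= 1000:
        raise ValueError("limit must be 1-1000")
    if fmt not in ("json", "csv"):
        raise ValueError("format must be json or csv")
    query = urlencode({"hours": hours, "limit": limit, "format": fmt})
    return f"{BILLING_BASE}/usage?{query}"
```

Out-of-range values fail locally instead of producing a confusing API error.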

Troubleshooting.

Most setup failures come from one of these field mismatches.

Do not add extra path segments

Use https://api.lighterhub.app/v1 as the base URL. Do not paste the full /chat/completions URL into app settings.
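The trimming rule above can be sketched as a small helper that strips accidental endpoint paths before a URL goes into app settings (a hypothetical helper, not part of any client; clients append /chat/completions themselves):

```python
def normalize_base_url(url: str) -> str:
    """Reduce a pasted URL to the bare /v1 base URL this guide expects."""
    url = url.rstrip("/")
    # Strip endpoint paths that belong in the request, not the base URL.
    for suffix in ("/chat/completions", "/completions"):
        if url.endswith(suffix):
            url = url[: -len(suffix)]
    return url
```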

Use the exact model ID

Model-not-found errors usually mean the app used a different ID or selected a default OpenAI model.

Keep keys private

Do not commit API keys into repos, screenshots, issue reports, or shared config files.

Official references: Cline, Hermes Agent, OpenClaw, Roo Code

Need help with a specific client?

Send the app name, the provider screen you are on, and the error text with the API key redacted. LighterHub can confirm the correct base URL, model ID, and access path.