Available

Various open models

Various open models — free model from NVIDIA NIM.

Various open models — Free API Specifications

Context: 131K
Max Output: 8K
Modality: text
Rate Limit: see provider page
Card Required: No
OpenAI Compatible: Yes

How to Configure Various open models for Free

Base URL: https://integrate.api.nvidia.com/v1
API key: register at https://build.nvidia.com/explore/discover
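Before wiring the endpoint into a coding tool, you can sanity-check your key and base URL directly. The sketch below lists the models the endpoint exposes, assuming it implements the standard OpenAI-compatible `GET /models` route (stdlib only; no SDK required).

```python
import json
import urllib.request

# Base URL from the spec above; the API key placeholder must be replaced
# with a real key from https://build.nvidia.com/explore/discover
BASE_URL = "https://integrate.api.nvidia.com/v1"

def list_models(api_key: str) -> list[str]:
    """Fetch available model ids via GET /models.
    Makes a real HTTP request, so it needs a valid key."""
    req = urllib.request.Request(
        f"{BASE_URL}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return [m["id"] for m in data.get("data", [])]
```

If the call succeeds, the returned ids are the exact strings to use as model names in the tool configurations below.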

One-Click Config for Claude Code, Cursor & More

Claude Code

# Claude Code works via OpenRouter's Anthropic-compatible API.
# Note: Only paid Anthropic Claude models are supported (e.g. claude-sonnet-4.6, claude-opus-4).
# Browse available Claude models at: https://openrouter.ai/models?q=anthropic

# Add to ~/.zshrc or ~/.bashrc
export OPENROUTER_API_KEY="<your-openrouter-api-key>"  # Get at https://openrouter.ai/settings/keys
export ANTHROPIC_BASE_URL="https://openrouter.ai/api"
export ANTHROPIC_AUTH_TOKEN="$OPENROUTER_API_KEY"
export ANTHROPIC_API_KEY=""  # Must be explicitly empty to avoid conflicts

# Optional: pin specific models for each role
# export ANTHROPIC_DEFAULT_SONNET_MODEL="anthropic/claude-sonnet-4.6"
# export ANTHROPIC_DEFAULT_HAIKU_MODEL="anthropic/claude-haiku-4.5"

# Then simply run: claude

Cursor

# Cursor → Settings (⚙️) → Models → Add Model
# Enter the model name exactly as shown, then fill in:
#   Override OpenAI Base URL: https://integrate.api.nvidia.com/v1
#   OpenAI API Key: <your-api-key>   # Get at https://build.nvidia.com/explore/discover
# Click "Verify" to confirm the connection, then enable the model.
#
# Model name to add: Various open models

Codex

# Add to ~/.zshrc or ~/.bashrc
export OPENAI_BASE_URL="https://integrate.api.nvidia.com/v1"
export OPENAI_API_KEY="<your-api-key>"  # Get at https://build.nvidia.com/explore/discover

# Then run:
codex --model "Various open models"

Gemini CLI

# ~/.gemini/settings.json
{
  "apiKey": "<your-api-key>",
  "model": "Various open models"
}
# Get API key at https://build.nvidia.com/explore/discover

OpenCode

// ~/.config/opencode/opencode.json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "free-llm": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Free LLM",
      "options": {
        "baseURL": "https://integrate.api.nvidia.com/v1",
        "apiKey": "<your-api-key>"
      },
      "models": {
        "Various open models": { "name": "Various open models" }
      }
    }
  }
}
// Get API key at https://build.nvidia.com/explore/discover

Hermes

# Step 1 — Edit config.yaml
# Windows: C:\Users\<you>\AppData\Local\hermes\config.yaml
# macOS/Linux: ~/.config/hermes/config.yaml

model:
  default: Various open models
  provider: custom
  base_url: ${CUSTOM_BASE_URL}
  api_key: ${CUSTOM_API_KEY}
  model_aliases:
    Various open models:
      model: "Various open models"
      provider: "custom"

# Step 2 — Edit .env (same directory as config.yaml)
# Windows: C:\Users\<you>\AppData\Local\hermes\.env
# macOS/Linux: ~/.config/hermes/.env

# ========================
# Custom API (OpenAI-compatible)
# ========================
CUSTOM_API_KEY=<your-api-key>        # Get at https://build.nvidia.com/explore/discover
CUSTOM_BASE_URL=https://integrate.api.nvidia.com/v1

OpenClaw

// ~/.openclaw/openclaw.json  (JSON5 format)
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "Various open models",
      },
    },
  },
  "models": {
    "providers": {
      // Option A — Built-in provider (OpenAI, Anthropic, Google…)
      // Just add apiKey; OpenClaw handles the baseUrl automatically
      // "openai": { "apiKey": "<your-api-key>" },

      // Option B — Custom OpenAI-compatible base URL (e.g. OpenRouter, NVIDIA)
      "free-llm": {
        "baseUrl": "https://integrate.api.nvidia.com/v1",
        "apiKey": "<your-api-key>",  // Get at https://build.nvidia.com/explore/discover
        "api": "openai-completions", // openai-completions | anthropic-messages | …
        "models": [
          { "id": "Various open models", "name": "Various open models" },
        ],
      },
    },
  },
}
// Apply: openclaw gateway restart
// Verify: openclaw doctor --fix

Frequently Asked Questions about Various open models

Is Various open models free to use?

Yes. Various open models is available on a permanently free tier via NVIDIA NIM. No credit card is required — simply sign up and get your API key. Rate limits apply on the free tier; see the provider page for current limits.

What is Various open models best for?

Various open models is optimized for chat tasks. It supports text input and output, with a context window of 131K tokens and a maximum output of 8K tokens.

Is Various open models OpenAI-compatible?

Yes. Various open models uses an OpenAI-compatible API endpoint at https://integrate.api.nvidia.com/v1. You can use it with the OpenAI Python/JS SDK, or any tool that accepts a custom baseURL — including Claude Code, Cursor, Codex, and OpenCode.
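Because the endpoint speaks the OpenAI wire format, any client just needs the base URL and key swapped in. The sketch below builds a `/chat/completions` request with the standard payload shape; the model id shown is only an illustrative NVIDIA NIM id, not a guarantee of free-tier availability.

```python
import json

BASE_URL = "https://integrate.api.nvidia.com/v1"

def build_chat_request(api_key: str, model: str, prompt: str):
    """Return (url, headers, body) for a POST to /chat/completions
    in the standard OpenAI chat-completions format."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 1024,  # stay within the 8K output cap
    }).encode()
    return url, headers, body

# Example (placeholder key; example model id):
url, headers, body = build_chat_request(
    "<your-api-key>", "meta/llama-3.1-8b-instruct", "Say hello"
)
```

The same payload works through the OpenAI SDKs by pointing their `base_url` option at the endpoint above.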

How do I get an API key for Various open models?

Visit NVIDIA NIM's API key page to register and generate a free API key. Once you have the key, use the configuration snippets above to set up Claude Code, Cursor, or your preferred AI coding tool.