freellm.cc
Compare Free LLM APIs Side by Side
Select two or more models, then click Compare to view them side by side.
OpenRouter — Owl Alpha
OpenRouter — NVIDIA: Nemotron 3 Nano Omni (free)
OpenRouter — Poolside: Laguna XS.2 (free)
OpenRouter — Poolside: Laguna M.1 (free)
OpenRouter — inclusionAI: Ling-2.6-1T (free)
OpenRouter — Tencent: Hy3 preview (free)
OpenRouter — Baidu: Qianfan-OCR-Fast (free)
OpenRouter — Google: Gemma 4 26B A4B (free)
OpenRouter — Google: Gemma 4 31B (free)
OpenRouter — Google: Lyria 3 Pro Preview
OpenRouter — Google: Lyria 3 Clip Preview
OpenRouter — NVIDIA: Nemotron 3 Super (free)
OpenRouter — MiniMax: MiniMax M2.5 (free)
OpenRouter — Free Models Router
OpenRouter — LiquidAI: LFM2.5-1.2B-Thinking (free)
OpenRouter — LiquidAI: LFM2.5-1.2B-Instruct (free)
OpenRouter — NVIDIA: Nemotron 3 Nano 30B A3B (free)
OpenRouter — NVIDIA: Nemotron Nano 12B 2 VL (free)
OpenRouter — Qwen: Qwen3 Next 80B A3B Instruct (free)
OpenRouter — NVIDIA: Nemotron Nano 9B V2 (free)
OpenRouter — OpenAI: gpt-oss-120b (free)
OpenRouter — OpenAI: gpt-oss-20b (free)
OpenRouter — Z.ai: GLM 4.5 Air (free)
OpenRouter — Qwen: Qwen3 Coder 480B A35B (free)
OpenRouter — Venice: Uncensored (free)
OpenRouter — Google: Gemma 3n 2B (free)
OpenRouter — Google: Gemma 3n 4B (free)
OpenRouter — Google: Gemma 3 4B (free)
OpenRouter — Google: Gemma 3 12B (free)
OpenRouter — Google: Gemma 3 27B (free)
OpenRouter — Meta: Llama 3.3 70B Instruct (free)
OpenRouter — Meta: Llama 3.2 3B Instruct (free)
OpenRouter — Nous: Hermes 3 405B Instruct (free)

NVIDIA NIM — Various open models
Mistral (La Plateforme) — Open and Proprietary Mistral models

Cohere — Command A (111B)
Cohere — Command R+
Cohere — Command R7B
Cohere — Embed 4
Cohere — Rerank 3.5

Google Gemini — Gemini 2.5 Flash
Google Gemini — Gemini 2.5 Flash-Lite

Mistral AI — Mistral Small 4
Mistral AI — Mistral Medium 3
Mistral AI — Mistral Large 3
Mistral AI — Mistral Nemo (12B)
Mistral AI — Codestral
Mistral AI — Pixtral Large

Z AI (Zhipu AI) — GLM-4.7-Flash
Z AI (Zhipu AI) — GLM-4.5-Flash
Z AI (Zhipu AI) — GLM-4.6V-Flash

Cerebras — llama3.1-8b
Cerebras — gpt-oss-120b
Cerebras — qwen-3-235b-a22b-instruct-2507
Cerebras — zai-glm-4.7

Cloudflare Workers AI — @cf/meta/llama-3.3-70b-instruct-fp8-fast
Cloudflare Workers AI — @cf/meta/llama-3.1-8b-instruct-fp8-fast
Cloudflare Workers AI — @cf/meta/llama-3.2-11b-vision-instruct
Cloudflare Workers AI — @cf/meta/llama-4-scout-17b-16e-instruct
Cloudflare Workers AI — @cf/mistralai/mistral-small-3.1-24b-instruct
Cloudflare Workers AI — @cf/google/gemma-4-26b-a4b-it
Cloudflare Workers AI — @cf/qwen/qwq-32b
Cloudflare Workers AI — @cf/deepseek-ai/deepseek-r1-distill-qwen-32b
Cloudflare Workers AI — + 42 more models

GitHub Models — gpt-4.1
GitHub Models — gpt-4.1-mini
GitHub Models — gpt-4o
GitHub Models — o3-mini
GitHub Models — o4-mini
GitHub Models — Llama-4-Scout-17B-16E
GitHub Models — Llama-4-Maverick-17B-128E
GitHub Models — Meta-Llama-3.3-70B
GitHub Models — DeepSeek-R1
GitHub Models — Mistral-Small-3.1
GitHub Models — + 35 more models

Groq — llama-3.3-70b-versatile
Groq — llama-3.1-8b-instant
Groq — llama-4-scout-17b-16e-instruct
Groq — llama-4-maverick-17b-128e-instruct
Groq — qwen3-32b
Groq — kimi-k2-instruct
Groq — deepseek-r1-distill-70b
Groq — whisper-large-v3
Groq — whisper-large-v3-turbo

Hugging Face — Meta-Llama-3.1-8B-Instruct
Hugging Face — Mistral-7B-Instruct-v0.3
Hugging Face — Mixtral-8x7B-Instruct-v0.1
Hugging Face — Phi-3.5-mini-instruct
Hugging Face — Qwen2.5-7B-Instruct
Hugging Face — + thousands of community models

Kilo Code — bytedance-seed/dola-seed-2.0-pro:free
Kilo Code — x-ai/grok-code-fast-1:optimized:free
Kilo Code — nvidia/nemotron-3-super-120b-a12b:free
Kilo Code — arcee-ai/trinity-large-thinking:free
Kilo Code — openrouter/free

LLM7.io — deepseek-r1-0528
LLM7.io — deepseek-v3-0324
LLM7.io — gpt-4o-mini
LLM7.io — mistral-small-3.1-24b
LLM7.io — qwen2.5-coder-32b
LLM7.io — + ~24 more models

ModelScope — Qwen/Qwen3.5-35B-A3B
ModelScope — Qwen/Qwen3.5-27B
ModelScope — Qwen/Qwen-Image
ModelScope — + API-Inference-enabled models

NVIDIA NIM — deepseek-ai/deepseek-r1
NVIDIA NIM — nvidia/llama-3.1-nemotron-ultra-253b-v1
NVIDIA NIM — nvidia/nemotron-3-super-120b-a12b
NVIDIA NIM — nvidia/nemotron-3-nano-30b-a3b
NVIDIA NIM — meta/llama-3.1-405b-instruct
NVIDIA NIM — qwen/qwen2.5-72b-instruct
NVIDIA NIM — google/gemma-4-31b
NVIDIA NIM — mistralai/mistral-large-2-instruct
NVIDIA NIM — nvidia/nemotron-nano-2-vl
NVIDIA NIM — minimax/minimax-m2.7
NVIDIA NIM — + 90 more models

Ollama Cloud — llama3.1:cloud
Ollama Cloud — deepseek-r1:cloud
Ollama Cloud — qwen2.5:cloud
Ollama Cloud — gemma2:cloud
Ollama Cloud — mistral:cloud
Ollama Cloud — + 400 more models

OVHcloud AI Endpoints — Meta-Llama-3_3-70B-Instruct
OVHcloud AI Endpoints — DeepSeek-R1-Distill-Llama-70B
OVHcloud AI Endpoints — Qwen3-Coder-30B-A3B-Instruct
OVHcloud AI Endpoints — Qwen2.5-VL-72B-Instruct
OVHcloud AI Endpoints — Mistral-Nemo-Instruct-2407
OVHcloud AI Endpoints — Qwen3Guard-Gen-8B
OVHcloud AI Endpoints — Qwen3Guard-Gen-0.6B
OVHcloud AI Endpoints — + 30 more models

SiliconFlow — Qwen/Qwen3-8B
SiliconFlow — deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
SiliconFlow — deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
SiliconFlow — THUDM/glm-4-9b-chat
SiliconFlow — THUDM/GLM-4.1V-9B-Thinking
SiliconFlow — deepseek-ai/DeepSeek-OCR
SiliconFlow — + embedding/speech models
Select 2-4 models from the table above to compare them side by side.
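Outside this site's UI, a comparison like this can also be scripted. Many of the providers listed above (OpenRouter, Groq, Cerebras, NVIDIA NIM, SiliconFlow, among others) expose an OpenAI-compatible `/chat/completions` endpoint, so one request builder can fan the same prompt out to several models. The sketch below, using only the Python standard library, targets OpenRouter; the endpoint URL is OpenRouter's documented one, but the environment variable name and the specific model IDs in the usage comment are illustrative assumptions, so check the provider's model list before running.

```python
import json
import os
import urllib.request

# OpenRouter's OpenAI-compatible chat completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"


def build_chat_request(url: str, api_key: str, model: str,
                       prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion POST request."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    return urllib.request.Request(url, data=body, headers=headers,
                                  method="POST")


def compare(models: list[str], prompt: str) -> None:
    """Send the same prompt to each model and print replies side by side."""
    key = os.environ.get("OPENROUTER_API_KEY", "")  # assumed env var name
    for model in models:
        req = build_chat_request(OPENROUTER_URL, key, model, prompt)
        with urllib.request.urlopen(req, timeout=60) as resp:
            data = json.load(resp)
        print(f"--- {model} ---")
        print(data["choices"][0]["message"]["content"])


# Example usage (requires a valid API key; model IDs are hypothetical picks
# from the table above — ":free" is OpenRouter's zero-cost variant suffix):
#
# compare(
#     ["meta-llama/llama-3.3-70b-instruct:free",
#      "openai/gpt-oss-20b:free"],
#     "Summarize mixture-of-experts tradeoffs in two sentences.",
# )
```

Because every provider here speaks roughly the same wire format, swapping in Groq or SiliconFlow is mostly a matter of changing the URL, key, and model IDs.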