DeepSeek: DeepSeek V3.2

Performance and pricing snapshot from the global rankings; composite scoring matches the on-site model board.

About this model

DeepSeek V3.2 is DeepSeek’s general-purpose LLM line tuned for strong reasoning and coding at competitive per-token pricing. This page summarizes how it ranks on AI Hippo’s global board, lists aggregator quotes from our snapshot, and shows a minimal OpenRouter-compatible API call you can adapt in production.

Key metrics

Rank: 25
Kind: LLM
Core metric: 164k ctx
Price per 1M tokens (avg): $0.32

Hippo's Quick Action

Call the OpenRouter chat completions endpoint (https://openrouter.ai/api/v1/chat/completions); set the Authorization header and request body per the docs.

Price calculator

Estimate monthly cost as (monthly tokens ÷ 1,000,000) × $0.32, using the snapshot price above; live rates may differ.
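That estimate can be sketched in a few lines of Node.js. The rate constant mirrors the $0.32-per-1M-tokens snapshot figure on this page; the function name and daily-volume parameter are illustrative, not part of any API:

```javascript
// Snapshot rate from this page; check live aggregator pricing before budgeting.
const PRICE_PER_MILLION_TOKENS_USD = 0.32;

// Estimate monthly spend in USD from an average daily token volume.
function estimateMonthlyCost(tokensPerDay, daysPerMonth = 30) {
  const monthlyTokens = tokensPerDay * daysPerMonth;
  return (monthlyTokens / 1_000_000) * PRICE_PER_MILLION_TOKENS_USD;
}

// e.g. 2M tokens/day comes to about $19.20/month at this rate
console.log(estimateMonthlyCost(2_000_000).toFixed(2));
```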

Price comparison (snapshot)

Source / aggregator    Price / 1M tokens    Latency
OpenRouter             $0.32                n/a

Figures come from the imported leaderboard snapshot; live aggregator pricing and latency can change.

How to integrate

OpenRouter exposes an OpenAI-compatible Chat Completions endpoint. Use the tabs below to switch example languages. Replace the model id with the one from your provider page if you route elsewhere.

// Node.js 18+ — set OPENROUTER_API_KEY in your environment
const res = await fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.OPENROUTER_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'deepseek/deepseek-v3.2',
    messages: [{ role: 'user', content: 'Hello' }],
  }),
});
if (!res.ok) throw new Error(`OpenRouter request failed: ${res.status}`);
const data = await res.json();
console.log(data);
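The response body follows the OpenAI-compatible shape, with the assistant's reply nested under `choices[0].message.content`. A small extractor sketch (the function name is ours, not from any SDK):

```javascript
// Pull the assistant text out of an OpenAI-compatible chat completion.
// Returns null if the response doesn't contain a usable choice.
function extractReply(data) {
  return data?.choices?.[0]?.message?.content ?? null;
}

// Example against a mocked response body:
const mock = { choices: [{ message: { role: 'assistant', content: 'Hi there!' } }] };
console.log(extractReply(mock)); // prints "Hi there!"
```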

Store API keys in environment variables or a secret manager—never commit them to source control.
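One way to enforce that at startup is a small guard that fails fast when a secret is absent; `requireEnv` here is an illustrative helper, not part of any SDK:

```javascript
// Illustrative helper: throw at startup if a required secret is missing,
// rather than sending an unauthenticated request and getting a confusing 401.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: const apiKey = requireEnv('OPENROUTER_API_KEY');
```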

Pick one or two more models on the global rankings page and use Compare to view them side by side.

Run with Ollama

Paste into your terminal (install Ollama first):