Token Counter

Count tokens in your text for ChatGPT, Claude, Gemini, and other AI models. Estimate your API costs before making requests.

First use may take a moment while the tokenizer loads.

Results

0 Tokens
0 Characters
0 Words
0 Lines

Note: Uses o200k_base encoding (GPT-4o / GPT-5). Other providers use different tokenizers, so counts may vary slightly.

Related: For guidance on how document size maps to model context windows, see our LLM Context Window Comparison tool.

Quick Cost Estimate

Based on token count at current model prices (input only):

GPT-5.2 $0.000
GPT-5 mini $0.000
Claude Sonnet 4 $0.000
Claude Haiku 4.5 $0.000

* Input token cost only. Use our LLM Calculator for detailed pricing.
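The estimate above is simple arithmetic: token count divided by one million, times the model's per-million input price. A minimal sketch in Python; the model names and prices below are placeholders for illustration, not current rates:

```python
# Input-cost estimate: cost = tokens / 1_000_000 * price_per_million_input.
# Prices and model names below are hypothetical -- always check the
# provider's current pricing page before budgeting.
PRICE_PER_MILLION_INPUT = {
    "model-a": 1.25,  # placeholder $/1M input tokens
    "model-b": 0.25,  # placeholder $/1M input tokens
}

def input_cost_usd(tokens: int, model: str) -> float:
    """Estimated input-side cost in dollars for a prompt of `tokens` tokens."""
    return tokens / 1_000_000 * PRICE_PER_MILLION_INPUT[model]

print(f"${input_cost_usd(12_000, 'model-a'):.4f}")  # 12k tokens at $1.25/M -> $0.0150
```

Remember this covers input tokens only; output tokens are usually billed at a higher rate.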

How Tokenization Works

Breaking Down Text

Tokenizers split text into smaller pieces called tokens. Common words are often single tokens, while rare words get split into subwords. "Understanding" might become ["under", "standing"].
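A toy longest-match splitter illustrates the idea. The vocabulary here is invented for the example; real BPE vocabularies contain on the order of 200,000 learned entries and operate on bytes rather than whole words:

```python
# Toy greedy longest-match subword tokenizer -- a simplified illustration of
# how rare words get split into known subwords. Not a real BPE implementation.
VOCAB = {"under", "standing", "stand", "ing", "the"}

def tokenize(word: str) -> list[str]:
    tokens, i = [], 0
    while i < len(word):
        # Try the longest vocabulary entry starting at position i.
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # unknown: fall back to a single character
            i += 1
    return tokens

print(tokenize("understanding"))  # ['under', 'standing']
```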

Language Matters

English is typically the most token-efficient language, since most tokenizer vocabularies are trained largely on English text. Other languages, especially those with non-Latin scripts (Chinese, Arabic, Hindi), often need 2-3x more tokens for the same content.

Code & Special Characters

Code often uses more tokens than prose because of special characters and formatting: spaces, newlines, and punctuation all consume tokens. Trimming unnecessary whitespace saves tokens.
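As a sketch, removing trailing spaces and blank lines is a safe way to cut whitespace tokens without changing code semantics (the helper name is our own):

```python
def strip_extra_whitespace(code: str) -> str:
    """Drop trailing spaces and blank lines. Leading indentation is preserved,
    so languages with significant whitespace (e.g. Python) stay valid."""
    lines = [line.rstrip() for line in code.splitlines()]
    return "\n".join(line for line in lines if line)

print(strip_extra_whitespace("x = 1   \n\n\ny = 2  "))  # "x = 1\ny = 2"
```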

Analyst's Note

1. The Efficiency Gap (GPT-5 vs. GPT-4)

Users upgrading from legacy GPT-4 Turbo to the new GPT-5.2 or GPT-5 mini will see an immediate reduction in token usage. The GPT-5 architecture standardizes on the o200k_base tokenizer, which is significantly more efficient at encoding code (Python, JavaScript) than the older cl100k_base. Benchmarks show that code-heavy prompts consume 15-20% fewer tokens on GPT-5, effectively lowering API costs before any prompt optimization.

2. Hidden Costs in Reasoning Models

Modern reasoning models like GPT-5.2 and DeepSeek V3.2 generate substantial "Chain of Thought" (CoT) tokens during processing. You are billed for these tokens even though they are discarded before the final response. For these models, visible output typically represents only 20-30% of total billed tokens. We recommend buffering your budget by 4x when using reasoning-heavy endpoints.
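The 20-30% figure translates into a simple budget buffer; a sketch assuming a 25% visible share (the midpoint of the range above):

```python
def estimated_billed_output(visible_tokens: int, visible_share: float = 0.25) -> int:
    """Estimate total billed output tokens when only `visible_share` of them
    appear in the final response; the rest is hidden chain-of-thought.
    The 0.25 default is an assumption drawn from the 20-30% range cited above."""
    return round(visible_tokens / visible_share)

print(estimated_billed_output(500))  # 500 visible tokens -> ~2000 billed (the 4x rule)
```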

3. Multilingual RAG Costs

While Gemini 3 Flash has improved non-English efficiency, most tokenizers remain biased toward English. RAG pipelines for languages like Hindi, Arabic, or Chinese using Gemini 3 Flash will incur significantly higher costs, often 2x to 3x per semantic unit compared to English. Use this counter to audit multilingual prompts specifically for non-Latin scripts before deployment.

Token Examples

See how different text types tokenize

"Hello, world!"
≈ 4 tokens
"The quick brown fox"
≈ 4 tokens
"Artificial Intelligence"
≈ 2 tokens
"https://example.com"
≈ 4 tokens
"console.log('test')"
≈ 5 tokens
"こんにちは" (Hello in Japanese)
≈ 1 token

Model Context Limits

Maximum tokens (input + output) for popular models

Model | Context Window | ≈ Words
DeepSeek V3.2 | 128,000 tokens | ~96,000 words
GPT-5.2 | 400,000 tokens | ~300,000 words
Claude Opus 4.5 | 200,000 tokens | ~150,000 words
Claude Sonnet 4 | 1,000,000 tokens | ~750,000 words
Gemini 3 Pro | 1,000,000 tokens | ~750,000 words
Gemini 3 Flash | 1,000,000 tokens | ~750,000 words

Frequently Asked Questions

What is a token in AI/LLM context?

A token is a chunk of text that AI models process. In English, a token is roughly 4 characters or about 0.75 words. For example, the word 'hamburger' might be split into 'hamb' and 'urger' (2 tokens), while common words like "the" or "is" are typically 1 token each.
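The ~4-characters-per-token rule makes a handy back-of-the-envelope estimator when no tokenizer is at hand; a heuristic sketch, not an exact count:

```python
def rough_token_estimate(text: str) -> int:
    """Ballpark token count using the ~4 characters/token English heuristic.
    Use a real tokenizer for billing-grade numbers."""
    return max(1, round(len(text) / 4))

print(rough_token_estimate("hamburger"))  # 9 characters -> ~2 tokens
```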

How accurate is this token counter?

This tool uses OpenAI's official tokenizer (o200k_base), so counts are exact for GPT-4o, GPT-5, and other OpenAI models that share this encoding. Claude, Gemini, and other providers use different tokenizers, so their counts may differ, typically by around 5%.

Why do different AI models count tokens differently?

Each AI provider develops its own tokenizer optimized for its models. OpenAI's GPT models use tiktoken, Anthropic's Claude uses a different tokenizer, and Google's models have their own. The differences are usually small but can affect pricing calculations.

How can I reduce my token count?

To reduce tokens: (1) Be concise - remove unnecessary words, (2) Remove extra whitespace and newlines, (3) Use shorter variable names in code, (4) Avoid repeating context, (5) Use abbreviations where appropriate. Every token saved reduces your API costs.

Do spaces and punctuation count as tokens?

Yes, spaces and punctuation are often included in tokens. A space before a word is typically merged with that word into a single token. Punctuation marks like periods, commas, and quotes are usually separate tokens.

How many tokens is 1000 words?

In English, 1000 words is approximately 1,333 tokens (using the ~0.75 words per token rule). However, this varies based on word complexity. Technical text with specialized terms may use more tokens, while simple prose uses fewer.
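The conversion in either direction uses the same ~0.75 words-per-token ratio; a quick sketch:

```python
WORDS_PER_TOKEN = 0.75  # rough heuristic for English prose

def words_to_tokens(words: int) -> int:
    """Estimate token count from a word count."""
    return round(words / WORDS_PER_TOKEN)

def tokens_to_words(tokens: int) -> int:
    """Estimate word count from a token count."""
    return round(tokens * WORDS_PER_TOKEN)

print(words_to_tokens(1000))     # ~1333 tokens
print(tokens_to_words(128_000))  # ~96,000 words
```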