tokenspeed

feel LLM tokens-per-second rates · github

How fast is 10 tokens per second really?

[interactive demo: text streams here at the selected rate, default 30 tok/s; press u for custom text…]

Every local-LLM benchmark reports throughput: "47 tok/s on an M3," "180 tok/s on a 4090," "500 tok/s on Groq." Unless you've actually watched tokens stream at those rates, the numbers are hard to internalize. This page is that rendering.
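The mechanic is simple enough to sketch. Here is a minimal TypeScript version — not the actual tokenspeed source; `streamTokens`, `onToken`, and the batching note are all illustrative:

```ts
// Minimal sketch: emit one pre-tokenized chunk per timer tick at a
// target tokens-per-second rate. Illustrative, not the page's source.
function streamTokens(
  tokens: string[],
  tokensPerSecond: number,
  onToken: (tok: string) => void,
): () => void {
  let i = 0;
  const timer = setInterval(() => {
    if (i >= tokens.length) return clearInterval(timer);
    onToken(tokens[i++]);
  }, 1000 / tokensPerSecond);
  return () => clearInterval(timer); // cancel function for the caller
}

// 30 tok/s, the page's default. Note: browsers clamp timers to ~4 ms,
// so very high rates would need to batch several tokens per tick.
streamTokens(["How ", "fast ", "is ", "10 ", "tokens "], 30, (t) =>
  process.stdout.write(t),
);
```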

Four modes

What to try

Start at the default 30 tok/s and read along. Then hit 1 (5 tok/s — Raspberry-Pi-class local model), 5 (60 tok/s — typical hosted Claude or GPT), 7 (200 tok/s — Groq territory), 9 (800 tok/s — Cerebras-class, where the bottleneck is your eyeballs).

Now switch between c (code) and t (prose) at the same rate. The difference is striking — and intentional.
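A guess at how those shortcuts might be wired, showing only the four presets documented above (the handler and variable names are assumptions, not the page's source):

```ts
// Hypothetical wiring for the documented shortcuts; keys 2-4, 6, and 8
// presumably map to intermediate rates not quoted here.
const RATE_PRESETS: Record<string, number> = {
  "1": 5,   // Raspberry-Pi-class local model
  "5": 60,  // typical hosted Claude or GPT
  "7": 200, // Groq territory
  "9": 800, // Cerebras-class
};

let rate = 30;                      // default tok/s
let mode: "code" | "text" = "text"; // toggled by c / t

document.addEventListener("keydown", (e) => {
  if (e.key in RATE_PRESETS) rate = RATE_PRESETS[e.key];
  else if (e.key === "c") mode = "code";
  else if (e.key === "t") mode = "text";
});
```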

What counts as a token

The tool approximates BPE-style tokenization, not any vendor-specific encoder (tiktoken, Claude's tokenizer, etc. — those disagree in the details anyway).

Short words are often one token; longer identifiers split into chunks (processUserInput → process + User + Input); punctuation and operators usually count too.
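Those rules are enough for a rough counter. A sketch in that spirit — the split regexes and chunk size are guesses, and the page's actual heuristic may differ:

```ts
// Rough BPE-flavored token count; the divisor and regexes are assumptions.
function approxTokens(text: string): number {
  let count = 0;
  // Alphanumeric runs, or single punctuation/operator characters.
  for (const piece of text.match(/[A-Za-z0-9_]+|[^\sA-Za-z0-9_]/g) ?? []) {
    if (!/[A-Za-z0-9_]/.test(piece)) {
      count += 1; // punctuation and operators count as tokens
      continue;
    }
    // Split identifiers at camelCase boundaries:
    // "processUserInput" -> ["process", "User", "Input"]
    for (const part of piece.split(/(?=[A-Z])/)) {
      // Short words are one token; very long runs split further.
      count += Math.max(1, Math.ceil(part.length / 8));
    }
  }
  return count;
}
```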

Code is more token-dense than prose, so the same tok/s can feel very different depending on what's streaming. The benchmark number is honest; the perceptual effect varies a lot by content type — which is the gap this tool exists to expose.
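Using the approxTokens sketch above, two 25-character strings show the density gap:

```ts
console.log(approxTokens("the quick brown fox jumps")); // 5 tokens (prose)
console.log(approxTokens("for(let i=0;i<n;i++){...}")); // 20 tokens (code)
// At 30 tok/s the prose line streams in ~0.17 s, the code line in ~0.67 s.
```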

English prose averages ~1.3 tokens per word, so 30 tok/s ≈ 23 words/s.
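The conversion is a single division; a trivial helper, with the constant taken from the figure above and a made-up name:

```ts
const TOKENS_PER_WORD = 1.3; // English-prose average quoted above
const wordsPerSecond = (tokPerSec: number) => tokPerSec / TOKENS_PER_WORD;
console.log(wordsPerSecond(30)); // ≈ 23
```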