
DeepSeek AI Unveils DeepSeek-OCR: Vision-Based Context Compression Redefines Long-Text Processing

Oct 22, 2025 2 min read


DeepSeek AI has released DeepSeek-OCR, an open-source system that uses optical 2D mapping to compress long text passages. The approach aims to improve how large language models (LLMs) handle text-heavy inputs. Described as a “new paradigm for context compression,” the method suggests that visual encoding can store and retrieve language more efficiently than traditional tokenization.

DeepSeek-OCR comprises two key components: the DeepEncoder for visual compression and DeepSeek3B-MoE-A570M as the decoder. It achieves 97% OCR precision at compression ratios under 10×, condensing roughly ten text tokens into a single vision token. Even at a 20× ratio it retains around 60% accuracy, showing that meaningful content survives substantial token reduction.
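The arithmetic behind those ratios is straightforward. A minimal sketch (the per-page token count below is an illustrative assumption, not a figure from the paper):

```python
# Back-of-envelope token savings at the ratios reported above
# (~97% precision under 10x, ~60% accuracy at 20x).
def vision_tokens(text_tokens: int, ratio: int) -> int:
    """Vision tokens needed to represent `text_tokens` at a given
    compression ratio (text tokens per vision token), rounded up."""
    return -(-text_tokens // ratio)  # ceiling division

page = 1000  # a dense page of ~1000 text tokens (illustrative)
print(vision_tokens(page, 10))  # 100 vision tokens at 10x
print(vision_tokens(page, 20))  # 50 vision tokens at 20x
```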


Source: https://arxiv.org/pdf/2510.18234

The DeepEncoder architecture minimizes activation memory while handling high-resolution inputs effectively. By combining window and global attention mechanisms with a 16× convolutional compressor, it enables large-scale image processing without exhausting GPU memory. DeepSeek-OCR has outperformed advanced models such as GOT-OCR 2.0 and MinerU 2.0, achieving greater precision with under 800 vision tokens per page.
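The token budget that a 16× compressor buys can be sketched with simple arithmetic; the patch and image sizes below are illustrative assumptions, not the exact DeepEncoder configuration:

```python
# Sketch: why a convolutional compressor keeps vision-token counts
# well under the ~800-per-page figure quoted above.
def num_vision_tokens(image_px: int, patch_px: int, compression: int) -> int:
    patches = (image_px // patch_px) ** 2  # ViT-style patch tokens
    return patches // compression          # tokens left after compression

# A 1024x1024 page split into 16x16 patches yields 4096 patch tokens;
# a 16x compressor reduces that to 256 vision tokens.
print(num_vision_tokens(1024, 16, 16))  # 256
```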

The decoder, powered by a mixture-of-experts (MoE) design, allows specialized processing for different OCR subtasks while maintaining speed and accuracy. This enables the model to read charts, formulas, and multilingual documents with precision comparable to full-scale OCR suites, while consuming significantly fewer computational resources.
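A toy sketch of the routing idea behind mixture-of-experts decoding; the expert set and gating rule here are hypothetical stand-ins, not DeepSeek's actual architecture:

```python
# Toy MoE routing: a gate selects one specialist per token, so only a
# fraction of the model's parameters is active for any given input.
def route(token: str, experts: dict, gate) -> str:
    name = gate(token)
    return experts[name](token)

experts = {
    "formula": lambda t: f"formula-expert({t})",
    "text":    lambda t: f"text-expert({t})",
}
# Hypothetical gate: anything containing "=" goes to the formula expert.
gate = lambda t: "formula" if "=" in t else "text"

print(route("E=mc^2", experts, gate))   # formula-expert(E=mc^2)
print(route("hello", experts, gate))    # text-expert(hello)
```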

The research team positions DeepSeek-OCR as more than an OCR system — it is a potential foundation for memory mechanisms in next-generation LLMs. By storing long contexts as compressed vision tokens, models could effectively “remember” past information without inflating token counts.
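One way such a memory could bound token growth is to keep recent context verbatim and hold older context at higher compression. A toy sketch with illustrative ratios, not experimental figures from the paper:

```python
# Toy context memory: the latest turn is kept as text tokens; earlier
# turns are stored as 10x-compressed vision tokens (illustrative ratio).
def memory_cost(turn_text_tokens: list, old_ratio: int = 10) -> int:
    *old, latest = turn_text_tokens
    return latest + sum(t // old_ratio for t in old)

# Three 1000-token turns cost 1000 + 100 + 100 = 1200 tokens
# instead of 3000 kept verbatim.
print(memory_cost([1000, 1000, 1000]))  # 1200
```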

Early reactions from the AI community reflect curiosity. One Reddit user wrote:

This looks like what Gemini 2.5 already has, unless they were using extra tools behind the scenes. I had text-heavy images that used fewer tokens than the actual transcribed text, and it was able to process them without issue.

Following the release, developers discussed the practical side of running the model locally. On Reddit, a user asked:

I wish I knew how to run these vision models on my desktop computer. They don't convert to GGUFs, and I'm not sure how else to run them, as I could definitely use something like this right now. Any suggestions?

Another user offered a clarification:

Via Python transformers, but this would be full precision, so you need some VRAM. 3B should fit in most GPUs, though.
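A hedged back-of-envelope check of that claim, counting only weight memory (activations and framework overhead excluded):

```python
# Rough VRAM needed just for the weights of a ~3B-parameter model
# at full (fp32) vs half (fp16) precision.
def weights_gb(params_billions: float, bytes_per_param: int) -> float:
    return params_billions * 1e9 * bytes_per_param / 1e9

print(weights_gb(3, 4))  # fp32: 12.0 GB
print(weights_gb(3, 2))  # fp16: 6.0 GB
```

At half precision the weights fit in about 6 GB, consistent with the commenter's point that a 3B model should fit on most GPUs.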

DeepSeek-OCR’s code and model weights are publicly available on GitHub, with the company inviting researchers to reproduce and extend its results. The system’s performance — compressing and decoding large textual documents through a visual channel — could influence how future LLMs balance efficiency and memory.

About the Author

Robert Krzaczyński

