Context Window

12 articles about context windows.

Why 200K Context Models Outperform 1M When You Aggressively Clear Context

2 min read

The biggest quality jump in AI agent workflows is not upgrading to a larger context window - it is being more aggressive about clearing context between tasks.

context-window · 200k-context · 1m-context · ai-agents · prompt-engineering

Accessibility Tree Dumps Overflow LLM Context Windows - How to Fix It

3 min read

Raw accessibility tree data can consume 24KB or more per dump, flooding AI agent context windows. The fix: write to temp files and return concise summaries instead.

accessibility-tree · context-window · llm · macos · optimization · desktop-agent
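The temp-file pattern this article describes — persist the full dump, return only a compact summary — can be sketched roughly as follows (the function and parameter names here are illustrative, not from the article):

```python
import tempfile

def summarize_tree_dump(raw_tree: str, preview_chars: int = 300) -> str:
    """Write a large accessibility-tree dump to a temp file and return
    a short summary, instead of flooding the agent's context window."""
    with tempfile.NamedTemporaryFile(
        mode="w", suffix=".txt", delete=False
    ) as f:
        f.write(raw_tree)
        path = f.name
    # The agent gets a pointer to the full data plus a small preview,
    # not the entire 24KB+ dump.
    return (
        f"Accessibility tree ({len(raw_tree)} bytes) saved to {path}. "
        f"Preview: {raw_tree[:preview_chars]}"
    )
```

The agent can then grep or re-read the file on demand rather than carrying the whole dump in every subsequent turn.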

Why Backend Tasks Still Break AI Agents - Tool Response Design Matters

2 min read

AI agents fail on backend tasks not because models are weak but because tool responses are poorly designed. Write full data to files and return compact summaries instead.

tool-design · backend-tasks · agent-reliability · context-window · mcp

Hitting Claude's Context Limit Mid-Build and How CLAUDE.md Fixes It

2 min read

When Claude Code hits the context limit during a build, you lose project context. A CLAUDE.md file prevents starting over by keeping essential specs persistent.

claude-code · context-window · claude-md · developer-workflow · productivity
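A CLAUDE.md of the kind the article describes might look like the following — a hypothetical sketch, with invented project details, showing the sort of specs worth keeping persistent across sessions:

```markdown
# CLAUDE.md

## Project
- Monorepo: `apps/web` (frontend), `apps/api` (backend)
- Run tests with `make test`; never commit without a passing run

## Conventions
- TypeScript strict mode everywhere
- Database migrations live in `apps/api/migrations/`

## Current goal
- Implementing the billing flow; see `docs/billing-spec.md`
```

Because Claude Code reads this file at the start of every session, hitting the context limit mid-build costs a restart, not a full re-explanation of the project.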

MCP Tool Responses Are the Biggest Context Hog - How to Compress Them

3 min read

MCP server tool responses silently eat your context window. Here is how to compress accessibility tree data and other MCP outputs before they fill your token budget.

mcp · context-window · accessibility-api · optimization · token-management

Claude CoWork's Token Limits Hit Different - Why Local Agents Are Better for Big Tasks

2 min read

CoWork has context limits that force session restarts on large codebases. A local agent running natively on your Mac manages its own context window without the same constraints.

cowork · token-limits · local-agent · context-window · macos

Why Explaining a Process Is Harder Than Running It - The AI Agent New Hire Problem

2 min read

Every new AI agent session starts from zero - the eternal new hire that never builds institutional memory. Why process documentation is now a core skill.

ai-agents · institutional-memory · process-documentation · context-window · productivity · onboarding

MCP Servers That Pipe Raw Data Beat REST API Wrappers

3 min read

The most useful MCP servers send raw data into context - transcripts, accessibility trees, full documents. The ones that just wrap a REST API add a layer of abstraction nobody needs.

mcp · context-window · raw-data · api-design · agent-tools

The 1M Context Trap for Opus - More Context Makes the Model Lazier

3 min read

The 1M token context window is a double-edged sword. You can fit more information, but the model gets lazier and less precise the more context it has to process.

opus · context-window · claude-code · ai-coding · tokens · productivity

Why Scoped 50K Context Agents Outperform One Million Token Context

2 min read

One million token context windows sound impressive, but scoped agents with 50K context each consistently outperform a single giant context for real development work.

context-window · parallel-agents · scoped-agents · llm · productivity

Using Opus as Orchestrator, Delegating to Sonnet and Haiku

3 min read

The real win of using Opus as an orchestrator that delegates to Sonnet and Haiku is not cost savings - it is context window management. Opus burns through 80% of context just understanding the codebase.

opus · sonnet · haiku · model-routing · context-window · cost-optimization

Saving 10M Tokens (89%) on Claude Code with a CLI Proxy That Truncates Output

3 min read

Claude already tries to tail output on its own, but by then the tokens are already in context. A CLI proxy that truncates command output before it hits the context window saved 89% of tokens.

claude-code · token-optimization · cli-proxy · cost-reduction · context-window
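The core of the proxy idea — cut command output before it ever enters the context window, keeping the head and tail a model actually needs — can be sketched like this (a minimal illustration; the real proxy's behavior and limits are not specified in the teaser):

```python
import subprocess

def truncate_middle(out: str, head: int = 2000, tail: int = 2000) -> str:
    """Keep the start and end of long output; drop the middle.
    Errors usually live at the tail, context at the head."""
    if len(out) <= head + tail:
        return out
    dropped = len(out) - head - tail
    return f"{out[:head]}\n… [{dropped} chars truncated] …\n{out[-tail:]}"

def run_truncated(cmd: list[str], **limits) -> str:
    """Run a command and truncate its combined output *before*
    the tokens reach the model's context window."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return truncate_middle(result.stdout + result.stderr, **limits)
```

The key contrast with letting the model tail output itself: truncation happens outside the session, so the dropped characters are never tokenized or billed.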
