Open Source (PyPI) · Ongoing since 2025-11 · Solo

ContextStrategies

Turn any codebase into AI-ready context with drift-safe tooling.

  • 15.6K Python LOC
  • 9 Analyzers
  • 4 UI Surfaces
  • v5.0.0

CLI · TUI · FastAPI · Streamlit · OpenRouter · DuckDB

Overview

I built ctx-flatten because “copy files into the prompt” doesn’t scale. It turns a repo into a token-budgeted Markdown context you can paste into an LLM, then goes further: AI-powered focusing, drift-safe AST-based patching, nine static analysis dimensions, duplicate detection, and git-history dashboards—delivered as one modular Python package with CLI, TUI, FastAPI, and Streamlit surfaces.

Problem

LLM context is expensive and easy to waste.

I kept watching good engineering time get wasted on context assembly: copy files by hand, miss an import, paste too much, then burn tokens on irrelevant code.

Even worse, AI-generated patches are brittle when the target changes between context capture and application. Without drift checks, you can end up editing the wrong symbol or applying a change to stale code.

Solution

One tool for context, scope, patching, and insight.

I built ctx-flatten to produce an optimized Markdown context that respects `.gitignore` and optional token budgets. Then—when you want it—it can use OpenRouter-backed AI to focus context to a task, propose changes, and apply them with drift-safe structural checks.

The goal is inspectable workflows: context files, reports, and history artifacts you can read and version, not a black-box “agent” that edits your repo blindly.

Workflow

Flatten → Focus → Patch → Analyze

Generate context, narrow scope, apply drift-safe changes, and produce reports you can inspect.

Lifecycle

DISCOVER → FILTER → REDACT → BUDGET → RENDER → WRITE → DONE

Command Surface

  • `ctx flatten . -o context.md` · Flatten repo into Markdown context
  • `ctx focus context.md focused.md --task "<task>"` · AI focus context to task-relevant files
  • `ctx patch <path|symbol> --task "<task>"` · Apply drift-safe AI patch to a file or symbol
  • `ctx analyze . -r report.md` · Generate static analysis report
  • `ctx history . --out-dir history_data --max-commits 50` · Generate git history snapshots
  • `ctx tui` · Launch interactive TUI

Architecture

Modular monolith: one Python core powering four surfaces (CLI, TUI, API, dashboard) with optional AI, analysis, and history extras.

Interfaces

  • Typer CLI
  • Textual TUI
  • FastAPI API
  • Streamlit Dashboard

Core Engine

  • Repo Discovery
  • .gitignore Filtering
  • Token Budgeting
  • Markdown Renderer

AI Layer

  • pydantic-ai Agent
  • OpenRouter Provider
  • Focus + Patch Workflows

Analysis

  • Architecture Analyzer
  • Complexity
  • Security (Bandit)
  • Documentation
  • Performance
  • Code Smells
  • Patterns
  • Test Coverage

Outputs

  • Markdown Contexts
  • JSON Snapshots
  • DuckDB History DB
  • Reports

Capabilities

8 Core Capabilities

  • Token-Budgeted Flattening [core] · Smart selection to stay within context limits
  • AI Focus (OpenRouter) [integration] · Reduces context to task-relevant files via LLM agents
  • Drift-Safe Patching [security] · Applies AI suggestions with structural safety checks
  • Directory Dependencies [core] · Extracts specific folders and their required imports
  • Textual TUI [dx] · Terminal UI for interactive context management
  • Git History Analytics [core] · Snapshots + DuckDB dashboards for evolution tracking
  • Static Analysis Reports [core] · 9 categorized analyzers output actionable insights
  • Duplicate Detection [dx] · Finds copy-pasted blocks across languages and repos
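As one example, the import extraction behind the directory-dependencies capability can be approximated with the stdlib `ast` module. This is a simplified sketch, and `imported_modules` is a hypothetical name, not the package's actual API:

```python
import ast

def imported_modules(source: str) -> set[str]:
    """Collect top-level module names imported by a Python source file.
    A simplified stand-in for ctx-flatten's dependency extraction."""
    mods = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            # Relative imports (module is None) are skipped in this sketch.
            mods.add(node.module.split(".")[0])
    return mods
```

Mapping those names back onto folders in the repo is what lets a directory extract pull in the files it actually depends on.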

Tech Stack

Core

  • Python 3.9+ · Single-package core engine
  • Rich · Pretty CLI output + diagnostics

Interfaces

  • Typer · CLI command surface
  • Textual · Interactive TUI
  • FastAPI · HTTP API surface
  • Streamlit · History dashboard UI

AI

  • pydantic-ai · Agent abstraction for focus/patch
  • OpenRouter · Multi-model LLM gateway

Analysis

  • Bandit · Security linting
  • McCabe · Cyclomatic complexity metrics

Storage

  • DuckDB · Portable analytics DB for history
  • JSON · Snapshot outputs and interchange

Tradeoffs & Decisions

Why a modular monolith instead of separate services?

All surfaces depend on the same deterministic core: discovery, filtering, budgeting, and rendering. Keeping it in one package avoids duplicated logic and drift. Optional features (AI, analysis, history, dashboard) stay isolated behind dependency groups.

Alternatives: Separate microservices · Multiple repos · Plugin-only architecture
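The dependency-group isolation might look like this in `pyproject.toml`. The group and pin lists here are illustrative, not ctx-flatten's published extras:

```toml
[project.optional-dependencies]
ai = ["pydantic-ai"]
analysis = ["bandit", "mccabe"]
history = ["duckdb"]
dashboard = ["streamlit"]
```

A plain `pip install ctx-flatten` then stays lean, while `pip install "ctx-flatten[ai,history]"` pulls in only the features you use.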

Why drift detection for patches?

I want “apply the change to this symbol” to be safe. Drift detection prevents edits from landing on the wrong file or an outdated structure, which reduces the chance of corrupting working code.

Alternatives: Blind text replacement · Manual patching · Full file overwrite

Why OpenRouter + pydantic-ai for AI features?

OpenRouter provides model choice without vendor lock-in. pydantic-ai gives me structured prompts and predictable tool boundaries so focus/patch workflows stay controllable and testable.

Alternatives: Direct OpenAI API · Direct Anthropic API · Local-only models

Why DuckDB for history analytics?

I wanted history analytics without running a server DB. DuckDB gives fast SQL over snapshot files while keeping everything local and portable.

Alternatives: SQLite only · PostgreSQL · Flat JSON only

Challenges

Token budgets force tradeoffs between completeness and relevance

I built token-budgeted selection + filtering so users can tune output for their model and task without hand-curating files.
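A minimal sketch of budget-aware selection, assuming a crude four-characters-per-token estimate; the real tool's relevance ranking is more sophisticated, and `select_within_budget` is a hypothetical name:

```python
def select_within_budget(files: dict[str, str], budget_tokens: int,
                         chars_per_token: int = 4) -> list[str]:
    """Greedy budget-aware selection: smallest files first so more files fit.
    A simplified stand-in for ctx-flatten's relevance-aware selection."""
    chosen, used = [], 0
    for name, text in sorted(files.items(), key=lambda kv: len(kv[1])):
        cost = max(1, len(text) // chars_per_token)  # crude token estimate
        if used + cost <= budget_tokens:
            chosen.append(name)
            used += cost
    return chosen
```

The budget parameter is what lets users tune the output for a given model's context window instead of hand-curating files.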

AI patches can target stale code and break working trees

I added drift checks so the target must match expected structure before an edit is applied.
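One way to implement such a check, sketched with the stdlib `ast` and `hashlib` modules. This is illustrative only; `structure_fingerprint` and `apply_if_unchanged` are hypothetical names, not ctx-flatten's API:

```python
import ast
import hashlib

def structure_fingerprint(source: str, symbol: str) -> str:
    """Hash the AST of one top-level symbol, ignoring formatting.
    ast.dump() excludes line/column attributes by default, so
    whitespace-only changes produce the same fingerprint."""
    for node in ast.parse(source).body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef,
                             ast.ClassDef)) and node.name == symbol:
            return hashlib.sha256(ast.dump(node).encode()).hexdigest()
    raise LookupError(f"symbol {symbol!r} not found")

def apply_if_unchanged(source: str, symbol: str,
                       expected: str, patched: str) -> str:
    """Refuse to patch when the target drifted since context capture."""
    if structure_fingerprint(source, symbol) != expected:
        raise RuntimeError("drift detected: target changed since capture")
    return patched
```

Capturing the fingerprint alongside the context, then re-checking it at apply time, is what stops an edit from landing on a symbol that has since been rewritten.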

Multiple interfaces risk duplicating business logic

I kept one shared core so CLI, TUI, API, and dashboard stay consistent and testable.

Outcomes

  • I shipped four surfaces (CLI, TUI, API, dashboard) from one Python core via modular extras
  • I cut prompt bloat on large repos by generating token-budgeted contexts instead of naive full dumps
  • I made patch application safer by verifying structure (drift detection) before applying edits
  • I built nine categorized analyzers (security, complexity, architecture, tests, patterns, docs, performance, smells, coverage) for inspectable reports
  • I added history snapshots + DuckDB so insights can include evolution, not just a single point-in-time scan
  • I implemented duplicate detection across languages using content-agnostic hashing
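The content-agnostic hashing idea can be illustrated with whitespace-normalized sliding windows. This is a sketch of the technique, not the shipped algorithm; indices refer to non-blank normalized lines:

```python
import hashlib

def block_hashes(source: str, window: int = 3) -> dict[str, list[int]]:
    """Hash sliding windows of whitespace-normalized, non-blank lines.
    Language-agnostic because it never parses the source."""
    lines = [ln.strip() for ln in source.splitlines()]
    lines = [ln for ln in lines if ln]  # drop blank lines
    hashes: dict[str, list[int]] = {}
    for i in range(len(lines) - window + 1):
        digest = hashlib.sha1(
            "\n".join(lines[i:i + window]).encode()).hexdigest()
        hashes.setdefault(digest, []).append(i)
    return hashes

def duplicates(source: str, window: int = 3) -> list[list[int]]:
    """Groups of starting indices that share an identical window."""
    return [locs for locs in block_hashes(source, window).values()
            if len(locs) > 1]
```

Because only normalized text is hashed, the same approach finds copy-pasted blocks regardless of the language or the indentation style of each copy.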

Use it on your next repo

Install ctx-flatten, generate a context file, and keep your AI workflows grounded in real code with drift-safe patches and inspectable outputs.