
DisplayAnalysis

Detect the display artifacts that cause eye strain.

DisplayAnalysis is a Python tool that quantifies subtle display artifacts that are hard to see but easy to feel. Feed it a high-frame-rate capture (video or image sequence), and it computes temporal instability, flicker signatures, and uniformity metrics, then produces a PDF report that’s meant to be shareable and readable.

At a glance (from the codebase)

| Metric | What it looks like |
| --- | --- |
| Scale | ~6.1K LOC |
| Entrypoints | analyze-display, analyze-display-interactive, analyze-display-gui |
| Core pipeline | src/display_analysis/analyze_display.py |
| Reporting | src/display_analysis/reporting.py (multi-page PDF) |
| Input modes | Video files and image sequences |

Measured numbers (2026-01-16)

  • Package version: 1.1.0 (pyproject.toml).
  • Python requirement: >= 3.8.
  • cloc (core surface: src/, tests/, README.md, QUICKSTART.md, pyproject.toml): 3,161 LOC across 16 files.
  • cloc (full repo, excluding .git,node_modules,dist,build,.venv): 6,154 LOC across 30 files.
  • CLI defaults: center_crop_ratio=0.85, uniformity_block_size=16, skip=0 (arg defaults in src/display_analysis/analyze_display.py).
  • Python test files: 6 (tests/*.py).

Problem

Modern displays can use techniques like temporal dithering or PWM backlight dimming that operate below conscious perception but can still cause headaches and fatigue for some people.

I wanted an objective way to answer “is my display doing something weird?” without lab equipment: take a high-speed capture, compute metrics, and generate a report that’s shareable and understandable.

Constraints

  • Needs to work with consumer capture setups (phone slow-mo, high-FPS cameras).
  • FPS must be configurable because slow-motion metadata is often misleading.
  • ROI selection must be supported (full-frame analysis is slow and noisy).
  • Must work headless for automation, but also support an interactive setup flow.
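
The FPS constraint is worth making concrete. Below is a simplified sketch of how an effective FPS can be resolved from OpenCV metadata plus an optional user override; the function name and signature are illustrative, not the actual code in the pipeline.

```python
from typing import Optional

import cv2


def effective_fps(video_path: str, override: Optional[float] = None) -> float:
    """Resolve the FPS used for analysis.

    Slow-motion clips often report their playback rate (e.g. 30 fps) in
    metadata even when captured at 240+ fps, so an explicit override wins.
    Illustrative sketch; the real pipeline also validates the input file first.
    """
    if override is not None and override > 0:
        return float(override)

    cap = cv2.VideoCapture(video_path)
    try:
        fps = cap.get(cv2.CAP_PROP_FPS)  # may be the playback rate, not the capture rate
    finally:
        cap.release()

    if fps <= 0:
        raise ValueError(f"Could not read FPS from {video_path}; pass an explicit override.")
    return float(fps)
```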

Solution

Run a repeatable analysis pipeline:

  1. Validate input, determine effective FPS (or accept an override).
  2. Choose/confirm a Region of Interest (ROI).
  3. Extract frames, compute per-frame metrics.
  4. Detect periodic components via FFT (flicker/PWM signatures).
  5. Generate a PDF plus CSV/JSON exports for deeper analysis.

The output is quantified and explainable: numbers, charts, worst-case frames, and thresholds that translate “signal strength” into a practical risk assessment.
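
As a concrete illustration of step 4, here is a simplified sketch of FFT-based flicker detection on a per-frame ROI mean-brightness trace (SciPy FFT, matching the tech stack below). The function name and the crude modulation-depth formula are illustrative, not the exact implementation in analyze_display.py.

```python
import numpy as np
from scipy.fft import rfft, rfftfreq


def dominant_flicker(mean_brightness: np.ndarray, fps: float):
    """Find the dominant periodic component in a per-frame ROI brightness trace.

    Assumes `mean_brightness` is a 1-D array sampled at `fps` frames per second.
    Illustrative sketch; names and formulas differ from the actual implementation.
    """
    signal = mean_brightness - mean_brightness.mean()     # remove the DC level
    spectrum = np.abs(rfft(signal))
    freqs = rfftfreq(len(signal), d=1.0 / fps)

    peak = int(np.argmax(spectrum[1:])) + 1               # skip the DC bin
    dominant_hz = float(freqs[peak])

    # Crude modulation depth: peak-to-peak swing relative to the mean level.
    depth = float(
        (mean_brightness.max() - mean_brightness.min()) / max(mean_brightness.mean(), 1e-9)
    )
    return dominant_hz, depth
```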

Architecture

```mermaid
flowchart TB
  subgraph UI["Interfaces"]
    CLI["CLI"]
    WIZ["Interactive mode"]
    GUI["Tkinter GUI (optional)"]
  end

  UI --> PIPE["Analysis pipeline<br/>frames → metrics → report"]
  PIPE --> TEMP["Temporal metrics<br/>MAD/RMS • dither pixel count"]
  PIPE --> FLICK["Flicker metrics<br/>FFT dominant frequency • modulation depth"]
  PIPE --> UNIF["Uniformity metrics<br/>CIELAB block uniformity"]
  PIPE --> OUT["Outputs<br/>PDF • CSV • JSON • heatmaps"]
```

```mermaid
flowchart LR
  Capture[Capture high-speed video] --> ROI[Select ROI + FPS]
  ROI --> Extract[Extract frames]
  Extract --> Metrics[Compute metrics + FFT]
  Metrics --> Report[Generate PDF report]
```

Evidence (placeholders)

  • Screenshot (TODO): case-studies/display-analysis/pdf-summary.png
    • Capture: first page of display_analysis_report.pdf (summary + key metrics).
    • Alt text: “PDF report summary page showing key temporal and flicker metrics.”
    • Why it matters: proves the output is shareable and readable without running the tool.
  • Screenshot (TODO): case-studies/display-analysis/pdf-flicker-fft.png
    • Capture: the PDF page that shows the FFT flicker plot (dominant frequency and magnitude).
    • Alt text: “PDF report page showing FFT-based flicker analysis plot.”
    • Why it matters: supports the claim that flicker is detected in the frequency domain.
  • Screenshot (TODO): case-studies/display-analysis/roi-selection.png
    • Capture: ROI selection/preview (GUI or interactive preview) showing the selected analysis region.
    • Alt text: “ROI selection preview showing the area used for analysis.”
    • Why it matters: supports the claim that ROI is a first-class input and affects results.
  • Screenshot (TODO): case-studies/display-analysis/cli-run.png
    • Capture: a CLI run showing effective FPS selection and the output directory artifacts created.
    • Alt text: “CLI output showing analysis run configuration and generated outputs.”
    • Why it matters: supports the claim that the tool works headless and emits multiple artifacts.

Deep dive: why it’s more than “run FFT”

1) ROI and FPS are first-class inputs

The tool is built around the idea that your capture setup matters. The CLI supports:

  • interactive configuration (--interactive / analyze-display-interactive),
  • optional ROI preview confirmation (or --skip-preview for headless runs),
  • and FPS overrides for slow-motion capture workflows.
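
For headless runs without an interactive ROI, a centered crop is the natural fallback. Below is a simplified sketch of deriving one from the documented center_crop_ratio=0.85 default; the helper itself is illustrative, not the project's API.

```python
def center_crop_roi(width: int, height: int, ratio: float = 0.85):
    """Return (x, y, w, h) of a centered rectangle covering `ratio` of each dimension.

    The 0.85 default mirrors the documented center_crop_ratio; the function
    itself is an illustrative sketch, not the project's API.
    """
    w = max(1, int(round(width * ratio)))
    h = max(1, int(round(height * ratio)))
    x = (width - w) // 2
    y = (height - h) // 2
    return x, y, w, h
```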

2) Temporal vs spatial artifacts are treated differently

The core implementation (src/display_analysis/analyze_display.py) separates:

  • Temporal: frame-to-frame deltas (MAD/RMS) and counts of pixels toggling by ±1 code value (the “dither-ish” signature).
  • Spatial: per-frame texture/noise and block-based uniformity.
  • Frequency-domain: flicker extraction via FFT of ROI mean brightness.
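
To make the temporal bullet concrete, here is a simplified sketch of per-frame-pair metrics; the function name and exact aggregation are illustrative, not the code in analyze_display.py.

```python
import numpy as np


def temporal_metrics(prev: np.ndarray, curr: np.ndarray):
    """Per-frame-pair temporal metrics for two grayscale frames of equal shape.

    Illustrative only; the real implementation's names and aggregation differ.
    """
    delta = curr.astype(np.float64) - prev.astype(np.float64)

    mad = float(np.abs(delta).mean())          # mean absolute difference
    rms = float(np.sqrt((delta ** 2).mean()))  # root-mean-square difference

    # "Dither-ish" activity: pixels that toggled by exactly +/-1 code value,
    # the signature of temporal dithering on an otherwise static scene.
    dither_pixels = int(np.count_nonzero(np.abs(delta) == 1))
    return mad, rms, dither_pixels
```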

3) Reports are designed to be read, not just generated

PDF generation is a real subsystem (src/display_analysis/reporting.py), including:

  • multi-page layout,
  • charts,
  • and a “worst-case examples” view so you can visually sanity-check what the metrics claim.
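
A toy version of that layout using Matplotlib's PdfPages: a summary page plus a brightness-over-time chart. Page contents here are placeholders; the actual reporting module is considerably richer.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages


def write_report(pdf_path: str, brightness: np.ndarray, fps: float) -> None:
    """Write a two-page PDF: a text summary page and a brightness-over-time chart.

    Toy sketch only; the project's reporting.py builds a richer multi-page report.
    """
    with PdfPages(pdf_path) as pdf:
        # Page 1: headline summary.
        fig, ax = plt.subplots(figsize=(8.5, 11))
        ax.axis("off")
        ax.text(0.1, 0.92, "Display Analysis Report", fontsize=18, weight="bold")
        ax.text(0.1, 0.87, f"Frames analyzed: {len(brightness)} at {fps:.0f} fps")
        pdf.savefig(fig)
        plt.close(fig)

        # Page 2: ROI mean brightness over time.
        fig, ax = plt.subplots(figsize=(8.5, 5))
        t = np.arange(len(brightness)) / fps
        ax.plot(t, brightness)
        ax.set_xlabel("Time (s)")
        ax.set_ylabel("ROI mean brightness")
        pdf.savefig(fig)
        plt.close(fig)
```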

Tech stack

| Area | Choices |
| --- | --- |
| Language | Python 3.8+ |
| Vision | OpenCV, scikit-image |
| Math | NumPy, SciPy (FFT) |
| Reporting | Matplotlib, PDF generation |
| Packaging | Docker for headless/containerized use |

Key decisions

  • FFT for flicker/PWM: frequency-domain analysis isolates periodic brightness changes.
  • CIELAB uniformity: a perceptual color space makes uniformity/tint metrics meaningful (see the sketch after this list).
  • Multiple run modes: CLI for automation, interactive mode for guided setup, Tkinter GUI for ROI confirmation.
  • PDF as primary report artifact: optimized for sharing and comparing runs.
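
To illustrate the CIELAB decision, here is a simplified block-uniformity sketch using scikit-image's rgb2lab with the documented 16-px block default; the aggregation (spread of per-block L*a*b* means) is illustrative, not the exact formula in the code.

```python
import numpy as np
from skimage.color import rgb2lab


def block_uniformity(frame_rgb: np.ndarray, block: int = 16) -> float:
    """Block-based uniformity score in CIELAB; larger means less uniform.

    Assumes `frame_rgb` is an HxWx3 RGB image (float in [0, 1]). The 16-px
    block mirrors the documented uniformity_block_size default; the metric
    itself is an illustrative sketch.
    """
    lab = rgb2lab(frame_rgb)
    h, w, _ = lab.shape

    block_means = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = lab[y:y + block, x:x + block].reshape(-1, 3)
            block_means.append(patch.mean(axis=0))

    block_means = np.asarray(block_means)
    # Spread of per-block L*a*b* means: tint or vignetting shows up as a larger spread.
    return float(np.linalg.norm(block_means.std(axis=0)))
```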

Tradeoffs

  • Results depend on capture setup (FPS, motion, exposure), and the tool can only analyze what the camera captured.
  • Full-frame analysis is expensive; ROI is usually required for practical runtimes.

Security and reliability

  • Inputs are local files; the primary safety concern is correctness and reproducibility rather than network threat models.

Testing and quality

  • Tests live under tests/ and validate core metric/reporting behaviors.
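
For flavor, a synthetic-input test in the spirit of the suite (illustrative, not copied from tests/): pure ±1 toggling should register as dither activity without meaningful RMS energy.

```python
import numpy as np


def test_dither_toggles_are_counted():
    """Illustrative unit test: a frame pair differing only by +1 toggles
    should yield a nonzero dither-pixel count and very little RMS energy.
    """
    rng = np.random.default_rng(0)
    prev = np.full((64, 64), 128, dtype=np.uint8)
    curr = prev.copy()
    mask = rng.random(prev.shape) < 0.05          # toggle ~5% of pixels by +1
    curr[mask] += 1

    delta = curr.astype(np.float64) - prev.astype(np.float64)
    dither_pixels = int(np.count_nonzero(np.abs(delta) == 1))
    rms = float(np.sqrt((delta ** 2).mean()))

    assert dither_pixels == int(mask.sum())
    assert rms < 1.0
```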

Outcomes

  • Converts “I feel eye strain” into measurable signals you can compare across settings and devices.
  • Makes analysis shareable (PDF + raw exports) so results can be reviewed without running the tool.
  • Supports both automation (CLI) and human-in-the-loop setup (interactive mode / GUI).