DisplayAnalysis
Detect the display artifacts that cause eye strain.
3.8K Python LOC
3 UI Modes
480Hz+ Video Support
6 Metric Types
Overview
I built DisplayAnalysis to quantify display artifacts that are hard to see but easy to feel. It analyzes high-frame-rate captures to detect temporal dithering, PWM backlight flicker, and uniformity issues, then generates a PDF report with plain-language explanations and risk assessments. Under the hood it uses FFT frequency analysis and perceptual (CIELAB) color metrics.
Problem
Display artifacts cause eye strain but are invisible.
Modern displays can use techniques like temporal dithering or PWM backlight dimming that operate below conscious perception—but still cause headaches and fatigue for some people.
I wanted an objective way to answer “is my display doing something weird?” without lab equipment: take a high-speed capture, compute metrics, and generate a report that’s shareable and understandable.
Solution
High-speed video analysis makes the invisible measurable.
I run a repeatable analysis pipeline: extract frames, compute temporal/spatial/color metrics, detect periodic flicker in the frequency domain, then render a PDF.
The output is quantified and explainable: you get numbers, charts, worst-case frames, and thresholds that translate “signal strength” into a practical risk assessment.
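The pipeline stages above can be sketched end to end. Everything here is illustrative, not the project's actual API: the helper names are hypothetical, and synthetic frames stand in for decoded video.

```python
import numpy as np

def extract_frames(n_frames=64, h=8, w=8, seed=0):
    """Stand-in for video decoding: yields synthetic grayscale frames
    with simulated +-1/255 temporal dithering on half the pixels."""
    rng = np.random.default_rng(seed)
    base = rng.uniform(0.4, 0.6, size=(h, w))
    for i in range(n_frames):
        # Every other frame, nudge the odd-indexed pixels up by one 8-bit step.
        toggle = (i % 2) * (1 / 255) * (np.arange(h * w).reshape(h, w) % 2)
        yield np.clip(base + toggle, 0.0, 1.0)

def temporal_metrics(frames):
    """Mean absolute frame-to-frame difference, computed while streaming."""
    prev, diffs = None, []
    for frame in frames:
        if prev is not None:
            diffs.append(np.mean(np.abs(frame - prev)))
        prev = frame
    return {"mean_abs_diff": float(np.mean(diffs)),
            "max_abs_diff": float(np.max(diffs))}

report = temporal_metrics(extract_frames())
print(report)
```

In the real pipeline the same shape applies: a frame generator feeds metric functions, and the resulting numbers feed the PDF renderer.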
Workflow
Capture → Analyze → Report
Record your display with a high-speed camera, run the analysis, and receive a comprehensive PDF report with quantified metrics and risk assessments.
Command Line Interface
analyze-display <input>
Run full analysis pipeline
analyze-display --interactive
Launch guided wizard mode
analyze-display-gui
Launch Tkinter GUI
run_analysis(input_path, **opts)
Programmatic API
Architecture
Modular pipeline architecture separating frame processing, metric calculation, and report generation.
User Interface
- CLI (argparse)
- Interactive Wizard
- Tkinter GUI
Orchestration
- analyze_display.py
- run_analysis()
- Frame Generator
Analysis
- Temporal Metrics
- Spatial Metrics
- Color Metrics
- FFT Flicker Detection
Output
- PDF Reports
- CSV Export
- JSON Export
- Heatmap PNGs
Analysis Capabilities
8 Capabilities
Temporal Stability (MAD/RMS)
Tracks sub-perceptual shimmer and pixel-level toggling
PWM Flicker Analysis
FFT-based frequency detection in captures up to 480Hz and beyond
Dither Pixel Detection
Tracks ±1 pixel variations indicative of temporal dithering
Spatial & Color Uniformity
Block-based CIELAB analysis for backlight bleed and tinting
Interactive Setup Wizard
Guided FPS and ROI selection for non-technical users
Tkinter GUI Interface
Native file selection and ROI confirmation preview
Worst-Case Frame Capture
Automatically identifies and exports peak artifact frames
Docker noVNC Support
Full GUI access inside isolated containers via browser
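As one concrete example of the detection ideas above, a dither check can count pixels whose 8-bit value flips by exactly ±1 between consecutive frames. This is a self-contained sketch with synthetic frames, not the project's implementation:

```python
import numpy as np

def dither_pixel_ratio(frames):
    """Fraction of pixels whose 8-bit value changes by exactly +-1
    between consecutive frames -- a signature of temporal dithering."""
    frames = np.asarray(frames, dtype=np.int16)   # avoid uint8 wraparound
    deltas = np.abs(np.diff(frames, axis=0))
    return float(np.mean(deltas == 1))

# Synthetic clip: a flat gray-100 frame whose even columns toggle
# 100 <-> 101 every frame (a classic FRC dithering pattern).
h, w = 4, 4
frames = []
for i in range(10):
    f = np.full((h, w), 100, dtype=np.uint8)
    if i % 2:
        f[:, ::2] += 1
    frames.append(f)

print(dither_pixel_ratio(frames))  # 0.5 -- half the pixels toggle by ±1
```

The int16 cast matters: diffing raw uint8 frames would wrap 100 − 101 around to 255 and hide the ±1 signal.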
Tech Stack
Language
Python 3.8+
Core implementation, cross-version support
Computer Vision
OpenCV 4.8+
Video/image I/O, frame extraction
scikit-image 0.21+
CIELAB color space conversion
Scientific Computing
NumPy 1.24+
Vectorized array operations, pixel analysis
SciPy 1.10+
FFT for flicker frequency detection
pandas 2.0+
Metrics tabulation and CSV export
Visualization
Matplotlib 3.7+
PDF reports, graphs, heatmaps
Infrastructure
Docker
Containerized CLI and GUI deployment
GitHub Actions
CI/CD across 5 Python versions
Tradeoffs & Decisions
Why FFT over simple threshold detection for flicker?
Frame differencing alone can’t separate motion/content changes from periodic display artifacts. FFT moves the signal into the frequency domain, which makes PWM-like periodic components show up clearly at specific Hz rates.
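A minimal illustration of that idea, run on a synthetic per-frame brightness signal. The function name and signal are hypothetical, and `numpy.fft` stands in here for the project's SciPy FFT:

```python
import numpy as np

def dominant_flicker_hz(brightness, fps):
    """Return the strongest periodic component (Hz) in a per-frame
    mean-brightness signal, ignoring the DC term."""
    sig = brightness - np.mean(brightness)      # remove DC offset
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

# Synthetic PWM-like backlight: at 960 fps, a square wave with a
# 4-frame period (2 frames on, 2 frames off) flickers at 240 Hz.
fps = 960
frame_idx = np.arange(fps)                      # one second of capture
brightness = 0.45 + 0.1 * (frame_idx % 4 < 2)

print(dominant_flicker_hz(brightness, fps))  # 240.0
```

With frame differencing alone this signal would be indistinguishable from content motion; in the frequency domain it collapses to a single dominant bin.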
Why CIELAB over RGB for color uniformity?
RGB distances don’t map to perception. CIELAB is designed to be perceptually uniform: a delta‑E of ~1 corresponds to a just-noticeable difference, and equal distances correspond to roughly equal perceived differences anywhere in the space.
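For illustration, here is the standard sRGB → CIELAB conversion and CIE76 delta‑E written out in NumPy. The project relies on scikit-image's `rgb2lab` for this; the helper names below are mine:

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert sRGB in [0, 1] to CIELAB (D65 white) via the standard formulas."""
    rgb = np.asarray(rgb, dtype=float)
    # 1. Undo the sRGB gamma curve to get linear light.
    linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # 2. Linear RGB -> XYZ (sRGB primaries, D65 white).
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = linear @ m.T
    # 3. Normalize by the D65 white point and apply the Lab transfer function.
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    eps = (6 / 29) ** 3
    f = np.where(xyz > eps, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    fx, fy, fz = f[..., 0], f[..., 1], f[..., 2]
    return np.stack([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)], axis=-1)

def delta_e_76(rgb_a, rgb_b):
    """CIE76 delta-E: Euclidean distance in Lab space."""
    return float(np.linalg.norm(srgb_to_lab(rgb_a) - srgb_to_lab(rgb_b)))

# Equal RGB steps are not equally visible: the same 0.05 gray step is a
# bigger perceptual jump in the shadows than in the highlights.
print(delta_e_76([0.10] * 3, [0.15] * 3))
print(delta_e_76([0.80] * 3, [0.85] * 3))
```

The two prints show why RGB distance is the wrong yardstick: identical RGB deltas yield different delta‑E values, with the darker pair perceptibly farther apart.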
Why three UI modes instead of one?
I wanted it to be usable by both technical and non-technical users. CLI enables automation, the interactive wizard reduces setup mistakes (FPS/ROI), and the GUI provides visual ROI confirmation.
Why PDF reports over interactive dashboards?
I optimized for shareability. PDFs are self-contained, printable, work offline, and make it easy to compare displays or share results with someone else.
Challenges
High-speed video files exceed available RAM for pixel-level temporal analysis
I stream frames through the pipeline instead of loading everything at once, and I surface actionable errors (use ROI selection or frame skipping) when memory becomes a constraint.
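The streaming pattern can be sketched like this, with a synthetic generator standing in for video decoding. Memory use stays constant in clip length because only the previous frame and the accumulators are ever held:

```python
import numpy as np

def frame_stream(n, h, w, seed=1):
    """Stand-in for decoding a video: yields one frame at a time so the
    full clip never has to fit in RAM."""
    rng = np.random.default_rng(seed)
    for _ in range(n):
        yield rng.uniform(size=(h, w))

def streaming_stats(frames):
    """Accumulate a per-pixel mean and the worst frame-to-frame change
    in O(1) memory with respect to clip length."""
    total = count = 0
    prev, worst = None, (0.0, -1)
    for i, frame in enumerate(frames):
        total, count = total + frame, count + 1
        if prev is not None:
            change = float(np.mean(np.abs(frame - prev)))
            if change > worst[0]:
                worst = (change, i)       # remember the worst-case frame index
        prev = frame
    return total / count, worst

mean_frame, (peak_change, peak_idx) = streaming_stats(frame_stream(100, 8, 8))
```

The same accumulator idea extends to any per-frame metric, which is what makes ROI cropping and frame skipping natural fallbacks when even single frames are large.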
ROI preview requires display environment but tool runs headless in Docker/SSH
I detect when a display environment isn’t available and fall back gracefully, and I added an optional noVNC image when GUI access in a container is necessary.
OpenCV uses BGR while scikit-image expects RGB, causing color analysis errors
I made the color conversion pipeline explicit (BGR → RGB + normalization) so CIELAB computations stay correct and reproducible.
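A sketch of that explicit conversion step (the helper name is illustrative):

```python
import numpy as np

def bgr_to_rgb01(frame_bgr):
    """Make channel order and range explicit before color math:
    OpenCV decodes to uint8 BGR, while Lab conversion expects
    float RGB in [0, 1]."""
    rgb = frame_bgr[..., ::-1]                # BGR -> RGB (reverse last axis)
    return rgb.astype(np.float64) / 255.0     # uint8 -> float in [0, 1]

# A pure-red pixel as OpenCV stores it: (B, G, R) = (0, 0, 255).
pixel = np.array([[[0, 0, 255]]], dtype=np.uint8)
print(bgr_to_rgb01(pixel))  # [[[1. 0. 0.]]]
```

Skipping either step corrupts the analysis silently: wrong channel order swaps the a*/b* axes, and un-normalized uint8 values blow up the Lab transfer function.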
Users don’t know what FPS their slow-motion camera actually captured at
I added an interactive prompt with common camera examples plus an `--override-fps` escape hatch so analysis isn’t silently wrong.
Outcomes
- I detect temporal dithering patterns down to subtle pixel-level toggling using temporal analysis
- I identify PWM flicker frequencies by extracting dominant periodic components via FFT
- I generate multi-page PDF reports with per-frame metrics, worst-case captures, and heatmaps
- I support inputs from normal videos to high-speed captures (240–480Hz+)
- I ship three ways to run it: CLI automation, an interactive wizard, and a lightweight GUI for ROI selection
- I provide containerized Docker usage (including optional noVNC) for headless environments
Investigate your display
If you’re experiencing eye strain, headaches, or fatigue, use this tool to objectively measure your display’s flicker and dithering artifacts.