Optimizing Video Pipelines for Interlaced RGB Signals
Interlaced RGB remains relevant in specific broadcast, archival, and some professional video workflows despite the dominance of progressive formats. Optimizing a video pipeline for interlaced RGB requires careful handling at every stage — capture, color management, processing, encoding, and display — to preserve temporal integrity, minimize artifacts (like combing and chroma crawl), and ensure color fidelity. This article walks through practical strategies, trade-offs, and implementation details for engineers and technical operators working with interlaced RGB material.
What “Interlaced RGB” Means
Interlaced video splits each frame into two fields captured at different moments in time: one containing the odd scan lines, the other containing the even scan lines. When the color representation is RGB (separate red, green, and blue channels per pixel) rather than Y’CbCr, chroma subsampling issues are avoided but other challenges arise: each field contains full-color information but at half vertical resolution and a temporal offset relative to the complementary field.
Key fact: Interlaced RGB has full-color channels per line but temporal displacement between fields, which can cause motion artifacts if treated as progressive frames.
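The field structure can be made concrete in a few lines. This is a minimal sketch assuming numpy frames of shape (height, width, 3); `split_fields` is a hypothetical helper, not part of any standard API:

```python
import numpy as np

def split_fields(frame: np.ndarray, top_field_first: bool = True):
    """Split an interlaced RGB frame (H, W, 3) into its two fields.

    Each field has half the vertical resolution and represents a
    different capture instant; returned in (earlier, later) order.
    """
    top = frame[0::2]     # even-indexed scan lines (0, 2, 4, ...)
    bottom = frame[1::2]  # odd-indexed scan lines (1, 3, 5, ...)
    return (top, bottom) if top_field_first else (bottom, top)

# A toy 8-line x 4-pixel RGB frame:
frame = np.arange(8 * 4 * 3, dtype=np.float64).reshape(8, 4, 3)
first, second = split_fields(frame, top_field_first=True)
assert first.shape == (4, 4, 3) and second.shape == (4, 4, 3)
```

Treating `frame` as a single progressive image would mix the two capture instants; every field-aware operation below preserves this split.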
Pipeline Overview and Core Goals
Primary objectives when optimizing a pipeline:
- Preserve temporal relationships between fields.
- Avoid introducing deinterlacing artifacts where unnecessary.
- Maintain color accuracy through linearization, gamut mapping, and proper transfer functions.
- Minimize bitrate or storage overhead while retaining quality for intended delivery formats.
- Ensure efficient real-time or batch processing depending on workflow.
A typical pipeline includes: capture → color conversion & normalization → field-aware processing (filtering, scaling, stabilization) → deinterlacing or field-aware encoding → delivery/display-specific conversion.
Capture & Ingest
- Use hardware that supports field-accurate capture and can tag field order (top-field-first or bottom-field-first). Mislabeled field order is the most common cause of combing after any field-aware operations.
- Capture at full RGB precision (10–12 bits) where possible to preserve headroom for color grading and chroma operations.
- Preserve metadata: field order, interlace flags, color space (e.g., Rec.709, Rec.601), transfer characteristic, and pixel aspect ratio.
Practical tip: Maintain an ingest checklist that verifies field order and color space with short automated test patterns to catch misconfigurations early.
Color Management
- Keep color pipeline operations in a linear light working space when performing scaling, temporal filtering, or compositing to avoid non-linear artifacts.
- Convert from camera-native RGB to a standardized working space (e.g., linear Rec.709 RGB or ACES2065-1) with accurate inverse transfer functions.
- When moving between RGB and Y’CbCr (for compatibility with certain encoders or delivery formats), choose conversion matrices and chroma placement that match the interlaced semantics and preserve field alignment. Avoid chroma subsampling prior to any field-sensitive processing unless you intend to accept the subsampling artifacts.
Example priority: linearize RGB → process → convert to target transfer and color space.
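That ordering can be sketched with the Rec.709 piecewise transfer function and its inverse. A simplified numpy sketch; `process_in_linear` is a hypothetical wrapper, and production pipelines would typically go through a color-management library rather than hand-coded transforms:

```python
import numpy as np

def rec709_to_linear(v: np.ndarray) -> np.ndarray:
    """Invert the Rec.709 OETF (linear toe below ~0.081)."""
    return np.where(v < 0.081, v / 4.5, ((v + 0.099) / 1.099) ** (1 / 0.45))

def linear_to_rec709(l: np.ndarray) -> np.ndarray:
    """Apply the Rec.709 OETF (linear toe below ~0.018)."""
    return np.where(l < 0.018, 4.5 * l, 1.099 * l ** 0.45 - 0.099)

def process_in_linear(rgb: np.ndarray, op) -> np.ndarray:
    """linearize -> process -> return to Rec.709 encoding."""
    return linear_to_rec709(op(rec709_to_linear(rgb)))

# e.g. a horizontal 2x downscale by averaging, done in linear light:
# half = process_in_linear(img, lambda x: (x[:, 0::2] + x[:, 1::2]) / 2)
```

Averaging pixel pairs in the non-linear domain would darken edges; wrapping the operation between the inverse and forward transfer avoids that.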
Field-Aware Processing
Many image-processing operations assume progressive frames; applying them without adaptation to interlaced RGB causes cross-field contamination and motion artifacts.
- Filtering and Denoising:
- Use field-aware spatial filters that operate separately on each field to avoid mixing temporally distinct content.
- Temporal denoising should reference corresponding fields (e.g., past odd-field to present odd-field) rather than whole-frame neighbors.
- Scaling:
- Vertical resampling must respect field structure. For integer scaling factors that preserve field alignment, process per-field; for non-integer scaling, consider field-aware interpolation algorithms that reconstruct a temporary progressive representation per field or use motion-compensated resampling.
- Stabilization & Motion Estimation:
- Estimate motion using field-corrected sequences or perform motion estimation on progressive reconstructions created from field pairs with careful deinterlacing.
- Compositing:
- Composited elements should be field-matched to the source footage. For overlays (graphics, text), render them at field rate if they interact with interlaced content to avoid jitter.
Practical example: A temporal denoiser implemented for interlaced footage should accept the field index and only compare like-indexed fields across frames.
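A minimal sketch of that like-indexed-field constraint, assuming numpy frames and using a simple exponential blend in place of a real motion-gated denoiser (`temporal_denoise_field_aware` is a hypothetical name):

```python
import numpy as np

def temporal_denoise_field_aware(frames, strength=0.5):
    """Temporal smoothing that only mixes like-indexed fields.

    `frames` is a list of interlaced RGB frames (H, W, 3). Each output
    frame blends a field only with the same-parity field (odd with odd,
    even with even) of the previous output frame, so the two capture
    instants inside a frame are never mixed. A sketch only: production
    denoisers also gate the blend on detected motion.
    """
    out = [frames[0].astype(np.float64)]
    for cur in frames[1:]:
        cur = cur.astype(np.float64)
        prev = out[-1]
        blended = np.empty_like(cur)
        for parity in (0, 1):  # process each field separately
            blended[parity::2] = (
                (1 - strength) * cur[parity::2] + strength * prev[parity::2]
            )
        out.append(blended)
    return out

# A static scene passes through unchanged (no temporal detail to lose):
static = [np.full((4, 4, 3), 10.0) for _ in range(3)]
assert all((f == 10.0).all() for f in temporal_denoise_field_aware(static))
```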
Deinterlacing Strategies
Whether to deinterlace depends on final delivery. If the target is an interlaced broadcast chain, avoid deinterlacing altogether. For progressive delivery (web, file-based archival), choose a deinterlacing method that balances artifact removal and detail preservation.
Options:
- Bob deinterlacing: Upsamples each field to full frame height independently. Preserves temporal resolution and avoids combing but reduces vertical detail and can introduce flicker.
- Weave deinterlacing: Combines two fields into a full frame. Preserves vertical detail but creates combing on motion.
- Motion-compensated deinterlacing (MCI): Uses motion estimation to adaptively blend/weave/bob or synthesize missing lines. Best quality for complex motion but computationally expensive and requires robust motion vectors (which are harder to compute accurately on noisy or heavily compressed sources).
- Field-merging with intelligent edge-directed interpolation: Lower complexity than MCI, better than simple bob/weave for moderate motion.
Recommendation: For high-quality archival/progressive masters, use motion-compensated deinterlacing with RGB input in linear light; store both the original interlaced master and the deinterlaced progressive master.
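Bob and weave, the two baseline strategies above, reduce to a few lines on numpy fields (half-height (H/2, W, 3) arrays). A sketch: production bob filters interpolate (linearly or edge-directed) rather than line-double, and the function names here are hypothetical:

```python
import numpy as np

def weave(field_a, field_b, top_field_first=True):
    """Interleave two half-height fields back into one full frame.

    Lossless for static content; produces combing wherever the two
    capture instants disagree (i.e., under motion).
    """
    h, w, c = field_a.shape
    top, bottom = (field_a, field_b) if top_field_first else (field_b, field_a)
    frame = np.empty((2 * h, w, c), dtype=field_a.dtype)
    frame[0::2] = top
    frame[1::2] = bottom
    return frame

def bob(field):
    """Upsample one field to full frame height by line doubling.

    Avoids combing and keeps field rate, at the cost of vertical detail.
    """
    return np.repeat(field, 2, axis=0)

# Weaving the fields of a static frame reconstructs it exactly:
frame = np.arange(8 * 4 * 3, dtype=np.float64).reshape(8, 4, 3)
assert (weave(frame[0::2], frame[1::2]) == frame).all()
assert bob(frame[0::2]).shape == frame.shape
```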
Encoding Considerations
Many modern encoders expect or perform better with Y’CbCr input; however, converting RGB→Y’CbCr must be done carefully:
- Convert after field-aware processing and after any linear-light operations have concluded.
- Use full-range vs limited-range appropriately according to target distribution (full range typically for file-based storage; limited for broadcast).
- Chroma subsampling: 4:4:4 preserves full chroma but increases bitrate. For interlaced RGB sources, if the final consumer format is Y’CbCr 4:2:2 or 4:2:0, perform chroma subsampling after field-aware filtering to prevent cross-field chroma artifacts.
- Bit-depth: Keep high bit-depth (10–12 bits) through processing; many encoders support 10-bit YUV, which is a practical compromise between quality and bitrate.
- GOP structure and field order: Encoders that consider field order (field-coded GOPs) should be configured accordingly to preserve temporal prediction efficiency.
Example: For broadcast MPEG-2/IMX/H.264 interlaced workflows, prefer 4:2:2 10-bit encodes with correct field-order flags and closed GOP settings as required by the delivery specs.
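The R'G'B' → limited-range 10-bit Y'CbCr step can be sketched with the BT.709 luma coefficients. A simplified illustration (`rgb_to_ycbcr709_10bit` is hypothetical); real encoders additionally handle clipping, dithering, and chroma subsampling:

```python
import numpy as np

def rgb_to_ycbcr709_10bit(rgb):
    """Convert non-linear Rec.709 R'G'B' in [0, 1] to limited-range
    10-bit Y'CbCr (Y': 64-940, Cb/Cr: 64-960), BT.709 coefficients."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b   # BT.709 luma weights
    cb = (b - y) / 1.8556                       # scaled color differences
    cr = (r - y) / 1.5748
    y10 = np.round(64 + 876 * y).astype(np.uint16)
    cb10 = np.round(512 + 896 * cb).astype(np.uint16)
    cr10 = np.round(512 + 896 * cr).astype(np.uint16)
    return np.stack([y10, cb10, cr10], axis=-1)

# Reference white maps to (940, 512, 512); black to (64, 512, 512):
assert (rgb_to_ycbcr709_10bit(np.ones((1, 1, 3))) == [940, 512, 512]).all()
```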
Display & Delivery
- For interlaced delivery (traditional broadcast), ensure metadata flags (interlace flag, field order) are intact and match the chain.
- For progressive delivery, provide a deinterlaced master and consider also delivering the original interlaced master for archival or repurposing.
- On consumer displays, modern TVs often deinterlace internally; test on target hardware as their deinterlacers vary widely and may introduce artifacts. When possible, provide progressive masters to avoid relying on device deinterlacers.
Quality Assurance & Testing
- Automated QA: Implement frame/field comparison tools that detect combing, field-order swaps, and incorrect chroma alignment. Use objective metrics (PSNR, SSIM) on deinterlaced outputs compared to ground-truth progressive captures when available.
- Visual tests: Use motion test patterns, slanted-edge charts, and chroma-check patterns to verify correct field handling and color fidelity.
- Regression tests: Keep short canonical samples (static, motion, high-frequency detail, and skin-tones) that can be re-run through the pipeline after any change.
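One way to detect combing automatically (and, by comparing candidate field pairings, to flag field-order swaps) is a vertical-opposition metric. `combing_energy` is a hypothetical, deliberately crude detector for illustration, not a production QA metric:

```python
import numpy as np

def combing_energy(frame):
    """Crude combing metric: how strongly each scan line deviates from
    both vertical neighbors in the same direction. High on weave
    artifacts (alternating lines), near zero on smooth content."""
    f = frame.astype(np.float64)
    a, b, c = f[:-2], f[1:-1], f[2:]  # three consecutive lines
    # (a - b) * (c - b) is positive only when b differs from both
    # neighbors in the same direction, the signature of combing.
    return np.mean(np.maximum(0.0, (a - b) * (c - b)))

# Smooth vertical gradient: no combing detected.
grad = np.tile(np.arange(8.0)[:, None, None], (1, 4, 3))
assert combing_energy(grad) == 0.0
# Alternating-line pattern (mis-weaved motion): large energy.
comb = np.zeros((8, 4, 3)); comb[1::2] = 255.0
assert combing_energy(comb) > 1000
```

In a QA harness, a sudden jump in this energy after a weave, or a lower energy when the assumed field order is swapped, is a strong hint of mislabeled field order.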
Performance Considerations
- Real-time systems may need simpler deinterlacing (bob, edge-directed) or hardware acceleration (ASIC/FPGA/SoC deinterlacers) to meet latency targets.
- Use hardware that supports field-aware color conversion and chroma subsampling to reduce CPU/GPU load.
- For batch transcoding, prefer higher-quality algorithms (motion compensation, per-field linear processing) and parallelize by clip or scene where possible.
Practical Case Study (Broadcast-to-Web Transcode)
- Ingest interlaced 10-bit RGB with field-order metadata verified.
- Convert to linear working space (linear Rec.709 RGB).
- Perform field-aware denoising and stabilization.
- Apply color grading in linear space; apply LUTs that preserve skin tone accuracy.
- Motion-compensated deinterlace to progressive 1080p at target frame rate (e.g., convert 59.94i to 29.97p using MCI that preserves motion cadence).
- Convert to Rec.709 Y’CbCr 10-bit, 4:2:2 for master encode.
- Produce web delivery: re-encode to 8-bit 4:2:0 H.264/H.265 at appropriate bitrate and framerate; provide original interlaced master as archive.
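The ordering in the case study can be strung together schematically. This toy `transcode_frame` (a hypothetical name) substitutes a simple bob for MCI and elides denoising and grading, but it shows the order of operations: linearize, split fields, deinterlace per field, re-apply the transfer, then derive luma with BT.709 weights:

```python
import numpy as np

def rec709_inv_oetf(v):
    return np.where(v < 0.081, v / 4.5, ((v + 0.099) / 1.099) ** (1 / 0.45))

def rec709_oetf(l):
    return np.where(l < 0.018, 4.5 * l, 1.099 * l ** 0.45 - 0.099)

def transcode_frame(frame, top_field_first=True):
    """One interlaced R'G'B' frame (H, W, 3) -> two progressive luma
    images, illustrating the case-study order of operations.

    Schematic only: real pipelines insert field-aware denoising and
    grading between linearization and deinterlacing, and use MCI
    instead of this line-doubling bob.
    """
    lin = rec709_inv_oetf(frame)  # work in linear light
    fields = (lin[0::2], lin[1::2]) if top_field_first else (lin[1::2], lin[0::2])
    out = []
    for field in fields:
        prog = np.repeat(field, 2, axis=0)            # bob to full height
        rgb = rec709_oetf(prog)                       # back to display encoding
        y = rgb @ np.array([0.2126, 0.7152, 0.0722])  # BT.709 luma
        out.append(y)
    return out

frames = transcode_frame(np.full((8, 4, 3), 0.5))
assert len(frames) == 2 and frames[0].shape == (8, 4)
```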
Common Pitfalls
- Ignoring field order metadata — results in combing.
- Performing spatial filters across fields — causes temporal smearing.
- Converting to Y’CbCr and subsampling before field-aware processing — produces chroma crawl.
- Relying on consumer displays’ deinterlacers for quality-sensitive deliveries.
Summary
Optimizing pipelines for interlaced RGB requires treating fields as first-class citizens: capture and carry field metadata, perform color operations in an appropriate working space, apply field-aware spatial and temporal processing, and choose deinterlacing and encoding strategies aligned with final delivery needs. Balancing quality, compute cost, and delivery constraints will guide whether to preserve interlaced masters, produce progressive conversions, or deliver both.