The Big Picture - Interlaced vs. Progressive, Fields vs. Frames, 3:2 Pulldown and Inverse Telecine (Page 1 of 2)

 

Interlaced vs. Progressive

When you're watching your television, if you go up real close to it and watch carefully (but not too long, remember kids, it'll rot your eyes!), you'll notice that the picture sort of "shimmers." Old skool computer users will probably remember how back in the day, when resolutions higher than 800x600 were in the realm of super-high-end workstations, you could sometimes get those higher resolutions if you really tried, but your picture would wind up flickering and shimmering, usually causing groans of disgust and a quick jump back down to a lower resolution. That's interlacing at work.

Progressive video means that every pixel on the screen is refreshed in order (in the case of a computer monitor) or simultaneously (in the case of film). Interlaced video is refreshed to the screen in two passes per frame - first every Even scanline is refreshed (the little gun at the back of your Cathode Ray Tube lights up the correct phosphors on the even-numbered rows of pixels) and then every Odd scanline. This means that while NTSC has a framerate of 29.97, the screen is actually being partially redrawn 59.94 times a second. In other words, a half-frame is being drawn to the screen roughly every 60th of a second. This leads to the notion of fields.
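If it helps to see this in numbers, here's a minimal sketch in Python (using NumPy, with a made-up array standing in for a real frame) of how one frame splits into its two fields. The names are just for illustration.

import numpy as np

# A stand-in for one 720x480 frame: 480 scanlines, 720 pixels each.
frame = np.arange(480 * 720, dtype=np.uint8).reshape(480, 720)

# An interlaced display draws this as two fields, one after the other:
even_field = frame[0::2]   # scanlines 0, 2, 4, ... (240 lines)
odd_field  = frame[1::2]   # scanlines 1, 3, 5, ... (240 lines)

print(even_field.shape, odd_field.shape)   # (240, 720) (240, 720)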

Fields vs. Frames

We already know that a Frame is a complete picture that is drawn onto the screen. But what is a field? A field is one of the two half-frames in interlaced video. In other words, NTSC has a refresh rate of 59.94 Fields per Second. So when you see console systems/games advertise "60 frames a second gameplay" that's actually only a half-truth (unless you hook your Dreamcast up to a computer monitor using the special cable). A TV can only display approximately 60 half-frames a second, so your console is rendering 59.94 Frames a second and then dropping half the scanlines from each one to display on your TV.
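If you like exact numbers, those 29.97 and 59.94 figures are really the ratio 30000/1001 and its double. A quick check, using nothing beyond Python's standard library:

from fractions import Fraction

ntsc_frame_rate = Fraction(30000, 1001)   # ~29.97 Frames per second
ntsc_field_rate = ntsc_frame_rate * 2     # ~59.94 Fields per second

print(float(ntsc_frame_rate))   # 29.97002997002997
print(float(ntsc_field_rate))   # 59.94005994005994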

This is why, when you're watching Anime (traditionally produced on Film, which is 24 Progressive Frames a second) and you see a pan that's done at 59.94 fields a second (such as in the new Sol Bianca), you immediately go "Whoa - that was smooth."

This has very important ramifications when it comes to working with digital video. When working on a computer, it's very easy to resize your video down from 720x480 to something like 576x384 (a simple reduction in frame size). However, if you're working with interlaced video, this is an extremely bad thing. What resizing video to a lower resolution basically does is take a sample of the pixels from the original source and blend them together to create the new pixels (again, that's a gross simplification but it should suffice). This means that when you resize interlaced video you wind up blending scanlines together - scanlines which could be part of two completely different images! For example (there's also a small code sketch after the images that shows the same thing in numbers):

[Image: Asuka frame at its full 720x480 resolution]

[Image: Enlarged portion - notice the interlaced scanlines are distinct]

[Image: The same frame after being resized to 576x384]

[Image: Enlarged portion - notice some scanlines have been blended together... evil scanline blending!]
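To make the blending concrete, here's a rough sketch (Python with NumPy again, and dummy data standing in for the two fields) of what a naive vertical resize does. Real resizers use better filters and different ratios than the simple 2:1 averaging below, but the core problem is the same: scanlines from the two fields get mixed together.

import numpy as np

# Pretend the two fields were captured at two different moments in time:
# even scanlines are dark (0) and odd scanlines are bright (255).
frame = np.zeros((480, 720), dtype=np.float64)
frame[1::2] = 255.0

# A crude vertical 2:1 resize: average each pair of adjacent scanlines.
resized = (frame[0::2] + frame[1::2]) / 2.0

print(frame[:4, 0])     # [  0. 255.   0. 255.]  - the fields are still distinct
print(resized[:2, 0])   # [127.5 127.5]          - the fields have been blended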

This means that you're seriously screwing up your video quality by doing this! If you want to avoid this, you have three options:

The first is probably the easiest but, for some, apparently the least obvious: just edit interlaced! Television is interlaced; it's not an inherently bad thing. If you edit your entire AMV as interlaced, you can then encode it to Interlaced DVD-quality MPEG2, output said file to a tape using a hardware DVD playback card, or output it via other hardware. If you then want to distribute it online, you can apply one of the next two options to it and distribute that. But remember that interlacing is not inherently bad.

The second option is deinterlacing. Deinterlacing is the process of interpolating the pixels in between the scanlines of a single field (in other words, reconstructing a semi-accurate companion field, which turns the field into a full frame). This has several drawbacks. Not only does it take a lot of processing power, it's also inherently error prone. You would need to use this if you have to keep 29.97 as your framerate but for some reason can't handle full-resolution video and need to resize it.
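As a rough illustration of the idea (a simple "bob"-style reconstruction, not any particular deinterlacer's algorithm), here's a Python/NumPy sketch that throws away one field and rebuilds its scanlines by linear interpolation from the field we keep:

import numpy as np

def deinterlace_keep_even(frame):
    # Keep the Even field and re-create the Odd scanlines as the average
    # of the Even scanlines directly above and below them.
    out = frame.astype(np.float64)
    even = out[0::2]
    out[1:-1:2] = (even[:-1] + even[1:]) / 2.0   # interpolate the interior odd lines
    out[-1] = even[-1]                           # bottom line: just repeat its neighbour
    return out.astype(frame.dtype)

frame = np.random.randint(0, 256, size=(480, 720), dtype=np.uint8)
progressive = deinterlace_keep_even(frame)
print(progressive.shape)   # (480, 720)

Notice that the rebuilt scanlines are only a guess at what was there, which is exactly why deinterlacing is inherently error prone.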

The third option is Inverse Telecining. It just so happens that this is the next section of the guide!