Thursday, July 05, 2007

Compression for Video:
An Introduction



Codec. You've heard the word. What does it mean? It's actually two words stuck together, reflecting both an ideology and a curse of the digital world.

Codec means compression/decompression, and whether you're the sender or the receiver of a particular file, a video or image's compression will be a massive factor in your ability to handle it on either end.

There are many different companies and individuals that have developed their own compression algorithms.

For each compression algorithm, you can adjust its properties to create a more or less compressed file.
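To get a feel for what "more or less compressed" means, here is a minimal sketch in Python. It uses zlib, a general-purpose compressor rather than a video codec, and the sample data and level numbers are just placeholders, but the trade-off it prints (more effort, smaller file) is the same idea a codec's quality settings control.

# A minimal sketch, using zlib as a stand-in for a codec: the same data
# compressed with different settings produces different file sizes.
import zlib

data = b"sandy beach " * 10000  # highly repetitive sample data (placeholder)

for level in (1, 6, 9):  # low, default, and maximum compression settings
    compressed = zlib.compress(data, level)
    print(f"level {level}: {len(data)} bytes -> {len(compressed)} bytes")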

The properties of a particular compressor can be further broken down into compression methods. Most codecs take advantage of some or all of these methods. They are:

· Spatial

· Temporal

· Lossless

· Lossy

· Asymmetrical & Symmetrical



Spatial Compression:

Also called intra-frame compression, this algorithm compresses the visual description of a video frame or image file by looking for patterns and repetition among the image's pixels.

For example, in a picture with a stretch of white sandy beach, spatial compression notices that many of the pixels that make up the beach area are a similar tan-white shade. Rather than describing each of the thousands of subtle variations within an area of sand, spatial compression records a much shorter description, such as "All of the pixels in this area are a light tan color."
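To make the idea concrete, here is a toy run-length sketch in Python. Real spatial codecs work on blocks and frequency transforms rather than literal runs of pixels, and the "tan"/"blue" values below are made up, but the principle of replacing repetition with a short description is the same.

# A toy sketch of the spatial (intra-frame) idea: store a repeated pixel value
# once, along with how many times it repeats, instead of storing every pixel.
def run_length_encode(row):
    """Collapse consecutive identical pixel values into [value, count] pairs."""
    encoded = []
    for pixel in row:
        if encoded and encoded[-1][0] == pixel:
            encoded[-1][1] += 1          # extend the current run
        else:
            encoded.append([pixel, 1])   # start a new run
    return encoded

row = ["tan"] * 500 + ["blue"] * 300     # a strip of beach, then sky
print(run_length_encode(row))            # [['tan', 500], ['blue', 300]]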

As you increase the amount of spatial compression, the data rate and file size decrease, and the image loses sharpness and definition. With many codecs, the amount of spatial compression is controlled by the quality and data rate options. A low value for these options increases spatial compression, resulting in a smaller data or bit rate but a softer image; a high value decreases compression, producing a larger file but a crisper, more vibrant image.

Temporal Compression:

Temporal compression looks for ways to shorten the description of the changes that occur across a sequence of frames. It does this by looking for patterns and repetition over time.

For example, in a video clip of a person speaking in front of a static background, temporal compression notices that the only pixels changing over time are the pixels that make up the speaker's face; the other pixels don't change at all (when the camera is motionless). Instead of describing every pixel in every frame, temporal compression describes all of the pixels in the first frame, and then, for every frame that follows, describes only the pixels that differ from the previous frame. This technique is called frame differencing.
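Here is a minimal sketch of frame differencing in Python. It assumes frames are flat lists of pixel values, which real codecs don't do (they work on motion-compensated blocks), but it shows how little data a mostly static frame needs.

# Frame differencing sketch: keep only the pixels that changed since the
# previous frame, stored as (position, new_value) pairs.
def frame_difference(previous, current):
    """Return the pixels in `current` that differ from `previous`."""
    return [(i, new) for i, (old, new) in enumerate(zip(previous, current)) if old != new]

frame1 = [10, 10, 10, 10, 10]
frame2 = [10, 10, 42, 10, 10]            # only one pixel changed between frames
print(frame_difference(frame1, frame2))  # [(2, 42)]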

If most of the pixels in a frame are different from the previous frame, the codec simply describes the new frame in full, so that every pixel is described or rendered. Each whole rendered frame is referred to as a keyframe, and each new keyframe becomes a starting point for frame differencing. Most codecs allow you to set keyframes at specific intervals, while others allow you to set keyframes at markers you've placed manually in your project timeline window. Some codecs automatically create a new keyframe on any frame that is significantly different from the previous one. If you specify fewer keyframes, the data or bit rate and file size will decrease, but so will your picture quality.
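A keyframe decision can be sketched the same way. The interval and threshold below are arbitrary placeholders rather than values from any particular codec, but they show the two triggers described above: a regular interval, or a frame that differs too much from the one before it.

# Keyframe decision sketch: store a whole frame either at a fixed interval or
# when a large fraction of pixels changed (for example, at a scene cut).
def needs_keyframe(frame_index, changed_fraction, interval=30, threshold=0.6):
    """Decide whether this frame should be stored whole rather than differenced."""
    at_interval = frame_index % interval == 0
    big_change = changed_fraction > threshold
    return at_interval or big_change

print(needs_keyframe(30, 0.05))   # True  (interval reached)
print(needs_keyframe(31, 0.75))   # True  (scene change)
print(needs_keyframe(32, 0.05))   # False (a difference frame is enough)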

The amount of temporal compression is often controlled by the codec's keyframe settings as well as its quality option. Setting low values for these options increases temporal compression.

Lossless Compression:

Some codecs use lossless compression, which assures that all of the information in the image or clip is fully preserved after compression. This maintains the full quality of the original, which makes lossless compression useful for editing a final cut or for moving clips between systems. However, preserving the original image quality limits how much you can control the data or bit rate and file size, and the resulting data rate may be too high for smooth playback on some systems.
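The lossless guarantee is easy to demonstrate with a quick sketch, again using zlib as a stand-in for a lossless codec: after a compress/decompress round trip, every byte of the original comes back exactly.

# Lossless round-trip sketch: nothing is discarded, so the restored data is
# identical to the original.
import zlib

original = bytes(range(256)) * 100
restored = zlib.decompress(zlib.compress(original))
assert restored == original        # every byte preserved
print("lossless round trip OK:", len(original), "bytes preserved")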

Lossy Compression:

Lossy compression discards some of the original pixel information during compression. Take, for example, an image of a blue sky that contains 83 shades of blue: a lossy codec set to sacrifice 25 percent of the original quality might reproduce only 64 of those shades. Lossy codecs usually let you specify how much picture quality you're willing to give up to lower the data or bit rate and file size. In essence, lossy codecs allow you to tailor playback for a wide variety of audiences.
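Here is a toy illustration of that trade-off in Python. Real lossy codecs quantize frequency coefficients rather than raw shade values, and the step size below is arbitrary, but the effect is the same: nearby shades collapse into one, so there is less to describe.

# Lossy quantization sketch: snap each shade to the nearest multiple of `step`,
# reducing the number of distinct values that have to be stored.
def quantize(shades, step=4):
    return [round(s / step) * step for s in shades]

sky = list(range(100, 183))        # 83 distinct shades of blue
reduced = quantize(sky)
print(len(set(sky)), "shades before,", len(set(reduced)), "shades after")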

Lossy compression allows much lower bit rates and file sizes than lossless compression, so you tend to find lossy codecs in use for video delivered on CD-ROM or over the Internet.

Some codecs are always lossy, such as JPG, while others are always lossless, such as planar RGB. Other codecs may be set to either lossless or lossy depending on the options you choose.

Asymmetrical and Symmetrical Compression:

The codec you choose affects your workflow not only through file size and playback speed, but also through the amount of time a file takes to compress a given number of frames. Fast compression helps the video, compositing, and editing houses creating the images, while fast decompression makes viewing easier.

Many codecs take far more time to compress files than to decompress them for playback, which is why a one-minute short may take five minutes to compress before playback is possible.

Compression can be likened to packing a warehouse. You could pack it as quickly as you unpack it, but if you spend more time organizing and arranging the items, you can fit more of them inside.

Similarly, different codecs require different amounts of time to compress or decompress video. A codec is considered symmetrical when it requires the same amount of time to compress a clip as to decompress it, and asymmetrical when the two times are significantly different. For example, the asymmetrical Cinepak codec decompresses video relatively quickly, making it useful for video files that must play well on both high- and low-end computers, but to achieve this it requires more time when compressing. Symmetry varies from codec to codec and is generally not adjustable within a codec.
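You can get a rough feel for symmetry by timing both directions yourself. The sketch below uses zlib at its highest setting as a stand-in codec; the exact numbers depend on your machine, but compressing typically takes noticeably longer than decompressing, which is the asymmetry described above.

# Symmetry sketch: time compression and decompression of the same data.
import time
import zlib

data = bytes(range(256)) * 50000            # roughly 12.8 MB of sample data

start = time.perf_counter()
packed = zlib.compress(data, 9)             # slow, thorough packing
compress_time = time.perf_counter() - start

start = time.perf_counter()
zlib.decompress(packed)                     # fast unpacking
decompress_time = time.perf_counter() - start

print(f"compress: {compress_time:.3f}s, decompress: {decompress_time:.3f}s")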

A codec is a specific mechanism that handles compression/decompression. Some codecs come built into the software you use; QuickTime, Video for Windows, MPG, TIFF, and TGA are examples. Other compression algorithms are supplied by a manufacturer or can be downloaded from the Internet; TechSmith, DivX, Cinepak, and Sorenson are examples of this type. Some are better choices for compressing video to stream on the web or for CD-R publication, while others work well for desktop presentations. Typically, broadcast shows are not compressed, unless a limited color palette or severe time restrictions limit production and final rendering time.

Compositing:

The visual art of editing and compositing can play an enormous role in the amount of time a sequence of images takes to compress. Compositing allows the user to render each frame in pieces, or layers. By working with layers, an artist gains significantly more flexibility for the technical and stylistic edits or fixes needed to produce a finished product. With a background layer and the main character on a separate layer, the artist can quickly adjust the contrast or brightness of the background without those adjustments influencing the shades of the featured character.

An added benefit of working in layers within a compositing or editing package is that, at render time, temporal compression is applied to layers that don't move in relation to the previous frame on the same layer, while a layer that is active with movement will not have temporal compression applied. Working in layers also benefits spatial compression, because the codec has an easier time separating unrelated objects when they sit on distinct layers. Ultimately, this results in faster compression, greater spatial accuracy, and a lower bit rate on the playback end. Matting then allows the artist to combine the foreground and background images, seamlessly blending the composites into a final rendered image.
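The matting step itself boils down to the standard "over" blend. Here is a minimal per-pixel sketch; real compositors apply this per channel across entire layers, and the pixel values and alpha below are made up for illustration.

# Matting ("over") sketch: blend a foreground value over a background value
# using an alpha between 0 (transparent) and 1 (opaque).
def composite_over(fg, bg, alpha):
    return round(alpha * fg + (1 - alpha) * bg)

foreground = (200, 180, 150)   # the character layer (R, G, B)
background = (40, 60, 120)     # the background layer (R, G, B)
alpha = 0.8                    # a mostly opaque edge pixel

blended = tuple(composite_over(f, b, alpha) for f, b in zip(foreground, background))
print(blended)                 # the pixel written to the final rendered image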

System Requirements

If you're capturing video and you want your files to be lossless, your system must be capable of capturing images at a pixel resolution of 720 × 576. Hardware compression should be built into the video capture card itself to optimize performance. The hard disk must spin fast enough to record frames as they arrive from the video card; if it can't, frames will be dropped from the sequence so the capture can keep up with the rate at which the video card delivers them. A hard drive's average access time should be around 10 milliseconds, and its data transfer rate should support at least 3 MB per second, ideally closer to 6 MB per second. Use a separate hard disk when capturing data: if your capture software and editing software are accessing the same disk at capture time, performance will suffer. Keep your capture disk defragmented so that the drive can write the images in large contiguous blocks. Finally, your system should be equipped with enough RAM to load a good chunk of video into memory while editing.
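A quick back-of-the-envelope calculation shows why hardware compression on the capture card matters. Assuming 24-bit color and a PAL-style 25 frames per second (both assumptions, not requirements from any particular card), uncompressed 720 × 576 video arrives far faster than the 3 to 6 MB per second a typical capture disk can sustain.

# Uncompressed capture data rate for a 720 x 576 frame, assuming 3 bytes per
# pixel (24-bit color) and 25 frames per second.
width, height = 720, 576
bytes_per_pixel = 3
frames_per_second = 25

bytes_per_frame = width * height * bytes_per_pixel
bytes_per_second = bytes_per_frame * frames_per_second

print(f"{bytes_per_frame / 1_000_000:.2f} MB per frame")
print(f"{bytes_per_second / 1_000_000:.2f} MB per second uncompressed")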
