Glossary of Terms for Audio and Video Codecs
A list of video/audio terms and definitions to help you become more familiar with the terminology used in video/audio software and devices.
Advanced Audio Coding (AAC), standardized as MPEG-2 Part 7, is a digital audio encoding and lossy compression format. AAC delivers high-quality audio at a lower bitrate than older audio encoding standards such as ISO/MPEG Audio Layer-3 (MP3).
AC3, also known as Dolby Digital, is a perceptual digital audio coding technique that reduces the amount of data needed to produce high-quality sound. Perceptual digital audio coding takes advantage of the fact that the human ear screens out a certain amount of sound that is perceived as noise. Reducing, eliminating, or masking this noise significantly reduces the amount of data that needs to be provided. AC3 is the sound format for digital television (DTV), digital versatile discs (DVDs), high definition television (HDTV), and digital cable and satellite transmissions.
The aperture is the area of the lens that allows the light to pass through. Lenses with large apertures allow more light in than lenses with small apertures.
Automatic Gain Control (AGC) is a circuit, fitted to most Liquid Video Technologies cameras, that examines the brightness level of the video signal and adjusts it to keep the output at a consistent level.
An audio codec is a software component that compresses and decompresses audio; codec stands for compressor/decompressor.
AVI Format (AVI)
Audio Video Interleave (AVI) is a file format defined by Microsoft. It is the most common format for audio/video data on PCs. AVI files can contain one or more video streams and one or more audio streams, and the streams can be compressed using various compressors. Each compression format has one or more possible decompressors, so two files are not necessarily alike just because they share the same extension. You might be able to play one AVI file (because you have its decompressors) but not another (because you do not). You might also have the audio decompressor but not the video decompressor; in that case you would hear the audio but not see the video.
Advanced Video Coding (AVC) is a digital video codec standard which is noted for achieving very high data compression. It is common to call the standard H.264/AVC (or AVC/H.264 or H.264/MPEG-4 AVC or MPEG-4/H.264 AVC).
B (Bi-directional) frames
B frames are encoded using both the previous and next frames as references, and compress better than P frames and I frames.
Back light compensation is a camera feature that automatically adjusts the image to compensate for bright light, bringing out more detail in the darker areas of the image; for example, resolving the face of a person who has sunlight behind them.
Balun stands for Balanced-Unbalanced. It is a device used to interface between balanced lines and unbalanced lines, for example twisted pair to coaxial. Converting a signal from balanced to unbalanced requires a balun. For example, a pair of baluns, one at each end of a CAT5 run, can be used to send a video signal (which is unbalanced) over 300 feet of CAT5 cable. The transmitting balun takes the unbalanced signal from the camera and changes it to a balanced signal; the receiving balun takes the balanced signal and converts it back to an unbalanced signal, recovering the original format.
BNC is a bayonet style connector for coaxial cable that is most commonly used for CCTV installations.
There are two main types of lens mount used in CCTV cameras. The C mount lens has a flange back distance of 17.5mm; the CS mount lens has a flange back distance of 12.5mm. C mount lenses therefore have a longer focal distance. The CS mount became widely used because it is more practical for many of today's more compact cameras. Lenses are often supplied with a 5mm spacer ring (sometimes called a C ring) that allows a C mount lens to be used on a CS camera. Most modern cameras are CS.
Coaxial cable has a central conductor surrounded by a shield sharing the same axis. The shield can be made from a variety of materials, including braided copper or lapped foil. There are various standards for specific types of coaxial cable; the cable used for normal CCTV installations is called RG59.
A codec is a technology (software or hardware) that compresses and decompresses data. By using codecs to compress audio and video data into smaller packets that consume less hard disk space or network bandwidth, multimedia applications can provide richer and fuller content.
A compressor, also known as an encoder, is a module or algorithm that compresses data. Playing that data back requires a decompressor, or decoder.
A decoder, also known as a decompressor, is a module or algorithm that decompresses data.
A decompressor, also known as a decoder, is a module or algorithm that decompresses data.
DVSD AVI Format (AVI)
Standard digital video (DV) audio/video format encapsulated in an AVI encoded stream.
DVSD OGG Format (OGG)
Standard digital video (DV) audio/video format encapsulated in an OGG-encoded stream.
An electronic iris is an electronic implementation of an auto iris. It uses electronics to simulate the effect of opening and closing the iris by increasing or decreasing the effective shutter time of the camera.
An encoder, also known as a compressor, is a module or algorithm that compresses data. Playing that data back requires a decompressor, or decoder.
Focal length is the distance between the center of a lens (or its secondary principal point) and the imaging sensor. Shorter focal lengths give a greater field of view and less magnification; longer focal lengths give a narrower field of view and greater magnification. A field of view of about 30° is considered normal; telephoto (longer) lenses have narrower angles.
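The field-of-view relationship can be sketched numerically. This is an illustrative calculation only: it assumes the standard rectilinear formula FOV = 2*atan(d / 2f) and a nominal 4.8 mm sensor width (roughly a 1/3-inch CCD); actual cameras vary.

```python
import math

def field_of_view_deg(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view for a simple rectilinear lens: 2*atan(d / 2f)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Assumed 4.8 mm sensor width (about a 1/3-inch CCD); values are approximate.
for f in (2.8, 4.0, 8.0, 16.0):
    print(f"{f:5.1f} mm lens -> {field_of_view_deg(4.8, f):5.1f} degrees")
```

Note how doubling the focal length roughly halves the angle of view.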
H.263 is an ITU standard designed for low-bitrate communications such as video-conferencing and video-telephony applications, although it can be used across a wide range of bitrates, not just low ones. Its compression ratio can reach roughly 200:1.
H.264 is a high quality video compression algorithm and is suited for all types of applications with different ranges of bit rates. H.264 compressed video data can be stored inside AVI or OGG files with the option of saving the file with or without the audio data.
I (Intraframe) frames
I frames are encoded without reference to another frame, providing support for random access. See also P frames and B frames.
Interframe compressors compress the differences between adjacent frames. Generally, interframe algorithms provide higher compression ratios than intraframe algorithms. The trade-off for the higher compression ratios is speed: interframe compressors usually cannot compress video in real time, while intraframe compressors can.
With intraframe compression, each transmitted video image or sample is compressed independently.
Infra-red (IR) is light at frequencies below the visible spectrum. It is often used for covert or semi-covert surveillance, providing a light source that lets cameras record images in dark or zero-light conditions.
An iris is a mechanical device that adjusts the amount of light passing through the lens of a camera.
Compression techniques can be categorized into two groups, lossless and lossy. When data has been compressed using a lossless compression technique, the result of decompression is bit-for-bit identical to the original data. Generally, lossless compression cannot achieve anywhere near the compression ratios of lossy compression.
Compression techniques can be categorized into two groups, lossless and lossy. When data is compressed using a lossy technique, the result of decompression will not exactly match the original. Most lossy compression techniques let you control the amount of loss through a quality setting (often called a q-factor).
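The bit-for-bit guarantee of lossless compression is easy to demonstrate with a general-purpose lossless codec; here zlib (DEFLATE) stands in as an example, not a video codec:

```python
import zlib

original = b"CCTV frame data " * 64   # made-up, highly repetitive payload

packed = zlib.compress(original, 9)   # lossless compression
restored = zlib.decompress(packed)

assert restored == original           # bit-for-bit identical after round trip
print(f"{len(original)} bytes -> {len(packed)} bytes "
      f"(ratio {len(original) / len(packed):.1f}:1)")
```

A lossy codec, by contrast, would trade away this exact-reconstruction guarantee for much higher compression ratios.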
Lux is a measure of the amount of light striking a surface, i.e. the luminous flux density at a surface. One lux is one lumen per square metre. Cameras for good lighting conditions or daylight would normally be rated at 2 lux or more; cameras with a rating of 0.2 lux or less are considered low-light cameras. It is not possible to get good color definition at low light levels, so low-light cameras are generally monochrome. Many low-light cameras are also infra-red sensitive, so that infra-red illumination can be used; this is particularly useful in zero-light conditions.
< 0.001 Starlight - overcast night
0.001 - 0.01 Starlight - clear night
0.01 - 0.1 Overcast Night
0.1 - 1 Moonlight
1 - 100 Dusk / Twilight
100 - 10,000 Overcast Day
10,000 - 1,000,000 Bright Sunlight
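As a worked example of the definition above (one lux is one lumen per square metre):

```python
def lux(lumens, area_m2):
    """Illuminance in lux: luminous flux (lumens) spread over an area (m^2)."""
    return lumens / area_m2

# A hypothetical 800-lumen source spread evenly over 4 square metres:
print(lux(800, 4))   # 200.0
```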
Luminance refers to the part of a video signal that carries the monochrome information, i.e. the brightness information.
A matrix is a device that allows any of its camera inputs to be switched to one or more of its monitor outputs. The outputs can, of course, also feed video recorders.
Manual focus refers to the process of manually setting the focus on a lens.
MPEG is a standard used for coding and compression of moving images, developed by the Moving Picture Experts Group and now widely used for the compression of video images. MPEG is not just one standard, however; the group has developed several standards for different uses. For example, MPEG-2 is used for DVDs and set-top boxes, while MPEG-4 was developed for multimedia applications on fixed and mobile web platforms.
MP4 (ISO/IEC 14496-14:2003) Format
MP4 is a multimedia container format that holds multiplexed audio and video streams. MP4 supports AAC-encoded audio streams and H.264 or MPEG-4 encoded video streams.
MPEG (Moving Picture Experts Group)
The Moving Picture Experts Group sets the international standards for digitally encoding movies and sound. They have several standards for audio and/or video formats.
MPEG-2 is a high-quality video compression standard primarily targeted at applications that require higher bitrates or bandwidth, most commonly DVD video, Super VCD (SVCD), set-top DVRs, and digital television.
Introduced in 1998, MPEG-4 is the designation for a group of audio and video coding standards and related technology agreed upon by the ISO/IEC Moving Picture Experts Group (MPEG). The primary uses for the MPEG-4 standard are web (streaming media) and CD distribution, conversational (videophone), and broadcast television.
A multiplexer is a device that takes inputs from two or more video channels and combines them into one signal. This is often done using time-division multiplexing, which interleaves frames from each channel in such a way that they can be split out again. Frequency-division multiplexing instead uses different frequencies to keep the signals separate.
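Time-division multiplexing can be sketched in a few lines; this is a toy model with hypothetical frame labels, not a real multiplexer protocol:

```python
def mux(channels):
    """Time-division multiplex: take one frame from each channel in turn."""
    return [frame for group in zip(*channels) for frame in group]

def demux(stream, n):
    """Split a multiplexed stream back into its n original channels."""
    return [stream[i::n] for i in range(n)]

# Two cameras, two frames each (hypothetical frame labels).
channels = [["cam1-f1", "cam1-f2"], ["cam2-f1", "cam2-f2"]]
combined = mux(channels)
print(combined)                        # frames interleaved in time order
assert demux(combined, 2) == channels  # the channels split out again
```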
A network camera is a camera designed to record pictures and transmit them directly over a computer network or dial-up internet connection. Network cameras normally do not have any analog video outputs; the images are encoded directly in one of the standard video compression formats, such as JPEG or MPEG.
NTSC is the standard for TV signals developed by the National Television Standards Committee in the USA. The UK and Europe use a similar but different standard known as PAL.
OGG Format (OGG)
OGG is the name of Xiph.org's container format for audio, video, and metadata. Its default audio compressor is called Vorbis. Vorbis compression offers better quality than MP3 and yields a higher compression ratio. Ogg Vorbis differs from other formats in being completely free, open, and unpatented. The video and audio streams can be compressed using various compressors, and each compression format has one or more possible decompressors, so two files are not necessarily alike just because they share the same extension. You might be able to play one OGG file (because you have its decompressors) but not another (because you do not). You might also have the audio decompressor but not the video decompressor; in that case you would hear the audio but not see the video.
P (Predictive) frames
P frames are encoded using the previous frame as reference. P frames are more highly compressed than I frames.
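The way I, P, and B frames reference each other can be sketched for a hypothetical group of pictures (GOP); the pattern below is illustrative only, since real encoders choose their own GOP structures:

```python
# A hypothetical 10-frame MPEG group of pictures (GOP) in display order.
gop = list("IBBPBBPBBP")

def references(gop, i):
    """Indices of the reference frames the frame at position i depends on."""
    kind = gop[i]
    if kind == "I":      # intra-coded: no references
        return []
    prev = max(j for j in range(i) if gop[j] in "IP")   # previous anchor
    if kind == "P":      # predicted from the previous I or P frame
        return [prev]
    nxt = min(j for j in range(i + 1, len(gop)) if gop[j] in "IP")
    return [prev, nxt]   # B frames use both previous and next anchors

for i, kind in enumerate(gop):
    print(i, kind, references(gop, i))
```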
PAL, which stands for Phase Alternating Line, is the standard for TV signals used in the UK.
A pinhole lens is a type of lens with a very small aperture. It is normally used for covert applications, where it can easily be hidden behind or within another object.
A pixel refers to an individual area on the surface of the imaging device, normally a CCD. It is made from photosensitive material which converts light into electrical energy. In the context of a display monitor, a pixel is also referred to as an individual area on the surface of the screen which converts electrical energy to visible light.
Peak-to-peak refers to the measurement of the voltage of a signal between its most negative and most positive points.
SAD Hadamard (also called SATD, the sum of absolute transformed differences) calculates the sum of absolute differences indirectly, by applying a Hadamard transform to the block before computing the SAD, which can increase the compression ratio.
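A minimal sketch of the idea, assuming a Sylvester-construction Hadamard matrix and a made-up 4x4 residual block; real encoders use fast butterfly implementations rather than explicit matrix multiplies:

```python
def hadamard(n):
    """n x n Hadamard matrix, n a power of two (Sylvester construction)."""
    if n == 1:
        return [[1]]
    h = hadamard(n // 2)
    return ([row + row for row in h] +
            [row + [-x for x in row] for row in h])

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def sad(block):
    """Sum of absolute differences over a residual block."""
    return sum(abs(x) for row in block for x in row)

def satd(residual):
    """Sum of absolute transformed differences: transform H*R*H, then SAD."""
    h = hadamard(len(residual))
    return sad(matmul(matmul(h, residual), h))

# Residual between a predicted and an actual 4x4 block (made-up values).
residual = [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 1]]
print("SAD:", sad(residual), " SATD:", satd(residual))
```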
The signal-to-noise ratio is the ratio between the signal strength and the noise level on an audio or video signal.
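For power quantities, the ratio is conventionally expressed in decibels as 10*log10(S/N); for voltage amplitudes the factor would be 20. A minimal example, where a signal 1000 times stronger than the noise floor gives 30 dB:

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio of two power quantities, in decibels."""
    return 10 * math.log10(signal_power / noise_power)

# A signal 1000x stronger than the noise floor:
print(snr_db(1000.0, 1.0))
```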
TV lines (TVL) are a measure of the resolution of a video device; a higher number means higher resolution. 380 TVL is considered medium resolution, and 480 TVL or greater is considered high resolution. 550 TVL is considered among the best, but the higher the TVL resolution you record with a DVR, the fewer frames per second you get per camera.
Temporal compression is the process of encoding only the difference between successive frames, instead of the frames themselves. A given frame is reconstructed from a prediction based on a previous frame, and may itself be used to predict the next frame.
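The idea can be sketched with one-dimensional "frames" of pixel values; this toy delta coder is illustrative only:

```python
# Toy temporal (delta) coder: each frame after the first is stored as the
# pixel-wise difference from the previous frame.
frames = [
    [10, 10, 10, 10],
    [10, 12, 10, 10],   # only one pixel changed
    [10, 12, 11, 10],
]

def encode(frames):
    deltas = [frames[0]]                       # first frame sent intact (intra)
    for prev, cur in zip(frames, frames[1:]):
        deltas.append([c - p for p, c in zip(prev, cur)])
    return deltas

def decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append([p + x for p, x in zip(out[-1], d)])
    return out

deltas = encode(frames)
print(deltas)                        # the differences are mostly zero
assert decode(deltas) == frames      # decoding reconstructs every frame
```

Because the deltas are mostly zero, they compress far better than the raw frames would.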
Wavelet transforms are mathematical formulas that represent complex structures in an image, compressing an extremely large amount of image data into a relatively small amount of compressed data. This technique allows applications to save compressed images or videos with higher compression ratios and better quality than other intraframe compression techniques.
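The simplest wavelet, the Haar wavelet, illustrates how the transform concentrates image data: smooth regions produce near-zero detail coefficients, which compress well. A minimal one-level sketch on a made-up 1-D row of pixel values:

```python
def haar_step(data):
    """One level of the Haar wavelet: pairwise averages and differences."""
    avgs = [(a + b) / 2 for a, b in zip(data[0::2], data[1::2])]
    diffs = [(a - b) / 2 for a, b in zip(data[0::2], data[1::2])]
    return avgs, diffs

def haar_inverse(avgs, diffs):
    """Reconstruct the original samples from averages and differences."""
    out = []
    for a, d in zip(avgs, diffs):
        out += [a + d, a - d]
    return out

samples = [8, 6, 7, 3, 2, 2, 5, 5]
avgs, diffs = haar_step(samples)
print(avgs, diffs)   # smooth regions give zero detail coefficients
assert haar_inverse(avgs, diffs) == samples
```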
A zoom lens is a type of lens whose focal length can be changed, allowing adjustment of the magnification and field of view of the camera.
A video codec is a software component that compresses and decompresses video; codec stands for compressor/decompressor.
Video motion detection is a feature that detects motion within a video signal, normally used to trigger the recording of images. Advanced video motion detection systems can adjust the sensitivity and the object size that will trigger the system. They also allow parts of the image to be masked out, so that only certain areas are taken into account when scanning for motion.
Vorbis is an open and free audio compression (codec) project from the Xiph.org Foundation. It is frequently used in conjunction with the Ogg container and is then called Ogg Vorbis.
For further information or assistance please contact Liquid Video Technologies at the following number: 1-864-859-9848-6245.