IP camera basics: Image sensor and signal processing
Nowadays, autonomous (driverless) cars regularly appear in tech news headlines, and imaging technology is one of their core components. Smartphones, IP cameras, web cameras, camcorders: people use all of these today to capture crisp, clear images. Nikita Srivastava, a senior engineer at eInfochips, recently published an article explaining how cameras produce such images by relying on the image sensor and the image signal processing pipeline.
Generally, both internal and environmental factors degrade the quality of an image shot by a camera. Internal factors include imperfections in the lens, color filter, and image sensor. External factors are environmental conditions such as lighting and temperature. To compensate for these problems, a camera applies what is known as image signal processing in its image processor, also referred to as ISP or the ISP pipeline. To understand it clearly, we start the explanation with the principle of the image sensor.
1#. Principle of Image Sensor
As the core component of an IP camera, the image sensor is a device that converts an optical image into an electrical signal. Today, two different types of image sensor coexist, CCD and CMOS; we will discuss these later. When light photons are collected in the photo-sites (one photo-site for each pixel), a tiny electrical charge is produced. Incident light is directed by the micro lens (a tiny lens placed over each pixel). The more light that hits a photo-site, the more photons it records, and the higher the electrical charge generated. Different photo-sites therefore register different electrical charges and, once the exposure is complete, each individual photo-site's charge must be measured and turned into a digital value by an analog-to-digital converter.
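The photon-to-digital conversion described above can be sketched in a few lines. This is a toy model, not any real sensor: the full-well capacity, quantum efficiency, and 10-bit ADC resolution are illustrative assumptions.

```python
# Toy model of a photo-site: collected photons become charge (electrons),
# charge saturates at the full-well capacity, and a linear ADC quantizes
# the charge into a digital number (DN). All constants are assumptions.

FULL_WELL = 20000   # max electrons a photo-site can hold (assumed)
ADC_BITS = 10       # ADC resolution (assumed)

def photosite_to_digital(photons, quantum_efficiency=0.5):
    """Convert collected photons to a quantized digital pixel value."""
    electrons = min(photons * quantum_efficiency, FULL_WELL)  # charge saturates
    max_dn = (1 << ADC_BITS) - 1
    return round(electrons / FULL_WELL * max_dn)  # linear quantization

print(photosite_to_digital(0))        # dark pixel -> 0
print(photosite_to_digital(100000))   # saturated -> 1023
```

Note how any photon count beyond the full well produces the same maximum value: this is exactly the "blown highlights" effect of overexposure.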
2#. Types of Image sensor
CCD (Charge-Coupled Device)
The CCD image sensor has been widely used in analog cameras that capture images at low- or standard-definition resolution. In a CCD (Charge-Coupled Device) sensor, the charge packet within each pixel is transferred through a limited number of output nodes to be converted to voltage, buffered, and then sent off the chip as an analog signal. The charges in the line of pixels nearest the output amplifiers are amplified and converted to an output, after which each line of pixels shifts its charges one line closer to the amplifier. This process repeats, transporting the charge across the chip, until every line of pixels has had its charge amplified and converted into an output. Analog-to-digital converter (ADC) circuitry surrounding the sensor then turns each pixel's value into a digital value by measuring the amount of charge at each photo-site and converting that measurement into binary form.
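The row-by-row transfer described above can be illustrated with a minimal simulation. The charge values here are arbitrary placeholders; the point is only the order in which rows reach the output.

```python
# Minimal simulation of CCD readout: the row nearest the output register
# is read out, then every remaining row shifts one line closer, and the
# process repeats until the whole frame has been transferred.

def ccd_readout(frame):
    """Read a 2-D list of charge values the way a CCD does: the last
    (nearest) row first, with remaining rows shifting toward the output."""
    frame = [row[:] for row in frame]  # work on a copy
    output = []
    while frame:
        output.append(frame.pop())  # pop() models the shift-and-read cycle
    return output

frame = [[1, 2],
         [3, 4],
         [5, 6]]
print(ccd_readout(frame))  # [[5, 6], [3, 4], [1, 2]]
```

This serial, shared-output readout is the key architectural difference from CMOS, where each pixel converts its own charge to voltage in place.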
CMOS (Complementary Metal Oxide Semiconductor)
Most of today's IP cameras use a CMOS image sensor, which can capture images at megapixel resolutions (you can find IP cameras offering 4K or even 8K ultra-high-definition resolution). A CMOS image sensor is an active pixel sensor, and it works differently from a CCD sensor: the charge-to-voltage conversion takes place in each pixel rather than in a common output structure. It uses red, green, and blue color filters and passes data through metal wiring onto photodiodes.
We previously wrote an article explaining the difference between CMOS and CCD sensors, which you may want to read.
As shown in the picture above, a micro lens sits above the Bayer filter to help each pixel collect as much light as possible. The pixels do not sit precisely next to each other; there is a tiny gap between them. Any light that falls into this gap is wasted and will not be used for the exposure. The micro lens aims to eliminate this waste by directing the light that falls between two pixels into one or the other of them.
3#. How a Color Image Is Produced
Usually, the photodiodes employed in an image sensor are color-blind by nature: they can only record shades of gray. To get color into the picture, as shown in Figure 1, they are covered with a filter on top. A Bayer filter mosaic is a color filter array (CFA) that arranges RGB color filters on a square grid of photo-sensors. This particular arrangement of color filters is used to create a color image: the color image sensor uses the Bayer filter to output a raw Bayer image.
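The Bayer arrangement can be written down directly. The RGGB layout below is one common variant (green appears twice per 2×2 block because the eye is most sensitive to green); actual sensors may start the pattern on a different color.

```python
# Which color filter covers each pixel in an RGGB Bayer mosaic.
# Every 2x2 block contains one red, two green, and one blue filter.

def bayer_color(row, col):
    """Return the filter color ('R', 'G', or 'B') at (row, col)."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

# Print the filter pattern for a 4x4 pixel block:
for r in range(4):
    print(' '.join(bayer_color(r, c) for c in range(4)))
# R G R G
# G B G B
# R G R G
# G B G B
```

Because each pixel records only one color, the ISP must later interpolate the two missing colors at every pixel (demosaicing) to produce a full RGB image.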
4#. Size of the Image Sensor
The optical format of an image sensor affects imaging performance in many respects. When checking an IP camera's specifications, you will find many different image sensor sizes, such as 1/4", 1/3", 1/2.8", 1/1.9", etc.
Generally, a larger image sensor means the camera supports higher image resolution. The 1/4" CMOS image sensor has been widely used in 720p IP cameras, while the 1/3" sensor is able to capture 2-megapixel images.
The size of a CMOS image sensor also affects its light sensitivity: the larger the sensor, the better its low-light performance. A camera with a 1/1.9" CMOS image sensor can therefore provide better low-light performance; it can even capture color images at night.
The size of the image sensor also correlates with depth of field and focal length. The smaller the image sensor, the shorter the lens's focal length (and the wider the viewing angle), which keeps both background and foreground well focused, while a large CMOS sensor can capture images with "background blur".
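The relationship between sensor width, focal length, and viewing angle follows from simple geometry. The sensor widths below are commonly quoted approximations for these optical formats, used here purely for illustration.

```python
# Horizontal angle of view for a simple rectilinear lens:
# FOV = 2 * atan(sensor_width / (2 * focal_length)).
# Sensor widths are approximate, commonly quoted values for each format.
import math

SENSOR_WIDTH_MM = {'1/4"': 3.6, '1/3"': 4.8, '1/2"': 6.4}  # approximate

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view, in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# The same 4 mm lens gives a wider view on a larger sensor:
for fmt, width in SENSOR_WIDTH_MM.items():
    print(f'{fmt} sensor, 4 mm lens: {horizontal_fov_deg(width, 4):.1f} deg')
```

Equivalently, to keep the same viewing angle, a smaller sensor needs a shorter focal length, which is why small-sensor cameras tend toward deep focus and large-sensor cameras toward background blur.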
5#. Basic Terminology
Image sensors in a camera detect and convey the information that constitutes an image. A color image sensor uses what is known as the Bayer filter mosaic to provide a raw image.
ISP pipeline refers to a dedicated piece of hardware that further converts the RGB image to a YUV image, applying several corrections needed to achieve better image quality.
Image Signal Processing:
Image signal processing (ISP) is a method of converting an image into digital form while performing operations on it, in order to produce an enhanced image or extract useful information. The two types of methods used for image processing are analog and digital. Typical steps in ISP include importing the image, then analyzing and manipulating it, with data compression and enhancements, to reveal patterns that are not discernible to the naked eye. The final step involves converting the image into an output for further processing.
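The import-manipulate-output flow described above can be sketched with one representative enhancement step, a simple linear contrast stretch. This is a toy software example; a real ISP performs such operations in dedicated hardware alongside many other corrections.

```python
# Toy enhancement step in the import -> manipulate -> output flow:
# linearly stretch 8-bit pixel values to span the full 0-255 range,
# making low-contrast detail easier to see.

def contrast_stretch(pixels):
    """Linearly remap pixel values so min -> 0 and max -> 255."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return pixels[:]  # flat image: nothing to stretch
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

print(contrast_stretch([50, 100, 150]))  # [0, 128, 255]
```

Real pipelines chain many such stages (black-level correction, demosaicing, white balance, noise reduction, gamma) before producing the final output.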
Effective pixels are the pixels that actually capture the image data. The total (or actual) pixel count includes all the pixels the sensor comprises, approximately 0.1% more than the effective pixels. These leftover pixels, known as "edge pixels," are not useless: they are used to determine the edges of an image and to provide color information. In conclusion, the "effective pixels" are those that capture incoming light and end up in the final image.
Raw image data refers to minimally processed original data from the image sensor. A raw image is so named because it has not been processed yet, while still containing all the information necessary for further image processing.
YUV data refers to the output of an ISP pipeline. It encodes a color image or video taking human perception into account, allowing reduced bandwidth for the chrominance components; transmission errors or compression artifacts can thereby typically be masked more effectively than with a "direct" RGB representation.
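The RGB-to-YUV conversion at the end of the pipeline is a fixed linear transform. The sketch below uses the full-range BT.601 (JPEG-style) YCbCr coefficients; real pipelines may use different coefficients or value ranges.

```python
# Convert one 8-bit RGB pixel to full-range YCbCr using the BT.601
# coefficients. Y carries brightness; Cb and Cr carry chrominance and
# are centered on 128 so that gray pixels have Cb = Cr = 128.

def rgb_to_ycbcr(r, g, b):
    """Return (Y, Cb, Cr) for an 8-bit RGB pixel, full range."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return round(y), round(cb), round(cr)

print(rgb_to_ycbcr(255, 255, 255))  # white -> (255, 128, 128)
print(rgb_to_ycbcr(0, 0, 0))        # black -> (0, 128, 128)
```

Because the chrominance planes change slowly across most scenes, they can be subsampled (e.g. 4:2:0) with little visible loss, which is where the bandwidth saving comes from.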