Image sensors convert an optical image into an electrical signal: they sense the light entering through the camera lens and convert it into an image. The two most widely used image sensors on the market are CCD and CMOS. CCD stands for Charge-Coupled Device, while CMOS stands for Complementary Metal-Oxide Semiconductor. Although both work on the same basic principle, they differ in how they operate. As technology evolves, both sensors retain their own significance: some devices work best with CMOS sensors, while others are better suited to CCD sensors.
Today, CMOS image sensors are used more often than CCD image sensors, even though CCDs can provide better image quality. This is because CMOS sensors are easier to manufacture, support higher-speed image processing, consume less power, and are less expensive. Although CCD technology is more advanced and delivers better quality, CMOS sensor technology continues to improve. CCDs are mainly used in DSLR cameras, while CMOS image sensors are widely used in digital cameras, mobile phones, tablets, etc.
The CMOS image sensor is a semiconductor device that transforms optical images into digital signals. It is used in digital cameras and other electronic optical devices. The sensor divides the light falling on its light-receiving surface into many small units and converts them into useful electrical signals. Its surface is divided into a large number of small light-sensitive areas, each containing a highly sensitive photodiode; these areas are known as pixels. Together, the pixels build up the captured image. A collection of one million pixels is known as a megapixel, the unit commonly used to express the resolution of an image sensor.
When photons fall on the area defined by a pixel, they are converted into electrons, the number of which depends on the intensity of the light falling on that pixel. After each exposure cycle, the number of electrons in each pixel is measured and the image is reconstructed.
CMOS sensors are generally specified by their physical size, which determines the light-collecting surface area of the sensor. The sensor's dimensions are usually expressed in inches and are known as the optical format. The optical format is calculated by multiplying the diagonal length of the sensor by 3/2.
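As an illustration, the 3/2 rule above can be applied in a few lines of Python. The sensor dimensions below are hypothetical, chosen so that the diagonal works out to 8 mm:

```python
# Sketch: estimating a sensor's optical format from its physical
# dimensions, using the diagonal * 3/2 rule of thumb described above.
import math

MM_PER_INCH = 25.4

def optical_format_inches(width_mm: float, height_mm: float) -> float:
    """Return the optical format in inches: diagonal * 3/2."""
    diagonal_mm = math.hypot(width_mm, height_mm)
    return diagonal_mm * 1.5 / MM_PER_INCH

# A hypothetical 6.4 mm x 4.8 mm sensor has an 8 mm diagonal;
# 8 mm * 3/2 = 12 mm, i.e. about 0.47 inch.
print(f"Optical format: {optical_format_inches(6.4, 4.8):.2f} inch")
```

Note that marketed optical formats are rounded to traditional fractions such as 1/2" or 1/2.5", so the computed value only approximates the label a vendor would actually use.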
The charge in each pixel is collected using the electrodes present in that pixel. These electrodes are arranged so that the charge in each pixel is transferred downwards along the pixel columns. When the charges reach the final row, they are transferred out of the CMOS sensor and measured, recreating the captured image. The frequency with which individual images are produced is known as the frame rate of the sensor; its standard unit is frames per second (fps).
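To make the frame-rate idea concrete, here is a rough Python sketch that estimates fps from a per-row readout time. Both the row count and the row time are assumed illustrative values, not figures for any particular sensor:

```python
# Estimate frame rate from row-by-row readout (illustrative values).
ROWS = 1080            # number of pixel rows (assumed)
ROW_TIME_S = 10e-6     # readout time per row in seconds (assumed)

frame_time_s = ROWS * ROW_TIME_S   # time to read one full frame
fps = 1.0 / frame_time_s           # frames per second
print(f"Frame time: {frame_time_s * 1000:.2f} ms -> {fps:.1f} fps")
```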
The dynamic range of a CMOS image sensor refers to its ability to capture all levels of brightness in a single image, i.e., from the darkest shadows to the brightest highlights, without losing detail. In other words, it measures the sensor's ability to detect both very bright and very dark areas of a scene and still produce a detectable image. It is usually measured in decibels (dB) and is calculated using the formula

Dynamic range (dB) = 20 log10(Nsat / Nnoise)

Here Nsat is the maximum number of electrons the detector can accumulate (its full-well capacity) and Nnoise is the noise floor of the device, expressed in electrons.
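The dynamic-range calculation can be checked numerically. The full-well capacity and noise figures below are hypothetical:

```python
# Dynamic range in dB from full-well capacity and noise, both in
# electrons: DR = 20 * log10(Nsat / Nnoise).
import math

def dynamic_range_db(n_sat: float, n_noise: float) -> float:
    """Dynamic range in dB: 20 * log10(Nsat / Nnoise)."""
    return 20 * math.log10(n_sat / n_noise)

# Hypothetical sensor: 20,000 e- full-well capacity, 2 e- noise.
print(f"{dynamic_range_db(20_000, 2):.0f} dB")  # 20 * log10(10000) = 80 dB
```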
In traditional film cameras, the shutter physically opens and closes to expose the film to light for a specific duration, resulting in a uniform exposure of the entire frame. A CMOS image sensor, by contrast, needs no physical shutter; instead, electronic methods are employed to control exposure.
In an electronic rolling shutter, the image sensor does not capture the entire frame simultaneously. Instead, it captures the frame row by row, or more accurately, from top to bottom or bottom to top. This process is similar to rolling a shutter over the image sensor, hence the name "rolling shutter."
When the shutter button is pressed to capture an image or start recording a video, the electronic rolling shutter activates. It begins scanning the image sensor one row at a time. Each row's exposure is brief, typically a fraction of a second.
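The row-by-row timing described above can be sketched as a toy simulation. The row interval and exposure time used here are arbitrary illustrative values; each row starts its exposure slightly later than the previous one, which is what causes the skew seen when the scene moves during readout:

```python
def rolling_shutter_schedule(rows, row_interval_s, exposure_s):
    """Return (start, end) exposure times for each row, top to bottom."""
    return [(r * row_interval_s, r * row_interval_s + exposure_s)
            for r in range(rows)]

# Toy values: 5 rows, row starts 1 ms apart, each exposed for 4 ms.
for row, (start, end) in enumerate(rolling_shutter_schedule(5, 0.001, 0.004)):
    print(f"row {row}: {start * 1000:.0f} ms -> {end * 1000:.0f} ms")
```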
A global shutter captures the entire frame simultaneously. Unlike the rolling shutter method, where rows of pixels are exposed sequentially, all pixels in a global shutter are exposed at the same moment.
When the shutter button is pressed to capture an image or start recording a video with a global shutter, all the pixels on the sensor are exposed to light at the same time for a brief period. This simultaneous exposure ensures that the entire frame is captured uniformly without distortion due to motion or camera shake. Global shutters eliminate the need for sequential scanning of rows, resulting in minimal or no motion-related distortions. The exposure duration is consistent for all parts of the image.
A mechanical shutter is a physical barrier positioned in front of the CMOS image sensor. It is designed to control the amount of light that reaches the sensor and the duration of exposure.
Initially, the mechanical shutter is closed, blocking all light from reaching the CMOS sensor. When the shutter button is pressed, the shutter opens, allowing light to pass through the camera's lens and reach the sensor.
The duration for which the mechanical shutter remains open determines the exposure time. This can range from fractions of a second to several seconds or more, depending on the settings chosen by the user. After the specified exposure time has elapsed, the mechanical shutter closes, preventing any more light from reaching the sensor. This marks the end of the exposure.
Signal-to-noise ratio (SNR) of a CMOS image sensor measures the quality of the image captured by the sensor by comparing the strength of the image information to the level of unwanted noise.
In a CMOS image sensor, the "signal" refers to the actual light information that the sensor detects from the scene being photographed and "noise" refers to any unwanted and random variations in the sensor's output that are not related to the actual scene being photographed.
SNR is a quantitative measure indicating the clarity and definition of the signal (image) relative to the noise. It is typically expressed in decibels (dB) and is calculated as follows:

SNR (dB) = 10 log10(signal power / noise power)

Here the signal power represents the strength of the image data, and the noise power represents the strength of the unwanted variations in the sensor's output.
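The SNR calculation can likewise be expressed in a few lines of Python; the power values used here are hypothetical:

```python
# SNR in dB from signal and noise power: 10 * log10(Psignal / Pnoise).
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    """SNR in dB: 10 * log10(signal power / noise power)."""
    return 10 * math.log10(signal_power / noise_power)

# Hypothetical case: signal power 100x the noise power.
print(f"{snr_db(1000.0, 10.0):.0f} dB")  # 10 * log10(100) = 20 dB
```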