What is a Scientific & Industrial Camera?
A camera is essential for any imaging system. An industrial camera is a special type of camera adapted to work in harsh conditions such as high temperature, pressure, and vibration. Industrial cameras are used to control production cycles, track units on conveyors, detect ultra-small parts, and much more; their scope is almost limitless. These cameras are designed to quantitatively measure how many photons hit the camera sensor, and at which locations. Each photon can generate a photoelectron (an electron ejected from an atom, molecule, or solid by an incident photon), which is stored in a well within a sensor pixel before being converted into a digital signal. Scientific cameras are essential for capturing images in scientific research, helping us understand the phenomena around us. A key aspect of scientific cameras is that they are quantitative: each camera measures the number of photons, or light particles, that interact with the detector. Photons are the particles that make up the electromagnetic spectrum, from radio waves to gamma rays. Scientific cameras usually focus on the UV-VIS-IR region to quantify visual changes in scientific research.
The function of a scientific camera sensor is to count detected photons and convert them into electric signals. This is a multi-step process that begins with the detection of photons. Scientific cameras use photodetectors, which convert incoming photons into an equivalent number of electrons. Photodetectors can be made of different materials depending on the wavelengths being detected, but silicon is the most common choice for the visible wavelength range. When photons from a light source hit the silicon layer, they are converted into electrons. An example can be seen in the figure below, which shows a cross-section of a silicon-based camera sensor: light first hits the microlens (top of image), which focuses it onto the silicon pixel (bottom of image). The sensor area outside of this light path is filled with integrated electronics and wiring.
To create an image, more than just the number of photons hitting the photodetector needs to be quantified; the location of each photon on the photodetector also needs to be known. This is done by dividing the photodetector into a grid of many tiny squares, allowing photons to be both counted and located. These squares are pixels, and technology has developed to the point where millions of them fit onto a single sensor. For example, a 1-megapixel sensor contains one million pixels. A visualization of 1,000,000 pixels can be seen in the figure below: (A) A 10×10 grid of larger squares, each of which contains a 10×10 grid of smaller squares; each of these in turn contains a 10×10 grid of even smaller squares, giving 100×100×100 squares, or one million in total. (B) A magnification of one of the larger squares in A, which contains 10,000 pixels. (C) A magnification of one of the smaller squares in B, which contains 100 pixels colored blue and green. The entirety of grid A makes up one megapixel.
To fit so many pixels onto a sensor, pixels have been made very small, although the sensor itself can be quite large in comparison. For example, one camera has a 15 μm square pixel size (an area of 225 μm²) arranged in an array of 4096 × 4096 pixels (16.8 million pixels), resulting in a sensor size of 27.6 × 27.6 mm (an area of 761.8 mm²) with a diagonal of 39.0 mm. A compromise, however, needs to be made with the size of the pixels. Although decreasing the pixel size increases the number that can be placed on a sensor, each individual pixel can then detect fewer photons, introducing a trade-off between resolution and sensitivity. In imaging, resolution is defined as the shortest distance between two points on a specimen that can still be distinguished, whereas sensitivity is a measure of how strongly the camera sensor responds to light. Conversely, if sensors are too big, or contain too many pixels, far greater computational power is required to process the output information, slowing image acquisition. This also introduces storage issues, as researchers may require very large storage to save multiple experimental images for long periods. Therefore, sensor size, pixel size, and pixel number all need to be carefully optimized for each scientific camera design.
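The relationship between pixel size, pixel count, and sensor size described above is simple arithmetic, and can be sketched as a small helper. This is an illustrative calculation only; the example values in the usage line are hypothetical and do not describe any particular camera.

```python
import math

def sensor_geometry(pixel_size_um: float, width_px: int, height_px: int):
    """Derive sensor dimensions from pixel size and pixel array shape.

    pixel_size_um: edge length of one square pixel, in micrometres.
    Returns (width_mm, height_mm, area_mm2, diagonal_mm, megapixels).
    """
    width_mm = pixel_size_um * width_px / 1000.0    # um -> mm
    height_mm = pixel_size_um * height_px / 1000.0
    area_mm2 = width_mm * height_mm
    diagonal_mm = math.hypot(width_mm, height_mm)   # Pythagorean diagonal
    megapixels = width_px * height_px / 1e6
    return width_mm, height_mm, area_mm2, diagonal_mm, megapixels

# Hypothetical example: 6.5 um pixels in a 2048 x 2048 array
w, h, area, diag, mp = sensor_geometry(6.5, 2048, 2048)
print(f"{w:.1f} x {h:.1f} mm, {area:.1f} mm^2, diagonal {diag:.1f} mm, {mp:.1f} MP")
```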
Generating an Image
Each pixel within a sensor detects the number of photons that come into contact with it while exposed to a light source. This creates a map of photon counts and photon locations, known as a bitmap. A bitmap is an array of these measurements and is the basis of all images taken with scientific cameras. The bitmap of an image is also accompanied by metadata, which contains all image information, such as camera settings, the time of the image, and any hardware/software settings.
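A bitmap with its metadata can be pictured as a 2D array of gray levels plus a dictionary of acquisition details. The structure and field names below are illustrative, not taken from any specific camera software.

```python
# Minimal sketch of a bitmap image with accompanying metadata.
# All field names and values are illustrative, not from a real camera SDK.
image = {
    "bitmap": [            # 3 x 4 array of per-pixel gray levels
        [120, 118, 130, 560],
        [115, 400, 980, 610],
        [110, 112, 125, 540],
    ],
    "metadata": {
        "exposure_ms": 100,                  # camera settings
        "timestamp": "2024-01-01T12:00:00",  # time of the image
        "gain": 1.5,
        "bit_depth": 12,                     # gray levels range 0..4095
    },
}

rows = len(image["bitmap"])
cols = len(image["bitmap"][0])
peak = max(max(row) for row in image["bitmap"])  # brightest pixel value
print(rows, cols, peak)  # 3 4 980
```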
The following are the processes involved in creating an image with a scientific camera, with each step visualized in the figure below. Photons impact the sensor, depicted here as a silicon photodetector divided into a grid of pixels. Each pixel produces photoelectrons from the energy of incoming photons; the rate of this production is referred to as quantum efficiency. These electrons are then collected in the well of each pixel and counted. The electron counts are then converted into gray levels by an analog-to-digital converter (ADC). ADCs follow a sequence when converting analog signals to digital ones: they first sample the signal, then quantize it to determine the resolution of the signal, and finally assign binary values and send them to the system that reads the digital signal. The gray levels are then displayed on a computer monitor, with the image appearance controlled by the software display settings, such as contrast and brightness.
1. Any photons that hit the photodetector are converted into photoelectrons. The rate of this conversion is called the quantum efficiency (QE); a QE of 50% means that 50% of the incident photons are converted into electrons.
2. Generated electrons are stored in a well within each pixel, providing a quantitative count of electrons per pixel. The maximum number of electrons each well can store (the well depth), together with the bit depth, determines the dynamic range of the sensor.
3. The number of electrons in each well is read out as a voltage and converted into a digital signal (gray levels) by an analog-to-digital converter (ADC). The rate of this conversion is described as gain: a gain of 1.5 will convert 10 electrons into 15 gray levels. The gray level generated from 0 electrons is known as the offset.
4. The gray levels corresponding to the digital signals are arbitrary monochrome shades. How they appear depends on the dynamic range of the sensor and the number of electrons in the well. For example, if a sensor is only capable of displaying 100 gray levels, then a value of 100 gray levels would be bright white, i.e. saturated. However, if the sensor is capable of displaying 10,000 gray levels, then a value of 100 gray levels would be very dark. This assumes that no scaling has been applied.
5. A map of these gray levels is displayed on the computer monitor. The image generated depends on the software settings, such as brightness and contrast, that are included in the metadata.
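The steps above can be sketched for a single pixel as one small function: photons are converted at the quantum efficiency, clipped at the well depth, converted to gray levels at the gain with an offset, and finally limited by the bit depth. All parameter values here are illustrative defaults, not the specifications of any real sensor.

```python
def photons_to_gray_level(photons: int,
                          qe: float = 0.5,          # quantum efficiency (step 1)
                          well_depth: int = 30000,  # max electrons per well (step 2)
                          gain: float = 1.5,        # electrons -> gray levels (step 3)
                          offset: int = 100,        # gray level for 0 electrons
                          bit_depth: int = 16) -> int:
    """Follow one pixel through steps 1-4 above. Illustrative values only."""
    # Step 1 + 2: detection at the QE, stored up to the well depth
    electrons = min(int(photons * qe), well_depth)
    # Step 3: ADC conversion using gain and offset
    gray = offset + int(electrons * gain)
    # Step 4: the gray level cannot exceed the sensor's dynamic range
    return min(gray, 2 ** bit_depth - 1)

print(photons_to_gray_level(1000))  # 500 electrons -> 100 + 750 = 850
```

Note how saturation can occur at two points: the well can fill with electrons, or the gray level can hit the ADC's maximum value.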
Gray levels are dependent on the number of electrons stored within the wells of a sensor's pixels. They are also related to offset and gain. Offset is the gray-level baseline, the value corresponding to no stored electrons. Gain is the rate of conversion of electrons to gray levels: 40 electrons are converted to 72 gray levels with a 1.8x gain, or 120 gray levels with a 3x gain. In addition to gain and offset, the imaging software's display settings determine how the converted gray levels correspond to the image displayed on the computer monitor. These display settings affect how the gray levels are visualized, as the gray levels are arbitrary and are interpreted relative to the dynamic range and the other values across the sensor. These main stages of imaging are consistent across all modern scientific camera technologies; however, there are variations between sensor types and camera models.
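The gain figures quoted above are a direct multiplication, which can be checked in a couple of lines. This is a minimal sketch of the conversion only; real cameras apply this per pixel in hardware.

```python
def electrons_to_gray(electrons: int, gain: float, offset: int = 0) -> int:
    # gray level = offset baseline + electrons converted at the gain rate
    return offset + round(electrons * gain)

# The worked example from the text: 40 stored electrons
print(electrons_to_gray(40, gain=1.8))  # 72
print(electrons_to_gray(40, gain=3.0))  # 120
```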
Types of Camera Sensors
Sensors are the main component of the camera and are continually being developed and optimized. Researchers are constantly looking for better sensors to improve imaging, providing better resolution, sensitivity, field of view, and speed. There are many different camera sensor technologies, which vary in properties such as sensitivity over different wavelength ranges. These sensors include charge-coupled devices (CCD), electron-multiplied CCDs (EMCCD), intensified CCDs (ICCD), indium gallium arsenide semiconductors (InGaAs), and complementary metal-oxide-semiconductors (CMOS).
A Scientific & Industrial Camera is therefore essential for any imaging system. These cameras are designed to quantitatively measure how many photons hit the camera sensor and at which locations. These photons generate photoelectrons, which are stored in wells within the sensor pixels before being converted into a digital signal. This digital signal is represented by gray levels and displayed as an image on a computer monitor. The process is optimized at every stage to create the best possible image from the light signal received. Scientific & Industrial Microscopy Cameras are available in CMOS and sCMOS sensor types, allowing the user to balance important characteristics such as read noise, pixel size, and maximum frame rate when choosing a camera.