Modern digital cameras rely on optoelectronic sensor chips, as opposed to earlier cameras that used photographic film to collect images.
The sensor is the heart and soul of a digital camera: it influences everything from image size and resolution to low-light performance, depth of field, and dynamic range.
A sensor’s structure type (CCD or CMOS), chroma type (colour or monochrome), and shutter type (global or rolling shutter) can all be used to classify it.
Sensors can also be grouped by resolution, frame rate, pixel size, and sensor format. Knowing these parameters, and how they combine, helps in identifying the sensor best suited to a camera’s intended performance.
An image sensor is a solid-state device that converts incoming light (photons) into an electrical signal that can be viewed, analysed, or stored. Two cameras with the same sensor can still have very different performance and specifications because of differences in the surrounding camera design. An image sensor chip is made up of many pixels containing light-sensitive components, such as micro-lenses, and micro-electrical components.
A sensor is made up of several components. A clear, window-like cover glass on the front shields the sensor chip and its wires while still letting light reach the delicate region. The packaging protects the sensor chip and wire bonds from physical and environmental harm, dissipates heat, and carries the electronic signals.
The sensor can only detect photons that enter its light-sensitive regions. Every tiny photosite stores information about the light it receives as the light passes through the lens of the camera. Each tiny sensing element in the sensor array is referred to as a photosite or pixel; the data each photosite provides is a picture element, and ‘pixel’ is short for ‘picture element’. The size of a pixel is measured in micrometres (µm).
The sensor in a DSLR camera contains millions of discrete pixels. Besides measuring and reading light, each pixel must also reserve some room for supporting circuitry.
Types of sensors
Digital cameras generally use two types of sensors.
CCD (Charge-Coupled Device)
CMOS (Complementary Metal-Oxide Semiconductor)
Digital image capture uses two distinct technologies: CCD (charge-coupled device) and CMOS (complementary metal-oxide semiconductor) image sensors. Each has unique benefits, qualities, and flaws depending on its use. Both types of imager convert light into an electrical charge and process it into electronic signals. Where the two technologies diverge most is in how those signals are processed.
George Smith and Willard Boyle created the charge-coupled device at Bell Labs in 1969. Roughly two decades after its invention, CCD technology had reached maturity.
Nobukazu Teranishi created the pinned photodiode at NEC in Japan in 1980. This significantly improved the signal-to-noise ratio, allowing the resolution of image sensors to be put to full use. Eric Fossum, a NASA scientist, created the complementary metal-oxide semiconductor (CMOS) active pixel image sensor in 1995. Compared to CCD image sensors, it consumed up to 100 times less power and cost far less to produce.
Although CCD sensors long held the edge in quality, resolution, and light sensitivity, recent developments have brought CMOS close to CCD quality in many applications, and CMOS sensors have replaced CCDs in many cameras that formerly used them.
Sensor function
The sensor is a silicon chip covered with many photosensitive sites. Each tiny sensing element in the array is referred to as a photosite or pixel. Hundreds of thousands or millions of these tiny, light-sensitive squares (or occasionally rectangles) make up the sensor, which is why photosites are frequently referred to as pixels. Each photosite corresponds to a single pixel in the final image.
The sensor’s pixels are organised in vertical columns and horizontal rows, and the number of rows and columns determines the sensor’s size; a sensor might, for example, be 1024 pixels wide by 1024 pixels high. The size of the pixels and their spacing, or pixel pitch, determine the resolution of the sensor.
Through the photoelectric effect, sensors absorb photons and turn them into electrons. The electrons are stored in a well, where an electrical charge builds up over the course of the exposure. CCD sensors and CMOS sensors use distinct charge-transfer techniques.
The operation of a sensor can be divided into four stages (a small code sketch of these stages follows the list):
Converting light into charge
Collecting the charge
Transferring the charge
Converting the charge into a voltage
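As a rough illustration of these four stages, the Python sketch below models a single pixel: photons produce electrons, the charge collects in a well, is transferred out, and is converted into a voltage. The quantum efficiency, full-well capacity, and conversion gain used here are made-up values for illustration, not figures for any real sensor.

# Toy model of the four stages of sensor operation, for a single pixel.
# All parameter values below are illustrative assumptions.
QE = 0.5                 # quantum efficiency: electrons produced per photon
FULL_WELL = 20_000       # maximum number of electrons the well can hold
CONVERSION_GAIN = 5e-6   # volts produced per stored electron

def expose_pixel(incident_photons: int) -> float:
    """Return the output voltage of one pixel hit by `incident_photons`."""
    electrons = int(incident_photons * QE)   # 1. convert light into charge
    stored = min(electrons, FULL_WELL)       # 2. collect the charge in the well
    transferred = stored                     # 3. transfer the charge (lossless here)
    return transferred * CONVERSION_GAIN     # 4. convert the charge into a voltage

print(expose_pixel(10_000))   # 0.025 V for 10,000 photons at 50% QE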
CCD sensor
Although the fundamental idea of reading out a CCD sensor remains the same, there are some variations in practice. Based on this distinction, CCD sensors can be classified into two groups.
Interline Transfer (IT) CCD
Full Frame (FF) CCD
Interline Transfer (IT) CCD
In a CCD, pixels are arranged in vertical columns and parallel (horizontal) rows. In an interline-transfer design the cells are not packed edge to edge, because each vertical column of pixels is connected to its own column of vertical shift registers. The vertical shift registers act as transport channels, shifting the pixel signals down the columns until they reach the parallel (horizontal) shift register, which in turn delivers the signal to the read-out unit.
While the sensor is exposed, electrical charge builds up in the photosensitive cells in proportion to the brightness of the incoming light. Once the picture has been taken, each pixel discharges its charge into the adjacent shift register. The charges can then all be moved down one position in the shift registers, so that the first charge enters the horizontal shift register and can be sent to the read-out unit.
In the read-out unit, each charge is loaded into a capacitor and amplifier element, which transform the charge into a voltage. An AD (analogue/digital) converter then turns this analogue voltage into a digital signal, a binary number that a processor can read. This read-out procedure is carried out step by step until the final charge has been converted.
Once the horizontal shift register is empty, all vertical shift registers shift down one more place to refill it, and the horizontal transport channel is read out again. This procedure is repeated until the last charge has been delivered to the read-out unit.
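The order of operations described above can be sketched in a few lines of Python. This is only a toy simulation of the interline-transfer read-out sequence (charges treated as simple numbers, with a lossless transfer), not a model of any real device.

# Simplified simulation of interline-transfer CCD read-out: the row of
# charges nearest the horizontal register is shifted into it, the horizontal
# register is emptied serially to the read-out unit, and the process repeats.
def read_ccd(pixel_charges):
    """Yield charges in the order a CCD's shift registers deliver them."""
    vertical_registers = [row[:] for row in pixel_charges]  # copy of exposed charges
    while vertical_registers:
        horizontal_register = vertical_registers.pop()      # bottom row drops into the horizontal register
        while horizontal_register:
            yield horizontal_register.pop(0)                 # serial read-out, one charge at a time

frame = [[10, 20, 30],
         [40, 50, 60]]
print(list(read_ccd(frame)))   # bottom row first: [40, 50, 60, 10, 20, 30]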
Full Frame (FF) CCD
Even though the fundamentals of reading a CCD sensor are the same, there are minor variations. In a full-frame (FF) CCD sensor the vertical shift registers are integrated into the pixels themselves rather than existing as separate structures for transporting charge: each pixel can pass its charge directly into the next cell in its column, while the horizontal shift register, as before, transfers the charges to the read-out unit.
Full frame should not be confused with the similarly named sensor format: a full-frame CCD does not need to be the same size as a full-frame sensor. The term “full-frame CCD” refers to the use of the entire available area for the light-sensitive region, since all of the pixels can be positioned closely together.
In the related frame-transfer (FT) technique, the first read-out step moves the complete pattern of charges into a frame-storage area (a second set of potential wells) with the same dimensions as the primary pixel array; this frame storage is light-insensitive. In the following step, the storage area is read out normally. This makes it possible to empty the light-sensitive sensor area very quickly, so a new image can be acquired while the previous one is still being read out. Because the read-out times are so short, digital video cameras frequently use frame-transfer CCD sensors.
CMOS sensor
With an example, I believe it will be a little simpler to explain how a CMOS sensor works. The best way to picture pixels is as a collection of rainwater buckets. Imagine four buckets, in three different colours, set out to catch the familiar raindrops.
Only part of the top of each bucket can actually be opened, so rain falling on the covered portion would simply splash away and be lost. To stop this, a bucket-sized funnel is positioned over the opening to direct the rain inside. A small motor, used to pump out the raindrops that fill the bucket, sits in the covered, non-opening portion of the device.
A coloured cloth is stretched over the top of each bucket, matching that bucket’s colour, or tint. The raindrops that fall through it fill the bucket with ‘coloured’ raindrops according to the colour of the cloth, so the four buckets will not all collect the same number of drops.
You can imagine the photons as the raindrops. The micro lens is the funnel above the bucket. The motor on the covered, non-opening side, which drives the pump, is the amplifier. The coloured cloth beneath it is the Bayer filter, and the bucket where the raindrops collect is the potential well.
In a CMOS sensor, the pixel itself performs the charge-to-voltage conversion and voltage amplification. As a result, a CMOS sensor’s processing speed can be significantly higher than a CCD sensor’s. Like CCD arrays, CMOS sensors have rows and columns of pixels organised in a rectangular grid, but CMOS sensors do not have the shift registers found in CCD sensors. Each pixel unit essentially consists of a photodiode and three transistors.
When the photodiodes are exposed to light, they build up an electrical charge, which is then converted and amplified into an electrical signal. On a CMOS sensor, the voltages produced by the pixels are read out one parallel (horizontal) line at a time. In other words, the row-select switch turns on the first row of pixels, connecting each pixel’s output to its column line. The column-select switches then allow the data of each pixel in that row to be read one at a time. The remaining rows go through the same process.
A CMOS sensor traditionally has a single AD (analogue/digital) converter after the output. Many CMOS sensors, however, feature an AD converter for each column, and these days some designs even place an AD converter inside each pixel.
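The row-by-row, column-by-column addressing described above can be illustrated with a small Python sketch. The analogue-to-digital conversion here is just an assumed helper function with made-up reference values, not any particular sensor’s ADC.

# Minimal sketch of CMOS-style addressed read-out: one row is selected
# at a time, then each column of that row is read and digitised.
def adc(voltage, bits=10, v_ref=1.0):
    """Toy analogue-to-digital converter: map a voltage to a digital code."""
    code = int(round(voltage / v_ref * (2 ** bits - 1)))
    return max(0, min(code, 2 ** bits - 1))

def read_cmos(pixel_voltages):
    """Read a 2-D array of pixel voltages, one row at a time."""
    digital_frame = []
    for row in pixel_voltages:                       # row-select switch activates one row
        digital_frame.append([adc(v) for v in row])  # column-select reads each pixel in turn
    return digital_frame

frame = [[0.10, 0.25, 0.40],
         [0.55, 0.70, 0.85]]
print(read_cmos(frame))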
Key differences between CCD sensors and CMOS sensors
The individual pixel structure and read-out philosophies of CMOS and CCD sensors differ. CCD sensors cram a large photodiode into each pixel, giving them tremendous light sensitivity. However, because every charge must pass through the shift registers, neighbouring charges affect one another along the way, which makes this read-out path more error-prone; on CCD sensors, bright spots produce smear effects.
CCD sensors have a purely serial read-out principle, so their read-out speed is lower. The read-out of CMOS sensors, by contrast, is quicker because of parallel signal processing and the use of multiplexers. However, because their photodiodes are smaller and their signals weaker, they are more prone to read-out noise.
Smaller photodiodes also result in a lower dynamic range, which makes CMOS sensors vulnerable to blown highlights or blocked-up shadows. The revolutionary advantage of CMOS sensors in today’s cameras, however, is their low power consumption and low production cost.
Function and Structure of CMOS Sensors
CMOS sensor designs can also be divided into two types, depending on the side from which light reaches the photodiode:
Frontside illumination
Backside illumination
In a frontside-illumination design, photons arriving from the front are collected by a conventional photodiode. The light must first pass through numerous metal and dielectric layers before it reaches the actual diode. These layers may reflect or obstruct the light and prevent it from reaching the photodiode, which reduces performance and introduces other issues such as crosstalk.
Crosstalk describes any situation in electronics where a signal sent through one circuit or channel of a transmission system causes an unwanted effect in another. Here it means that the metal structure can deflect photons so that they unintentionally land on a neighbouring photodiode. To steer the photons towards the correct photodiode and prevent this, a reflective coating (a light tunnel) is constructed around the wiring components.
Flipping the photodiode from top to bottom is another way to reduce issues like crosstalk and boost the photodiode’s light sensitivity. In this design, light is gathered from the reverse side of the photodiode, so it does not have to pass through the metal and dielectric layers first. Photodiodes built this way are called backside-illuminated.
The backside-illuminated structure improves quantum efficiency and lets more light into the sensitive area. Not all image sensors are built this way; backside illumination is simply an optional feature. Sony’s Exmor R sensors, for example, typically use backside-illuminated photodiodes.
Structure of the pixel
CMOS sensors excel at today’s higher frame rates and improved image quality. A good CMOS sensor is made up of several parts. Each pixel unit basically consists of a photodiode and three transistors: one for pixel reset or activation, another for amplification and charge conversion, and a third for selection or multiplexing. These components are not always required to sit within a single pixel; they can be shared among pixels.
Light sensitivity exists only in the photodiode, which is one component of the photosite. The fill factor is the proportion of a pixel’s entire area that is light-sensitive. Electronic circuits such as amplifiers and noise-reduction circuits make up a modest portion of a CMOS pixel, and manufacturers use micro lenses to boost the effective fill factor. Behind the photodiode is a well, which serves as a repository for electrons.
The quantity of photons measured is directly related to the quantity of electrons generated in the well. The voltage is subsequently created from the well’s electrons.
Basic operation of a pixel
Quantum efficiency, saturation capacity, dark noise, dynamic range (DR), and other factors have a significant impact on an image.
Quantum Efficiency (QE)
Light digitisation begins with the conversion of photons into electrons. Quantum efficiency (QE) is the ratio of electrons produced to photons received during this process. For example, if 6 photons land on a sensor and 3 electrons are produced, the sensor is said to have a QE of 50%.
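Expressed as a tiny Python helper, this is just the ratio from the example above; nothing here is specific to any real sensor.

def quantum_efficiency(photons: int, electrons: int) -> float:
    """Fraction of incident photons that are converted into electrons."""
    return electrons / photons

print(quantum_efficiency(6, 3))   # 0.5, i.e. 50% QE as in the example above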
Saturation & Full Well Capacity
The photodiode’s construction enables incident light to produce a photocurrent, which is then translated into a voltage for reading. Despite the nearly linear relationship between a photodiode’s irradiance and the generated photocurrent, the photocurrent has a maximum value that it cannot surpass, regardless of the amount of photon energy present.
The saturation of a photodiode is set by this greatest possible current: the photocurrent saturates when all photogenerated charge carriers (free electrons and holes) are being extracted from the semiconductor. Pixel size also affects the saturation level: the small pixels of a high-resolution sensor saturate at a lower level, while increasing the pixel size raises the saturation level.
When incident photons are converted into electric charge, each sensor element in a CCD can hold only a maximum amount of charge, known as the full well capacity. Modern camera sensors are designed with this limit in mind: where the brightest parts of the scene generate more charge than the full well capacity, the excess can spill into nearby locations.
If the well receives more electrons than its saturation capacity, the surplus electrons are not stored. Saturation capacity is the number of electrons an individual pixel can store, and it corresponds to the highest irradiance (the saturation irradiance) whose photons can still be registered. In a bright scene some pixels are very likely to overflow, and such pixels hold less information about the scene than normal pixels. For this reason it is usually advised to choose an exposure that places the brightest part of the scene just below saturation. Underexposure, on the other hand, produces a lot of noise, so good exposure means balancing these competing objectives. The relationship between noise and saturation defines the dynamic range of the sensor, the range of light levels that can be captured acceptably in a single exposure.
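A small sketch of this clipping behaviour follows, with an assumed full-well capacity chosen purely for illustration.

FULL_WELL = 15_000   # assumed full-well capacity, in electrons

def stored_charge(generated_electrons: int) -> int:
    """Electrons actually retained by the pixel well."""
    return min(generated_electrons, FULL_WELL)

for electrons in (5_000, 15_000, 40_000):
    print(electrons, "->", stored_charge(electrons))
# 5,000 and 15,000 are recorded distinctly, but 40,000 clips to 15,000:
# everything brighter than saturation reads the same, so highlight detail is lost.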
Dynamic Range (DR)
The number of electrons that can be gathered per well determines the sensor’s dynamic range, the range of brightness from black to white, and with it the camera’s ability to record detail in both the bright and dark portions of a scene. A sensor with a large full well capacity typically has a wide dynamic range, and low-noise sensors increase dynamic range further and improve definition in dimly lit environments. Keep in mind that the camera’s full dynamic range is only available at base ISO. Dynamic range (DR) is the ratio of the saturation irradiance to the lowest measurable irradiance, and it is expressed in dB.
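As a rough worked example, the ratio can be converted to decibels as below; the full-well and noise-floor electron counts are illustrative assumptions, not values for a specific camera.

import math

def dynamic_range_db(full_well_electrons: float, noise_floor_electrons: float) -> float:
    """Dynamic range: ratio of the largest to the smallest measurable signal, in dB."""
    return 20 * math.log10(full_well_electrons / noise_floor_electrons)

print(round(dynamic_range_db(20_000, 4), 1))   # about 74.0 dB for these assumed values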
The size of the pixel (photosite)
Sensors are made up of numerous, identically sized pixels. The spacing of the pixels is described by the pixel pitch: the distance from the centre of one pixel to the centre of the next, measured in microns. Because there can be small gaps between pixels, the pixel pitch does not give the “actual” width of a pixel.
Let’s look at another illustration of how the size of the sensor and its pixels affects the image. Put a glass, a cup, and a bucket out in the rain to catch raindrops: the wider openings will collect far more water. You can imagine photons as raindrops. As photons enter the sensor, it absorbs them and releases electrons, and how many electrons fit depends on the pixel size, just as the bucket holds more water than the glass.
It is commonly accepted that larger pixels capture more light and that smaller pixels make a sensor less sensitive to light. The signal capacity per pixel (the full well capacity) is higher with larger pixels. Imagine one camera with large pixels and another whose pixels are half the size: as long as the lens gathers the same amount of light (its diameter is the same) and no pixel overflows, the two deliver equivalent overall performance, because the total amount of rain (photons) falling on the sensor is the same. Depending on the camera technology, sensor and camera electronics noise can still make one camera perform better or worse than another, with a camera with larger (and therefore fewer) pixels sometimes performing better in low light. A short numerical sketch of the per-pixel difference follows.
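With the same light per unit area, the photons collected by a pixel scale with its area, while photon shot noise grows only with the square root of the signal, so the larger pixel enjoys a higher per-pixel signal-to-noise ratio. The photon density below is an arbitrary illustrative figure.

import math

PHOTONS_PER_UM2 = 100   # assumed photons landing per square micrometre of sensor

def per_pixel_signal(pixel_pitch_um: float):
    """Photons collected by one pixel and its shot-noise-limited SNR."""
    photons = PHOTONS_PER_UM2 * pixel_pitch_um ** 2
    shot_noise = math.sqrt(photons)        # photon shot noise (Poisson statistics)
    return photons, round(photons / shot_noise, 1)

print(per_pixel_signal(6.0))   # large pixel:  (3600.0, 60.0)
print(per_pixel_signal(3.0))   # small pixel:  (900.0, 30.0)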
Microlens
Modern image sensors include a tiny microlens above each pixel to maximise the light collected. These microlenses gather and concentrate the light reaching the sensor onto each photodiode. A microlens is, in effect, a miniature single-element lens.
Photodiode
Photons become electrons through the photoelectric effect; that process transforms light into electric charge. The photodiode sits at the centre of each pixel, and the wire and circuit layers that lie between them support and connect the operation of the photodiode and microlens.
The photosensitive region is smaller than the whole pixel area; supporting structures such as the chip substrate and supply electronics (mainly ultra-thin metal wires) make up the rest of the pixel. A photodiode is a semiconductor component whose particular property is its ability to detect light. Photodiodes are typically made of silicon.
Amplifier
On a CMOS sensor, each pixel contains its own amplifier, typically tucked into a corner of the pixel. On a CCD sensor, by contrast, there is only one amplifier, located after the sensor chambers, so the individual pixels do not have separate amplifiers. The amplifier performs a variety of tasks during read-out. The charge of the nearest sensor in the first row is read, amplified, and passed on; the sensor cavity (a pixel or photosite) is left empty once its charge is released. The next photosite in the first row follows suit, and in this way each subsequent charge is read and processed in turn.
The same process continues until one line of sensor chambers has been emptied. All the charged rows then migrate downward by one row, leaving the top row empty. The entire operation is frequently completed in less than a second. Before transmission, the amplifier unit tags each charge from the sensor cavities so that every bit of information can be reassembled in exactly the same order to create the image.
Potential Well
The free electrons are gathered and counted in a region resembling a bucket, called the potential well. The number of electrons each pixel’s well can gather is limited, and this limit is called the full well capacity. Returning to the rain analogy: open and close the buckets only briefly while it rains, and when you glance inside there are hardly any raindrops to be seen. Try again, this time leaving the buckets open for longer, and nearly all of them can be seen to be filled.
If we attempt to measure precisely, we might discover 5,000 droplets in one bucket, 4,800 in another, and 4,000 in a third, yet after a quick look most people would think they are all the same. In the same way, the light (photons) striking the sensor enters the potential well as electrons.
Silicon substrate
All of the components of a sensor are bonded to a silicon substrate, a base made of silicon semiconductor. Both the structure and the technology of both sensor types have improved greatly. CMOS devices run at substantially lower power levels than CCDs, and CMOS technology is more affordable than CCD. A CCD relies on a single separate amplifier and other supporting components, whereas a CMOS sensor has an amplifier at each photocell.
Image Sensor Format (Size)
The physical dimensions, or surface area, of a sensor dictate the number and size of its pixels. The actual size of the sensor matters just as the size of the canvas matters to a painting: a camera’s sensor determines the quality, quantity, and size of the images it can collect. Increasing the number and size of pixels allows you to take photographs in low light with less noise, a wider dynamic range, and more information, because as the surface area of a photosite (pixel) rises, it collects more light (photons). But bigger doesn’t always mean better! Many digital cameras are now commercially available with a variety of sensor sizes, classed as full-frame, APS-C format, or crop sensor depending on their size.
A camera sensor’s size is measured along its diagonal: the straight-line distance, commonly known as the hypotenuse, from the upper right corner to the lower left corner. A full-frame 35 mm format camera has a standard sensor size of 36 mm × 24 mm. The crop factor is a dimensionless reference number for image sensors; crop-frame image sensors are smaller than full-frame camera sensors. The physical dimensions of a sensor, its width and height, are measured in mm, and the diagonal distance can be computed simply with the Pythagorean theorem.
Diagonal distance = sqrt((width ^ 2) + (height ^ 2)), where sqrt denotes the square root.
For example, the diagonal distance (hypotenuse) of a full-frame sensor of 36 mm by 24 mm is sqrt((36 ^ 2) + (24 ^ 2)), which comes to about 43.3 mm.
Camera crop factor = 43.3 / camera sensor diagonal distance
A full-frame camera therefore has a crop factor of 1 (43.3 mm / 43.3 mm). A short calculation sketch follows the list of common crop factors below.
Full frame sensor crop factor = 1
APS-H sensor crop factor = 1.29
APS-C sensor crop factor = 1.5 to 1.6 depending on the model
Foveon sensor crop factor = 1.73
Micro 4/3 sensor crop factor = 2
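A minimal Python sketch of these two calculations; the APS-C dimensions of 23.5 mm × 15.6 mm used in the example are a typical value, not a universal standard.

import math

FULL_FRAME_DIAGONAL = math.hypot(36, 24)   # about 43.3 mm for a 36 mm x 24 mm sensor

def diagonal_mm(width_mm: float, height_mm: float) -> float:
    """Sensor diagonal via the Pythagorean theorem."""
    return math.hypot(width_mm, height_mm)

def crop_factor(width_mm: float, height_mm: float) -> float:
    """Crop factor relative to the full-frame diagonal."""
    return FULL_FRAME_DIAGONAL / diagonal_mm(width_mm, height_mm)

print(round(diagonal_mm(36, 24), 1))        # 43.3 mm (full frame, crop factor 1)
print(round(crop_factor(23.5, 15.6), 2))    # about 1.53 for a typical APS-C sensor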
Fisheye view reminds us that illusion is not flat. Photography: Abin Alex | Camera: Canon EOS 5D Mark IV, Focal length: 15mm, Aperture: f/11, Shutter speed: 1/250, ISO: 100
© 2013 Abin Alex. All rights reserved. Reproduction or distribution of this article without written permission from the author is prohibited. Abin Alex is the director and founder of the Creative Hut Institute of Photography and Film. In addition, he is the founding chairman of the National Education and Research Foundation. He is a well-known Indian visual storyteller and researcher. He served as Canon’s official Photomentor for eight years. He has trained over a thousand photographers and filmmakers in India.