iPhone vs. Polaroid: In-Depth Comparison
How does a film camera capture an image?
To understand the Polaroid camera, we first have to understand the basics of a film camera. The Polaroid wouldn't be possible without the invention and evolution of the film camera. The film in a film camera is light-sensitive because it consists of a silver halide emulsion. The way film works is that the shutter opens and light bouncing off colored objects (from the scene you are photographing) hits the film, causing the silver halide structures to undergo a change.
Film has silver halide crystalline structures suspended in gelatin over the celluloid. When you see a film photo and notice its grainy look, it's because of that structure. To understand these basic principles, let's look at black and white film.
How does film develop into Black and White Photography?
When we photograph a scene, the shutter opens, and a photon (a particle of light) comes into contact with the silver halide crystal structure on the film, ejecting an electron from the valence band of the halide into the conduction band of the crystal.
This electron then combines with a mobile silver ion to form a silver atom. The place where this occurs is called the latent image center. This process of turning a silver ion into a silver atom needs to happen three or four more times before the latent image center is stable. The latent image is invisible to the eye while the film is still in the camera.
However, when you develop the film, these latent images are amplified and stabilized to create a negative. The film is then no longer light-sensitive, and the atomic silver becomes metallic silver, creating visible dark areas thanks to its dark color.
The number of silver atoms produced is proportional to the amount of light exposure: more light means more dark areas on the film negative. When you are finished, you have a negative image of the original scene, the tonal opposite of what we see. Each color in the scene is represented by a different density of metallic silver; these are tonal representations rather than colored ones.
At this point, you have the negative, and you have to turn it into a positive, which is an actual depiction of the scene. You do this by inverting the image, whether by scanning the negatives yourself, taking them elsewhere for scanning and printing, or even making your own prints.
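To make the inversion step concrete, here is a minimal Python sketch of flipping a scanned black-and-white negative into a positive. It assumes an 8-bit grayscale scan and uses Pillow and NumPy; the file names are just placeholders for this example.

```python
# A minimal sketch of inverting a black-and-white negative digitally,
# assuming an 8-bit grayscale scan (file names are hypothetical).
from PIL import Image
import numpy as np

negative = np.asarray(Image.open("bw_negative_scan.png").convert("L"))

# Dense (dark) silver means lots of light in the original scene, so flipping
# each value around the 8-bit maximum recovers the positive tones.
positive = 255 - negative

Image.fromarray(positive.astype(np.uint8)).save("bw_positive.png")
```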
With color photography, it's even more complicated. Color film uses silver halide just like black and white photography, but color film has three layers of emulsion. Each layer has a different light-sensitive dye mixed with silver halide that is sensitized so that each layer only captures either red, green, or blue light.
But how do we permanently capture the color? We need to record light and then turn that representation into pigment, which requires understanding subtractive primaries.
Color Photography: Light and Pigment
First, we need to understand that additive color is light: red, green, and blue. Mixed in pairs, they create the secondary colors cyan, magenta, and yellow. That's where subtractive color comes in; subtractive color is pigment. Remember, with light, when you combine all three you get white light, but with pigment, when you combine them all you get black.
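A tiny sketch can make that relationship concrete. Everything below is illustrative: values are on a 0.0-1.0 scale, and the helper names are made up for this example. Mixing all three additive primaries gives white light, and the pigment that absorbs a given color of light is that color's complement.

```python
# Additive primaries as (R, G, B) triples on a 0.0-1.0 scale.
red, green, blue = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)

def mix_light(*colors):
    # Additive mixing: light adds up (clamped at white).
    return tuple(min(1.0, sum(c[i] for c in colors)) for i in range(3))

def pigment_for(light_color):
    # A pigment that absorbs a given color of light reflects its complement.
    return tuple(1.0 - channel for channel in light_color)

print(mix_light(red, green, blue))   # (1.0, 1.0, 1.0) -> white light
print(pigment_for(blue))             # (1.0, 1.0, 0.0) -> yellow, which absorbs blue
```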
Let's circle back. We want all the colors in white light to be absorbed as the light passes through the film and activates the crystals, without any of it bouncing back out. That means we have to capture and absorb each specific color of light. Think of the film as an elevator: the light enters at the top and travels down, and we want each color of light to exit at its appropriate level.
The blue-sensitive layer is at the top, the green-sensitive layer is in the middle, and the red-sensitive layer is at the bottom, one layer for each primary color of light. Don't forget that colored light reflects off objects of the same color. That means you need to find a color that absorbs your target color rather than reflecting it.
Remember in art class when your teacher explained that the opposite color always sits directly across on the color wheel? Those complementary pigments don't bounce their opposite color back, they absorb it. So these pigments will catch our colored light: the silver halide crystals in each layer are coated with a dye sensitized to one color of light.
That way, only that color will be removed at the appropriate level, and the rest of the light is filtered through. The light reaction occurs, and silver ions are changed to silver atoms. The colored film undergoes multiple processes, eventually leaving only the colored dye in the exact spot where the silver had been created.
So now, you have a negative color image of your scene. It looks crazy because of the odd color pigment. But all you have to do now is invert the image! You can do this by scanning your negatives yourself, taking them elsewhere for scanning and printing, or even making your own prints. Color correction can be done in several ways using levels, curves, or the color balance tool in either the scanning software or any kind of post-processing software. That's an amazing process! But it's so much work.
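For illustration only, here is a rough Python sketch of that invert-and-correct step on a scanned color negative. The file names are placeholders, and the simple per-channel levels stretch merely stands in for the levels, curves, or color balance tools mentioned above; it is not a full orange-mask correction.

```python
# A rough sketch of inverting a scanned 8-bit RGB color negative and applying
# a crude per-channel levels stretch (illustrative, not a complete workflow).
from PIL import Image
import numpy as np

scan = np.asarray(Image.open("color_negative_scan.png").convert("RGB")).astype(np.float32)

inverted = 255.0 - scan  # flip each channel, just as with black and white

# Crude per-channel levels: stretch each channel between its darkest and
# brightest values, which removes much of the negative's overall color cast.
lo = inverted.min(axis=(0, 1), keepdims=True)
hi = inverted.max(axis=(0, 1), keepdims=True)
balanced = (inverted - lo) / np.maximum(hi - lo, 1e-6) * 255.0

Image.fromarray(balanced.astype(np.uint8)).save("color_positive.png")
```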
But then came the Polaroid, and in a Polaroid camera, the process happens within ten minutes.
Capturing an Image Through a Polaroid Camera
As before, you have a light-sensitive silver halide layer at the back of the film. When you take the picture, the exposed areas develop into a negative, which binds to dye; the remaining, unused dye is naturally the inverse of that, forming a positive image.
It then transfers this positive through the white layer, which also acts as the base white color of the image; this is why the film takes a moment to even start showing the image. The 'development' of the positive is basically just bringing the unused dye to the front. At the same time, you can see a black layer on the back, which is applied immediately at ejection: the film is still light-sensitive for the next 10-20 seconds, and if any light reached it, it would show up on the final result. For the image to develop fully, the film needs to stay in the dark.
Shot on Polaroid Camera
How does the iPhone capture an image?
Let’s dive into how an iPhone camera captures a photo, which involves a completely different process. When light enters through the camera lens, the 'target' in the iPhone camera is a complementary metal-oxide-semiconductor (CMOS) sensor. A CMOS sensor is a silicon-based sensor that receives light (photons) and converts the photons into electrons, then into a voltage, and finally into a digital value using an on-chip Analog-to-Digital Converter (ADC).
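To make that chain concrete, here is a toy model of a single pixel's readout. The quantum efficiency, full-well capacity, and 12-bit ADC depth below are assumed round numbers for illustration, not Apple's actual sensor specifications.

```python
# A toy model of the photon -> electron -> voltage -> digital value chain.
# All constants are illustrative assumptions, not real iPhone sensor specs.
def pixel_readout(photons: int,
                  quantum_efficiency: float = 0.7,
                  full_well_electrons: int = 6000,
                  adc_bits: int = 12) -> int:
    # Photons free electrons (not every photon does), up to the pixel's capacity.
    electrons = min(int(photons * quantum_efficiency), full_well_electrons)
    # The accumulated charge is read out as a voltage proportional to the
    # electron count, which the on-chip ADC maps to a digital value.
    levels = 2 ** adc_bits - 1
    return round(electrons / full_well_electrons * levels)

print(pixel_readout(100))    # dim pixel -> small digital value
print(pixel_readout(10000))  # bright pixel -> clipped at the maximum (4095)
```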
The image sensor is composed of light-sensitive pixels that detect light. In the iPhone 15 Pro, the main camera has a 48-megapixel sensor, which translates to 48 million pixels.
A microlens and a Bayer mosaic filter are placed on top of the sensor. For the color sensor example shown below, the color filter array employed is a Bayer filter pattern. Each filter in the pattern passes only one color of light to the pixel beneath it, and the pattern uses 50% green, 25% red, and 25% blue filters. In each pixel, there is a photodiode. A photodiode absorbs photons and converts that absorbed energy into electricity. The iPhone then reads the electrical signal row by row, one row at a time.
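Here is a small sketch of what the Bayer filter does to the data: each pixel keeps only one channel of the scene. The RGGB layout below is an assumption made for illustration; it does not claim to reproduce the exact layout of Apple's sensor.

```python
# A minimal sketch of Bayer filtering: each pixel records a single channel.
# The RGGB layout is an illustrative assumption.
import numpy as np

def bayer_mosaic(rgb: np.ndarray) -> np.ndarray:
    """Reduce an H x W x 3 image to the single raw value each pixel records."""
    h, w, _ = rgb.shape
    raw = np.zeros((h, w), dtype=rgb.dtype)
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red at even rows, even columns
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green (half of all pixels are green)
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue
    return raw
```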
The analog-to-digital converter interprets the buildup of electrons and converts it into digital values. Once all 48 million values are stored, the overall image is sent to the CPU for processing and then saved to the phone's flash storage in the form of 0s and 1s.
What is really important about this process is how the iPhone interprets color. Semiconductor pixels don’t see color; they only capture the amount of light that hits them, so without a filter, you will get a black and white photo. The Bayer filter ensures that the light reaching each pixel is one of the three primary colors.
Using the Bayer mosaic filter, there are twice as many green filters as red or blue ones, catering to the human eye's higher sensitivity to green light. Since each pixel of the sensor sits behind a single color filter, the output is an array of pixel values, each indicating the raw intensity of one of the three filter colors. An algorithm is therefore needed to estimate, for each pixel, the levels of all three color components rather than just one. This process is called demosaicing, and the simplest demosaicing algorithms average the input of nearby pixels to obtain an estimate of the full color.
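To show the idea, here is a simple averaging demosaic in that spirit, written against the RGGB mosaic from the earlier sketch. Real demosaicing algorithms, including whatever Apple ships, are far more sophisticated than this.

```python
# The simplest demosaicing described above: estimate each missing color at a
# pixel by averaging whatever nearby pixels did record that color (RGGB layout).
import numpy as np

def demosaic_bilinear(raw: np.ndarray) -> np.ndarray:
    h, w = raw.shape
    # Masks marking which channel each pixel actually recorded.
    masks = np.zeros((h, w, 3), dtype=np.float64)
    masks[0::2, 0::2, 0] = 1  # red
    masks[0::2, 1::2, 1] = 1  # green
    masks[1::2, 0::2, 1] = 1  # green
    masks[1::2, 1::2, 2] = 1  # blue

    rgb = np.zeros((h, w, 3), dtype=np.float64)
    for c in range(3):
        known = raw * masks[:, :, c]
        padded_vals = np.pad(known, 1)
        padded_mask = np.pad(masks[:, :, c], 1)
        sums = np.zeros((h, w))
        counts = np.zeros((h, w))
        # Sum the known samples and their count in each 3x3 neighborhood.
        for dy in range(3):
            for dx in range(3):
                sums += padded_vals[dy:dy + h, dx:dx + w]
                counts += padded_mask[dy:dy + h, dx:dx + w]
        rgb[:, :, c] = sums / np.maximum(counts, 1)
    return rgb
```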
Shot on iPhone 15 Pro Max
Film vs. iPhone Shoot-Out
So now that you know the difference between film and a CMOS sensor, let's see the physical differences in photographing the same scenes and printing them out.
The images below were captured with:
- Polaroid Now + 2nd Gen, which allows you to use an app to manually adjust exposure and shutter speed on your Polaroid.
- iPhone 15 Pro Max, which has a 48 MP camera.
Shot on Polaroid Camera
Shot on iPhone 15 Pro Max
After looking at these photos, the real takeaway is that the faded nostalgic look of the Polaroid photos is visually appealing. The muted colors, crushed blacks, and grainy texture give a dreamy look that is actually quite enjoyable to experience. It's almost as if my eyes can delve deeper into the picture, making it easier to scan for the little details and imperfections.
The Polaroids feel almost like paintings, with softer depictions of the scene and natural blending of colors compared to those of the iPhone. Looking at the iPhone shots, they are still enjoyable, but the contrast of the colors and intense saturation pull my eyes in many different directions. Every spot of the photo seems important and sharp, competing for my attention.
Shot on Polaroid Camera
Shot on iPhone 15 Pro Max
Although the Polaroids were more eye-catching in the above photos, that changed once the lighting became more dynamic at the beach. The Polaroid could not handle the low light at the beach, even when we adjusted the settings in the app. The added cloud cover made it more difficult for the Polaroid camera to capture much of the scene compared to the iPhone, which easily exposed both the foreground and background of each photo.
Since we know how the iPhone sensor works, it's no surprise that it's capable of shooting in a low-light environment.
We hope this helps you gain a better understanding of the mechanics behind a Polaroid camera and an iPhone. Be sure to check out our video above for a deeper dive.
Happy shooting!