In many places I see a fundamental lack of understanding about the role of software in digital photography.
A common comment I hear these days is "That's a good photo. It must be all done in Photoshop." Usually from people who have little or no idea how such software works.
So, let's start from the beginning, shall we?
Light bouncing off the subject enters the lens of the camera and is focussed onto a light-sensitive sensor. The amount of light is controlled by the shutter speed, which determines how long the light affects the sensor, and by a variable aperture, which determines how much light is allowed through the lens.
On most cameras, the sensor itself is made up of many individual receptors, each filtered to receive light of one colour: red, green or blue. In a standard Bayer array the green receptors outnumber the red or the blue by a factor of 2:1, which mimics our own eyes' increased sensitivity to the green part of the visible spectrum.
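To make that layout concrete, here's a minimal Python sketch (purely illustrative, not from any camera's firmware) of a standard RGGB Bayer tiling; count the letters in any 2x2 block and you'll see the two-greens-to-one ratio.

```python
# Illustrative sketch of a standard Bayer colour filter array (RGGB tiling).
# Each receptor sits under exactly one colour filter; every 2x2 tile
# contains two greens, one red and one blue -- the 2:1 ratio mentioned above.

def bayer_filter(row, col):
    """Return the filter colour over the receptor at (row, col)."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Print the filter layout for a tiny 4x4 patch of the sensor:
for row in range(4):
    print(" ".join(bayer_filter(row, col) for col in range(4)))
# R G R G
# G B G B
# R G R G
# G B G B
```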
Each receptor can only record the amount of light that falls upon it, and this information is recorded by the camera's image processor by turning circuits on or off. The processor usually has 12 such circuits, or sometimes 14, per receptor, and it is convenient to think of this record as a binary number with either 12 or 14 digits, or "bits". (In 12 bits, 000000000000 would represent no light falling on the receptor and 111111111111 would represent the maximum light the receptor can record.)
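If you want to see what those bit depths mean in terms of distinguishable brightness levels, a couple of lines of Python (just arithmetic, nothing camera-specific) will show it:

```python
# How many distinct brightness levels a 12- or 14-bit recording can hold.
for bits in (12, 14):
    print(f"{bits}-bit: values 0 to {2**bits - 1} ({2**bits} levels)")
# 12-bit: values 0 to 4095 (4096 levels)
# 14-bit: values 0 to 16383 (16384 levels)

# The two binary patterns from the text, interpreted as 12-bit numbers:
print(int("000000000000", 2))  # 0    -> no light recorded
print(int("111111111111", 2))  # 4095 -> maximum the receptor can record
```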
This process is repeated for each receptor on the sensor array (usually millions of them), and each number is passed on to a second processor, which either packages the data into a RAW file for conversion into an image by external software, or makes the conversion itself using algorithms pre-specified by the manufacturer.
The processor knows which filter each receptor lies under, so it can assign its binary number to one of three channels: red, green or blue. By analysing data from the neighbouring receptors it interpolates to estimate how much of the other two colours of light is likely to have fallen on that same receptor, and assigns binary numbers to the other two channels based upon that estimate.
So, for every receptor on the camera sensor we end up with three 12- or 14-bit numbers. For example, 011111111111 011111111111 000000000000 would represent yellow light falling on the receptor (half-strength red, half-strength green, no blue).
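Here is a toy Python sketch of that neighbour-based estimation (the raw values are invented for illustration; real converters use far cleverer algorithms than this simple averaging):

```python
# Toy demosaicing: estimate the two missing colours at one receptor by
# averaging the neighbouring receptors that sit under the relevant filters.
# A tiny 3x3 patch of invented 12-bit raw values under an RGGB layout:
#   R G R
#   G B G
#   R G R
raw = [
    [2047,  900, 2010],
    [ 950,  120,  940],
    [2000,  920, 2047],
]

# Build a full RGB triple for the centre receptor, which sits under BLUE:
b = raw[1][1]                                              # measured directly
g = (raw[0][1] + raw[1][0] + raw[1][2] + raw[2][1]) / 4    # four green neighbours
r = (raw[0][0] + raw[0][2] + raw[2][0] + raw[2][2]) / 4    # four diagonal reds

print(f"R={r:.0f} G={g:.0f} B={b}")  # R=2026 G=928 B=120
```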
If the camera is set to produce JPEG files, the data is then reduced to just 8 bits per channel, throwing some of it away to save space. This is part of the JPEG compression process, and it is irreversible.
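A quick sketch shows why the reduction can't be undone: keeping only the top 8 of 12 bits maps sixteen different originals onto each surviving value (hypothetical values, Python for illustration only).

```python
# Dropping from 12 bits to 8 keeps only the top 8 bits; sixteen distinct
# 12-bit inputs collapse onto every 8-bit output, so the originals are gone.

def to_8bit(value_12bit):
    return value_12bit >> 4  # discard the lowest 4 bits

print(to_8bit(2047))  # 127
print(to_8bit(2040))  # 127 -- a different original, same result
print(to_8bit(2032))  # 127 -- no way to recover which input produced it
```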
This is why a JPEG file is incapable of holding as much image information as a RAW file.
If the camera is set to produce RAW files, then the "image" is still just a series of single-channel binary numbers. The picture you see on the back of the camera is just a simulation of the JPEG image your camera could produce if it were set that way.
The RAW data needs to be converted into an image by software on a computer, which will interpolate the data with the help of an intelligent processing system called a "brain". (This is the part that is lacking in most cameras, although sometimes it can be found, unconnected, just a few inches away.)
Such software is often provided by the camera manufacturer or by independent companies such as Adobe or Phase One.
Adobe's product "Lightroom" is one of the most popular, so it will serve as a suitable example.
In Lightroom the RAW data is interpolated into its three-channel format and displayed on a standard monitor through a simulation profile. (The data is still 12 or 14 bit, although the monitor can usually only display 8-bit colour. This translation is part of colour management, a whole kettle of fish I don't want to get deeply into here.)
The software is capable of applying algorithms similar to those the manufacturer applied in camera, but since the manufacturer has no idea what the camera was pointing at when you pressed the button, that would be a very unintelligent way to make the conversion.
Statistically, the manufacturer may decide that your subject is probably a bunch of your friends in a karaoke bar in the Far East, whereas in my case it would more likely be a British or Scandinavian landscape.
The way the data needs to be interpreted in these two cases is entirely different, and it is this difference of interpretation that overwhelmingly lies behind many people's disappointment in the results they get from their cameras.
So many times I have heard people look at their picture and say that the colours were different, or that there was more detail, when they were actually there. The fact of the matter is that they are probably correct, but the camera's internal processor thought it was in a karaoke bar and tried to make the picture look that way.
The really sad thing is that many people start to doubt their own senses or memory because they bought into one of the greatest lies ever told: "The camera never lies." It does. It's designed to. It was even the same in the days of film: different films recorded colour or tone differently, and photographers chose their film according to the characteristics they needed.
It's exactly the same with digital cameras. You could line up a dozen different cameras pointing at the same subject, trigger them all at the same moment, and the results would all be different.
The camera you buy is often an indicator of your demographic, and manufacturers tune their assumptions about the type of photograph you are likely to take accordingly.
My camera was designed almost specifically for travel and landscape photography. It is the most accurate for my kind of subject that I have ever used, but it would take lousy pictures in a karaoke bar.
Coming back to our Lightroom conversion: the software allows us to tweak the algorithms to adjust the exposure, tonality and colour, using the intelligent processor behind our own eyes to produce an interpretation that may or may not match our perception of the scene we pointed the camera at.
Some people choose to exaggerate certain aspects of the image for effect; some strive for a faithful rendition. In practice most of us fall somewhere in between, because perception itself is subjective.
The image displayed on the screen at this point is still just a simulation, the accuracy of which depends on whether you are using a calibrated monitor. The data needs to be converted into a file format that can be read by other programs. This could be a compressed format such as JPEG for display on the internet, or a format such as a 16-bit TIFF, which is capable of containing all the data recorded by the camera. (This is the option I usually use, only converting small, low-quality display copies to JPEG for the internet.)
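A back-of-envelope comparison (plain Python, just arithmetic) shows why the 16-bit TIFF option loses nothing while the 8-bit JPEG must discard data:

```python
# Levels per channel: can the container hold everything the sensor recorded?
camera_levels = 2 ** 14   # 16,384 levels from a 14-bit sensor
tiff_levels   = 2 ** 16   # 65,536 levels per channel in a 16-bit TIFF
jpeg_levels   = 2 ** 8    # 256 levels per channel in an 8-bit JPEG

print(tiff_levels >= camera_levels)  # True  -> nothing need be thrown away
print(jpeg_levels >= camera_levels)  # False -> data must be discarded
```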
Now, at last, we have something that is closer to a picture even though it still needs to be interpreted by a computer to be displayed. (Let's hope the person viewing the image has also calibrated their monitor or they may see something very different from what you are seeing.)
This is the image that can now be edited in Photoshop.
It's a long way from that light bouncing off the subject now.
Photoshop is another software product made by Adobe. In the right hands it is capable of producing fantastic images, or of correcting inaccuracies produced by the limitations of the photoelectric systems in our cameras.
One person might use it to depict a unicorn winning the Grand National, while another might use it to combine differently exposed frames to record the wider range of tones that our eyes can see but a camera sensor cannot.
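For the curious, here is a deliberately crude sketch of that second idea, weighting each frame's pixel by how well exposed it is (invented 8-bit sample values; real blending tools are far more sophisticated):

```python
# Toy exposure blending: weight each frame's pixel by how far it sits from
# the clipped extremes (0 and 255), so well-exposed pixels dominate the
# blend. The sample values are invented purely for illustration.

def well_exposedness(px):
    return min(px, 255 - px) + 1  # highest for mid-tones, lowest near clipping

bright = [255, 255, 180, 90]  # long exposure: sky blown out
dark   = [200, 190,  60, 20]  # short exposure: shadows crushed

blended = [
    round((b * well_exposedness(b) + d * well_exposedness(d))
          / (well_exposedness(b) + well_exposedness(d)))
    for b, d in zip(bright, dark)
]
print(blended)  # [201, 191, 127, 77]
```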
One use is no more worthy than the other; it comes down purely to the skill and intention of the photographer.
My intention is to record my impression of what it was actually like to be in a particular place at a particular time. Sometimes I can do that with one frame; sometimes it requires more. Sometimes I may remove distracting elements; sometimes I may add or change something that I could not render properly at the time. In the end it all comes down to integrity. I have all the skills and technology needed to produce fantasy images if I wanted, but I choose to illustrate reality instead.