Prerequisites: basics of the Fourier transform, and high-pass filters in the context of image processing.
I will be talking about this filtering method in terms of using it as an enhancement technique, but it can be leveraged for other uses as well.
The Wikipedia page and this blog explain this entire concept thoroughly.
First, let me start with the illumination-reflectance model of image formation, which says that the intensity of any pixel in an image (the amount of light reflected by a point on the object and captured by an imaging system) is the product of the illumination of the scene and the reflectance of the object(s) in the scene:

$$f(x,y) = i(x,y)\,r(x,y)$$

where $f(x,y)$ is the image, $i(x,y)$ is the scene illumination, and $r(x,y)$ is the scene reflectance. Reflectance arises from the properties of the scene objects themselves, while illumination results from the lighting conditions at the time of image capture.
This model should make sense because reflectance is basically the ratio of light reflected to light received (illumination). But in case the equation still does not feel intuitive, consider this image.
Here the image is the collection of values that tell us the intensity of light reflected from each point. For example, the golf ball reflects more light, while the hole reflects less. These values form a matrix in the shape of the image, $f(x,y)$.
Now what causes the golf ball to reflect more light than the grass? It's a property of the material that decides which wavelengths of light are reflected and which are absorbed. We call a measure of this property reflectance, and the matrix $r(x,y)$ holds that value at each of these points.
The imaging system (our camera) captures the light reflected from each point, which is the product of the illumination hitting that point with the reflectance property of the material at that point.
So what we actually want is only that property, $r(x,y)$, and it will be faithfully reflected in the image matrix only if the illumination on the objects is uniform. If we hide the golf ball under a shadow while shining light on the grass, the image will show the golf ball as less white than it actually is.
Now, in such images taken under non-uniform illumination, the illumination typically varies slowly across the image, whereas the reflectance can change quite abruptly at object edges.
If the light source is positioned so that it evenly illuminates the scene, then we won't have the problem of irregular illumination. But consider it placed in a direction such that part of the scene is highly illuminated while the remaining part is in shadow. It should be obvious that, due to the properties of light, the two areas are separated by a fuzzy region that is neither fully illuminated nor fully in shadow. This is what I mean by "illumination typically varies slowly across the image". For example, check out the photo at the bottom of the post.
But objects stand out from the background because at object edges the intensity of reflected light changes abruptly compared to the background; for example, black text has high contrast against a white background.
So as far as the image is concerned, the uneven illumination $i(x,y)$ is the noise (multiplicative noise, because $i(x,y)$ is multiplied with $r(x,y)$) that we wish to remove, because the illumination term is what causes part of the scene to be in shadow. And $r(x,y)$ is the signal that we need: it holds the information about the objects' visual properties (how much a particular point absorbs/reflects light).
In order to make this removal easy, we shift the image from the spatial domain $f(x,y)$ to the log domain $z(x,y)$:

$$z(x,y) = \ln f(x,y) = \ln i(x,y) + \ln r(x,y)$$

This property of the log, turning a product into a sum, makes the multiplicative components separable by linear operations in the frequency domain.
Illumination variation can thus be thought of as multiplicative noise in the spatial domain, while it is additive noise in the log domain.
Now the question is why additivity (linearity) matters: the point is that we need to be able to separate $i(x,y)$ and $r(x,y)$, so that we can remove $i(x,y)$ without harming $r(x,y)$.
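The product-to-sum effect of the log can be checked numerically. The tiny arrays below are made up purely for illustration:

```python
import numpy as np

# Toy 2x2 "image": slowly varying illumination i times reflectance r
# (these values are hypothetical, chosen only to illustrate the model)
i = np.array([[0.2, 0.9],
              [0.4, 1.0]])        # illumination
r = np.array([[0.05, 0.95],
              [0.10, 0.90]])      # reflectance, abrupt at object edges
f = i * r                         # the captured image

# In the log domain the product becomes a sum, so the two components
# can now be separated by linear operations
z = np.log(f)
assert np.allclose(z, np.log(i) + np.log(r))
```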
As we can see, the Fourier transform of the image (shifting it from the spatial domain $f(x,y)$ to the frequency domain $F(u,v)$) does not separate the illumination and reflectance components. Rather, the multiplicative components become convolved in the frequency domain, as per the convolution theorem (up to a normalization constant):

$$F(u,v) = I(u,v) * R(u,v)$$

But the log of the spatial-domain image, $z(x,y)$, when shifted to the frequency domain, lets us express the image (now $Z(u,v)$) as the linear sum of the two components, because the Fourier transform is linear:

$$Z(u,v) = \mathcal{F}\{\ln i(x,y)\} + \mathcal{F}\{\ln r(x,y)\} = F_i(u,v) + F_r(u,v)$$
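The linearity of the Fourier transform is easy to verify numerically; the random arrays below are stand-ins for the two log components:

```python
import numpy as np

rng = np.random.default_rng(0)
li = rng.random((8, 8))   # stand-in for the log-illumination component
lr = rng.random((8, 8))   # stand-in for the log-reflectance component

# Linearity: the transform of a sum equals the sum of the transforms
assert np.allclose(np.fft.fft2(li + lr),
                   np.fft.fft2(li) + np.fft.fft2(lr))
```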
Now that separability has been achieved, we can apply a high-pass filter $H(u,v)$ to $Z(u,v)$:

$$S(u,v) = H(u,v)\,Z(u,v) = H(u,v)\,F_i(u,v) + H(u,v)\,F_r(u,v)$$

Since illumination varies slowly, $F_i(u,v)$ has its energy compacted among the low-frequency components, while $F_r(u,v)$ has the same for the high-frequency components. So the $H(u,v)\,F_i(u,v)$ term becomes attenuated to a great degree (depending on the filter and the cutoff frequency set), and we are left with

$$S(u,v) \approx F_r(u,v)$$
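One common choice for such a filter is a Gaussian high-pass transfer function; the sketch below constructs one (the function name and cutoff value are illustrative, not from the original post's code):

```python
import numpy as np

def gaussian_highpass(shape, cutoff):
    """Gaussian high-pass transfer function H(u,v), DC assumed centered."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2   # squared distance from DC
    return 1.0 - np.exp(-D2 / (2.0 * cutoff ** 2))

H = gaussian_highpass((64, 64), cutoff=10)
# Near the (shifted) DC component H is ~0, so low frequencies -- mostly
# illumination -- are suppressed; far from DC, H approaches 1, so high
# frequencies -- mostly reflectance -- pass almost untouched.
assert H[32, 32] == 0.0
assert H[0, 0] > 0.99
```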
So now, after high-pass filtering the Fourier transform of the log of the actual image (whew, that was a stretch!), we see that we get the Fourier transform of the log of the reflectance component of the image, $F_r(u,v)$, and that was what we were trying to isolate all along.
So to get the reflectance back in the spatial domain, we take the exponential of the inverse Fourier transform of $S(u,v)$, which gives us a close approximation to the reflectance:

$$\hat{r}(x,y) = \exp\!\big(\mathcal{F}^{-1}\{S(u,v)\}\big) \approx r(x,y)$$
To clarify why we do the exponential and the inverse Fourier transform (IFT): we are just backtracking to the spatial domain. The IFT takes us back to the log of the spatial domain. Then, depending on the base used for the log (I assumed the natural logarithm, hence the exponential), we undo it; if you used base 10, you raise 10 to that power to get back to the spatial domain.
And so we have successfully filtered out the multiplicative noise, i.e. the slowly varying illumination $i(x,y)$, from the image $f(x,y)$.
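The whole pipeline (log, FFT, filter, inverse FFT, exponential) can be sketched end to end. The function below is a minimal illustration, not the post's actual code; the parameter names and default values are made up, and it uses a high-emphasis variant of the filter:

```python
import numpy as np

def homomorphic_filter(img, cutoff=30.0, low_gain=0.5, high_gain=1.5):
    """Sketch of homomorphic filtering:
    log -> FFT -> high-emphasis filter -> inverse FFT -> exp."""
    img = img.astype(np.float64) + 1.0        # offset to avoid log(0)
    z = np.log(img)                           # product -> sum
    Z = np.fft.fftshift(np.fft.fft2(z))       # frequency domain, DC centered

    rows, cols = img.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2    # squared distance from DC

    # Gaussian high-emphasis transfer function: low frequencies
    # (illumination) are scaled toward low_gain, high frequencies
    # (reflectance) toward high_gain
    H = (high_gain - low_gain) * (1.0 - np.exp(-D2 / (2.0 * cutoff ** 2))) + low_gain

    s = np.real(np.fft.ifft2(np.fft.ifftshift(H * Z)))
    return np.exp(s) - 1.0                    # undo the log and the offset
```

A strict high-pass filter would zero out the DC term and with it all overall brightness; keeping a small `low_gain` instead of zero is a common practical tweak that merely attenuates the illumination rather than erasing it.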
I got the code from the blog post I mentioned at the top of the post. My modifications were minor: I just added a line to convert the image to grayscale, in case you want to run homomorphic filtering on a color image.
And the results when I ran it on an irregularly illuminated image were pretty awesome:
I know it’s tough to argue that the right image is visually enhanced with respect to human eyesight, but as you can see, the irregular illumination has disappeared. This shows how homomorphic filtering is just another tool in our toolbox, with its own use cases.
Written with StackEdit.