Imaging Systems and Their Relationship with the Point Spread Function
Here I will be talking about what a PSF (Point Spread Function) is and what its relationship with imaging systems is; later I will talk about the space invariance property of imaging systems and how it helps us.
First of all, a perfect imaging system just isn't possible; to understand why, watch this.
They explain with a simple single-lens system, which focuses the light rays coming from an object (kept far enough away that the rays hitting the lens can be assumed to be parallel to each other) to the focal point of the lens on the other side.
Even with an aberration-free lens (such that the point of convergence would be an actual point instead of a blurred spot), the intensity of the light at that point would tend to infinity, and the electric field would easily be large enough to ionize the surrounding air.
Here the lens is the imaging system; the object can be thought of as the input to the imaging system, which gives us the image as the output.
Now that we know there is a lower limit on the size of the image formed, what does this mean?
If we have two objects that are too close to each other, our imaging system may not be able to resolve these two distinct objects into two distinct images!
So the size of the smallest image the system can form tells us how far apart these objects must be in order for it to tell them apart. This ability of the imaging system to resolve detail is known as optical resolution.
The image formed by a point source of light kept really far away from the imaging system should give us exactly that, right? And that is the PSF!
The point spread function (PSF) describes the response of an imaging system to a point source or point object.
- The PSF in many contexts can be thought of as the extended blob in an image that represents an unresolved object.
- The PSF is the impulse response of a focused optical system.
- The PSF is, in functional terms, the spatial-domain version of the optical transfer function (OTF) of the imaging system.
The first point explores the fact that the blurring present even for a point source will be present to the same extent for a larger object and its image. The way I think of it is that the image formed will be resolved only to about the precision set by the PSF: you will have a fuzzy region around the formed image at about the size of the PSF.
Yeah, for the second point we supply an impulse input to the imaging system and record its response, which is the impulse response.
The optical transfer function is defined as the Fourier transform of the impulse response of the optical system, also called the point spread function. As we all know, the Fourier transform of the impulse response of an LTI system gives us its frequency response, and here the Fourier transform takes the PSF from the spatial domain into the spatial-frequency domain.
The optical transfer function provides a comprehensive and well-defined characterization of optical systems, so the PSF also plays an important role here.
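To make that concrete, here is a minimal sketch (assuming NumPy, with a made-up Gaussian blob standing in for a real, measured PSF) of how the OTF, and its magnitude the MTF, come straight out of a 2D Fourier transform of the PSF:

```python
import numpy as np

# A toy Gaussian blob standing in for a real, measured PSF (purely an
# illustrative assumption, not a physically derived PSF).
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
sigma = 2.0
psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
psf /= psf.sum()                      # normalise so the total energy is 1

# The OTF is the Fourier transform of the PSF; its magnitude is the MTF.
otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
mtf = np.abs(otf)

print(mtf.max())                      # ~1.0 at zero spatial frequency
```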
And this video explains how aberrations increase as the aperture size $D$ increases, so if $D$ is too large aberrations become a problem, but also how, because of the diffraction of light, the blur size $d$ decreases (better resolution) with larger $D$:
$$d \approx \frac{\lambda f}{D}$$
here,
$d$ is the size of the blurred point formed (essentially the resolution),
$f$ is the focal length of the lens,
$\lambda$ is the wavelength of the light,
and $D$ is the size of the aperture.
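Just to get a feel for the numbers, here is a rough back-of-the-envelope calculation with made-up values (green light and a 50 mm lens stopped to a 25 mm aperture); note that the commonly quoted Airy-disk version carries an extra factor of roughly 1.22:

```python
# Rough estimate of the diffraction-limited spot size d ~ lambda * f / D,
# with made-up example numbers. The usual Airy-disk formula adds a factor
# of about 1.22 on top of this.
wavelength = 550e-9   # wavelength of the light, in metres
f = 50e-3             # focal length of the lens, in metres
D = 25e-3             # aperture diameter, in metres

d = wavelength * f / D
print(f"approximate blur spot size: {d * 1e6:.2f} micrometres")   # ~1.10 um
```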
Therefore we can see how the PSF gives us a measure of the imaging system: if the PSF of an imaging system is large and spread out, that means the imaging system has more aberrations and so on.
But if the PSF is well contained, it means the opposite and tells us the imaging system has negligible aberrations.
Thus the degree of spreading (blurring) of the point object is a measure of the quality of an imaging system.
But actually the PSF is much more than that, as we can see from the points noted above.
As the PSF is the impulse response of the imaging system, this lets us calculate the output of the imaging system as the convolution of the system input with the PSF.
That is, of course, only if the imaging system in question is an additive linear two-dimensional imaging system. Oh, and it should also be space invariant!
In what follows, $h$ is the impulse response (the PSF), $F$ is the system input and $G$ is the output, so that $G = h * F$; but let's look at the mathematics behind this relationship.
$\mathcal{O}\{\cdot\}$ is a two-dimensional system: in its most general form, it is simply a mapping of some input set of two-dimensional functions $F(x, y)$ to a set of output two-dimensional functions $G(x, y) = \mathcal{O}\{F(x, y)\}$, where $(x, y)$ are spatial variables.
Also note that we consider $\mathcal{O}\{\cdot\}$ to be an additive linear two-dimensional imaging system.
This assumption of linearity is well founded as in non-coherent imaging systems such as fluorescent microscopes, telescopes or optical microscopes, the image formation process is linear in power and described by linear system theory. This means that when two objects A and B are imaged simultaneously, the result is equal to the sum of the independently imaged objects. In other words: the imaging of A is unaffected by the imaging of B and vice versa, owing to the non-interacting property of photons.
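Here is a tiny sketch of that additivity, modelling the imaging system as convolution with a hypothetical Gaussian PSF (using NumPy and SciPy): imaging two point sources together gives the same result as imaging them separately and summing.

```python
import numpy as np
from scipy.signal import fftconvolve

# Model the (incoherent) imaging system as convolution with a hypothetical
# Gaussian PSF and check that imaging A and B together equals imaging them
# separately and adding the results.
y, x = np.mgrid[-16:16, -16:16]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()

A = np.zeros((64, 64)); A[20, 20] = 1.0   # object A: a point source
B = np.zeros((64, 64)); B[40, 45] = 1.0   # object B: another point source

image_together = fftconvolve(A + B, psf, mode="same")
image_separately = fftconvolve(A, psf, mode="same") + fftconvolve(B, psf, mode="same")

print(np.allclose(image_together, image_separately))   # True: the system is additive
```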
We can write the output $G$ in terms of the superposition integral as follows,
$$G(x, y) = \iint F(s, t)\, h(x, y; s, t)\, ds\, dt$$
Note that $s$ and $t$ are dummy variables used in the integral.
$\delta(x - s, y - t)$ is the Dirac delta function.
The input is written as the sum of amplitude-weighted Dirac delta functions by the sifting integral,
$$F(x, y) = \iint F(s, t)\, \delta(x - s, y - t)\, ds\, dt$$
The imaging system's response to the impulse input given by $\delta(x - s, y - t)$ is
$$h(x, y; s, t) = \mathcal{O}\{\delta(x - s, y - t)\}$$
(the impulse response, or the PSF).
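Putting these pieces together, here is a short sketch of the standard derivation (using $\mathcal{O}\{\cdot\}$ for the system operator, as above): write the input through the sifting integral and then use the additivity (linearity) of the system to pull the operator inside the integral,
$$
\begin{aligned}
G(x, y) &= \mathcal{O}\{F(x, y)\} \\
        &= \mathcal{O}\!\left\{ \iint F(s, t)\,\delta(x - s,\, y - t)\, ds\, dt \right\} \\
        &= \iint F(s, t)\, \mathcal{O}\{\delta(x - s,\, y - t)\}\, ds\, dt \\
        &= \iint F(s, t)\, h(x, y; s, t)\, ds\, dt
\end{aligned}
$$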
Space Invariance
Now, generally, if the impulse response is space variant, then the superposition integral is as far as we can go in relating these quantities. But in the special case that the additive linear two-dimensional imaging system is also space invariant, the superposition integral reduces to the convolution integral.
Now comes the question: what does it mean when we say the imaging system is space invariant?
Well, mathematically speaking, we can just say it is the case when
$$h(x, y; s, t) = h(x - s, y - t)$$
i.e. the impulse response depends only on the differences $x - s$ and $y - t$. The superposition integral then reduces to the convolution integral,
$$G(x, y) = \iint F(s, t)\, h(x - s, y - t)\, ds\, dt$$
Intuitively, in an optical system this implies that the image of a point source in the focal plane will change only in location, not in functional form, as the placement of the point source moves in the object plane.
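Here is a small sketch of that idea under a convolutional (space invariant) model with a hypothetical Gaussian PSF: moving the point source only moves the blurred spot, it does not change its shape.

```python
import numpy as np
from scipy.signal import fftconvolve

# Under a space-invariant (convolutional) model with a hypothetical Gaussian
# PSF, moving the point source only shifts the blurred spot; its functional
# form stays exactly the same.
y, x = np.mgrid[-16:16, -16:16]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()

def image_of_point(row, col, shape=(64, 64)):
    """Image (blur) a single point source placed at (row, col)."""
    obj = np.zeros(shape)
    obj[row, col] = 1.0
    return fftconvolve(obj, psf, mode="same")

img1 = image_of_point(20, 20)
img2 = image_of_point(30, 37)

# The second image is just the first one shifted by (10, 17) pixels.
shifted = np.roll(img1, shift=(10, 17), axis=(0, 1))
print(np.allclose(img2, shifted))   # True, up to tiny numerical error
```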
That's it, folks! Looks like I wrote more than a thousand words. Any sort of feedback is welcome.
Many thanks to,
Jiří Jan
Department of Biomedical Engineering
Brno University of Technology
Czech Republic

William K. Pratt
Author of Digital Image Processing: PIKS Scientific Inside

And of course Wikipedia!
Written with StackEdit.