Image Processing

 

* Introduction :

 

It is an old dream: how to obtain a method of saving a picture. Once this became possible, the dream grew into animating the picture, and, as usual, human dreams do not stop. From this point the science of image processing arose.

  1. What is an image ?

The Collins Concise English dictionary defines an image as:

The visual impression of something produced by a mirror, lens, etc.

This is a sensibly loose definition because it is now common for images to be formed from radiation

of all sorts.

Despite being invisible to the unaided human eye, radiation such as radar, X-rays, gamma rays, ultrasound, etc., has become an important source of coherent information about our world. Until recently, all such data have ultimately had to be converted into 2-D images formed in the visible portion of the E.M. spectrum in order to be intelligible to humans.

 

18.1 Image Acquisition:

The general aim of the image acquisition sub-system is the transformation of an optical image into an array of numerical data which may be manipulated by a computer, so that the overall aim of machine vision may be achieved. In order to achieve this aim, three major issues must be tackled: Representation, Transduction and Digitization.

1- Representation of the image:-

To represent an image, we should choose a method that fulfils two important requirements:

  1. It should facilitate convenient, efficient processing by means of a computer.
  2. It should encapsulate all the information that defines the relevant characteristics of the image.

To satisfy these conditions, image representation is particularized into two important forms of quantisation. The conventional optical sub-system delivers a continuous 2-D function f(x,y), where the value of the function at any pair of spatial coordinates is the intensity of the light at that point; any digital representation is therefore only an approximation to the original image.

I-Spatial Quantisation:-

Here the image is sampled at M*N discrete points; each sample is called a picture cell (pixel).

M and N are integers; it is quite common for M=N, and for M and N to be integer powers of 2. As the number of samples per unit distance is reduced, the high spatial frequency content is lost.

This type of degradation will only be properly seen as such if the signal is low-pass filtered prior to sampling, so that its maximum frequency component does not exceed half the sampling frequency, as specified by the Nyquist criterion. Reduction of the spatial resolution below a critical limit causes the image to break up into obvious blocks owing to the quantisation (also called pixellation); this is the counterpart of aliasing in the spatial frequency domain.
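The Nyquist criterion can be made concrete with a short sketch in plain Python (the frequencies here are illustrative choices, not values from the text): a 6 Hz sinusoid sampled at only 8 Hz produces exactly the same samples as a phase-inverted 2 Hz sinusoid, i.e. the high frequency aliases to a low one.

```python
import math

def sample_sine(freq_hz, rate_hz, n=8):
    """Sample a unit-amplitude sinusoid at the given sampling rate."""
    return [math.sin(2 * math.pi * freq_hz * k / rate_hz) for k in range(n)]

# 6 Hz sampled at 8 Hz violates the Nyquist criterion (6 > 8 / 2) ...
high = sample_sine(6, 8)
# ... and its samples are indistinguishable from a phase-inverted 2 Hz sinusoid.
low = sample_sine(2, 8)
assert all(abs(a + b) < 1e-9 for a, b in zip(high, low))
```

Once the samples coincide, no amount of processing can tell the two signals apart, which is why the low-pass filter must come before sampling.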

II-Amplitude (Intensity) Quantisation:-

Here each pixel must be assigned a numerical code which represents the intensity of the image function at that point.

The resolution of the code is determined by the number of quantisation levels (gray levels) that are available between the extremes of intensity (black and white). The set of gray levels ranging from black to white is called the gray scale of the system. The number of gray levels is usually an integer power of 2, such that:

black = 0, white = 2^I - 1

where I is an integer, and there are 2^I gray levels in the gray scale.
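The amplitude quantisation above can be sketched in plain Python; the clamp-and-scale scheme is one common choice, not one prescribed by the text:

```python
def quantise(intensity, bits):
    """Map a continuous intensity in [0.0, 1.0] to one of 2**bits gray levels,
    so black -> 0 and white -> 2**bits - 1."""
    levels = 2 ** bits
    clamped = min(max(intensity, 0.0), 1.0)
    return int(clamped * (levels - 1) + 0.5)   # round to the nearest code

# With I = 8 there are 256 gray levels: quantise(1.0, 8) -> 255.
```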

The value of I depends on the specific application, but it is seldom necessary, or efficient, for values of I to exceed eight; six bits per pixel is generally considered to be equivalent to the intensity resolution of the human visual system.

Generally, the values of I, M and N should be as high as is necessary to encapsulate the relevant information in the image. These parameters cause a problem because a single image contains I*M*N bits of data, so processing efficiency is greatly aided by keeping I, M and N as low as possible.

This causes a conflict between the resolution and the quantisation of the image; consider, for example, binary images:-

Here the intensity quantisation is called binarisation, where an image is generated with only two gray levels: black = 0, white = 1.

 

A threshold value T is used to partition the image into pixels with just two levels, such that:-

if f(x,y) >= T then g(x,y) = 1 (white)

if f(x,y) < T then g(x,y) = 0 (black)

where g(x,y) is the binarised version of f(x,y).
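The rule above translates directly into code; this minimal sketch represents an image as a list of rows of pixel values (an assumption of convenience, not a representation mandated by the text):

```python
def binarise(image, threshold):
    """Binarise a gray-scale image: pixels at or above the threshold map to
    1 (white), the rest to 0 (black)."""
    return [[1 if pixel >= threshold else 0 for pixel in row] for row in image]

# Thresholding a 2x3 image at T = 128:
# binarise([[0, 130, 255], [127, 128, 64]], 128) -> [[0, 1, 1], [0, 1, 0]]
```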

Array Tessellation:-

There are two important kinds of array tessellation. First, uniform tessellation:-

  1. Orthogonal Tessellation
  2. Hexagonal Tessellation

The orthogonal tessellation has the disadvantage that the separation between its units is not uniform (diagonal neighbours lie farther apart than horizontal or vertical ones), a problem the hexagonal tessellation avoids. Generally, however, engineers and scientists trained in Cartesian geometry experience some difficulty coming to terms with hexagonal geometry. It has to be remembered that in machine vision the most efficient extraction of the minimum necessary data is the goal, see fig(1).

Second, non-uniform tessellation:-

The most important method here is the log-polar mapping, see fig(2). In this method each ring contains the same number of samples, and the logarithm of a ring's radius is proportional to its "RING INDEX NUMBER". A radial line of samples is known as a wedge; the wedge index is used together with the ring index to address a sample uniquely.

The net result is an array which has very dense sampling at the centre and a rapid fall-off in density towards the edge.
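A minimal sketch of generating such a sampling grid follows; the base radius `r0` and `growth` factor are illustrative parameters, not values from the text:

```python
import math

def log_polar_samples(rings, wedges, r0=1.0, growth=1.2):
    """Generate (x, y) sample positions for a log-polar tessellation.

    Each ring holds one sample per wedge; the ring radius grows
    exponentially with the ring index, so the logarithm of the radius is
    proportional to the index, giving dense sampling near the centre."""
    points = []
    for ring in range(rings):
        radius = r0 * growth ** ring
        for wedge in range(wedges):
            angle = 2 * math.pi * wedge / wedges
            points.append((radius * math.cos(angle), radius * math.sin(angle)))
    return points
```

Because every ring carries the same number of samples while the rings spread apart towards the edge, the sample density falls off with radius exactly as described above.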

2-Transduction of the image :-

The transduction of optical radiation into electrical signals has two fundamentally different mechanisms:

  1. Thermal detectors:- Here the absorption of photons causes a temperature rise, which generates the output. The transduction methods employed include thermistors, thermocouples, Golay cells and pyroelectric materials. Detectors of this type sense IR in astronomical applications, because the attenuation due to interstellar dust is dramatically reduced at wavelengths greater than 2.2 micrometres.

A single detector is raster-scanned over an area of sky, either by moving the whole telescope or a secondary mirror; however, these methods are either too slow in response or too fragile to be used in machine vision applications.

  2. Quantum detectors:- Here the energy of an absorbed photon is used to promote electrons from their stable state to a higher state; the material then changes in some measurable way. The wavelength of the incident photon is related to the energy it carries by:

E = h*c/λ

where λ is the wavelength, h is Planck's constant, and c is the speed of light.

On collision with an electron, either all or none of this quantum of energy is transferred to the electron. The photon energy equation therefore says that the maximum wavelength to which a quantum detector will respond is determined by the energy threshold, and hence by the choice of material; the other extreme of the response curve is limited by the absorption of the material at short wavelengths.
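The cut-off wavelength implied by E = h*c/λ can be worked out numerically; the silicon band gap of about 1.12 eV used below is purely an illustrative threshold, not a figure from the text:

```python
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def cutoff_wavelength_m(threshold_ev):
    """Maximum wavelength (metres) to which a quantum detector with the
    given energy threshold will respond, from lambda = h*c / E."""
    return H * C / (threshold_ev * EV)

# A silicon-like threshold of ~1.12 eV gives a cut-off near 1.1 micrometres,
# i.e. in the near infrared.
```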

 

The fundamental sensitivity of a quantum photodetector is measured by its quantum efficiency: the average number of electrons promoted per incident photon. In primary photosensitivity mechanisms this is always a number less than unity. Quantum detectors come in two important kinds:-

  1. The external photoeffect quantum detector:-

Here the energy threshold of interest is the "Surface Work Function" of the material. When the incident photon has an energy greater than the threshold, electrons are emitted into a vacuum, then collected and measured in a variety of ways. This mechanism finds application mainly in non-imaging photomultipliers and qualitative image intensifiers; it is also used in very high performance quantitative imaging sensors called "Image Dissectors".

  2. The internal photoeffect quantum detector:-

Here the incident photon promotes an electron across an energy gap. There are several modes of operation for internal quantum detectors, but the most important ones are the photoconductive and the photovoltaic modes.

a- Photoconductive internal quantum detectors:-

The resistance of a photoconductive material drops in the presence of light, owing to the generation of free charge carriers; an external bias must be applied across the material to measure this change.

Semiconductor junction devices may be operated in a photoconductive mode under conditions of reverse bias. This widens the depletion region which is established when the recombination of majority carriers across the junction reaches equilibrium. The minority carriers that flow under this condition form the dark current, which is effectively constant for a given device. When the junction is illuminated, the reverse photocurrent is proportional to the incident radiation. The reverse bias produces a smaller depletion capacitance than in the unbiased photocell, which improves response times.

b- Photovoltaic internal quantum detectors:-

These devices consist of semiconductor junctions; they may respond to photons by generating a logarithmic voltage in open-circuit mode, or a linear current in short-circuit mode.

Under these circumstances no external bias is required, and the devices are generally called photocells. Their chief application is in solar panels, for the generation of power, and in non-imaging discrete form, see fig(3-a,3-b).

Solid state technology:-

This technology most often uses the two important solid-state transducers, the photodiode and the photoMOS. There are two techniques for reading out these transducers:-

(1) Digitally Controlled Analogue Multiplexing:-

This consists of MOS pass transistors connected to the photocells and driven by digital control registers, as shown in fig(4).

 

(2) Charge Coupled Analogue Shifting:-

This consists of one MOS pass transistor per line, connecting all the photosites in the line to an analogue shift register. When the pass transistor is on, each photosite connects to the analogue register, whose depletion region absorbs the charge in the photosite; the charges are then shifted along by changing the depth of the depletion regions of the shift register, and at the end they enter a charge-to-voltage converter to produce the output voltage signal.

These two readout techniques, combined with the two sorts of transducer, produce four architectures:-

Self-Scanned Photodiode (SSPD):-

This consists of photodiode photosites as transducers, controlled by a digital control register. The advantage is that using a photodiode as the photosite resists the effects of blooming and smearing, but it has disadvantages, which are:

  1. Large area, as every photosite needs a pass transistor.
  2. High noise in the output, due to the large capacitance of the output line.

Charge Coupled Device (CCD):-

This consists of MOS photosites as transducers and analogue shift registers.

The disadvantage of this architecture is that the use of MOS photosites causes smearing unless the read-out time is much shorter than the photon integration time; but it is preferable where space matters, since it has small unit size (only one pass transistor per line), and the small input area of the output stage keeps the noise low.

Charge Coupled Photodiode (CCPD):-

This consists of photodiode sites plus the analogue shift register technique, so it has both the advantage of the SSPD (resistance to blooming) and those of the CCD (small size and low noise).

Charge Injection Device (CID):-

This uses MOS photosites and digital shift registers. This sort is quite different: the digital shift register moves the biasing voltages of the MOS photosites, causing charge injection into the line and hence a current flow, which is converted to an output voltage by a charge-to-voltage converter. This has important advantages: it is compact, and it is also resistant to smearing, as there is no charge shifting.

As we know, the output contains data per frame, not per line, and this data must be transferred in a known sequence so that it can be recovered; the CCD system is therefore not as simple as given before, and there are three major architectures for dealing with the whole frame, see fig(5).

Parallel-Serial architecture (P/S):-

This is the most basic form. It works as follows: when the photon integration time is complete and all photosites are energised, the data is shifted down row by row, then every row is shifted serially out through the output register. To avoid smearing during this time, photon integration must be stopped, so we may use a mechanical shutter, or make the shifting time much shorter than the photon integration time.
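The readout order of the parallel-serial scheme can be sketched in a few lines of Python; this is only a model of the pixel ordering, with the frame as a list of rows and the bottom row assumed to reach the output register first:

```python
def parallel_serial_readout(frame):
    """Model parallel-serial CCD readout: rows shift down into the output
    register one at a time (parallel step), then each register's contents
    are clocked out pixel by pixel (serial step), producing one stream."""
    stream = []
    for row in reversed(frame):   # bottom row reaches the output register first
        stream.extend(row)        # serial shift out of the output register
    return stream

# A 2x3 frame is read out bottom row first:
# parallel_serial_readout([[1, 2, 3], [4, 5, 6]]) -> [4, 5, 6, 1, 2, 3]
```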

 

The Interline Transfer architecture (ILT):-

Every column of photosites has a shift register alongside it, into which the data is loaded during the blanking time; the data is then shifted vertically and horizontally during the next photon integration time to produce the output.

Frame Transfer architecture:-

When all the photosites are energised, the data of the frame is shifted to a second full array of photosites which is shielded from the light; during the next photon integration time this data is shifted out by the P/S technique. This architecture avoids blooming and the use of a mechanical shutter.

DIGITIZATION:-

Digitization is the conversion of the signal delivered from the camera into an array of numerical data, so that the computer can deal with it, see fig(6).

Image Capture:-

The first step is to sample the signal using A/D converters. This requires a correct sampling rate to obtain the desired resolution, and also synchronisation circuits; the sampling frequency is determined using free-running crystal oscillators or a PLL, in which the output frequency tries to match the frequency of the incoming line synchronisation pulses.

The second step is to reconstruct the frame in the computer; this requires transmitting the digitized data into the memory of the computer.

Image Display:-

In some systems an image display is also required, so there must be a D/A converter at the output, which then connects to the display monitor, see fig(5).

18.2 IMAGE PROCESSING :-

 

18.2.1 Introduction To Image processing:-

Image processing is required to modify and prepare the pixel values of a digitized image, to produce a form that is more suitable for subsequent operations.

There are two major branches of image processing: image enhancement and image restoration. Image enhancement attempts to improve the quality of the image, while image restoration attempts to recover the original image after it has been degraded by known effects, such as geometric distortion within a camera system.

18.2.2 Digital Convolution:-

For a digital representation of continuous signals, the integral can be replaced by a summation over the discrete number of points used to represent the signal. We should always determine how often to sample in the time domain in order to represent the signal correctly.

Shannon's sampling theorem states that a band-limited signal can be represented, or reconstructed, if it is sampled at a frequency which is at least twice the highest frequency present in the signal; this sampling frequency is referred to as the Nyquist frequency.
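Replacing the convolution integral by a summation gives the familiar discrete convolution; a minimal 1-D sketch in plain Python (producing the "full" output of length len(signal)+len(kernel)-1) looks like this:

```python
def convolve(signal, kernel):
    """Discrete 1-D convolution: the continuous integral replaced by a
    summation over the sampled points."""
    n, m = len(signal), len(kernel)
    out = [0.0] * (n + m - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

# Convolving with the averaging kernel [0.5, 0.5] smooths adjacent samples:
# convolve([1, 3, 5], [0.5, 0.5]) -> [0.5, 2.0, 4.0, 2.5]
```

The same double summation, extended to two indices, is what the window operations later in the chapter perform over an image.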

 

18.2.3 Point Operations:-

A pixel, or point, operation is one in which the output pixel value is a function only of the gray level of the pixel at the corresponding position in the input image.

A histogram is a graphical representation of the number of occurrences of each gray-level intensity in an image.
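Computing such a histogram is a single pass over the pixels; a minimal sketch, again assuming an image stored as a list of rows of integer gray levels:

```python
def histogram(image, levels=256):
    """Count the number of occurrences of each gray level in an image."""
    counts = [0] * levels
    for row in image:
        for pixel in row:
            counts[pixel] += 1
    return counts

# histogram([[0, 1, 1], [255, 1, 0]])[1] -> 3 (the level 1 occurs three times)
```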

18.2.3.1 Image brightness modification:-

Brightness modification is the simplest pixel operation. The need for it can easily be confirmed by looking at the histogram: all of the pixels are concentrated at one end of the range of gray levels, and the levels at the other end are sparsely populated, see figure(6).

18.2.3.2 Contrast enhancement:-

The brightness operation does not alter the distribution of the pixel intensity values in the histogram in any way, so it does not adjust the image contrast, however, this can be improved by gray levels scaling whera multiplication operation is used to stretch the histogram to cover the complete range of gray level value.
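Both point operations fit one small function: an additive `offset` shifts brightness, a multiplicative `gain` stretches contrast, and the result is clamped to the gray scale. This is a minimal sketch; the particular gain and offset below are illustrative values chosen to stretch the range 50..150 onto 0..255:

```python
def adjust(image, gain=1.0, offset=0, max_level=255):
    """Point operation on a gray-scale image: offset shifts brightness,
    gain stretches contrast; results are clamped to [0, max_level]."""
    return [[min(max(int(pixel * gain + offset), 0), max_level)
             for pixel in row] for row in image]

# Stretching levels spanning 50..150 with gain 2.55 and offset -127:
# adjust([[50, 150]], gain=2.55, offset=-127) -> [[0, 255]]
```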

18.2.3.3 Negation:-

It is sometimes helpful to be able to work with negated images, where black is mapped to white and vice versa; this can be particularly useful when imaging photographic negatives. Negation can be achieved quite simply by subtracting the stored pixel value from the maximum gray-level value being used; this is illustrated in fig(7).
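The subtraction described above is a one-line point operation; a minimal sketch:

```python
def negate(image, max_level=255):
    """Map black to white and vice versa by subtracting each stored pixel
    value from the maximum gray level in use."""
    return [[max_level - pixel for pixel in row] for row in image]

# negate([[0, 100, 255]]) -> [[255, 155, 0]]
```

Applying it twice returns the original image, as the subtraction is its own inverse.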

18.2.3.4 Thresholding:-

There is often a need to threshold a gray-scale image to obtain a binarised version, so that the image can be segmented into foreground and background regions.

Selecting the value of the threshold T is a critical issue; the histogram of an image which is well suited to binarisation will feature two or more very clear peaks.

Such a bimodal histogram is produced by a high-contrast scene, see fig(8).

One variation on the simple threshold is the dual, or interval, threshold operation.

18.2.3.5 Neighborhood Operations:-

A neighborhood operation is one in which the output pixel value is determined not only by the corresponding input value, but also by the values of its neighboring pixels.

In neighborhood operations the shape of the input histogram may be fundamentally changed, and a wide range of complex operations, or image transforms, can be implemented in this way. Many, but not all, neighborhood operations use the digital convolution techniques discussed earlier.

18.2.3.5.1 Image Smoothing:-

Smoothing seeks to remove unwanted noise from an image while at the same time preserving all of the essential details that an observer would wish to see in the original image. Unfortunately, it is not possible to retain all the ideal image detail in such a smoothed image, and some degradation will occur. This degradation is usually in the form of blurring, where edges in the original image become less defined as a result of the low-pass operation. The filtering operation can be implemented by convolving the entire image with a simple 3*3 or 5*5 window.

Neighborhood averaging: this is one approach to smoothing, and operates by replacing each pixel value with the average, or mean, of its immediate neighbors.
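Neighborhood averaging with a 3*3 window can be sketched as below; leaving the border pixels unchanged is one common simplification assumed here, not a rule from the text:

```python
def mean_filter(image):
    """Smooth a gray-scale image by replacing each interior pixel with the
    integer mean of its 3x3 neighborhood (border pixels left unchanged)."""
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            total = sum(image[r + dr][c + dc]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1))
            out[r][c] = total // 9
    return out

# A single noise spike of 90 in a field of 0s is averaged down to 90 // 9 = 10,
# illustrating both the noise suppression and the blurring it causes.
```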

 

Geometry Operations:-

Here the distribution of the pixels is changed to obtain a desired effect, given that we know the target distribution; geometry operations are also required for image enhancement.

Morphing: an object transforms its shape into another, as in films.

Display Adjustment: data is read from the frame in a different order from that in which it was stored, using inversion or lateral inversion, see fig(9).

Image Warping:-

Warping is used to correct for distortion introduced by the image acquisition system. In satellite imagery, for example, the image may be warped to produce an apparent view of an area of land as if the imaging satellite were positioned directly overhead, so that lines of latitude and longitude appear as straight lines on a rectangular grid.

Warping an image consists of two steps:-

  1. Transformation of the pixels of the input image to a new output image plane.
  2. Gray-level interpolation to estimate the gray levels of the warped image.

The warping can be modelled by selecting a number of pixels in the input image as a grid of control points, dividing the image into rectangles; the control points are then moved so that the rectangles become arbitrary quadrilaterals.

The more accurate form of gray-level interpolation is bilinear interpolation, where the output pixel value is calculated by fitting straight lines between the gray levels of the pixels surrounding the desired output location. A special case of warping is magnification, which is easy in an optical system but computationally demanding here, since interpolation must be performed.
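Bilinear interpolation between the four pixels surrounding a non-integer location can be sketched as follows (coordinates are assumed to lie strictly inside the image, a simplification for clarity):

```python
def bilinear(image, x, y):
    """Estimate the gray level at non-integer coordinates (x, y) by
    bilinear interpolation between the four surrounding pixels."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    # Fit straight lines along x on the two bracketing rows, then along y.
    top = image[y0][x0] * (1 - fx) + image[y0][x0 + 1] * fx
    bottom = image[y0 + 1][x0] * (1 - fx) + image[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bottom * fy

# Midway between four pixels of value 0, 10, 20 and 30 the result is their mean:
# bilinear([[0, 10], [20, 30]], 0.5, 0.5) -> 15.0
```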

Rotation: rotating an image, or part of it, presents problems if the angle of rotation is not an integer multiple of 90 degrees, see fig(10). Discuss warping, magnification and rotation.

 

Temporal Operations:-

These operate, as with a video camera, upon a sequence of two or more images in order to achieve the desired effect.

Frame differencing is the most used technique: the output image is the product of a point-by-point subtraction of one image from another. It is used as the basis of automated visual inspection, where a prototype scene is compared with a scene which is known to contain a reference image; in printed circuit board inspection, for example, a newly constructed board is compared with a known good board to find missing components. See the figure, which shows two views of keyboards, one of which has faults; the next images show the result of the frame difference operator, and the use of a filter to remove the noise.

Another application is motion detection: objects which do not move between two frames cancel in the output image, while anything that moves appears, and the amount and direction of the motion can be estimated; this is the idea behind security systems. Unwanted effects can also be eliminated, e.g. a lighting gradient can be removed by storing its effect on a plain white background as a reference image and subtracting that reference from all subsequent images.
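The point-by-point subtraction can be sketched in a few lines; the absolute difference used here is one common variant that keeps the result on the gray scale, where the text's plain subtraction would otherwise need clamping:

```python
def frame_difference(frame_a, frame_b):
    """Point-by-point absolute difference of two equally sized gray-scale
    frames: unchanged regions cancel to 0, while changes stand out."""
    return [[abs(a - b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

# A pixel that changes from 10 to 60 between frames yields 50 in the output,
# while static pixels yield 0:
# frame_difference([[10, 5]], [[60, 5]]) -> [[50, 0]]
```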
