
# DIGITAL IMAGE PROCESSING BOOK BY RAFAEL C GONZALEZ

Sunday, June 30, 2019

Digital Image Processing by Rafael C. Gonzalez and Richard E. Woods is a completely self-contained book. It is suited for students at the college senior and first-year graduate level. Gonzalez's books are used in universities and research institutions throughout the world.



So, all that needs to be done is to determine whether the image of a circle of diameter 0. This can be determined by using the same model as in Fig. In other words, a circular defect of diameter 0. If, in order for a CCD receptor to be activated, its area has to be excited in its entirety, then it can be seen from Fig.

## Chapter 3 Problem Solutions

Problem 3. First subtract the minimum value of f, denoted f_min, from f to yield a function whose minimum value is 0. Problem 3. The question in the problem statement is to find the smallest value of E that will make the threshold behave as in the equation above.

In this truth table, the values of the 8th bit are 0 for byte values 0 to 127, and 1 for byte values 128 to 255, thus giving the transformation mentioned in the problem statement. Note that the given transformed values of either 0 or 255 simply indicate a binary image for the 8th bit plane. Any other two values would have been equally valid, though less conventional. Continuing with the truth table concept, the transformation required to produce an image of the 7th bit plane outputs a 0 for byte values in the range [0, 63], a 1 for byte values in the range [64, 127], a 0 for byte values in the range [128, 191], and a 1 for byte values in the range [192, 255].

Similarly, the transformation for the 6th bit plane alternates between eight ranges of byte values, the transformation for the 5th bit plane alternates between 16 ranges, and so on. Finally, the output of the transformation for the lowest-order bit plane alternates between 0 and 255 depending on whether the byte values are even or odd.
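As a sketch (not part of the original solution), the bit-plane transformations described above can be expressed directly in NumPy:

```python
import numpy as np

def bit_plane(image, k):
    """Return the k-th bit plane of an 8-bit image as a 0/1 array
    (k = 1 is the lowest-order bit, k = 8 the highest)."""
    return (image >> (k - 1)) & 1

values = np.arange(256, dtype=np.uint8)   # all possible byte values

# 8th bit plane: 0 for byte values 0..127, 1 for 128..255.
plane8 = bit_plane(values, 8)
# 7th bit plane: alternates over the four 64-value ranges listed above.
plane7 = bit_plane(values, 7)
# Lowest-order bit plane: alternates with even/odd byte values.
plane1 = bit_plane(values, 1)
```

Scaling the 0/1 output by 255 reproduces the binary display values mentioned in the text.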

Because the number of pixels would not change, this would cause the height of some of the remaining histogram peaks to increase in general. Typically, less variability in intensity level values will reduce contrast.

Because the number of pixels would remain constant, the height of some of the histogram peaks would increase. The general shape of the histogram would now be taller and narrower, with no histogram components located past that point. The histogram equalization method has no provisions for this type of artificial intensity redistribution process. We have assumed negligible round-off errors. First, this equation assumes only positive values for r. Recognition of this fact is important.

Once recognized, the student can approach this difficulty in several ways. One good answer is to make some assumption, such as the standard deviation being small enough that the area of the curve under p_r(r) for negative values of r is negligible.

Another is to scale up the values until the area under the negative part of the curve is negligible. This is the cumulative distribution function of the Gaussian density, which is either integrated numerically, or its values are looked up in a table.

A third, less important point that the student should address is the high-end values of r. One possibility here is to make the same assumption as above regarding the standard deviation.

Another is to divide by a large enough value so that the area under the positive part of the PDF past that point is negligible (this scaling reduces the standard deviation). Another approach the student can take is to work with histograms, in which case the transformation function would be in the form of a summation. The issue of negative and high positive values must still be addressed, and the possible answers suggested above regarding these issues still apply.

The student needs to indicate that the histogram is obtained by sampling the continuous function, so some mention should be made regarding the number of samples (bits) used. The most likely answer is 8 bits, in which case the student needs to address the scaling of the function so that the range is [0, 255].
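As an illustrative sketch (the function name is ours, not the book's), the summation form of the equalization transformation for 8-bit data looks like this:

```python
import numpy as np

def equalize_transform(hist):
    """Histogram-equalization mapping for 8-bit data:
    s_k = round(255 * sum_{j<=k} p_r(r_j))."""
    p = np.array(hist, dtype=float)
    p /= p.sum()                            # normalized histogram p_r(r_k)
    return np.round(255 * np.cumsum(p)).astype(int)

# Usage: a histogram concentrated in the dark intensities
hist = np.zeros(256)
hist[:64] = 1                               # uniform over levels 0..63
s = equalize_transform(hist)                # spreads these levels over [0, 255]
```

Negative or out-of-range values never arise here because the summation runs only over the sampled, scaled histogram, which is one way the discrete formulation sidesteps the continuous-case issues above.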

Consider the probability density function in Fig. Because p_r(r) is a probability density function, we know from the discussion in Section 3. However, we see from Fig.

This implies a one-to-one mapping both ways, meaning that both forward and inverse transformations will be single-valued. Suppose that the neighborhood is moved one pixel to the right (we are assuming rectangular neighborhoods).

This deletes the leftmost column and introduces a new column on the right. The same concept applies to other modes of neighborhood motion. Thus, the only time that the histogram of the images formed by the operations shown in the problem statement can be determined in terms of the original histograms is when one (or both) of the images is (are) constant. In (d) we have the additional requirement that none of the pixels of g(x, y) can be 0. Assume for convenience that the histograms are not normalized, so that, for example, h_f(r_k) is the number of pixels in f(x, y) having intensity level r_k.

Assume also that all the pixels in g(x, y) have constant value c. The pixels of both images are assumed to be positive. Finally, let u_k denote the intensity levels of the pixels of the images formed by any of the arithmetic operations given in the problem statement. Under the preceding set of conditions, the histograms are determined as follows. In other words, the values (height) of the components of h_sum are the same as the components of h_f, but their locations on the intensity axis are shifted right by an amount c.

Note that while the spacing between components of the resulting histograms in (a) and (b) was not affected, the spacing between components of h_prod(u_k) will be spread out by an amount c. The preceding solutions are applicable if image f(x, y) is constant also. Their location would be affected as described in (a) through (d). When the images are blurred, the boundary points will give rise to a larger number of different values for the image on the right, so the histograms of the two blurred images will be different.

Figure P3. The values are summarized in Table P3. It is easily verified that the sum of the numbers on the left column of the table is N^2. A histogram is easily constructed from the entries in this table. A similar (tedious) procedure yields the results in Table P3. Initially, it takes 8 additions to produce the response of the mask.

However, when the mask moves one pixel location to the right, it picks up only one new column. This is the basic box-filter or moving-average equation. To this we add one subtraction and one addition to get R_new. Thus, a total of 4 arithmetic operations are needed to update the response after one move.

This is a recursive procedure for moving from left to right along one row of the image. When we get to the end of a row, we move down one pixel (the nature of the computation is the same) and continue the scan in the opposite direction. Because the coefficients of the mask sum to zero, the sum of the products of the coefficients with the same pixel value also sums to zero.
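The running-sum update described above can be sketched in Python (a one-dimensional illustration; the function name is ours):

```python
def moving_average(row, n):
    """n-point moving average of a 1-D sequence using the recursive
    (running-sum) update: subtract the sample leaving the window, add
    the one entering, so each step costs O(1) regardless of n."""
    out = []
    s = sum(row[:n])                 # initial response: n - 1 additions
    out.append(s / n)
    for i in range(n, len(row)):
        s += row[i] - row[i - n]     # one addition and one subtraction
        out.append(s / n)
    return out
```

For a 2-D box filter the same idea is applied to column sums, which is how the 4-operation update in the text arises.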

Carrying out this argument for every pixel in the image leads to the conclusion that the elements of the convolution array also sum to zero. This does not affect the conclusions reached in (a), so correlating an image with a mask whose coefficients sum to zero will produce a correlation image whose elements also sum to zero.

Let f(x, y) and h(x, y) denote the image and the filter function, respectively. Then, the process of running h(x, y) over f(x, y) can be expressed as a convolution. If h(x, y) is now applied to this image, the resulting image will be as shown in Fig.

Note that the sum of the nonzero pixels in both Figs. Since the sum remains constant, the values of the nonzero elements will become smaller and smaller as the number of applications of the filter increases. In the limit, the values would get infinitely small, but, because the average value remains constant, this would require an image of infinite spatial proportions.

It is at this junction that border conditions become important. Although it is not required in the problem statement, it is instructive to discuss in class the effect of successive applications of h(x, y) to an image of finite proportions. The net effect is that, because the values cannot diffuse outward past the boundary of the image, the denominator in the successive applications of averaging eventually overpowers the pixel values, driving the image to zero in the limit.

A simple example of this is given in Fig. We see that, as long as the values of the blurred 1 can diffuse out, the sum, S, of the resulting pixels is 1. Here, we used the commonly made assumption that pixel values immediately past the boundary are 0. The mask operation does not go beyond the boundary, however. In this example, we see that the sum of the pixel values begins to decrease with successive applications of the mask.

Thus, even in the extreme case when all cluster points are encompassed by the filter mask, there are not enough points in the cluster for any of them to be equal to the value of the median (remember, we are assuming that all cluster points are lighter or darker than the background points).

This conclusion obviously applies to the less extreme case when the number of cluster points encompassed by the mask is less than the maximum size of the cluster.

Thus, two or more different clusters cannot be in close enough proximity for the filter mask to encompass points from more than one cluster at any mask position. It then follows that no two points from different clusters can be closer than the diagonal dimension of the mask minus one cell (which can be occupied by a point from one of the clusters).

Since this is known to be the largest gap, the next odd mask size up is guaranteed to encompass some of the pixels in the segment. This average value is a gray-scale value, not binary like the rest of the segment pixels.

Denote the smallest average value by A_min, and the binary values of pixels in the thin segment by B. Clearly, A_min is less than B. Then, setting the binarizing threshold slightly smaller than A_min will create one binary pixel of value B in the center of the mask. The phenomenon in question is related to the horizontal separation between bars, so we can simplify the problem by considering a single scan line through the bars in the image.

The key to answering this question lies in the fact that the distance (in pixels) between the onset of one bar and the onset of the next one (say, to its right) is 25 pixels. Consider the scan line shown in Fig. The response of the mask is the average of the pixels that it encompasses. In fact, the number of pixels belonging to the vertical bars and contained within the mask does not change, regardless of where the mask is located (as long as it is contained within the bars, and not near the edges of the set of bars).

The fact that the number of bar pixels under the mask does not change is due to the peculiar separation between bars and the width of the lines in relation to the pixel width of the mask. This constant response is the reason why no white gaps are seen in the image shown in the problem statement. The averaging mask has n^2 points, of which we are assuming that q^2 points are from the object and the rest from the background.

Note that this assumption implies separation between objects that, at a minimum, is equal to the area of the mask all around each object.

The problem becomes intractable unless this assumption is made. This condition was not given in the problem statement on purpose in order to force the student to arrive at that conclusion. If the instructor wishes to simplify the problem, this should then be mentioned when the problem is assigned. A further simplification is to tell the students that the intensity level of the background is 0. Let B represent the intensity level of background pixels, let a_i denote the intensity levels of points inside the mask, and o_i the levels of the objects.

In addition, let S_a denote the set of points in the averaging mask, S_o the set of points in the object, and S_b the set of points in the mask that are not object points. Let the maximum expected average value of object points be denoted by Q_max.

If this was a fact specified by the instructor, or the student made this assumption from the beginning, then this answer follows almost by inspection. We want to show that the right sides of the first two equations are equal. All other elements are 0. This mask will perform differentiation in only one direction, and will ignore intensity transitions in the orthogonal direction.

An image processed with such a mask will exhibit sharpening in only one direction. A Laplacian mask with a -4 in the center and 1s in the vertical and horizontal directions will obviously produce an image with sharpening in both directions and, in general, will appear sharper than with the previous mask.
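A small numerical sketch of the difference between the one-directional mask and the two-directional Laplacian (the test images and helper are ours):

```python
import numpy as np

lap_1d = np.array([[0,  0, 0],
                   [1, -2, 1],     # second derivative in one direction only
                   [0,  0, 0]])
lap_2d = np.array([[0,  1, 0],
                   [1, -4, 1],     # -4 center, 1s vertically and horizontally
                   [0,  1, 0]])

def response(img, mask):
    """Sum of products of the mask with the 3x3 neighborhood at the center."""
    c = img.shape[0] // 2
    return int((img[c-1:c+2, c-1:c+2] * mask).sum())

v_edge = np.zeros((5, 5), int)
v_edge[:, 3:] = 1                  # vertical edge (horizontal transition)
h_edge = v_edge.T                  # horizontal edge (vertical transition)
```

The one-directional mask responds to only one of the two edges, while the Laplacian responds to both, which is the sharpening difference described above.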

In other words, the number of coefficients (and thus the size of the mask) is a direct result of the definition of the second derivative. In fact, as explained in part (b), just the opposite occurs.


To see why this is so, consider an image consisting of two vertical bands, a black band on the left and a white band on the right, with the transition between the bands occurring through the center of the image.

That is, the image has a sharp vertical edge through its center. As the center of the mask moves more than two pixels on either side of the edge, the entire mask will encompass a constant area and its response will be zero, as it should be.

However, suppose that the mask is much larger. As its center moves through, say, the black (0) area, one half of the mask will be totally contained in that area. However, depending on its size, part of the mask will be contained in the white area. The sum of products will therefore be different from 0.

This means that there will be a response in an area where the response should have been 0, because the mask is centered on a constant area. The progressively increasing blurring as a result of mask size is evident in these results.

Convolving f(x, y) with the mask in Fig. Then, because these operations are linear, we can use superposition, and we see from the preceding equation that using two masks of the form in Fig. Convolving this mask with f(x, y) produces g(x, y), the unsharp result. The right side of this equation is recognized (within the just-mentioned proportionality factors) to be of the same form as the definition of unsharp masking given in Eqs. Thus, it has been demonstrated that subtracting the Laplacian from an image is proportional to unsharp masking.
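Because convolution is linear, subtracting the Laplacian can be folded into a single composite mask (a sketch; the standard 3x3 Laplacian is assumed):

```python
import numpy as np

lap = np.array([[0,  1, 0],
                [1, -4, 1],
                [0,  1, 0]])

identity = np.zeros((3, 3))
identity[1, 1] = 1                 # convolving with this leaves f unchanged

# One mask that computes f - laplacian(f) in a single convolution pass:
composite = identity - lap
```

The composite mask has a 5 in the center and -1s in the vertical and horizontal directions, the familiar sharpening kernel, which makes the equivalence to unsharp masking concrete.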

The fact that images stay in the linear range implies that images will not be saturated at the high end or driven in the low end to such an extent that the camera will not be able to respond, thus losing image information irretrievably. The only way to establish a benchmark value for illumination is when the variable (daylight) illumination is not present.

Let f_0(x, y) denote an image taken under artificial illumination only, with no moving objects. This becomes the standard by which all other images will be normalized. There are numerous ways to solve this problem, but the student must show awareness that areas in the image likely to change due to moving objects should be excluded from the illumination-correction approach.

One way is to select various representative subareas of f_0(x, y) not likely to be obscured by moving objects and compute their average intensities. We then select the minimum and maximum of all the individual average values, denoted by f_min and f_max. The objective then is to process any input image f(x, y) so that its minimum and maximum will be equal to f_min and f_max, respectively.

Another implicit assumption is that moving objects comprise a relatively small area in the field of view of the camera; otherwise these objects would overpower the scene and the values obtained from f_0(x, y) would not make sense.

The student may select another automated approach. We support this conclusion with an example. Consider a one-pixel-thick straight black line running vertically through a white image. As the size of the neighborhood increases, we would have to be further and further from the line before the center point ceases to be called a boundary point.

That is, the thickness of the boundary detected increases as the size of the neighborhood increases. If the intensity is smaller than the intensity of all its neighbors, then increase it.

Else, do nothing. In rule 1, all positive differences mean that the intensity of the noise pulse z_5 is less than that of all its 4-neighbors. The converse is true when all the differences are negative. A mixture of positive and negative differences calls for no action because the center pixel is not a clear spike. In this case the correction should be zero (keep in mind that zero is a fuzzy set too).

Membership function ZR is also a triangle. It is centered on 0 and overlaps the other two slightly. This diagram is similar to Fig. This rule is nothing more than computing 1 minus the minimum value of the outputs of step 2, and using the result to clip the ZR membership function.

It is important to understand that the output of the fuzzy system is the center of gravity of the result of aggregation (step 4 in Fig.). This would produce the complete ZR membership function in the implication step (step 3 in Fig.). The other two results would be zero, so the result of aggregation would be the ZR function. This is as it should be because the differences are all positive, indicating that the value of z_5 is less than the value of its 4-neighbors.

The steps of the fuzzy system are: (1) fuzzify the inputs d2, d4, d6, d8; (2) apply the fuzzy logical operations; (3) apply the aggregation method (max); (4) defuzzify (center of gravity).

It is a phase term that accounts for the shift in the function. The magnitude of the Fourier transform is the same in both cases, as expected. The last step follows from Eq. Problem 4. The continuous Fourier transform of the given sine wave looks as in Fig. In terms of Fig. For some values of sampling, the sum of the two sines combines to form a single sine wave and a plot of the samples would appear as in Fig.

Other values would result in functions whose samples can describe any shape obtainable by sampling the sum of two sines. But we know from the translation property in Table 4. This proves that multiplication in the frequency domain is equal to convolution in the spatial domain. The proof that multiplication in the spatial domain is equal to convolution in the frequency domain is done in a similar way.

Because, by the convolution theorem, the Fourier transform of the spatial convolution of two functions is the product of their transforms, it follows that the Fourier transform of a tent function is a sinc function squared. Substituting Eq. Substituting Eq. We do this by direct substitution into Eq. Note that this holds for positive and negative values of k. We prove the validity of Eq. The other half of the discrete convolution theorem is proved in a similar manner. To avoid aliasing we have to sample at a rate that exceeds twice this frequency, or 2 0.

So, each square has to correspond to slightly more than one pixel in the imaging system. This is not the case in zooming, which introduces additional samples. Although no new detail is introduced by zooming, it certainly does not reduce the sampling rate, so zooming cannot result in aliasing.

The linearity of the inverse transforms is proved in exactly the same way. There are various ways of proving this. The vector is centered at the origin and its direction depends on the value of the argument. This means that the vector makes an integer number of revolutions about the origin in equal increments. This produces a zero sum for the real part of the exponent. Similar comments apply to the imaginary part.

Proofs of the other properties are given in Chapter 4. Recall that when we refer to a function as imaginary, its real part is zero. We use the term complex to denote a function whose real and imaginary parts are not zero. We prove only the forward part of the Fourier transform pairs. Similar techniques are used to prove the inverse part. Because f(x, y) is imaginary, we can express it as jg(x, y), where g(x, y) is a real function.

Then the proof is as follows. And conversely. From Example 4. If f(x, y) is real and odd, then F(u, v) is imaginary and odd, and conversely.


Because f(x, y) is real, we know that the real part of F(u, v) is even and its imaginary part is odd. If we can show that F is purely imaginary, then we will have completed the proof. If f(x, y) is imaginary and even, then F(u, v) is imaginary and even, and conversely.

We know that when f(x, y) is imaginary, the real part of F(u, v) is odd and its imaginary part is even. If we can show that the real part is 0, then we will have proved this property.

Because f(x, y) is imaginary, we can express it as jg(x, y), where g is a real function. If f(x, y) is imaginary and odd, then F(u, v) is real and odd, and conversely. If f(x, y) is imaginary, we know that the real part of F(u, v) is odd and its imaginary part is even. If f(x, y) is complex and even, then F(u, v) is complex and even, and conversely. Here, we have to prove that both the real and imaginary parts of F(u, v) are even.

Recall that if f(x, y) is an even function, both its real and imaginary parts are even. The second term is the DFT of a purely imaginary even function, which we know is imaginary and even.

Thus, we see that the transform of a complex, even function has an even real part and an even imaginary part, and is thus a complex even function.

This concludes the proof. The proof parallels the proof in (h). The second term is the DFT of a purely imaginary odd function, which we know is real and odd. Thus, the sum of the two is a complex, odd function, as we wanted to prove. Imagine the image on the left being duplicated infinitely many times to cover the xy-plane.
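These symmetry properties are easy to spot-check numerically with the DFT (a one-dimensional sketch; the helper function and the random test data are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_even(n):
    """Symmetrize a random sequence so that f(x) = f((N - x) mod N),
    the discrete notion of an even function."""
    f = rng.standard_normal(n)
    return (f + np.roll(f[::-1], 1)) / 2

f = make_even(8)                   # real and even
F = np.fft.fft(f)                  # should come out real and even

g = 1j * make_even(8)              # imaginary and even
G = np.fft.fft(g)                  # should come out imaginary and even
```

The same kind of check (with an antisymmetrized sequence) confirms the odd-function cases stated above.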

The result would be a checkerboard, with each square in the checkerboard being the image and the black extensions. Now imagine doing the same thing to the image on the right. The results would be identical. Thus, either form of padding accomplishes the same separation between images, as desired. These can be strong horizontal and vertical edges. These sharp transitions in the spatial domain introduce high-frequency components along the vertical and horizontal axes of the spectrum.

This is as expected; padding an image with zeros decreases its average value. The last step follows from the fact that k_1 x and k_2 y are integers, which makes the two rightmost exponentials equal to 1.

The other part of the convolution theorem is done in a similar manner. Consider next the second derivative.

We can generate a filter for use with the DFT simply by sampling this function. In summary, we have a Fourier transform pair relating the Laplacian in the spatial and frequency domains. Thus, we see that the amplitude of the filter decreases as a function of distance from the origin of the centered filter, which is the characteristic of a lowpass filter. A similar argument is easily carried out when considering both variables simultaneously.

From property 3 in Table 4. The negative limiting value is due to the order in which the derivatives are taken. The important point here is that the dc term is eliminated and higher frequencies are passed, which is the characteristic of a highpass filter.

As in Problem 4. For values away from the center, H(u, v) decreases, as in Problem 4. The important point is that the dc term is eliminated and the higher frequencies are passed, which is the characteristic of a highpass filter.

The Fourier transform is a linear process, while the square and square roots involved in computing the gradient are nonlinear operations. The Fourier transform could be used to compute the derivatives (as differences), as in Problem 4. The explanation will be clearer if we start with one variable. This result is for continuous functions. To use it with discrete variables we simply sample the function into its desired dimensions. The inverse Fourier transform of 1 gives an impulse at the origin in the highpass spatial filters.

However, the dark center area is averaged out by the lowpass filter. The reason the final result looks so bright is that the discontinuity edge on boundaries of the ring are much higher than anywhere else in the image, thus dominating the display of the result.


The order does not matter. We know that this term is equal to the average value of the image. So, there is a value of K after which the result of repeated lowpass filtering will simply produce a constant image.

Note that the answer applies even as K approaches infinity. In this case the filter will approach an impulse at the origin, and this would still give us F(0, 0) as the result of filtering.

We want all values of the filter to be zero for all values of the distance from the origin that are greater than 0 (i.e., everywhere except the origin). However, the filter is a Gaussian function, so its value is always greater than 0 for all finite values of D(u, v).

But we are dealing with digital numbers, which will be designated as zero whenever the value of the filter is less than one-half the smallest positive number representable in the computer being used.

As given in the problem statement, the value of this number is c_min. So, we seek the values of K for which the filter function is greater than 0. Because the exponential decreases as a function of increasing distance from the origin, we choose the smallest possible value of D^2(u, v), which is 1. This result guarantees that the lowpass filter will act as a notch pass filter, leaving only the value of the transform at the origin. The image will not change past this value of K.
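The limiting behavior is easy to demonstrate: raising a Gaussian lowpass filter to a large power K leaves essentially only the dc term, so the filtered image approaches a constant equal to the image average. A sketch (the filter construction and parameter values are ours):

```python
import numpy as np

def gaussian_lpf(shape, d0):
    """Gaussian lowpass filter H = exp(-D^2 / (2 d0^2)) on the DFT grid."""
    u = np.fft.fftfreq(shape[0]) * shape[0]
    v = np.fft.fftfreq(shape[1]) * shape[1]
    D2 = u[:, None]**2 + v[None, :]**2
    return np.exp(-D2 / (2 * d0**2))

img = np.arange(64.0).reshape(8, 8)
H = gaussian_lpf(img.shape, d0=1.0)

K = 200                                   # K repeated applications = H**K
F = np.fft.fft2(img)
g = np.real(np.fft.ifft2(F * H**K))       # essentially only F(0, 0) survives
```

With K this large, every filter value off the origin has decayed below machine precision, so the output is (numerically) the constant average value, matching the argument in the text.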

The solution to this problem parallels the solution of Problem 4. Here, however, the filter will approach a notch filter that will take out F(0, 0) and thus will produce an image with zero average value (this implies negative pixels). So, there is a value of K after which repeated highpass filtering no longer changes the image.

We want all values of the filter to be 1 for all values of the distance from the origin that are greater than 0 (i.e., everything except the dc term is passed). This is the same requirement as in Problem 4. Although high-frequency emphasis helps some, the improvement is usually not dramatic (see Fig.). Thus, if an image is histogram equalized first, the gain in contrast improvement will essentially be lost in the filtering process. Therefore, the procedure in general is to filter first and histogram-equalize the image after that.

The preceding equation is easily modified to accomplish this. Next, we assume that the equations hold for n. From this result, it is evident that the contribution of illumination is an impulse at the origin of the frequency plane. A notch filter that attenuates only this component will take care of the problem. Extension of this development to multiple impulses (stars) is implemented by considering one star at a time. The form of the filter will be the same. At the end of the procedure, all individual images are combined by addition, followed by intensity scaling so that the relative brightness between the stars is preserved.

1. Perform a median filtering operation.
2. Follow (1) by high-frequency emphasis.
3. Histogram-equalize this result.
4. Compute the average gray level, K_0.
5. Perform the transformations shown in Fig.

Figure P5. Problem 5. Draw a profile of an ideal edge with a few points valued 0 and a few points valued 1. The geometric mean will give only values of 0 and 1, whereas the arithmetic mean will give intermediate values (blur).
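The edge-profile argument can be checked with a few lines of code (a sketch; the profile values and 3-point window are our choices):

```python
import numpy as np

# An ideal edge profile: a few points valued 0, a few valued 1.
profile = np.array([0, 0, 0, 1, 1, 1], dtype=float)

def windows(x, n=3):
    """All length-n sliding windows over the sequence x."""
    return [x[i:i+n] for i in range(len(x) - n + 1)]

arith = [w.mean() for w in windows(profile)]          # arithmetic mean
geom  = [w.prod() ** (1 / 3) for w in windows(profile)]  # geometric mean
```

Any zero inside the window forces the geometric mean's product to zero, so the geometric mean reproduces the 0/1 edge exactly, while the arithmetic mean produces the intermediate (blurred) values the text mentions.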

Because the center of the mask can be outside the original black area when this happens, the figure will be thickened. For the noise spike to be visible, its value must be considerably larger than the value of its neighbors.

Also keep in mind that the power in the numerator is 1 plus the power in the denominator. It is most visible when surrounded by light values. The center pixel (the pepper noise) will have little influence in the sums.


If the area spanned by the filter is approximately constant, the ratio will approach the value of the pixels in the neighborhood, thus reducing the effect of the low-value pixel.

The center pixel will now be the largest. However, the exponent is now negative, so the small numbers will dominate the result. That constant is the value of the pixels in the neighborhood, so the ratio is just that value. For salt noise the image will become very light. The opposite is true for pepper noise: the image will become dark. The terms of the sum in the denominator are 1 divided by the individual pixel values in the neighborhood.

Thus, low pixel values will tend to produce low filter responses, and vice versa. If, for example, the filter is centered on a large spike surrounded by zeros, the response will be a low output, thus reducing the effect of the spike.
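A sketch of the contraharmonic mean filter behavior these paragraphs describe (the sample neighborhoods and the epsilon guard are ours):

```python
import numpy as np

def contraharmonic(window, Q, eps=1e-12):
    """Contraharmonic mean of order Q: sum(w^(Q+1)) / sum(w^Q).
    Q > 0 reduces pepper (dark) noise; Q < 0 reduces salt (bright) noise.
    eps guards against 0 raised to a negative power."""
    w = np.asarray(window, dtype=float) + eps
    return (w ** (Q + 1)).sum() / (w ** Q).sum()

pepper = [200, 200, 200, 0, 200, 200, 200, 200, 200]   # one dark spike
salt   = [50, 50, 50, 255, 50, 50, 50, 50, 50]         # one bright spike
```

Note that the numerator's power is 1 plus the denominator's, as stated above: with Q > 0 the large neighborhood values dominate both sums and the pepper spike is suppressed, and with Q < 0 the small values dominate and the salt spike is suppressed.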

The Fourier transform of the 1 gives an impulse at the origin, and the exponentials shift the origin of the impulse, as discussed in Section 4. Then, the components of motion are as follows: They can be found, for example, the Handbook of Mathematical Functions, by Abramowitz, or other similar ref- erence.

Any of the techniques dis- cussed in this chapter for handling uniform blur along one dimension can then be applied to the problem. The image is then converted back to rectangular co- ordinates after restoration. The mathematical solution is simple. Any of the methods in Sections 5. Set all pixels in the image, ex- cept the cross hairs, to that intensity value. Denote the Fourier transform of this image by G u , v.

Because the characteristics of the cross hairs are given with a high degree of accuracy, we can construct an image of the background of the same size using the background intensity levels determined previously. We then construct a model of the cross hairs in the correct location determined from the given image using the dimensions provided and intensity level of the cross hairs. Denote by F u , v the Fourier transform of this new image. In the likely event of vanishing values in F u , v , we can construct a radially-limited filter us- ing the method discussed in connection with Fig.

Because we know F u , v and G u , v , and an estimate of H u , v , we can refine our estimate of the blur- ring function by substituting G and H in Eq. The resulting filter in either case can then be used to deblur the image of the heart, if desired. But, we know from the statement of Problem 4. Therefore, we have reduced the problem to computing the Fourier transform of a Gaussian function. From the basic form of the Gaussian Fourier transform pair given in entry 13 of Table 4.

Keep in mind that the preceding derivations are based on assuming continuous variables. A discrete filter is obtained by sampling the continuous function. Its purpose is to gain familiarity with the various terms of the Wiener filter. This is as far as we can reasonably carry this problem. It is worthwhile pointing out to students that a filter in the frequency domain for the Laplacian operator is discussed in Section 4.

However, substituting that solution for P(u, v) here would only increase the number of terms in the filter and would not help in simplifying the expression. Furthermore, we can use superposition and obtain the response of the system first to F(u, v) and then to N(u, v), because we know that the image and noise are uncorrelated. The sum of the two individual responses then gives the complete response.

The principal steps are as follows: Select coins as close as possible in size and content to the lost coins. Select a background that approximates the texture and brightness of the photos of the lost coins. Set up the museum photographic camera in a geometry as close as possible to the one that produced the images of the lost coins (this includes paying attention to illumination). Obtain a few test photos. To simplify experimentation, obtain a TV camera capable of giving images that resemble the test photos.

This can be done by connecting the camera to an image processing system and generating digital images, which will be used in the experiment.

Obtain sets of images of each coin with different lens settings. The resulting images should approximate the aspect angle, size (in relation to the area occupied by the background), and blur of the photos of the lost coins. The lens setting for each image in the preceding step is a model of the blurring process for the corresponding image of a lost coin. For each such setting, photograph a small point of light to approximate an impulse, and digitize the impulse. Its Fourier transform is the transfer function of the blurring process.

Digitize each blurred photo of a lost coin, and obtain its Fourier transform. At this point, we have H(u, v) and G(u, v) for each coin. Obtain an approximation to F(u, v) by using a Wiener filter.
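The Wiener filtering step can be sketched minimally using the parametric form in which the noise-to-signal power ratio is approximated by a constant K (the function name and demo values below are hypothetical):

```python
import numpy as np

def wiener(G, H, K=0.01):
    """Parametric Wiener filter: F_hat = [conj(H) / (|H|^2 + K)] * G."""
    H = np.asarray(H, dtype=complex)
    return np.conj(H) / (np.abs(H)**2 + K) * G

# With K = 0 and a nowhere-zero H, the filter reduces to inverse filtering
# and recovers F exactly; with K > 0 it suppresses noise amplification.
rng = np.random.default_rng(0)
F = np.fft.fft2(rng.random((16, 16)))
H = np.full((16, 16), 2.0)
G = H * F
F_hat = wiener(G, H, K=0.0)
```

In practice K would be chosen experimentally, one value per coin image, as part of the iterative passes described below.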

Equation 5. In general, several experimental passes of these basic steps, with different settings and parameters, are required to obtain acceptable results in a problem such as this. The intensity at that point is double the intensity of all other points.

From the definition of the Radon transform in Eq. We do this by substituting the convolution expression into Eq. This completes the proof.
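For reference, the continuous Radon transform used in the proof above has the standard form (in the book's (ρ, θ) notation):

```latex
g(\rho,\theta) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
  f(x,y)\,\delta(x\cos\theta + y\sin\theta - \rho)\,dx\,dy
```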

Chapter 6 Problem Solutions

Problem 6. These are the trichromatic coefficients. We are interested in the tristimulus values X, Y, and Z, which are related to the trichromatic coefficients by Eqs. Note, however, that all the tristimulus coefficients are divided by the same constant, so their percentages relative to the trichromatic coefficients are the same as those of the coefficients.
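The relation just described can be sketched directly, assuming the standard definitions x = X/(X+Y+Z), and so on (the function name is ours):

```python
def trichromatic_coefficients(X, Y, Z):
    """Trichromatic coefficients from tristimulus values.

    x = X/(X+Y+Z), y = Y/(X+Y+Z), z = Z/(X+Y+Z); they sum to 1, which is
    why dividing all tristimulus values by a common constant leaves the
    percentages unchanged."""
    s = X + Y + Z
    return X / s, Y / s, Z / s

# Scaling X, Y, Z by any common constant leaves x, y, z unchanged.
x1 = trichromatic_coefficients(1.0, 2.0, 1.0)
x2 = trichromatic_coefficients(10.0, 20.0, 10.0)
```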

Problem 6. Values in between are easily seen to follow from these simple relations. The key to solving this problem is to realize that any color on the border of the triangle is made up of proportions of the two vertices defining the line segment that contains the point. The line segment connecting points c3 and c is shown extended (dashed segment) until it intersects the line segment connecting c1 and c2.

The point of intersection is denoted c0. Because we have the values of c1 and c2, if we knew c0 we could compute the percentages of c1 and c2 contained in c0 by using the method described in Problem 6. Let the ratio of the content of c1 and c2 in c0 be denoted by R12. If we now add color c3 to c0, we know from Problem 6. For any position of a point along this line we could determine the percentage of c3 and c0, again by using the method described in Problem 6.

What is important to keep in mind is that the ratio R12 will remain the same for any point along the segment connecting c3 and c0. The color of the points along this line is different for each position, but the ratio of c1 to c2 will remain constant. So, if we can obtain c0, we can then determine the ratio R12 and the percentage of c3 in color c. The point c0 is not difficult to obtain.

The intersection of these two lines gives the coordinates of c0. The lines can be determined uniquely because we know the coordinates of the two point pairs needed to determine the line coefficients. Solving for the intersection in terms of these coordinates is straightforward, but tedious. Our interest here is in the fundamental method, not the mechanics of manipulating simple equations, so we do not give the details. At this juncture we have the percentage of c3 and the ratio between c1 and c2.

Let the percentages of these three colors composing c be denoted by p1, p2, and p3, respectively. Finally, note that this problem could have been solved in the same way by intersecting one of the other two sides of the triangle. Going to another side would be necessary, for example, if the line we used in the preceding discussion had an infinite slope.
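The geometric construction described above can be sketched numerically. The helper names intersect and fractions are ours, and the inverse-distance rule for the fractions is the one used for a point on a segment between two colors:

```python
import numpy as np

def intersect(p1, p2, p3, p4):
    """Intersection of the line through p1, p2 with the line through p3, p4."""
    p1, p2, p3, p4 = (np.asarray(q, float) for q in (p1, p2, p3, p4))
    A = np.column_stack([p2 - p1, p3 - p4])
    t = np.linalg.solve(A, p3 - p1)[0]
    return p1 + t * (p2 - p1)

def fractions(ca, cb, p):
    """Fractions of ca and cb composing a point p on the segment ca-cb
    (inverse distance: the closer p is to ca, the more of ca it contains)."""
    ca, cb, p = (np.asarray(q, float) for q in (ca, cb, p))
    fa = np.linalg.norm(cb - p) / np.linalg.norm(cb - ca)
    return fa, 1.0 - fa

# Hypothetical chromaticity triangle: c1, c2, c3 are the vertices and c is
# an interior color. Extend segment c3-c to meet segment c1-c2 at c0.
c1, c2, c3, c = (0.0, 0.0), (1.0, 0.0), (0.5, 1.0), (0.5, 0.5)
c0 = intersect(c3, c, c1, c2)
p1_in_c0, p2_in_c0 = fractions(c1, c2, c0)   # gives the ratio R12
p3, _ = fractions(c3, c0, c)                 # percentage of c3 in c
```

With these symmetric demo values, c0 lands at the midpoint of c1-c2 and c is half c3, half c0.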

A simple test to determine whether the color of c is equal to any of the vertices should be the first step in the procedure; in this case no additional calculations would be required. With a specific filter in place, only the objects whose color corresponds to that wavelength will produce a significant response on the monochrome camera.

A motorized filter wheel can be used to control filter position from a computer. If one of the colors is white, then the response of the three filters will be approximately equal and high. If one of the colors is black, the response of the three filters will be approximately equal and low. We can create Table P6.

Thus, we get the monochrome displays shown in Fig. For a color to be gray, all RGB components have to be equal, so there are 256 shades of gray. The others decrease in saturation from the corners toward the black or white point. Table P6. From left to right, the color bars are in accordance with Fig. The middle gray background is unchanged. Figure P6. For clarity, we will use a prime to denote the CMY components.

And from Eq. Note that, in accordance with Eq. Thus, we get the monochrome display shown in Fig. To generate a color rectangle with the properties required in the problem statement, we choose a fixed intensity I and maximum saturation S (these are spectrum colors, which are supposed to be fully saturated). If we have more than eight bits, then the increments can be smaller.
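A compact sketch of such a rectangle, using Python's standard colorsys module with HSV as a stand-in for the book's HSI model (fixed value, full saturation, hue increasing left to right; the function name is ours):

```python
import colorsys

def hue_strip(width=256, s=1.0, v=1.0):
    """One row of fully saturated spectrum colors; duplicating the row
    (and its columns) grows the strip into a rectangle."""
    return [colorsys.hsv_to_rgb(i / width, s, v) for i in range(width)]

strip = hue_strip()   # 256 hue increments, matching 8-bit data
```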

Longer strips also can be made by duplicating column values. One part of the solution is to approach it in the HSI space; the other is to use polar coordinates to create a hue image whose values grow as a function of angle. The center of the image is the middle of whatever image area is used. Values of the saturation image decrease linearly in all radial directions from the origin. The intensity image is just a specified constant.

With these basics in mind it is not difficult to write a program that generates the desired result. It is also given that the gray-level images in the problem statement are 8-bit images. The latter condition means that the hue angle can only be divided into a maximum of 256 values.
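The polar-coordinate construction just outlined can be sketched as follows (the array size and the constant intensity value are our choices):

```python
import numpy as np

def hsi_wheel(n=128, intensity=0.5):
    """Hue grows with angle about the image center, saturation decreases
    linearly with radius, and intensity is a specified constant."""
    y, x = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    hue = (np.arctan2(y, x) % (2 * np.pi)) / (2 * np.pi)  # angle in [0, 1)
    r = np.hypot(x, y)
    sat = 1.0 - r / r.max()      # 1 near the center, 0 at the far corners
    return hue, sat, np.full((n, n), intensity)

H, S, I = hsi_wheel()
```

Converting the (H, S, I) triple to RGB for display would complete the program.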

Another way of looking at this is that the entire [0, 360] hue scale is compressed to the range [0, 255]. From this we can easily compute the values of the other two regions. The region in Figure P6. This also is true of the black background. The region in the center of the color image is white, so its saturation is 0. Therefore, both darker gray regions in the intensity image have value 85 (i.e., 255/3).

Similarly, equal proportions of the secondaries (yellow, cyan, and magenta) produce white, so the two lighter gray regions have the same value as the region shown in the figure. The center of the image is white, so its value is 255. Threshold the image with a threshold value slightly larger than 0.

The result is shown in Fig. It is clear that coloring all the black points in the desired shade of blue presents no difficulties. We already know how to take out the water from (b). Removal of the red [and of the black, if you do not want to use the method as in (b)] can be done by using the technique discussed in Section 6.

This was demonstrated in Problem 6. That is, as long as any one of the RGB components is 0, Eq. Consider RGB colors 1, 0, 0 and 0, 0. The HSI triplets for these colors [per Eq. Their HSI values [per Eqs. Saturation alone is not enough information to compute the saturation of the complemented color.

The hue component is the angle from red in a counterclockwise direction, normalized by 360 degrees. For a color on the top half of the circle i. For a color on the bottom half of the circle i. But from the definition of the CMY space in Eq. The red, green, and blue components should be transformed with the same mapping function so that the colors do not change.

The general shape of the curve would be as shown in Fig. The computations are best done in a spreadsheet, as shown in Table P6. Assume that the component image values of the HSI image are in the range [0, 1]. Call the component images H (hue), S (saturation), and I (intensity). Similarly, all the squares are at their maximum value so, from Eq.

The hue component image, H, is shown in Fig. Recall from Fig. Thus, the important point to be made in Fig. As you can see, this indeed is the case for the squares in Fig. For the shades of red, green, and blue in Fig. When the averaging mask is fully contained in a square, there is no blurring because the value of each square is constant.

When the mask contains portions of two or more squares, the value produced at the center of the mask will be between the values of the two squares, and will depend on the relative proportions of the squares occupied by the mask. To see exactly what the values are, consider a point in the center of the red region in Fig.

We know from (a) above that the value of the red point is 0 and the value of the green point is 0. Thus, the values in the blurred band between red and green vary from 0 to 0. The values along the line just discussed are transitions from green to red. The reason for the diagonal green line in this figure is that the average values along that region are nearly midway between red and blue, which we know from Fig.

If an RGB point z does not lie on the plane, and its coordinates are substituted in the preceding equation, the equation will give either a positive or a negative value; it will not yield zero.

We say that z lies on the positive or negative side of the plane, depending on whether the result is positive or negative. Suppose that we test the point a given in the problem statement to see whether it is on the positive or negative side of each of the six planes composing the box, and change the coefficients of any plane for which the result is negative. Then, a will lie on the positive side of all planes composing the bounding box. In fact, all points inside the bounding box will yield positive values when their coordinates are substituted in the equations of the planes.

Points outside the box will give at least one negative (or zero, if the point is on a plane) value. Thus, the method consists of substituting an unknown color point in the equations of all six planes. If all the results are positive, the point is inside the box; otherwise it is outside the box. The intersections of pairs of parallel planes establish a range of values along each of the RGB axes that must be checked to see if an unknown point lies inside the box or not.
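For an axis-aligned box the six plane tests collapse to per-axis range checks. A sketch, where a stands for the box's center color and W for its width (both hypothetical parameter names):

```python
import numpy as np

def inside_box(z, a, W):
    """True if RGB point z lies strictly inside the axis-aligned box of
    width W centered on color a. Each per-axis comparison is one pair of
    parallel-plane tests; np.all is the AND of all six."""
    z = np.asarray(z, float)
    a = np.asarray(a, float)
    return bool(np.all(np.abs(z - a) < W / 2.0))

near = inside_box([0.55, 0.45, 0.50], [0.5, 0.5, 0.5], 0.2)  # inside
far = inside_box([0.90, 0.50, 0.50], [0.5, 0.5, 0.5], 0.2)   # outside
```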

This can be done on an image-by-image basis i. These will produce three binary images which, when ANDed, will give all the points inside the box. In other words, the figure looks like a blimp aligned with the R-axis. To compute the gradient of each component image we take first-order partial derivatives. In this case, only the component of the derivative in the horizontal direction is nonzero. If we model the edge as a ramp edge, then a profile of the derivative image would appear as shown in Fig.

The magnified view shows clearly that the derivatives of the two images are mirrors of each other. Thus, if we computed the gradient vector of each image and added the results as suggested in the problem statement, the components of the gradient would cancel out, giving a zero gradient for a color image that has a clearly defined edge between two different color regions.
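The cancellation can be seen numerically with two mirrored ramp edges (a toy example of ours, not from the book):

```python
import numpy as np

# Two component images whose edges are mirror images of each other.
ramp = np.linspace(0.0, 1.0, 8)
img1 = np.tile(ramp, (8, 1))         # intensities increase left to right
img2 = np.tile(ramp[::-1], (8, 1))   # the mirrored component

# Horizontal first differences (the only nonzero derivative component here).
dx1 = np.diff(img1, axis=1)
dx2 = np.diff(img2, axis=1)
total = dx1 + dx2   # summing per-channel gradients cancels the edge
```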

This simple example illustrates that the gradient vector of a color image is not equivalent to the result of forming a color gradient vector from the sum of the gradient vectors of the individual component images.

Chapter 7 Problem Solutions

Problem 7. Problem 7. Level 0 of the prediction residual pyramid is the lowest-resolution approximation, [8. The level 2 prediction residual is obtained by upsampling the level 1 approximation and subtracting it from the level 2 approximation (the original image).
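A numpy sketch of the prediction-residual step, using pixel replication for the upsampling (an interpolation filter could be used instead; replication keeps the sketch minimal):

```python
import numpy as np

def prediction_residual(coarse, fine):
    """Residual = finer approximation minus the upsampled coarser one."""
    up = np.kron(coarse, np.ones((2, 2)))   # 2x upsampling by replication
    return fine - up

# Hypothetical level-2 image and its level-1 approximation (2x2 block means).
fine = np.arange(16.0).reshape(4, 4)
coarse = fine.reshape(2, 2, 2, 2).mean(axis=(1, 3))
residual = prediction_residual(coarse, fine)

# The original is recoverable: upsample the approximation, add the residual.
recovered = np.kron(coarse, np.ones((2, 2))) + residual
```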

We can generate the following table: If the filters are orthonormal,. Thus, the filters are orthonormal and will also satisfy Eq. In addition, they will satisfy the biorthogonality conditions stated in Eqs.

The filters are both orthonormal and biorthogonal. Using Eq. 7. The last two coefficients are d1(0) and d1(1), which are computed as in the example. When the index is large, the resemblance is strong; otherwise it is weak. Thus, if a function is similar to itself at different scales, the resemblance index will be similar at different scales.

The CWT coefficient values (the index) will have a characteristic pattern. As a result, we can say that the function whose CWT is shown is self-similar, like a fractal signal.

The CWT is often easier to interpret because the built-in redundancy tends to reinforce traits of the function or image. For example, see the self-similarity of Problem 7. To determine C, use Eq. To check the result, substitute these values into Eq.

To construct the approximation pyramid that corresponds to the transform in Fig. Thus, the approximation pyramid would have 4 levels.

If the input is shifted, the transform changes. The filter bank of Fig. This is the case in the third transform shown. The functions are determined using Eqs. To order the wavelet functions by frequency, count the number of transitions made by each function. For example, V0 has the fewest transitions (only 2) and the lowest frequency content, while W2,AA has the most (9 transitions) and correspondingly the highest frequency content.

From top to bottom in the figure, there are 2, 3, 5, 4, 9, 8, 6, and 7 transitions, respectively.

They are 2. Because the detail entropy is 0, no further decomposition of the detail is warranted. Thus, we perform another FWT iteration on the approximation to see if it should be decomposed again.
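The entropy test driving this decision can be sketched as follows (the book's cost function may differ in detail; this normalized-energy entropy is one standard choice, and the function name is ours):

```python
import numpy as np

def coeff_entropy(c):
    """Shannon entropy of the normalized coefficient energies; an all-zero
    detail subband has entropy 0 and needs no further decomposition."""
    c = np.asarray(c, float).ravel()
    energy = np.sum(c**2)
    if energy == 0.0:
        return 0.0
    p = c**2 / energy
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Decompose only subbands whose entropy is still high.
detail = [0.0, 0.0, 0.0, 0.0]    # entropy 0: stop here
approx = [1.0, -1.0, 1.0, -1.0]  # spread energy: decompose further
```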

This process is then repeated until no further decompositions are called for. The resulting optimal tree is shown in Fig.

Chapter 8 Problem Solutions

Problem 8. That is, all intensities are equally probable. Since all intensities are equally probable, there is no advantage to assigning any particular intensity fewer bits than any other.

Thus, we assign each the fewest possible bits required to cover the 2^n levels.

This, of course, is n bits, and Lavg becomes n bits also: Problem 8. The maximum run length would be 2^n and thus require n bits for representation.

Since a run length of 0 cannot occur, and the run-length pair (0, 0) is used to signal the start of each new line, an additional 2n bits are required per line. To achieve some level of compression, C must be greater than 1.
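The bit counting above can be sketched directly (hypothetical helper names; one bit per pixel is assumed for the original binary line):

```python
def rle_bits(line, n):
    """Bits to run-length code one binary line: n bits per run plus the
    2n-bit (0, 0) pair that signals the start of the line."""
    runs = 1
    for prev, cur in zip(line, line[1:]):
        if cur != prev:
            runs += 1
    return runs * n + 2 * n

def compression(line, n):
    """C = original bits / coded bits, with 1 bit per original pixel."""
    return len(line) / rle_bits(line, n)

flat = [0] * 256          # a single long run compresses well
busy = [0, 1] * 128       # alternating pixels: C < 1, i.e., expansion
```

The busy line shows why C must exceed 1 for the scheme to be worthwhile: with a run per pixel, the coded line is larger than the original.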

Note that the quantized intensities must be multiplied by 16 to decode or decompress them for the rms error and signal-to-noise calculations. Table P8. For instance, compute the differences between adjacent pixels and Huffman code them.
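The decode-then-measure step can be sketched as follows (function names are ours; snr_ms follows the mean-square signal-to-noise definition used in the chapter):

```python
import numpy as np

def rms_error(f, f_hat):
    """Root-mean-square error between an image and its decompressed copy."""
    f = np.asarray(f, float)
    f_hat = np.asarray(f_hat, float)
    return float(np.sqrt(np.mean((f - f_hat)**2)))

def snr_ms(f, f_hat):
    """Mean-square signal-to-noise ratio: sum(f_hat^2) / sum((f_hat - f)^2)."""
    f = np.asarray(f, float)
    f_hat = np.asarray(f_hat, float)
    return float(np.sum(f_hat**2) / np.sum((f_hat - f)**2))

# 4-bit quantization of an 8-bit ramp: decode by multiplying by 16.
f = np.arange(256)
decoded = (f // 16) * 16
err = rms_error(f, decoded)
```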

Then, using Eq.