Many image manipulation operations (such as blurring, edge detection, etc.) can be expressed uniformly as a convolution with a filter; different filters represent different operations. The filter is a "table" of coefficients (we will represent it as a list of lists) that specifies how nearby pixels are to be combined. For simplicity, we will assume that the width and height of our filters are always odd and that the center of the filter is placed on the "current pixel". The remaining elements of the table then specify pixels relative to the current one. Here is an example:
blur = [ [ 0, 1, 0 ],
         [ 1, 1, 1 ],
         [ 0, 1, 0 ] ]
This filter is interpreted as follows: for each and every pixel at position (x, y), we take that pixel's value (the central cell of the filter) together with the values of its nearest neighbors to the left, right, up, and down (i.e. (x-1, y), (x+1, y), (x, y-1), (x, y+1)) and sum them all up with coefficients equal to 1. Of course, we need to do this in each color channel. You can easily see that this operation, applied to each and every pixel (x, y) in the image, is almost precisely the blur operation discussed in your textbook and lecture slides, except that the previous blur function used the *average* (i.e. the sum of the pixels was divided by 5, the total number of pixels in each sum). We can (and should) average when using the filter as well: we could set the non-zero elements to 0.2 instead of 1, or we can introduce an overall factor as described later.
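To make the sum concrete, here is a minimal sketch of applying the blur filter at a single interior pixel. It uses a plain grayscale image (a list of lists of ints), not the course's Image class, so there is only one channel to worry about:

```python
blur = [[0, 1, 0],
        [1, 1, 1],
        [0, 1, 0]]

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]

x, y = 1, 1            # the "current pixel", placed at the filter's center
half = len(blur) // 2  # 1 for a 3x3 filter

# Sum nearby pixels, each weighted by the matching filter coefficient.
total = 0
for fy in range(len(blur)):
    for fx in range(len(blur[0])):
        total += blur[fy][fx] * img[y + fy - half][x + fx - half]

print(total)        # 20 + 40 + 50 + 60 + 80 = 250
print(total * 0.2)  # averaging over the 5 contributing pixels: 50.0
```

Note how the filter index `fy` (or `fx`) is converted to an image offset by subtracting `half`, so the center cell of the filter lands exactly on (x, y).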
Another filter might look like this:
motion_blur = [ [ 1, 0, 0, 0, 0 ],
                [ 0, 1, 0, 0, 0 ],
                [ 0, 0, 1, 0, 0 ],
                [ 0, 0, 0, 1, 0 ],
                [ 0, 0, 0, 0, 1 ] ]
Here the filter prescribes summing each pixel at (x, y) with the pixels at (x-1, y-1), (x-2, y-2), (x+1, y+1), and (x+2, y+2).
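You can verify this by converting each non-zero filter cell into its (dx, dy) offset from the center; the snippet below does exactly that for the 5x5 motion blur (again with plain lists, not the course's Image class):

```python
motion_blur = [[1 if i == j else 0 for j in range(5)] for i in range(5)]

half = len(motion_blur) // 2  # 2 for a 5x5 filter

# Each non-zero cell (fx, fy) contributes the pixel at (x + fx - half, y + fy - half).
offsets = [(fx - half, fy - half)
           for fy in range(5)
           for fx in range(5)
           if motion_blur[fy][fx] != 0]

print(offsets)  # [(-2, -2), (-1, -1), (0, 0), (1, 1), (2, 2)]
```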
Your goal is to implement a function
filterImage(image, filter, factor, bias).
The defaults for factor and bias should be 1 and 0, respectively; the parameter image will take an object of class Image, and filter will be an arbitrary matrix (a list of lists) similar to the ones shown above.
The function will start by making a copy of the image. Then the copied image will be updated as follows (pseudocode):
for each x along the image width:
    for each y along the image height:
        # the filter is centered at pixel (x, y)
        sum up the values of pixels from the original image as prescribed
          by the filter, using the filter's coefficients
          (you will need to loop over the filter dimensions and extract those pixels!)
          (you will need to do this in each color channel, either separately
          line by line or using fancier tricks as showcased in your textbook)
        after summation, multiply the result, in each channel, by the factor
          (e.g. for the "blur" filter above, with all 1's, we could simply
          specify factor = 1/5)
        add bias to the result, also in each color channel
        do not forget that color intensities must be integers
        make sure the intensities are >= 0 and <= 255
          (if not, trim them back into the allowed range)
        set the resulting pixel value (three color intensities) as the value
          of the pixel (x, y) in the copied image
return the copied and modified image
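The steps above can be sketched as follows. This is only an illustration of the algorithm, using a plain list-of-rows of (r, g, b) tuples in place of the course's Image class; in your actual solution you will need to replace the raw indexing with whatever pixel get/set and copy methods that class provides.

```python
def filterImage(image, filter, factor=1, bias=0):
    """Apply a convolution filter; image is a list of rows of (r, g, b) tuples."""
    height, width = len(image), len(image[0])
    fh, fw = len(filter), len(filter[0])
    half_h, half_w = fh // 2, fw // 2

    # Copy the image so that every sum reads from the ORIGINAL pixel values.
    result = [row[:] for row in image]

    def clamp(v):
        # Round down to an integer and trim into the allowed range 0..255.
        return max(0, min(255, int(v)))

    # Stay far enough from the borders that the filter never falls off the image.
    for y in range(half_h, height - half_h):
        for x in range(half_w, width - half_w):
            r = g = b = 0
            for fy in range(fh):
                for fx in range(fw):
                    pr, pg, pb = image[y + fy - half_h][x + fx - half_w]
                    c = filter[fy][fx]
                    r += c * pr
                    g += c * pg
                    b += c * pb
            # Scale by factor and shift by bias in each channel, then clamp.
            result[y][x] = (clamp(r * factor + bias),
                            clamp(g * factor + bias),
                            clamp(b * factor + bias))
    return result
```

For example, applying the blur filter with factor=0.2 to a uniform gray image leaves it unchanged (the weighted average of identical pixels is that same pixel), while a large bias pushes every channel up toward 255.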
You may want to start implementing the solution from the innermost (filter) loops. Assume that you are looking at the pixel at (x, y) and that it is at the center of the filter. Write loops over the filter width and height that sum up all the neighboring pixels around the fixed pixel (x, y), with the required coefficients. Then put this code inside the "main" outer loops that traverse the image itself (i.e. iterate over pixels (x, y)). In those loops, make sure you have correct ranges for x and y: you have to leave enough space on the left/right/top/bottom so that you can always look at the neighboring pixels prescribed by a given filter. For instance, if you traverse all the way from 0 to width, then you cannot look at pixels to the left when x = 0, and you cannot look at pixels to the right when x = width - 1!
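A tiny illustration of the safe traversal range (the specific numbers here are just for demonstration): with a 3-wide filter the center must stay at least 1 pixel away from each edge, so x runs over range(half, width - half).

```python
filter_width = 3
half = filter_width // 2  # 1: the filter reaches 1 pixel to each side

width = 8
xs = list(range(half, width - half))
print(xs)  # [1, 2, 3, 4, 5, 6] -- x = 0 and x = 7 are skipped
```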
Remember, this problem is very similar to the examples considered in class and in the textbook: you are still traversing the whole image. In the in-class examples, for each pixel (x, y) we "saw" during the traversal, we manually pulled a few nearby pixels, added them up, and so on (and we had to write a different function whenever we wanted to combine a different pattern of nearby pixels). Here, for each pixel (x, y) we instead run an additional traversal of the filter table itself, which automatically pulls and combines whatever nearby pixels the filter prescribes.