Compare two images to find differences in C#
When you select two images to compare and click the Go button, the program executes the following code to compare the images and see at which pixels they differ.
// Load the images.
using (Bitmap bm1 = new Bitmap(txtFile1.Text))
{
    using (Bitmap bm2 = new Bitmap(txtFile2.Text))
    {
        // Make a difference image.
        int wid = Math.Min(bm1.Width, bm2.Width);
        int hgt = Math.Min(bm1.Height, bm2.Height);
        Bitmap bm3 = new Bitmap(wid, hgt);

        // Create the difference image.
        bool are_identical = true;
        Color eq_color = Color.White;
        Color ne_color = Color.Red;
        for (int x = 0; x < wid; x++)
        {
            for (int y = 0; y < hgt; y++)
            {
                if (bm1.GetPixel(x, y).Equals(bm2.GetPixel(x, y)))
                {
                    bm3.SetPixel(x, y, eq_color);
                }
                else
                {
                    bm3.SetPixel(x, y, ne_color);
                    are_identical = false;
                }
            }
        }

        // Display the result.
        picResult.Image = bm3;

        this.Cursor = Cursors.Default;
        if ((bm1.Width != bm2.Width) || (bm1.Height != bm2.Height))
            are_identical = false;
        if (are_identical)
        {
            MessageBox.Show("The images are identical");
        }
        else
        {
            MessageBox.Show("The images are different");
        }
    }
}
The code loads the two image files into Bitmaps. It finds the smaller of the Bitmaps' widths and heights, and makes a new Bitmap of that size.
Next the program loops over the pixels in the smaller area, comparing the images' pixels. If two corresponding pixels are equal, the program colors the result pixel white. If the two pixels are different, the program makes the result pixel red. When it has examined all of the pixels, the program displays the result image.
The result image highlights differences between the two input images no matter how small they are. For example, if a pixel in one image has RGB values (50, 150, 200) and the corresponding pixel in the other image has values (51, 150, 200), the result flags the difference plainly even though your eyes could never tell the two pixels apart in the original images.
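If you would rather ignore differences that small, one possible variation (not shown in the example program) is to treat two pixels as matching when each of their color components differs by no more than some tolerance. Here's a rough sketch; the PixelsMatch name and the tolerance of 10 are just illustrations:

// Treat the pixels as equal if every color component is within the tolerance.
private bool PixelsMatch(Color c1, Color c2, int tolerance)
{
    return Math.Abs(c1.R - c2.R) <= tolerance &&
           Math.Abs(c1.G - c2.G) <= tolerance &&
           Math.Abs(c1.B - c2.B) <= tolerance;
}

// Inside the comparison loop you would then use something like:
//     if (PixelsMatch(bm1.GetPixel(x, y), bm2.GetPixel(x, y), 10)) ...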
7/9/2011 10:45 AM
Mark wrote:
Nice example. Question though: if the images are photographs that were captured at slightly different frames of reference (in other words my camera moved ever so slightly between shots), what strategy would you take to get both images lined up before running this comparison?
7/10/2011 7:40 AM
Rod Stephens wrote:
This technique doesn't really work for that. However, you could try transforming the image in various ways to see if you can minimize the difference. For example, try offsetting one image by 1 or 2 pixels in the +/- X and Y directions and adding up the total differences. You might also rotate the image slightly, although that takes a bit longer.
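As a very rough, untested sketch of that idea (the method names and the maxShift of 2 pixels are just placeholders), you could score each small shift by the average per-pixel difference and keep the best one:

// Return the average per-pixel color difference when bm2 is shifted
// by (dx, dy) relative to bm1, comparing only the overlapping pixels.
private double AverageDifference(Bitmap bm1, Bitmap bm2, int dx, int dy)
{
    int wid = Math.Min(bm1.Width, bm2.Width);
    int hgt = Math.Min(bm1.Height, bm2.Height);
    long total = 0, count = 0;
    for (int x = Math.Max(0, -dx); x < wid && x + dx < wid; x++)
    {
        for (int y = Math.Max(0, -dy); y < hgt && y + dy < hgt; y++)
        {
            Color c1 = bm1.GetPixel(x, y);
            Color c2 = bm2.GetPixel(x + dx, y + dy);
            total += Math.Abs(c1.R - c2.R) +
                     Math.Abs(c1.G - c2.G) +
                     Math.Abs(c1.B - c2.B);
            count++;
        }
    }
    return count == 0 ? double.MaxValue : (double)total / count;
}

// Try every offset up to maxShift pixels in each direction and keep
// the one with the smallest average difference.
private Point FindBestOffset(Bitmap bm1, Bitmap bm2, int maxShift)
{
    Point best = Point.Empty;
    double bestScore = double.MaxValue;
    for (int dx = -maxShift; dx <= maxShift; dx++)
    {
        for (int dy = -maxShift; dy <= maxShift; dy++)
        {
            double score = AverageDifference(bm1, bm2, dx, dy);
            if (score < bestScore)
            {
                bestScore = score;
                best = new Point(dx, dy);
            }
        }
    }
    return best;
}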
(I think this technique was originally developed to find differences in telescope images to find moving objects such as planets, asteroids, and novas. There the telescope should be in exactly the same orientation for each picture and even if it's not you can use the stars to align the images. I suppose you could use objects in the scene to align the images more generally if you can figure out how. I may have to think about that...)
7/10/2011 12:11 PM
Mark wrote:
Interesting - I'll consider that. I'm thinking it would be really powerful to be able to align and then compare two selection areas of two images, rather than the whole images. In that case there would need to be some creative approach, probably using advanced math functions(?), to look through each selection area to find similarities between the two. Having identified the similar areas, a reference pixel could be established (i.e. the top-left-most pixel included in the region of similarity), then the comparison could align and proceed based on that pixel. Any thoughts on likelihood of success and angle of approach? Thanks!
7/11/2011 7:47 AM
Rod Stephens wrote:
I think this could be a big area of research. If you don't know whether the images are related, you might need an algorithm that performed feature identification to try to determine what kinds of objects are present. For example, it's easy for a person to tell if the Eiffel Tower is in a picture but it would be tricky to write a program to do so if the pictures were taken at different angles. There is software that does this sort of thing remarkably well but I couldn't write something like that.
It wouldn't be too hard to let the user select two areas to compare.
Then I would look for bright spots or perhaps areas of sudden contrast and try to line those up. It might be worth doing some image pre-processing such as using an edge detector to remove a lot of the data from the pictures. Then it might be easier to match them up.
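As a rough, untested sketch of that pre-processing step (the EdgeMap name is made up, a Sobel operator is just one way to do it, and you'd have to tune the threshold for your images):

// Keep only pixels where the local brightness gradient is large,
// using a simple Sobel operator. Everything else becomes white.
private Bitmap EdgeMap(Bitmap source, double threshold)
{
    Bitmap result = new Bitmap(source.Width, source.Height);
    for (int x = 1; x < source.Width - 1; x++)
    {
        for (int y = 1; y < source.Height - 1; y++)
        {
            // Grab the 3x3 grayscale neighborhood.
            float[,] g = new float[3, 3];
            for (int i = -1; i <= 1; i++)
                for (int j = -1; j <= 1; j++)
                    g[i + 1, j + 1] =
                        source.GetPixel(x + i, y + j).GetBrightness() * 255f;

            // Sobel gradients in the X and Y directions.
            float gx = -g[0, 0] - 2 * g[0, 1] - g[0, 2]
                       + g[2, 0] + 2 * g[2, 1] + g[2, 2];
            float gy = -g[0, 0] - 2 * g[1, 0] - g[2, 0]
                       + g[0, 2] + 2 * g[1, 2] + g[2, 2];

            double magnitude = Math.Sqrt(gx * gx + gy * gy);
            result.SetPixel(x, y,
                magnitude > threshold ? Color.Black : Color.White);
        }
    }
    return result;
}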
In the end you would have to try some changes between the two images (move them around and rotate them) to see which gives the best result. You *might* be able to get a gradient of improvement. I.e. if moving +1 in the X direction helps, do it again until it stops helping. I suspect the gradient won't last long, though, unless it's a very smoothly colored image.
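Here's an equally rough sketch of that hill-climbing idea. The score delegate could be something like the AverageDifference method sketched above, where lower is better:

// Greedy hill climbing over integer offsets. The score delegate takes
// (dx, dy) and returns a difference score; lower is better.
private Point HillClimbOffset(Func<int, int, double> score)
{
    int dx = 0, dy = 0;
    double best = score(dx, dy);
    bool improved = true;
    while (improved)
    {
        improved = false;
        // Try the four neighboring one-pixel moves and keep any that helps.
        int[,] steps = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };
        for (int i = 0; i < 4; i++)
        {
            double trial = score(dx + steps[i, 0], dy + steps[i, 1]);
            if (trial < best)
            {
                best = trial;
                dx += steps[i, 0];
                dy += steps[i, 1];
                improved = true;
            }
        }
    }
    return new Point(dx, dy);
}

// For example:
//     Point offset = HillClimbOffset(
//         (dx, dy) => AverageDifference(bm1, bm2, dx, dy));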
I'm sure you could get something to work. I'm just not sure how fast it would be.
(A cool problem would be to assemble a panoramic view from several pictures taken by a camera.)
(A related problem that I've always thought would be fun would be to assemble a jigsaw puzzle from an image of the pieces. It would be sort of similar--trying to match up corresponding parts of pieces--although the edges would be well-defined so it would be a lot easier.)
Sorry I don't have anything more concrete or tested than these rough sketches. It would be fun to work on properly if I only had the time ;-)
7/11/2011 11:45 AM
Rod Stephens wrote:
I had two other thoughts. First, if you need to worry about scaling, that adds a whole new level of complexity. I assume these images are probably taken at the same scale and distance from the subject. If not, you would need to try different scaling levels in addition to offsets and rotations.
Second, you could probably scale the images to make them smaller before trying to find the parts that line up. If two images line up at a certain position, then their scaled versions should also line up at the scaled position. That would give you much less data to compare. After you find a good match, you could verify it on the original images.
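A rough, untested sketch of that shrinking step (the Shrink name and the factor of 4 are arbitrary choices):

// Draw the image into a bitmap that is 1/factor the size in each direction.
private Bitmap Shrink(Bitmap source, int factor)
{
    Bitmap small = new Bitmap(source.Width / factor, source.Height / factor);
    using (Graphics gr = Graphics.FromImage(small))
    {
        gr.InterpolationMode =
            System.Drawing.Drawing2D.InterpolationMode.HighQualityBilinear;
        gr.DrawImage(source, 0, 0, small.Width, small.Height);
    }
    return small;
}

// Find a rough offset on the shrunken images, then check offsets near
// factor times that offset on the full-sized originals:
//     Point rough = FindBestOffset(Shrink(bm1, 4), Shrink(bm2, 4), 2);
//     // ...then refine near (rough.X * 4, rough.Y * 4) on bm1 and bm2.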
7/11/2011 4:32 PM
Mark wrote:
Great thoughts Rod. Thanks very much for taking the time to respond. If I have some success I'll respond back.
I did find this site: http://elastix.isi.uu.nl/index.php Elastix appears to be a wrapper around more powerful image analysis tools designed for this kind of thing. It may be something I can integrate even though it's not .NET code, even if only to drop out to the command line to crunch the data and then feed the parameters back into the comparison code. We'll see!
Thanks again,
Mark