So you don’t have a decent graphics card, CUDA in C or PyCUDA aren’t options because your card isn’t NVIDIA, or you just want something completely cross-platform without a large amount of research. Canny edge detection in plain Java does not need to be slow.
A fast and reasonably memory-efficient Canny edge detection algorithm only requires loops and the proxy design pattern; simple code applied to the theory does the trick. All of the code is available at my github repository.
Pre-Processing
The most important initial task is to pre-process the image: bring out the edges that are wanted and suppress those that might otherwise get caught up in the detection. This is standard practice, since unwanted edges are still edges. A series of simple corrections can be applied, such as a Gaussian or box blur, denoising, or an unsharp mask. The implementations behind the links in the previous sentence were not used in the following blur or in my code, but the concept remains the same.
Gaussian Blur Example
The Gaussian blur is an equation-based way of getting rid of unwanted edges. It averages the pixels within a window using weights drawn from the Gaussian (normal) distribution.
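For reference, the two-dimensional Gaussian that the kernel below is built from, with the blur radius playing the role of the standard deviation σ, is:

G(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))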
This equation is implemented simply in Java by applying a window to an image.
private void blurGauss(double radius) {
    // TODO create a kernel and implement a gaussian blur
    // this is a somewhat lazy implementation setting a definite center and
    // not wrapping to avoid black space
    // the radius of the kernel should be at least 3 * the blur factor
    BufferedImage proxyimage = image.getImage();
    int rows_and_columns = (int) Math.round(radius);
    while (rows_and_columns % 2 == 0 && rows_and_columns != 0) {
        rows_and_columns++;
    }
    while (rows_and_columns > proxyimage.getWidth()) {
        rows_and_columns = 3;
    }
    int centerx = ((rows_and_columns + 1) / 2) - 1;
    int centery = ((rows_and_columns + 1) / 2) - 1;
    // the kernel sum
    float sum_multiplier = 0;
    /* get the kernel */
    // the base for gaussian filtering
    float base_start = (float) (1 / (2 * Math.PI * Math.pow(radius, 2)));
    // the multiplier matrix to be applied to every pixel, normalized to sum to one
    float[][] arr = new float[rows_and_columns][rows_and_columns];
    // fill the kernel around the central coordinates
    for (int i = 0; i < rows_and_columns; i++) {
        for (int j = 0; j < rows_and_columns; j++) {
            // distance-based exponent for this cell
            float exp = (float) (-1.0 * ((Math.pow(Math.abs(i - centerx), 2)
                    + Math.pow(Math.abs(j - centery), 2)) / (2 * Math.pow(radius, 2))));
            float base = (float) (base_start * Math.exp(exp));
            arr[i][j] = base;
            sum_multiplier += base;
        }
    }
    /* replace the values by multiplying by the sum_multiplier */
    // get the normalizing multiplier
    sum_multiplier = (float) 1 / sum_multiplier;
    // multiply each kernel entry by the sum multiplier
    for (int i = 0; i < rows_and_columns; i++) {
        for (int j = 0; j < rows_and_columns; j++) {
            arr[i][j] = arr[i][j] * sum_multiplier;
        }
    }
    // blur the image using the kernel
    complete_gauss(arr, rows_and_columns, centerx, centery);
}

private void complete_gauss(float[][] arr, int rows_and_columns, int centerx, int centery) {
    // TODO complete the gaussian blur by applying the kernel for each pixel
    BufferedImage proxyimage = image.getImage();
    // the blurred image
    BufferedImage image2 = new BufferedImage(proxyimage.getWidth(), proxyimage.getHeight(),
            BufferedImage.TYPE_INT_RGB);
    // the r, g, b values
    int r = 0;
    int g = 0;
    int b = 0;
    int i = 0;
    int j = 0;
    // the image height and width
    int width = image2.getWidth();
    int height = image2.getHeight();
    int tempi = 0;
    int tempj = 0;
    int thisx = 0;
    int thisy = 0;
    if (arr.length != 1) {
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                // multiply the pixel and its neighbors by the appropriate kernel weights
                i = (int) -Math.ceil((double) rows_and_columns / 2);
                j = (int) -Math.ceil((double) rows_and_columns / 2);
                while (i < Math.ceil((double) rows_and_columns / 2)
                        && j < Math.ceil((double) rows_and_columns / 2)) {
                    // set the pixel coordinates, falling back to 0 at the image border
                    thisx = x + i;
                    if (thisx < 0 || thisx >= proxyimage.getWidth()) {
                        thisx = 0;
                    }
                    thisy = y + j;
                    if (thisy < 0 || thisy >= proxyimage.getHeight()) {
                        thisy = 0;
                    }
                    // the kernel indices for this neighbor
                    tempi = (int) (Math.round((double) rows_and_columns / 2) + i);
                    tempj = (int) (Math.round((double) rows_and_columns / 2) + j);
                    if (tempi >= arr[0].length) {
                        tempi = 0;
                    }
                    if (tempj >= arr[0].length) {
                        tempj = 0;
                    }
                    r += (new Color(proxyimage.getRGB(thisx, thisy)).getRed() * arr[tempi][tempj]);
                    g += (new Color(proxyimage.getRGB(thisx, thisy)).getGreen() * arr[tempi][tempj]);
                    b += (new Color(proxyimage.getRGB(thisx, thisy)).getBlue() * arr[tempi][tempj]);
                    j++;
                    if (j == Math.round((double) rows_and_columns / 2)) {
                        j = 0;
                        i++;
                    }
                }
                // set the new rgb values with a brightening factor
                r = Math.min(255, Math.max(0, r + (int) Math.round(arr[0].length * arr[0].length)));
                g = Math.min(255, Math.max(0, g + (int) Math.round(arr[0].length * arr[0].length)));
                b = Math.min(255, Math.max(0, b + (int) Math.round(arr[0].length * arr[0].length)));
                Color rgb = new Color(r, g, b);
                image2.setRGB(x, y, rgb.getRGB());
                r = 0;
                g = 0;
                b = 0;
                i = 0;
                j = 0;
            }
        }
        image.setImage(image2);
    }
}
A matrix is generated and then used to blur the image in several loops. Methods were created to make the code more understandable.
Although interfaces and abstract classes are usually associated with an “is-a” relationship, the proxy design pattern is better implemented with an interface that controls access to the “expensive object,” in this case the underlying BufferedImage.
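As a rough sketch of that idea (the names here are hypothetical, not the ones in my repository), the proxy implements the same interface as the real image holder and delays loading the expensive BufferedImage until it is first requested:

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.io.UncheckedIOException;
import javax.imageio.ImageIO;

// hypothetical illustration of the proxy pattern used for image access
interface ImageSource {
    BufferedImage getImage();
    void setImage(BufferedImage img);
}

class LazyImageProxy implements ImageSource {
    private final File file;      // where the expensive image lives on disk
    private BufferedImage cached; // loaded only on first use

    LazyImageProxy(File file) {
        this.file = file;
    }

    @Override
    public BufferedImage getImage() {
        if (cached == null) {
            try {
                cached = ImageIO.read(file); // lazy load of the expensive object
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
        return cached;
    }

    @Override
    public void setImage(BufferedImage img) {
        cached = img;
    }
}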
Steps to Canny Edge Detection
Canny edge detection takes three steps. These steps prepare the image, mark potential edges, and weed out the best edges; a minimal driver that chains them together is sketched after the list below.
They are:
- Blur with or without denoise and convert to greyscale
- Perform an analysis based on threshold values using an intensity gradient
- Perform hysteresis
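A minimal sketch of such a driver, assuming the steps live in one detector class alongside the methods shown later in this post (blurGauss, getIntensity, and hysterisis) and that the threshold fields are plain integers:

// hypothetical driver; the method and field names follow the code later in this post
public void detectEdges(double blurRadius, int lower, int upper) {
    lowerthreshold = lower;   // thresholds read by hysterisis() and follow_edge()
    upperthreshold = upper;
    blurGauss(blurRadius);    // step 1: blur (greyscale reduction happens in getIntensity via masking)
    getIntensity();           // step 2: Sobel-based intensity gradient
    hysterisis();             // step 3: keep strong edges and follow connected weaker ones
}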
Sobel Based Intensity Gradient
One version of the intensity gradient (read here for more depth on the algorithm) is derived using the Sobel operator. The gradient is applied in a similar way to the blur, using a window and a convolution matrix.
The matrix picks out specific changes in intensity that reveal which potential edges are the best candidates; convolving the image with it produces the gradient magnitude used in the following steps.
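For reference, the two Sobel kernels convolved with the image are, in the same notation as the Laplacian matrix shown further down:

G(x):

-1 | 0 | 1 |

-2 | 0 | 2 |

-1 | 0 | 1 |

G(y):

-1 | -2 | -1 |

0 | 0 | 0 |

1 | 2 | 1 |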
Perform Hysteresis
Hysteresis weeds out the remaining noise from the image, leaving the actual edges. This is necessary when using the Sobel gradient, since it finds too many candidates. The trick is to separate edges from non-edges using two threshold values on the intensity gradient: pixels above the upper threshold are kept, pixels below the lower threshold are thrown out, and pixels in between are kept only if they connect to a strong edge.
A fast way to perform that connectivity check is a depth-first-search-like algorithm that walks along each strong edge, keeping the connected pixels and discarding the rest. This is fairly accurate in practice.
The Code
Sobel Based Intensity Gradient
private void getIntensity() {
    // TODO calculate magnitude
    /*
     * Kernels
     *
     * G(x)          G(y)
     * -1| 0| 1      -1|-2|-1
     * -2| 0| 2       0| 0| 0
     * -1| 0| 1       1| 2| 1
     *
     * |G| (magnitude for each cell) approx. = |G(x)| + |G(y)| =
     * |(p1+2p2+p3)-(p7+2p8+p9)| + |(p3+2p6+p9)-(p1+2p4+p7)|
     * blank rows or columns are left out of the calculation
     */
    // the buffered image
    BufferedImage image2 = new BufferedImage(image.getWidth(), image.getHeight(),
            BufferedImage.TYPE_BYTE_GRAY); // gives ultimate control; can also use image libraries
    // the current position properties
    int x = 0;
    int y = 0;
    // the image width and height properties
    int width = image.getWidth();
    int height = image.getHeight();
    // iterate through the image
    for (y = 1; y < height - 1; y++) {
        for (x = 1; x < width - 1; x++) {
            // convert to greyscale by masking (32 bit color representing
            // intensity --> reduce to greyscale by taking only set bits)
            // gets the pixels surrounding the center (the center is always
            // weighted at 0 in the convolution matrix)
            int c1 = (image.getRGB(x - 1, y - 1) & 0xFF);
            int c2 = (image.getRGB(x - 1, y) & 0xFF);
            int c3 = (image.getRGB(x - 1, y + 1) & 0xFF);
            int c4 = (image.getRGB(x, y - 1) & 0xFF);
            int c6 = (image.getRGB(x, y + 1) & 0xFF);
            int c7 = (image.getRGB(x + 1, y - 1) & 0xFF);
            int c8 = (image.getRGB(x + 1, y) & 0xFF);
            int c9 = (image.getRGB(x + 1, y + 1) & 0xFF);
            // apply the magnitude of the convolution kernel (blank
            // column/row not applied)
            // the x and y gradients are summed as |G| = |G(x)| + |G(y)|
            int color = Math.abs((c1 + (2 * c2) + c3) - (c7 + (2 * c8) + c9))
                    + Math.abs((c3 + (2 * c6) + c9) - (c1 + (2 * c4) + c7));
            // trim to fit the appropriate color range
            color = Math.min(255, Math.max(0, color));
            // write the grey value on all three channels so the greyscale image stores it unchanged
            image2.setRGB(x, y, new Color(color, color, color).getRGB());
        }
    }
    // reset the image
    image = image2;
}
Hysteresis
private void hysterisis() {
    // TODO perform a non-greedy hysterisis using upper and lower threshold values
    int width = image.getWidth();
    int height = image.getHeight();
    Color curcol = null;
    // mark every pixel as not yet visited ("n")
    ve = new String[width][height];
    for (int i = 0; i < width; i++) {
        for (int j = 0; j < height; j++) {
            ve[i][j] = "n";
        }
    }
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            curcol = new Color(image.getRGB(j, i));
            if (ve[j][i].compareTo("n") == 0
                    && (((curcol.getRed() + curcol.getBlue() + curcol.getGreen()) / 3) > upperthreshold)) {
                // a strong edge: accept it ("c") and follow its connected neighbors
                ve[j][i] = "c";
                image.setRGB(j, i, new Color(255, 255, 255).getRGB());
                follow_edge(j, i, width, height);
            } else if (ve[j][i].compareTo("n") == 0) {
                // not strong enough on its own: mark as visited ("v") and suppress
                ve[j][i] = "v";
                image.setRGB(j, i, new Color(0, 0, 0).getRGB());
            }
        }
    }
}
Depth First Like Noise Reduction
private void follow_edge(int j, int i, int width, int height) {
    // TODO recursively search edges (memory shouldn't be a problem here
    // since the set is finite and there should be fewer steps than the
    // number of pixels)
    // search the eight surrounding boxes, marking non-edges as visited and
    // following any connected pixel above the lower threshold
    int x = j - 1;
    int y = i - 1;
    Color curcol = null;
    for (int k = 0; k < 9; k++) {
        if (x >= 0 && x < width && y >= 0 && y < height && !(x == j && y == i)) {
            // check the neighbor's color, not the center's
            curcol = new Color(image.getRGB(x, y));
            if (ve[x][y].compareTo("n") == 0
                    && ((curcol.getRed() + curcol.getBlue() + curcol.getGreen()) / 3) > lowerthreshold) {
                ve[x][y] = "c";
                image.setRGB(x, y, new Color(255, 255, 255).getRGB());
                follow_edge(x, y, width, height);
            } else if (ve[x][y].compareTo("n") == 0) {
                ve[x][y] = "v";
                image.setRGB(x, y, new Color(0, 0, 0).getRGB());
            }
        }
        // advance x and y to cover the 3x3 neighborhood
        x++;
        if ((k % 3) == 2) {
            x = j - 1;
            y++;
        }
    }
}
Laplace Based Intensity Gradient as Opposed to Sobel
The Sobel gradient is not the only method for performing the intensity analysis. A Laplacian operator can be used instead, giving a different convolution matrix. The Sobel detector is less sensitive to lighting differences and yields both magnitude and direction, but it is slightly more complicated. The Laplacian may also reduce the need for post-processing, as the Sobel gradient normally accepts too many values.
The Laplacian kernel weighs the center pixel against all eight of its neighbors, with the weights summing to zero, giving the following matrix.
-1 | -1 | -1 |
-1 | 8 | -1 |
-1 | -1 | -1 |
The matrix is used to transform each pixel’s RGB value based on whether or not it is part of a candidate edge.
private void find_all_edges() {
    // TODO find all edges using laplace rather than sobel and hysterisis
    // (noise can interfere with the result)
    // the new buffered image containing the edges
    BufferedImage image2 = new BufferedImage(image.getWidth(), image.getHeight(),
            BufferedImage.TYPE_INT_RGB); // gives ultimate control; can also use image libraries
    // the current position properties
    int x = 0;
    int y = 0;
    // the image width and height properties
    int width = image.getWidth();
    int height = image.getHeight();
    /*
     * Denoise using rewritten code found at
     * http://introcs.cs.princeton.edu/java/31datatype/LaplaceFilter.java.html
     *
     * Using laplace is better than averaging the neighbors from each part
     * of an image as it does a better job of getting rid of gaussian noise
     * without overdoing it
     *
     * Applies a default filter:
     *
     * -1|-1|-1
     * -1| 8|-1
     * -1|-1|-1
     */
    // perform the laplace for each pixel
    for (y = 1; y < height - 1; y++) {
        for (x = 1; x < width - 1; x++) {
            // get the neighbor pixels for the transform
            Color c00 = new Color(image.getRGB(x - 1, y - 1));
            Color c01 = new Color(image.getRGB(x - 1, y));
            Color c02 = new Color(image.getRGB(x - 1, y + 1));
            Color c10 = new Color(image.getRGB(x, y - 1));
            Color c11 = new Color(image.getRGB(x, y));
            Color c12 = new Color(image.getRGB(x, y + 1));
            Color c20 = new Color(image.getRGB(x + 1, y - 1));
            Color c21 = new Color(image.getRGB(x + 1, y));
            Color c22 = new Color(image.getRGB(x + 1, y + 1));
            /* apply the matrix */
            // apply the transformation for r
            int r = -c00.getRed() - c01.getRed() - c02.getRed()
                    - c10.getRed() + 8 * c11.getRed() - c12.getRed()
                    - c20.getRed() - c21.getRed() - c22.getRed();
            // apply the transformation for g
            int g = -c00.getGreen() - c01.getGreen() - c02.getGreen()
                    - c10.getGreen() + 8 * c11.getGreen() - c12.getGreen()
                    - c20.getGreen() - c21.getGreen() - c22.getGreen();
            // apply the transformation for b
            int b = -c00.getBlue() - c01.getBlue() - c02.getBlue()
                    - c10.getBlue() + 8 * c11.getBlue() - c12.getBlue()
                    - c20.getBlue() - c21.getBlue() - c22.getBlue();
            // clamp and set the new rgb values
            r = Math.min(255, Math.max(0, r));
            g = Math.min(255, Math.max(0, g));
            b = Math.min(255, Math.max(0, b));
            Color c = new Color(r, g, b);
            image2.setRGB(x, y, c.getRGB());
        }
    }
    image = image2;
}
Output
The following before and after pictures show the results of applying the Sobel-based matrix to the image and using the depth-first-search-like approach to hysteresis.
Python Based Edge Detection
In Python, OpenCV performs the whole operation with a single call.
import cv2

# read the image and run Canny with lower and upper thresholds of 100 and 200
image = cv2.imread(imagepath)
edges = cv2.Canny(image, 100, 200)