I'm working on a diabetic retinopathy detection problem where I've been given retina images [image1] with score labels. My job is to build a classification model that can detect and score retinopathy given unlabeled retina images.
The first step, which I'm currently working on, is extracting features from these images and building an input vector for my classification algorithm. I have basic knowledge of image processing, and I've tried cropping my images to the edges [Image2], converting them to grayscale, and using the histogram as an input vector, but the representation of an image still seems too large. In addition, I may have lost some essential features that were encoded in the RGB image.
Image1:
Image2:
Pre-processing medical images is not a trivial task. To improve performance on diabetic retinopathy you need to highlight the blood vessels, and there are several pre-processing techniques suitable for this. Here is a link that may be useful:
https://github.com/tfs4/IDRID_hierarchical_combination/blob/master/preprocess.py
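For what it's worth, a common vessel-enhancement step (a minimal sketch in Python/OpenCV, not necessarily what the linked script does; the file names are placeholders) is to take the green channel, where vessel contrast is usually strongest, and apply CLAHE:

import cv2

# Load a fundus image (the path is hypothetical)
img = cv2.imread("retina.jpg")

# The green channel usually carries the strongest vessel contrast (OpenCV loads BGR)
green = img[:, :, 1]

# Contrast-limited adaptive histogram equalization to bring out the vessels
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(green)

cv2.imwrite("retina_vessels.png", enhanced)

The enhanced single-channel image (or features computed from it) is usually a much better starting point than a raw RGB histogram.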
Related
Imagine a digital picture of a flower. I am looking for an algorithm, and a platform to run it on, that will generate a series of "derivative images", where each image shows the flower becoming mouldy over a time series. The rules for choosing areas and colours in the derivative images will be specified by the artist, and the final output must look as if one has actually filmed a similar flower becoming mouldy (e.g. turning green), with the contours of objects remaining fixed. It should also be based on a randomised algorithm, so that each generated sequence of images is unique.
Judging by the description of the task, the program will have to perform complex image processing: estimating an object's three-dimensional position and orientation from a 2D image, generating a filter based on that data, and applying it to the image. This can be accomplished with the OpenCV library.
I am using a HOG feature detector with SVM classification. I can successfully extract the license plate, but the extracted number plate has some unnecessary pixels/lines apart from the license number. My image processing pipeline is as follows:
Applying HOG detector on the grayscale image
Cropping detected region
Re-sizing the cropped image
Applying an adaptive threshold to highlight the plate numbers and filter the background, using the following OpenCV code:
cvAdaptiveThreshold(cropped_plate, thresholded_plate, 255, CV_ADAPTIVE_THRESH_GAUSSIAN_C, CV_THRESH_BINARY_INV, 11, 9);
De-skewing plate image
Due to this unnecessary information, the Tesseract-OCR software fails to recognize the numbers correctly. The extracted number plate images look like the following.
How can I filter these unnecessary pixels/lines from the images? Any help will be appreciated.
You want to remove all non-text objects in the image. To do that, I suggest sorting the blobs by the area of their bounding box, (maxy - miny) * (maxx - minx). Do some statistical analysis; you know you are looking for objects of a similar size. Once you identify the approximate size of a character, put a larger bounding box around the estimated text region and keep the small blobs inside it, so that in your picture the dash sign is preserved.
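A rough sketch of that idea in Python/OpenCV (the binary plate image and the size tolerances are assumptions you would tune on your data):

import cv2
import numpy as np

# thresholded_plate: binary image where the characters are white (255)
thresholded_plate = cv2.imread("plate_binary.png", cv2.IMREAD_GRAYSCALE)

n, labels, stats, _ = cv2.connectedComponentsWithStats(thresholded_plate)

# Bounding-box area of every blob except the background (label 0)
areas = stats[1:, cv2.CC_STAT_WIDTH] * stats[1:, cv2.CC_STAT_HEIGHT]
median_area = np.median(areas)

# Keep blobs whose bounding-box area is close to the typical character size
mask = np.zeros_like(thresholded_plate)
for label in range(1, n):
    area = stats[label, cv2.CC_STAT_WIDTH] * stats[label, cv2.CC_STAT_HEIGHT]
    if 0.3 * median_area < area < 3.0 * median_area:  # tolerances are guesses
        mask[labels == label] = 255

The extra step of keeping small blobs (like the dash) inside the estimated text box is left out here for brevity.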
You can probably achieve a lot by filtering out contours. Try to find contours that have a certain width/height ratio, a certain amount of white pixels (using countNonZero()), etc. If that does not help, you can always try a text detection algorithm like the Run Length Smoothing Algorithm (RLSA).
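A sketch of that contour filtering (OpenCV 4; the ratio and fill thresholds are made-up values to tune):

import cv2
import numpy as np

binary = cv2.imread("plate_binary.png", cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

cleaned = np.zeros_like(binary)
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    ratio = h / float(w)                                   # characters are taller than wide
    fill = cv2.countNonZero(binary[y:y + h, x:x + w]) / float(w * h)
    if 1.0 < ratio < 5.0 and 0.2 < fill < 0.95:            # keep plausible character shapes
        cv2.drawContours(cleaned, [cnt], -1, 255, thickness=cv2.FILLED)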
I would like to know whether converting an image to grayscale is a necessary step for all image pre-processing techniques. I am using a neural network for face recognition. Is it really necessary to convert to grayscale, or can I give color images as input to the neural network?
Converting to gray scale is not necessary for image processing, but is usually done for a few reasons:
Simplicity - Many image processing operations work on a plane of image data (e.g., a single color channel) at a time. So if you have an RGBA image you might need to apply the operation on each of the four image planes and then combine the results. Gray scale images only contain one image plane (containing the gray scale intensity values).
Data reduction - Suppose you have an RGBA image (red-green-blue-alpha). If you converted this image to gray scale you would only need to process 1/4 of the data compared to the color image. For many image processing applications, especially video processing (e.g., real-time object tracking), this data reduction allows the algorithm to run in a reasonable amount of time.
However, it's important to understand that while there are many advantages of converting to gray scale, it is not always desirable. When you convert to gray scale you not only reduce the quantity of image data, but you also lose information (e.g., color information). For many image processing applications color is very important, and converting to gray scale can worsen results.
To summarize: If converting to gray scale still yields reasonable results for whatever application you're working on, it is probably desirable, especially due to the likely reduction in processing time. However, it comes at the cost of throwing away data (color data) that may be very helpful or required for many image processing applications.
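For reference, a minimal sketch of the conversion itself and the resulting data reduction (OpenCV; the file name is hypothetical):

import cv2

img = cv2.imread("face.png")                  # loaded as a 3-channel BGR image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

print(img.shape, img.nbytes)    # (H, W, 3) -> H * W * 3 bytes
print(gray.shape, gray.nbytes)  # (H, W)    -> H * W bytes, one plane instead of three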
No, it is not required. It simplifies things, so it is a common practice, but in general you can work directly on the color image in any representation (RGB, CMYK) by simply using more dimensions (or a more complex similarity/distance measure/kernel).
I need a way to determine whether a picture is a photograph or not. I've got a bunch of random image files (paper document scans, logos, and of course photographs taken by a camera), and I need to filter out only the photographs for creating a preview.
The solution proposed at Determine if image is photograph or drawing, quickly only works in a limited way (some logos are completely black with a white font, some logos contain only colors with no white areas). Sometimes I also get a scan of a white sheet of paper containing multiple photographs with white space around them; I need to identify those too, because then I have to key out the white part and save the photographs from the scan as separate files.
Your process to do this should probably be similar to the following:
1. Extract features from the image (pixel values, groups of pixels, HoG, SIFT, GIST, DCT, Wavelet, dictionary learning coefficients, etc., depending on how much time you have)
2. Aggregate these features somehow so that you get a fixed-length vector (histogram, pyramid scheme)
3. Apply a standard classification algorithm (SVM, k-NN, neural network, Random Forest) or clustering algorithm (k-means, GMM, etc.) and measure how well it works (F1 score is usually okay; ROC may be better for 2-class problems)
4. Repeat from step 1 with different features if you are unsatisfied with the results from step 3
The solution you reference seems to be pretty reasonable in terms of steps 1 and 2.
A simple next step in extracting and aggregating features could be to create histograms from all pixel values in the image. If you have a lot of labeled data you should feed these features to a standard classifier. Otherwise, run a clustering algorithm on these histogram features and check the cluster assignments to see if they are correlated with the photograph/non-photograph assignment.
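A minimal sketch of that histogram-plus-classifier/clustering idea (scikit-learn and OpenCV; the image paths, labels and the have_labels flag are assumed to exist):

import cv2
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import KMeans

def histogram_features(path, bins=32):
    # Concatenated per-channel color histograms as a fixed-length vector
    img = cv2.imread(path)
    hists = [cv2.calcHist([img], [c], None, [bins], [0, 256]).ravel() for c in range(3)]
    vec = np.concatenate(hists)
    return vec / vec.sum()  # normalize so image size does not matter

X = np.array([histogram_features(p) for p in paths])

if have_labels:                              # enough labeled data: train a classifier
    model = SVC().fit(X, labels)             # labels: 1 = photograph, 0 = not
else:                                        # otherwise: cluster and inspect the assignments
    assignments = KMeans(n_clusters=2).fit_predict(X)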
Check the following paper:
http://www.vision.ee.ethz.ch/~gallju/projects/houghforest/houghforest.html . They provide source code.
I believe the program accepts an input file with negative and positive images for training. The output of the classification part will be an image voting map (a Hough map?). You might need to decide on a threshold value to locate regions of interest, so if there are two logos in the image it will mark out both of them. The algorithm worked very well for me in the past.
Training on 100 positive and 100 negative images should be enough, I believe. Also, don't use big images for training (256x256 should be enough).
My aim is to detect the vein pattern in leaves, which characterizes various species of plants.
I have already done the following:
Original image:
After Adaptive thresholding:
However, the veins aren't that clear and get distorted. Is there any way I could get a better output?
EDIT:
I tried color thresholding; my results are still unsatisfactory and I get the following image.
Please help
The fact that it's a JPEG image is going to give the "block" artifacts, which in the example you posted cause most square areas around the veins to have lots of noise, so ideally work on an image that hasn't been through lossy compression. If that's not possible, then try filtering the image to remove some of the noise.
The veins you want to extract have a different colour from the background, leaf and shadow, so some sort of colour-based threshold might be a good idea. There was a recent S.O. question with some code that might help here.
After that, some sort of adaptive normalisation would help increase the contrast before you threshold it.
[edit]
Maybe thresholding isn't an intermediate step that you want to do. I made the following by filtering to remove JPEG artifacts, doing some CMYK channel math (more cyan and black), then applying adaptive equalisation. I'm pretty sure you could then go on to produce (maybe subpixel) edge points using image gradients and non-maximum suppression, and perhaps use the brightness at each point and the properties of the vein structure (mostly joining at a tangent) to join the points into lines.
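A rough approximation of that pipeline (my own guesses at the denoising and channel weights, not the exact steps used for the picture above):

import cv2
import numpy as np

img = cv2.imread("leaf.jpg")

# Mild denoising to suppress the JPEG block artifacts
img = cv2.fastNlMeansDenoisingColored(img, None, 5, 5, 7, 21)

# Rough CMY/K decomposition from BGR; emphasize cyan and black
b, g, r = [c.astype(np.float32) / 255.0 for c in cv2.split(img)]
k = 1.0 - np.maximum(np.maximum(r, g), b)          # black
c = (1.0 - r - k) / (1.0 - k + 1e-6)               # cyan
mix = np.clip(0.5 * c + 0.5 * k, 0.0, 1.0)         # the weights are arbitrary

# Adaptive (contrast-limited) equalisation to bring out the veins
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
out = clahe.apply((mix * 255).astype(np.uint8))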
In the past I have had good experiences with the edge-detection algorithm difference of Gaussians, which basically works like this:
You blur the image twice with the Gaussian blur algorithm, but with different blur radii.
Then you calculate the difference between both images.
Pixels of the same color next to each other will produce the same blurred color.
Pixels of different colors next to each other will create a gradient that depends on the blur radius. For a bigger radius the gradient stretches further; for a smaller one it doesn't.
So basically this is a bandpass filter. If the selected radii are too small, a vein will create two "parallel" lines. But since the veins of a leaf are small compared with the extent of the image, you can usually find radii where a vein results in a single line.
Here I have added the processed picture.
Steps I did on this picture:
desaturate (convert to grayscale)
difference of Gaussians: here I blurred the first image with a radius of 10 px and the second image with a radius of 2 px. The result you can see below.
This is only a quickly created result. I would guess that by optimizing the parameters, you could get even better ones.
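A quick sketch of those two steps (treating the radii roughly as Gaussian sigmas, which may not match the blur settings of the tool I used):

import cv2

gray = cv2.cvtColor(cv2.imread("leaf.jpg"), cv2.COLOR_BGR2GRAY)   # desaturate

# Difference of Gaussians: a narrow blur minus a wide blur acts as a bandpass filter
wide = cv2.GaussianBlur(gray, (0, 0), sigmaX=10)
narrow = cv2.GaussianBlur(gray, (0, 0), sigmaX=2)
dog = cv2.absdiff(narrow, wide)

# Stretch the result back to the full 0-255 range for viewing
dog = cv2.normalize(dog, None, 0, 255, cv2.NORM_MINMAX)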
This sounds like something I did back in college with neural networks. The neural network stuff is a bit hard, so I won't go there. Anyway, patterns are perfect candidates for the 2D Fourier transform! Here is a possible scheme:
You have training data and input data
Your data is represented as its 2D Fourier transform
If your database is large, you should run PCA on the transform results to reduce the 2D spectrum to a 1D feature vector
Compare, using the Hamming distance, the spectrum (after PCA) of one image with those of all the images in your dataset (a sketch follows below)
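A rough sketch of steps 2-4 (NumPy/scikit-learn; the set of equally sized grayscale training images is assumed to exist, and the binarization before the Hamming comparison is my own addition, since Hamming distance needs binary vectors):

import numpy as np
from sklearn.decomposition import PCA

def spectrum(img):
    # Log-magnitude of the centered 2D Fourier transform, flattened to 1D
    f = np.fft.fftshift(np.fft.fft2(img))
    return np.log1p(np.abs(f)).ravel()

X = np.array([spectrum(im) for im in images])   # images: training set (assumed)

pca = PCA(n_components=64)
Z = pca.fit_transform(X)

# Binarize each component against its median so a Hamming distance is defined
median = np.median(Z, axis=0)
codes = Z > median

def closest_match(query_img):
    q = pca.transform([spectrum(query_img)]) > median
    hamming = (codes != q).sum(axis=1)          # Hamming distance to every training image
    return int(np.argmin(hamming))              # index of the closest training image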
You should expect ~70% recognition with such primitive methods, as long as the images are of approximately the same rotation. If the images are not of the same rotation, you may have to use SIFT. To get better recognition you will need something more intelligent, such as a Hidden Markov Model or a neural net. The truth is that getting good results for this kind of problem may be quite a lot of work.
Check out: https://theiszm.wordpress.com/2010/07/20/7-properties-of-the-2d-fourier-transform/