Use gray scale images to train Haar Classifier - visual-studio-2010

Hi everybody. I am using Visual Studio 2010 with the Kinect and OpenCV, and I have a question about the Haar classifier. I want to recognize an object that comes in many sizes, but I do not care about its color. For that reason I want to train the Haar classifier with grayscale images, and when I show an object in front of the Kinect, convert that frame to grayscale so my software detects the object it was trained on. Is that a good idea? The objects come in many sizes and colors, and I want to eliminate color.

From my experience with OpenCV Haar cascades, you should always convert to grayscale and then equalize the histogram before proceeding.
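In OpenCV those two steps are `cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)` followed by `cv2.equalizeHist`. As a sketch of what they actually compute, here is a NumPy-only version; the luma weights and the CDF remapping are the standard BT.601 and histogram-equalization formulas, not code from this answer:

```python
import numpy as np

def to_gray(rgb):
    # ITU-R BT.601 luma weights, the same weighting OpenCV uses for RGB->GRAY
    return (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2]).astype(np.uint8)

def equalize_hist(gray):
    # map intensities through the normalized cumulative histogram,
    # stretching the used intensity range over 0..255
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 / (cdf.max() - cdf.min())
    return cdf.astype(np.uint8)[gray]
```

Training and detection should then both run on the equalized grayscale frames, so lighting and color variations are largely removed before the cascade ever sees the data.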

Related

GIMP: How to change the color levels of many images according to preset?

So, I have a dataset of 3k grayscale images and want to create colorized counterparts as targets to feed to a model. I decided to do that by changing the RGB color curves. Of course, it is impossible to do this manually for so many images, and I couldn't work out how to do it with python-fu or script-fu. Any ideas?
I know one way would be to apply the transformation to each channel using OpenCV; the problem with that is that I don't know the equations of the curves that I drew.
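You don't necessarily need the closed-form equation of a drawn curve: a 256-entry lookup table interpolated through its control points reproduces it closely, and can be applied to every image in a script. A hedged NumPy sketch (the control points in the comment are made-up placeholders, not the asker's actual curves):

```python
import numpy as np

def apply_curve(channel, xs, ys):
    # xs/ys: control points of the curve (input -> output, both 0-255);
    # build a 256-entry LUT by linear interpolation, then remap every pixel
    lut = np.interp(np.arange(256), xs, ys).astype(np.uint8)
    return lut[channel]

# Hypothetical colorization of a grayscale image `gray`:
#   r = apply_curve(gray, [0, 128, 255], [0, 160, 255])  # lift midtones in red
#   b = apply_curve(gray, [0, 128, 255], [0, 100, 255])  # suppress them in blue
#   color = np.dstack([r, apply_curve(gray, [0, 255], [0, 255]), b])
```

The same LUT idea is what `cv2.LUT` implements, so the loop over 3k images stays fast.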

Clear Blood Vessel Segmentation in 3D image using VTK

I am using VTK for segmenting blood vessels.
I have two DICOM image sets: one has normal CT images and the other has CT with MIP (Maximum Intensity Projection).
So I subtracted the two series and fed the result to vtkMarchingCubes, but my segmented image shows only limited detail. I have attached what I got:
https://i.stack.imgur.com/V66nN.png
I tried using filters, but to no avail.
I need to capture even the thin vessels. How is that possible using only VTK?
If not, how is it possible in ITK?
If my question is not clear, kindly let me know.
I recommend using 3D Slicer: download the VMTKVesselEnhancement extension for it to identify tubular shapes in your 3D images, and then use segmentation methods to extract the 3D surfaces of the blood vessels.
In ITK, you could use a vesselness filter to enhance the vessels. That should make them easier to extract.
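ITK's Hessian-based vesselness filters follow the Frangi idea: at each voxel, look at the Hessian eigenvalues and respond where one eigenvalue is near zero (along the vessel) and the others are strongly negative (across it). A simplified single-scale 2D NumPy sketch of that idea, for illustration only (the constants are arbitrary and this is not the ITK implementation):

```python
import numpy as np

def vesselness_2d(img, beta=0.5, c=15.0):
    # Hessian via finite differences
    gy, gx = np.gradient(img.astype(float))
    hxx = np.gradient(gx, axis=1)
    hxy = np.gradient(gx, axis=0)
    hyy = np.gradient(gy, axis=0)
    # eigenvalues of the symmetric 2x2 Hessian at every pixel
    tmp = np.sqrt(((hxx - hyy) / 2) ** 2 + hxy ** 2)
    mu = (hxx + hyy) / 2
    l1, l2 = mu + tmp, mu - tmp
    # order so |l1| <= |l2|
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    rb = np.abs(l1) / (np.abs(l2) + 1e-10)   # blob-vs-line ratio
    s = np.sqrt(l1 ** 2 + l2 ** 2)           # second-order structure strength
    v = (np.exp(-rb ** 2 / (2 * beta ** 2))
         * (1 - np.exp(-s ** 2 / (2 * c ** 2))))
    v[l2 > 0] = 0   # bright vessels on a dark background have l2 < 0
    return v
```

In practice you would run this at several scales (Gaussian smoothing before the Hessian) and take the maximum response, which is what the ITK/VMTK filters do; thresholding the enhanced volume before marching cubes is what recovers the thin vessels.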

Get real RAW values from pixels using OpenCV or any other software

As I understand the spec sheets of digital cameras, each output color pixel is made out of four real pixels on the CCD. However, when reading from a camera, for example using OpenCV, one gets as many as NxMx3 pixels; the two green pixels get averaged.
As far as I understand, OpenCV lets you transform RGB images to grayscale, but I couldn't find a way to get the raw values from the CCD. Of course, it could be that there is a lower-level limitation (i.e., the transformation to a color space happens in the electronics and not on the computer), or that there is some interpolation and hence there are in reality NxM pixels and not NxMx4 pixels in a camera.
Is there any way of getting RAW data from a camera with OpenCV, or is there any information stored in RAW files acquired with commercial cameras?
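To illustrate the averaging the question describes: assuming an RGGB Bayer layout (the actual pattern varies by sensor), the four photosites of each 2x2 cell can be pulled apart with NumPy slicing. Whether you can get the mosaic at all through OpenCV depends on the driver; many machine-vision cameras expose it only through the vendor SDK, while RAW files from commercial cameras (DNG, CR2, NEF, ...) do store the per-photosite values.

```python
import numpy as np

def split_bayer_rggb(raw):
    # raw: 2D sensor mosaic, assumed RGGB -- each 2x2 cell holds
    # one red, two green, and one blue photosite
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    g  = (g1.astype(float) + g2) / 2.0   # the green averaging the question mentions
    return r, g, b
```

Going the other way, OpenCV can demosaic a raw frame with `cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR)` once you have the mosaic in hand.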

How to extract features from retina images

I'm working on a Diabetic Retinopathy Detection problem where I've been given retina images [Image1] with score labels. My job is to build a classification model that can detect and score retinopathy given unlabeled retina images.
The first step, which I'm currently working on, is extracting features from these images to build the input vector for my classification algorithm. I have basic knowledge of image processing. I've tried cropping my images to the edges [Image2], converting them to grayscale, and using the histogram as an input vector, but it seems I still have a large representation for an image. In addition, I may have lost some essential features that were encoded in the RGB image.
Image1:
Image2:
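On the size of the representation: a full 256-bin histogram per image is large, but coarser per-channel histograms keep some of the color information the question worries about losing, in a short vector. A hedged NumPy sketch (the bin count is an arbitrary choice, not a recommendation from this thread):

```python
import numpy as np

def color_hist_features(img, bins=16):
    # img: HxWx3 uint8 retina image; one coarse histogram per RGB channel,
    # normalized so the feature vector is independent of image size
    feats = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    v = np.concatenate(feats).astype(float)
    return v / v.sum()   # length 3 * bins
```

Global histograms discard all spatial information, though, which matters for retinopathy grading, so this is at best a baseline next to the vessel-oriented preprocessing suggested in the answer below.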
Pre-processing medical images is not a trivial task. To improve performance on diabetic retinopathy you need to highlight the blood vessels; there are several pre-processing approaches suitable for this. Here is a link that may be useful:
https://github.com/tfs4/IDRID_hierarchical_combination/blob/master/preprocess.py

Is conversion to gray scale a necessary step in Image preprocessing?

I would like to know whether converting an image to grayscale is a necessary step for all image pre-processing techniques. I am using a neural network for face recognition. Is it really necessary to convert to grayscale, or can I also give color images as input to the neural network?
Converting to gray scale is not necessary for image processing, but is usually done for a few reasons:
Simplicity - Many image processing operations work on a plane of image data (e.g., a single color channel) at a time. So if you have an RGBA image you might need to apply the operation on each of the four image planes and then combine the results. Gray scale images only contain one image plane (containing the gray scale intensity values).
Data reduction - Suppose you have an RGBA image (red-green-blue-alpha). If you converted this image to gray scale you would only need to process 1/4 of the data compared to the color image. For many image processing applications, especially video processing (e.g., real-time object tracking), this data reduction allows the algorithm to run in a reasonable amount of time.
However, it's important to understand that while there are many advantages of converting to gray scale, it is not always desirable. When you convert to gray scale you not only reduce the quantity of image data, but you also lose information (e.g., color information). For many image processing applications color is very important, and converting to gray scale can worsen results.
To summarize: If converting to gray scale still yields reasonable results for whatever application you're working on, it is probably desirable, especially due to the likely reduction in processing time. However it comes at the cost of throwing away data (color data) that may be very helpful or required for many image processing applications.
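The 1/4 figure in the data-reduction point is easy to check: at the same resolution, an RGBA frame carries four times the bytes of a single-plane gray scale frame.

```python
import numpy as np

h, w = 480, 640                        # e.g. one VGA video frame
rgba = np.empty((h, w, 4), dtype=np.uint8)
gray = np.empty((h, w), dtype=np.uint8)
print(rgba.nbytes // gray.nbytes)      # bytes ratio: 4x the data per frame
```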
No, it is not required. It simplifies things, so it is common practice to do so, but in general you could work directly on the color image in any representation (RGB, CMYK) by simply using more dimensions (or a more complex similarity/distance measure/kernel).
