I am using VTK for segmenting blood vessels.
I have two DICOM image sets: one has normal CT images and the other has CT with MIP (Maximum Intensity Projection).
So I subtracted the two series and fed the result to vtkMarchingCubes. But my segmented image shows only limited detail. I have attached a picture of what I got.
https://i.stack.imgur.com/V66nN.png
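For reference, here is a minimal sketch (in Python) of the kind of pipeline I mean; the directory names and the iso-value are placeholders, not my exact settings:

import vtk

# Read the two DICOM series (directory names are placeholders).
reader_ct = vtk.vtkDICOMImageReader()
reader_ct.SetDirectoryName("ct_series")
reader_ct.Update()
reader_mip = vtk.vtkDICOMImageReader()
reader_mip.SetDirectoryName("mip_series")
reader_mip.Update()

# Subtract the normal CT from the MIP series.
subtract = vtk.vtkImageMathematics()
subtract.SetOperationToSubtract()
subtract.SetInput1Data(reader_mip.GetOutput())
subtract.SetInput2Data(reader_ct.GetOutput())

# Extract an isosurface from the difference volume.
mc = vtk.vtkMarchingCubes()
mc.SetInputConnection(subtract.GetOutputPort())
mc.SetValue(0, 100)  # iso-value is a guess; it has to be tuned per dataset
mc.Update()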
I tried applying filters, but it didn't help.
I need to capture even the thin vessels. Is this possible using only VTK?
If not, how can it be done in ITK?
If my question is not clear, kindly let me know.
I recommend using 3D Slicer with the VMTKVesselEnhancement extension to identify tubular shapes in your 3D images, and then using segmentation methods to extract the 3D surfaces of the blood vessels.
In ITK, you could use a vesselness filter to enhance the vessels. That should make them easier to extract.
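A minimal sketch of that idea with ITK's Python wrapping; the filter classes are the real ITK ones, but the file names and parameter values (sigma, alpha1, alpha2) are placeholder assumptions you will have to tune:

import itk

# Load the subtracted volume as a 3D float image (file name is a placeholder).
image = itk.imread("subtracted_volume.nii", itk.F)
ImageType = itk.Image[itk.F, 3]

# Compute the Hessian at a scale roughly matching the vessel radius.
hessian_filter = itk.HessianRecursiveGaussianImageFilter[ImageType].New()
hessian_filter.SetInput(image)
hessian_filter.SetSigma(1.0)

# Sato vesselness: bright tubular structures get a high response.
vesselness_filter = itk.Hessian3DToVesselnessMeasureImageFilter[itk.F].New()
vesselness_filter.SetInput(hessian_filter.GetOutput())
vesselness_filter.SetAlpha1(0.5)
vesselness_filter.SetAlpha2(2.0)
vesselness_filter.Update()

itk.imwrite(vesselness_filter.GetOutput(), "vesselness.nii")

You can then threshold the vesselness image, or feed it to marching cubes, to recover the thin vessels.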
So, I have a dataset of 3k grayscale images and want to create a colorized target for each of them to feed to a model. I decided to do that by changing the RGB color curves. Of course, it is impossible to do this manually for so many images, and I couldn't figure out how to do it with Python-Fu or Script-Fu. Any ideas?
I know that one way would be to apply the transformation to each channel using OpenCV; the problem with that is that I don't know the equation of the curves that I drew.
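To illustrate, here is the kind of per-channel mapping I mean in Python/OpenCV, where the control points are made-up stand-ins for the curves I drew:

import cv2
import numpy as np

# Build a 256-entry lookup table from a few (input, output) control points.
# The control points below are made-up stand-ins for a hand-drawn curve.
def make_lut(points):
    xs, ys = zip(*points)
    return np.interp(np.arange(256), xs, ys).astype(np.uint8)

lut_r = make_lut([(0, 0), (128, 170), (255, 255)])
lut_g = make_lut([(0, 0), (128, 120), (255, 255)])
lut_b = make_lut([(0, 10), (128, 90), (255, 230)])

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
# Apply one LUT per channel; OpenCV stores images in BGR order.
colorized = cv2.merge([lut_b[gray], lut_g[gray], lut_r[gray]])
cv2.imwrite("output.png", colorized)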
How can I detect cracks like the ones you can see in the attached images? I have tried some OpenCV algorithms, such as blob detection (cv::SimpleBlobDetector), but couldn't get any results.
It is a cropped image; the full image has some other features as well, so I am not sure thresholding can work, because I have to get the bounding box of the detected crack. One way would be to assign several regions of interest (ROIs) and try to detect within each ROI, but the crack doesn't appear at the same location in every image. Any ideas?
Can this problem be solved with machine/deep learning (like object detection) if I train a model on a crack dataset? Since the crack part of the image doesn't have many features, I am not sure this method will work. Please guide me.
Thanks.
These cracks are difficult to detect because the image is noisy (presumably X-ray) and the contrast is poor, so the signal-to-noise ratio is low.
I would try applying a Gaussian filter for denoising, but only in the horizontal direction, to preserve the horizontal edges, and then detecting the horizontal edges.
This is roughly what a Gabor filter does; you can try different orientations.
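A rough sketch of both ideas in Python/OpenCV (kernel sizes, sigmas, and the Gabor parameters are guesses that need tuning):

import cv2
import numpy as np

img = cv2.imread("crack.png", cv2.IMREAD_GRAYSCALE)

# Smooth only horizontally (wide, 1-pixel-tall kernel), then take the
# vertical derivative so the response is strongest on horizontal edges.
smoothed = cv2.GaussianBlur(img, (15, 1), sigmaX=3)
edges = cv2.Sobel(smoothed, cv2.CV_32F, dx=0, dy=1, ksize=3)

# Or a Gabor filter tuned to horizontal stripes; loop over theta to try
# different orientations.
kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4, theta=np.pi / 2,
                            lambd=10, gamma=0.5)
gabor = cv2.filter2D(img, cv2.CV_32F, kernel)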
Use mathematical morphology operations.
For example, in Matlab:
a = imread('in.png');
se = strel('disk', 7);
b = imgaussfilt(a, 1.3);   % denoise with a small Gaussian
c = b - imopen(b, se);     % white top-hat: keeps bright structures smaller than the disk
c = 3*c;                   % boost the contrast of what remains
d = imclearborder(c);      % suppress bright structures touching the image border
imwrite(d, 'out.png');
Can someone tell me how I can implement this interpolation effect (or its name, so that I could look it up)? https://www.youtube.com/watch?v=36lE9tV9vm0&t=3010s&frags=pl%2Cwn
I tried using r = r+dr, g = g+dg and b = b+db for the RGB values in each iteration, but it looks way too simple compared to the effect in the video.
"Can someone tell me how I can implement this interpolation effect?
(or the name of it, so that I could look it up)..."
It's not actually a named interpolation effect. It appears to interpolate, but really it's just real-time updated variations of some fictional facial "features" (the hair, eyes, nose, etc. are synthesized pixels taking hints from a library/database of possible matching feature types).
For this technique they used neural networks to do a process similar to DFT image reconstruction. You'll be modifying the image data in the frequency domain (with u,v), not the spatial domain (using x,y).
You can read about it in this PDF: https://research.nvidia.com/sites/default/files/pubs/2017-10_Progressive-Growing-of/karras2018iclr-paper.pdf
The (Python) source code:
https://github.com/tkarras/progressive_growing_of_gans
For ideas, on YouTube you can look up:
"DFT image reconstruction" (there's a good example with a b/w Nicolas Cage photo reconstructed in stages; loud music warning).
"Image synthesis with neural networks" (one clip had alternative shoe and hand-bag designs (item photos) being "synthesized" by a neural network after it analyzed features from other existing catalogue photos as "inspiration").
"Image enhancement / super-resolution using neural networks" (this is closest to answering your question; one example has a very low-res, blurry, pixelated b/w image where you cannot tell if it is a boy or a girl, and during a test the network synthesizes various higher-quality face images that it thinks match the input).
After understanding what they achieve and how, you could think of shortcuts to get a similar effect without needing networks, e.g. using only regular pixel-editing functions.
Found it in another video: it is called "latent space interpolation", and it has to be applied to the encoded (compressed) images. If I have image A and the next image is image B, I first have to encode A and B, interpolate between the encoded representations, and finally decode the result.
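A minimal sketch of that idea in Python, where encode() and decode() are hypothetical stand-ins for a trained model's encoder and decoder:

import numpy as np

def interpolate(img_a, img_b, encode, decode, steps=10):
    # Map both images into the model's latent space.
    z_a, z_b = encode(img_a), encode(img_b)
    frames = []
    for t in np.linspace(0.0, 1.0, steps):
        z = (1.0 - t) * z_a + t * z_b  # linear blend of the latent codes
        frames.append(decode(z))       # decode back to image space
    return frames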
As of today, I have found that this kind of interpolation effect can also be implemented for 3D image data, provided the data is normalized and centred at the 3D origin, for example inside a unit sphere with each face image's data lying within that sphere. With two images stored this way, the interpolation can be calculated by taking the differences of rays going through the origin and through each area of the sphere at some desired resolution.
I have about 6000 aerial images of vegetation sites taken by a 3DR drone.
The images have to overlap to some extent because the drone flights cover the area going E-W and then again N-S, so the images show the same area from two directions. I need the overlap between the images for extra accuracy.
I don't know how to write code in IDL to combine the images and create that overlap. Can anyone help, please?
Thanks
What you need is something identifiable that occurs in both images. Preferably you would have several such features across the field of view, so that you could recover the correct rotation as well as a simple x-y shift.
The basic steps you will need to follow (see the sketch after this list) are:
Source identification - identify sources in all images that will later be used to align them. Make sure the centering of these sources is good so that they will align well later.
Basic alignment - start with a guess of where the images should align, then try to match the sources.
Match the sources - there are several libraries that can do this for stars (in astronomical images) that could be adapted for this purpose.
Shift and rotate the images - this can be done to the pixels directly, or recorded in the header so that a program can manipulate the pixels on the fly.
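As a concrete illustration of these steps, here is a sketch in Python with OpenCV rather than IDL (file names and parameters are placeholders):

import cv2
import numpy as np

img1 = cv2.imread("pass_ew.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("pass_ns.png", cv2.IMREAD_GRAYSCALE)

# 1. Source identification: detect keypoints and descriptors in both images.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# 2-3. Match the sources between the two images.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# 4. Estimate rotation plus x-y shift from the matches and warp one
# image onto the other.
src = np.float32([kp1[m.queryIdx].pt for m in matches])
dst = np.float32([kp2[m.trainIdx].pt for m in matches])
M, _ = cv2.estimateAffinePartial2D(src, dst)
aligned = cv2.warpAffine(img1, M, (img2.shape[1], img2.shape[0]))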
I am new to OpenCV. What I am trying to do is measure a liquid level using a single digital camera. After searching a lot, I found a research paper with the algorithm and steps at the link below.
http://personnel.sju.edu.tw/改善師資研究成果/98年度/著作/22.pdf
But this algorithm has a step where I need to apply chrominance filtering to the captured image. OpenCV doesn't come with such functionality built in. So how can I implement chrominance filtering; is there any way to do this? Please help me, thanks.
I didn't manage to access the link you provided, but I guess what you want to do is to segment your image based on color (chroma). In order to do that, you first have to convert your BGR image (RGB but with the red and blue channels swapped, which is the natural ordering for OpenCV) to a chrominance-based space. You can do this using the function cv::cvtColor() with an argument such as CV_BGR2YUV or CV_BGR2YCrCb.
See the full doc of the conversion function here.
After that, you can split the channels of your image with cv::split(), e.g.:
cv::Mat yuvImage;
cv::cvtColor(bgrImage, yuvImage, CV_BGR2YUV); // bgrImage is your input image
std::vector<cv::Mat> yuvChannels;
cv::split(yuvImage, yuvChannels); // yuvChannels[1] and [2] hold the chroma planes
Then work on the element of the resulting vector that contains your channel of interest.