How can NIfTI files be restructured so that they can be fed into a CNN? - bioinformatics

Most of my NIfTI files have different x, y, z dimensions. How can I rearrange them so that they all have the same dimensions and can then be used to train a CNN?

You can resample the images to have the same voxel size and then crop them to cover the same physical size. This is important because if you just resize the images, each pixel (or voxel in 3D) will represent a different physical size, and your CNN will be learning features at different scales.
If you want to do this in Python, then I strongly suggest SimpleITK. Another package built on top of it is platipy, which has tools for isotropic resampling and cropping that could be useful.
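For instance, a minimal SimpleITK sketch (the 1 mm target spacing, the file name, and the helper function are illustrative assumptions, not part of the original answer):

    import SimpleITK as sitk

    # Resample a volume to isotropic voxel spacing (hypothetical helper).
    def resample_isotropic(image, spacing_mm=1.0):
        old_spacing = image.GetSpacing()
        old_size = image.GetSize()
        new_size = [int(round(sz * sp / spacing_mm))
                    for sz, sp in zip(old_size, old_spacing)]
        return sitk.Resample(
            image,
            new_size,
            sitk.Transform(),        # identity transform
            sitk.sitkLinear,         # linear interpolation
            image.GetOrigin(),
            (spacing_mm,) * 3,
            image.GetDirection(),
            0,                       # fill value outside the input
            image.GetPixelID(),
        )

    image = sitk.ReadImage("scan.nii.gz")
    iso = resample_isotropic(image, spacing_mm=1.0)
    # Afterwards, crop or pad every volume to one fixed matrix size
    # (e.g. 128x128x128) so the CNN receives identically shaped tensors.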

Related

find difference of 2 images that may also be different sizes

I have two PDF files that contain multiple layers of images.
The images are black-and-white diagrams and should be almost identical.
The problem is that the position of the diagrams on the canvas may be offset slightly (<3%), and they may be scaled to slightly different sizes; the size difference is also small (<3%).
Is there a way to manipulate the images to minimise their difference by translation and scaling, and then highlight the differences?
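One possible approach, offered here only as a sketch (it is not from the original thread): OpenCV's ECC alignment can estimate a small affine warp (translation plus scale) between the two rasterised pages, after which the remaining differences can be highlighted with a simple absolute diff. The file names are placeholders, and both pages are assumed to have been rasterised to grayscale images already (e.g. with pdftoppm or pdf2image):

    import cv2
    import numpy as np

    ref = cv2.imread("page_a.png", cv2.IMREAD_GRAYSCALE)
    mov = cv2.imread("page_b.png", cv2.IMREAD_GRAYSCALE)
    mov = cv2.resize(mov, (ref.shape[1], ref.shape[0]))

    # Estimate a small affine warp (covers translation and scaling).
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 500, 1e-6)
    _, warp = cv2.findTransformECC(ref, mov, warp, cv2.MOTION_AFFINE, criteria)

    aligned = cv2.warpAffine(mov, warp, (ref.shape[1], ref.shape[0]),
                             flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

    # Whatever survives the alignment is a genuine difference.
    diff = cv2.absdiff(ref, aligned)
    cv2.imwrite("diff.png", diff)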

fast rasterisation and colorization of 2D polygons of known shape to an image file

The shape and positions of all the polygons are known beforehand. The polygons are not overlapping, will be of different colors and shapes, and there may be many of them. The polygons are defined in floating-point coordinates and will be painted on top of a JPEG photo as annotation.
How could I create the resulting image file as fast as possible after I get to know which color I should give each polygon?
If it would save time I would like to perform as much of the computation as possible beforehand. All information regarding the geometry and positions of the polygons is known in advance. The JPEG photo is also known in advance. The only information not known beforehand is the color of each polygon.
The JPEG photo has a size of 250x250 pixels, so that would also be the image size of the resulting rasterised image.
The computations will be done on a Linux computer with a standard graphics card, so OpenGL might be a viable option. I know there are also rasterisation libraries like Cairo that could be used to paint polygons. What I wonder is if I could take advantage of the fact that I know so much of the input in advance and use that to speed up the computation. The only thing missing is the color of each polygon.
Preferably I would like to find a solution that only precomputes things in the form of data files. In other words, as soon as the polygon colors are known, the algorithm would load the other information from data files (the JPEG file, a polygon geometry file, and/or possibly precomputed data files). Of course it would be faster to start the computation with a "warm" state ready in the GPU/CPU/RAM, but I'd like to avoid that. The choice of programming language is not so important, but it could for instance be C++.
To give some more background: the JavaScript library OpenSeadragon, running in a web browser, requests image tiles from a web server. The idea is that measurement points (i.e. the polygons) could be plotted on the fly onto pregenerated Deep Zoom Images (DZI format) by the web server. So for one image tile the algorithm would only need to run once. The aim is low latency.
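One way to exploit that precomputation, offered here only as a sketch (and in Python rather than C++, purely for brevity): rasterise the fixed geometry once, offline, into a per-pixel polygon-index map saved as a data file. At request time, colouring then reduces to a single table lookup plus a copy of the photo. The file names and shapes below are hypothetical:

    import numpy as np
    from PIL import Image

    # Precomputed offline: each pixel stores the index of the polygon
    # covering it, or -1 where no polygon lies. Shape matches the photo.
    label_map = np.load("label_map.npy")         # (250, 250) int
    photo = np.asarray(Image.open("photo.jpg"))  # (250, 250, 3) uint8

    def colorize(colors):
        """colors: (n_polygons, 3) uint8 array, known only at run time."""
        out = photo.copy()
        mask = label_map >= 0
        out[mask] = colors[label_map[mask]]  # one vectorised lookup
        return Image.fromarray(out)

    tile = colorize(np.array([[255, 0, 0], [0, 255, 0]], dtype=np.uint8))
    tile.save("tile.jpg")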

How to extract features from retina images

I'm working on a Diabetic Retinopathy Detection problem where I've been given retina images [image1] with score labels. My job is to build a classification model that can detect and score retinopathy given unlabeled retina images.
The first step, which I'm currently working on, is extracting features from these images to build the input vector for my classification algorithm. I have basic knowledge of image processing: I've tried cropping my images to the edges [Image2], converting them to grayscale, and using the histogram as the input vector, but that still gives a large representation for an image. In addition, I may have lost some essential features that were encoded in the RGB image.
Pre-processing medical images is not a trivial task. To improve performance on diabetic retinopathy you need to highlight the blood vessels, and there are several pre-processing techniques suitable for this. Here is a link that may be useful:
https://github.com/tfs4/IDRID_hierarchical_combination/blob/master/preprocess.py
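As one hedged example of a common vessel-highlighting step (not necessarily what the linked script does): contrast-limited adaptive histogram equalisation (CLAHE) on the green channel, which carries most of the vessel contrast in fundus photographs. The file names and CLAHE parameters are placeholders:

    import cv2

    img = cv2.imread("retina.jpg")
    green = img[:, :, 1]  # OpenCV stores BGR, so index 1 is the green channel

    # Locally equalise contrast so the vessels stand out from the background.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)

    cv2.imwrite("retina_vessels.png", enhanced)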

Image comparison of HD and SD images using Java

I have a database containing both the SD and HD versions of the same set of images. The problem is that their names differ.
For example, an image of a flower in SD is named greatflower.png, while the HD version of the same flower is named bestflower.png.
The challenge is to find the similarity between them. I have tried resizing the images to the same width and height and then comparing them pixel by pixel, but the results are not accurate. I just want a yes-or-no answer after the algorithm has compared a pair of images.
My question is: which Java library or algorithm will perform an accurate image similarity analysis between the HD and SD versions of an image?
Thanking you all in anticipation :)
As a first step, I would create color histograms of all images (at a reduced color depth).
Then you can use histogram similarity to find candidate matches.
As a second step, I would resize both images to the same (quite small) size and then do a pixel-by-pixel comparison with a small color tolerance. If more than 90% of the pixels are very similar, I would classify the images as the same.
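A sketch of that two-step check, shown in Python with OpenCV for brevity (OpenCV also ships Java bindings with equivalent calls); the bin counts, tolerances, and thresholds below are illustrative:

    import cv2

    def probably_same(path_a, path_b, hist_thresh=0.9, pixel_thresh=0.9):
        a = cv2.imread(path_a)
        b = cv2.imread(path_b)

        # Step 1: coarse color histograms (reduced depth: 8 bins per channel).
        hist_a = cv2.calcHist([a], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
        hist_b = cv2.calcHist([b], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
        if cv2.compareHist(hist_a, hist_b, cv2.HISTCMP_CORREL) < hist_thresh:
            return False

        # Step 2: shrink to a small common size and count near-equal pixels.
        small_a = cv2.resize(a, (64, 64))
        small_b = cv2.resize(b, (64, 64))
        close = (cv2.absdiff(small_a, small_b) < 20).all(axis=2)
        return close.mean() > pixel_thresh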

How to scale a JPEG image down so that text is as clear as possible?

I have some JPEG images that I need to scale down to about 80% of their original size. The original image dimensions are about 700 px × 1000 px. The images contain computer-generated text and possibly some graphics (similar to what you would find in corporate Word documents).
How can I scale the images so that the text stays as legible as possible? Currently we scale the image down using bicubic interpolation, but that makes the text blurry and foggy.
Two options:
Use a different resampling algorithm. Lanczos gives you a much less blurry result.
You might use an advanced JPEG library that resamples the 8x8 DCT blocks to 6x6 pixels.
If you are not set on exactly 80%, you can try getting and building djpeg from http://www.ijg.org/ as it can decompress your JPEG to 6/8ths (75%) or 7/8ths (87.5%) of its size, and the text quality will still be pretty good:
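For instance (assuming a reasonably recent djpeg; the file names are placeholders):

    djpeg -scale 7/8 -outfile page_78.ppm page.jpg

The output is an uncompressed bitmap (PPM by default), so you would re-encode it to JPEG afterwards, e.g. with cjpeg from the same package.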
(Example images at original, 7/8, and 3/4 scale omitted.)
There may be a scaling algorithm out there that works similarly, but this is an easy off-the-shelf solution.
There is always some loss involved in scaling down, but how much is acceptable depends on your trade-offs.
Blurring and artifact generation are normal for JPEG images, so it's best to generate images at the correct size in the first place.
Lanczos is a fine solution, but it has its own trade-offs.
If it's just the text you are concerned about, you could try a dilation filter over the resampled image. This would correct some of the blurriness but may also affect the graphics; if you can live with that, it's fine. Alternatively, if you can identify the areas of text, you can apply the dilation just over those areas.
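A hedged sketch of the Lanczos-plus-morphology idea (note that for dark text on a light background it is OpenCV's erode(), not dilate(), that thickens the strokes, since dilation expands bright regions; the kernel size and file names are illustrative):

    import cv2
    import numpy as np

    img = cv2.imread("page.jpg", cv2.IMREAD_GRAYSCALE)
    h, w = img.shape

    # Downscale to 80% with Lanczos instead of bicubic.
    small = cv2.resize(img, (int(w * 0.8), int(h * 0.8)),
                       interpolation=cv2.INTER_LANCZOS4)

    # Re-thicken the (dark) text strokes with a small morphological pass.
    kernel = np.ones((2, 2), np.uint8)
    thickened = cv2.erode(small, kernel, iterations=1)

    cv2.imwrite("page_small.png", thickened)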
