I have a CT image of a lung, as shown below:
I am trying to filter the interior of the two lungs so that I remove only the thin lines of the bronchi, but I need to keep the small "circles" as much as I can for extraction in the next step, since these small circles are nodule candidates (cancerous structures). Could you please suggest a good filtering technique for this purpose? Thanks in advance.
You can try imfilter with a Gaussian kernel, or perhaps a disk filter. Try:
img = imread('orsoR.png');
h = fspecial('disk', 5);   % averaging kernel in the shape of a disk with radius 5
y = imfilter(img, h);      % the image is the first argument, the kernel the second
figure;
imshow(y)
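If you want to try the Gaussian option mentioned above instead, a minimal sketch would be the following (the kernel size and sigma are arbitrary choices here and will need tuning for your images):
img = imread('orsoR.png');
h = fspecial('gaussian', [9 9], 2);  % 9x9 Gaussian kernel with sigma = 2 (arbitrary values)
y = imfilter(img, h, 'replicate');   % 'replicate' padding avoids dark borders at the image edges
figure;
imshow(y)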
I am using ParaView 5.0.1. If any solution requires updating, I can try.
I want to programmatically obtain field plots (and corresponding PlotOverLine) of displacements and stresses in rotated coordinate systems.
What are appropriate/convenient/possible ways of doing this?
So far, I have created one Calculator filter for each component of displacements and stresses.
For instance, I used Calculators in 2D with results
(displacement.iHat)*cos(0.7853981625)+(displacement.jHat)*sin(0.7853981625)
(stress_3-stress_0)*sin(45.0*3.14159265/180)*cos(45.0*3.14159265/180)+stress_1*((cos(45.0*3.14159265/180))^2-(sin(45.0*3.14159265/180))^2)
It works fine, but it is quite cumbersome in several respects:
Creating them (one filter per component).
Plotting several of them in a single XY plot
Exporting them (one export per component).
Is there a simple way to do this?
PS: The Transform filter does not accomplish this. It rotates the view, not the fields.
Two solutions:
Ugly, inefficient solution
Use Transform and check "Transform All Input vectors"
Add a calculator and add a dummy array
Use transform the other way around, without checking "Transform All Input vectors"
Correct solution:
Compute the transformation yourself in a Programmable Filter:
import math

input = self.GetUnstructuredGridInput()
output = self.GetUnstructuredGridOutput()
output.ShallowCopy(input)
data = input.GetPointData().GetArray("YourArray")
vec = vtk.vtkDoubleArray()
vec.SetNumberOfComponents(3)
vec.SetName("TransformedVectors")

# Example transform: rotation about the z axis by the 45-degree angle used in the
# question's Calculator expressions; replace with whatever transform you need.
def transform(t):
    c = math.cos(45.0 * math.pi / 180.0)
    s = math.sin(45.0 * math.pi / 180.0)
    return (c * t[0] - s * t[1], s * t[0] + c * t[1], t[2])

numPoints = input.GetNumberOfPoints()
for i in xrange(0, numPoints):
    t = data.GetTuple(i)
    vec.InsertNextTuple(transform(t))
output.GetPointData().AddArray(vec)
An example of detectSURFFeatures comparing two images is shown below. I couldn't get the detectSURFFeatures function to work in my MATLAB; neither help nor doc detectSURFFeatures gives any clue. The error says:
UncalibratedSterio
Undefined function 'detectSURFFeatures' for input arguments of type 'uint8'.
But as far as I know, the function itself can handle uint8. What should I do?
%Rectified Sterio Image Uncalibrated
% There is no calibration of cameras
I1 = rgb2gray(imread('right_me.jpg'));
I2 = rgb2gray(imread('left_me.jpg'));
Value = 2000.0;
blobs1 = detectSURFFeatures(I1, 'MetricThreshold', Value);
blobs2 = detectSURFFeatures(I2, 'MetricThreshold', Value);
figure;
imshow(I1);
hold on;
plot(selectStrongest(blobs1, 30));
title('Thirty strongest SURF features in I1');
figure;
imshow(I2);
hold on;
plot(selectStrongest(blobs2, 30));
title('Thirty strongest SURF features in I2');
You are getting that error because detectSURFFeatures does not exist in your MATLAB distribution. You must have at least R2011b, as that is the release in which detectSURFFeatures was introduced: http://www.mathworks.com/help/vision/release-notes.html#R2011b
I suspect you have a version of MATLAB older than R2011b, so if you want to make it easy on yourself, you should upgrade your version of MATLAB. However, if I may, I suggest the mexopencv project by Kota Yamaguchi: http://kyamagu.github.io/mexopencv/
He wrote OpenCV wrappers that interface directly with MATLAB, so you can call OpenCV's SURF feature detection and matching methods from MATLAB, though you will need to install OpenCV to do that. It will be a bit of work to get running, but it is one solution I can offer if you don't want to upgrade your version of MATLAB.
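To confirm that this is the problem, here is a quick check you can run (note that detectSURFFeatures also requires the Computer Vision System Toolbox, so the toolbox may be missing rather than the release being too old):
version('-release')                  % must be R2011b or newer
exist('detectSURFFeatures', 'file')  % returns 0 if the function is not available
ver('vision')                        % lists the Computer Vision System Toolbox, if installed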
Good luck!
I've been trying to combine an image produced from a deforestation database called Hansen with a shapefile created in ArcGIS to make a georeferenced image. The script I've written so far is below, but I'm unable to figure out how to combine the two (I've tried several approaches, including http://uk.mathworks.com/help/map/examples/creating-maps-using-mapshow.html?searchHighlight=overlay%20maps). Any assistance would be helpful!
Thank you,
Michelle
% Read in thresholded Hansen data
Data_FrenchGuiana = imread('FrenchGuiana_GFC_extract_thresholded.tif');
LossYear_FrenchGuiana = Data_FrenchGuiana(:,:,2);       % take band 2 of the extract
LossYear_FrenchGuiana = double(LossYear_FrenchGuiana);
figure('color','white');
image(LossYear_FrenchGuiana)
imwrite(LossYear_FrenchGuiana, 'LossYear_FrenchGuiana.tif')
% Read and display the shapefile
country = shaperead('FrenchGuiana.shp');
figure; mapshow(country);
xlabel('easting in meters')
ylabel('northing in meters')
I'm looking at implementing a Caffe CNN which accepts two input images and a label (later perhaps other data) and was wondering if anyone was aware of the correct syntax in the prototxt file for doing this? Is it simply an IMAGE_DATA layer with additional tops? Or should I use separate IMAGE_DATA layers for each?
Thanks,
James
Edit: I have been using the HDF5_DATA layer lately for this and it is definitely the way to go.
HDF5 is a key value store, where each key is a string, and each value is a multi-dimensional array. Thus, to use the HDF5_DATA layer, just add a new key for each top you want to use, and set the value for that key to store the image you want to use. Writing these HDF5 files from python is easy:
import h5py
import numpy as np

filelist = []
for i in range(100):
    image1 = get_some_image(i)     # placeholder: load your first image as an H x W x C array
    image2 = get_another_image(i)  # placeholder: load your second image
    filename = '/tmp/my_hdf5%d.h5' % i
    with h5py.File(filename, 'w') as f:
        # Each key becomes a "top" of the HDF5_DATA layer; images are stored as C x H x W.
        f['data1'] = np.transpose(image1, (2, 0, 1))
        f['data2'] = np.transpose(image2, (2, 0, 1))
        # A label (or any other top) can be stored the same way, as an extra key.
    filelist.append(filename)

with open('/tmp/filelist.txt', 'w') as f:
    for filename in filelist:
        f.write(filename + '\n')
Then simply set the source of the HDF5_DATA param to be '/tmp/filelist.txt', and set the tops to be "data1" and "data2".
I'm leaving the original response below:
====================================================
There are two good ways of doing this. The easiest is probably to use two separate IMAGE_DATA layers, one with the first image and label, and a second with the second image. Caffe retrieves images from LMDB or LEVELDB, which are key value stores, and assuming you create your two databases with corresponding images having the same integer id key, Caffe will in fact load the images correctly, and you can proceed to construct your net with the data/labels of both layers.
The problem with this approach is that having two data layers is not very satisfying, and it doesn't scale well if you want to do more advanced things like non-integer labels for bounding boxes, etc. If you're prepared to invest some time in this, you can do a better job by modifying the tools/convert_imageset.cpp file to stack images or other data across channels. For example, you could create a datum with 6 channels: the first 3 for your first image's RGB and the second 3 for your second image's RGB. After reading this in using the IMAGE_DATA layer, you can split the stream into two images using a SLICE layer with a slice_point at index 3 along the slice_dim = 1 dimension. If, further down the road, you decide that you want to load even more complex assortments of data, you'll understand the encoding scheme and can write your own decoding layer based on src/caffe/layers/data_layer.cpp to gain full control of the pipeline.
You may also consider using the HDF5_DATA layer with multiple "top"s.
I want to calculate a one-dimensional PSD for an image (along rows and columns separately) using MATLAB.
I use the following snippet for this.
F=fft(img,[],2);%FFT along dim2
F=fftshift(F,2);
mtf=(abs(F)).^2;
mtf_mean = mean(mtf,2);% Mean of all contents of a row
mtf_mean_norm = mtf_mean/max(max(mtf_mean)); %Normalization to 1
plot(mtf_mean_norm);
When I plot it, I expected the plot to be symmetric about the center (and that's what I want). However, the two halves look asymmetric, as in the attached figure.
It looks like I have a bug in the code. Any clues as to what I am missing?
Image url: http://i.stack.imgur.com/RrLIt.jpg
I am not an image processing person, but just from my own stats knowledge I would say: you should use mtf = abs(F)'*abs(F) instead of mtf = (abs(F)).^2. I got the figure below; here is the code that generates it.
img = randn(50,50);
F = fft(img,[],2);  % FFT along dim2
F = fftshift(F,2);
mtf = abs(F)'*abs(F);
mtf_mean = mean(mtf,2);  % Mean of all contents of a row
mtf_mean_norm = mtf_mean/max(max(mtf_mean));  % Normalization to 1
plot(mtf_mean_norm);