How to limit the raster processing extent using a spatial mask?

I am trying to limit raster processing in MATLAB to include only areas within a shapefile boundary, similar to how ArcGIS Spatial Analyst functions use a mask. Here is some (reproducible) sample data I am working with:
A 4-band NAIP image (WARNING 169MB download)
A shapefile of study area boundaries (A zipped shapefile on File Dropper)
Here is a MATLAB script I use to calculate NDVI:
file = 'C:\path\to\doi1m2011_41111h4nw_usda.tif';
[I, R] = geotiffread(file);
outputdir = 'C:\output\';
% Calculate NDVI
NIR = im2single(I(:,:,4));
red = im2single(I(:,:,1));
ndvi = (NIR - red) ./ (NIR + red);
ndvi = double(ndvi); % assign the conversion (a bare double(ndvi) call discards its result)
imshow(ndvi,'DisplayRange',[-1 1]);
% Stretch to 0 - 255 and convert to 8-bit unsigned integer
ndvi = floor((ndvi + 1) * 128); % [-1 1] -> [0 256]
ndvi(ndvi < 0) = 0; % not really necessary, just in case & for symmetry
ndvi(ndvi > 255) = 255; % in case the original value was exactly 1
ndvi = uint8(ndvi); % change data type from double to uint8
% Write NDVI to .tif file (optional)
tiffdata = geotiffinfo(file);
outfilename = [outputdir 'ndvi_' 'temp' '.tif'];
geotiffwrite(outfilename, ndvi, R, 'GeoKeyDirectoryTag', tiffdata.GeoTIFFTags.GeoKeyDirectoryTag)
The following image illustrates what I would like to accomplish using MATLAB. For this example, I used the ArcGIS raster calculator (Float(Band4-Band1)/Float(Band4+Band1)) to produce the NDVI on the right. I also specified the study area shapefile as a mask in the environment settings.
Question:
How can I limit the raster processing extent in MATLAB using a polygon shapefile as a spatial mask to replicate the results shown in the figure?
What I have unsuccessfully tried:
roipoly and poly2mask, although I cannot seem to apply these functions correctly to spatially referenced data to produce the desired effect.
EDIT:
I tried the following to convert the shapefile to a mask, without success. Not sure where I am going wrong here...
s = 'C:\path\to\studyArea.shp'
shp = shaperead(s)
lat = [shp.X];
lon = [shp.Y];
x = shp.BoundingBox(2) - shp.BoundingBox(1)
y = shp.BoundingBox(3) - shp.BoundingBox(1)
x = poly2mask(lat,lon, x, y)
Error messages:
Error using poly2mask
Expected input number 1, X, to be finite.
Error in poly2mask (line 49)
validateattributes(x,{'double'},{'real','vector','finite'},mfilename,'X',1);
Error in createMask (line 13)
x = poly2mask(lat,lon, x, y)

You can read the region of interest by:
roi = shaperead('study_area_shapefile/studyArea.shp');
Chop the trailing NaN:
rx = roi.X(1:end-1);
ry = roi.Y(1:end-1);
If you have several polygons in your shapefile, they are separated by NaNs and you have to treat them separately.
Then use the worldToIntrinsic method of the satellite image's spatial referencing object to convert the polygon points into image coordinates:
[ix, iy] = R.worldToIntrinsic(rx,ry);
This assumes both coordinate systems are the same.
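If they differ, say the shapefile stores geographic latitude/longitude while the image is projected, the vertices would need reprojecting first. A minimal sketch, assuming the Mapping Toolbox and that the geotiffinfo structure of the image can serve as the projection:
info = geotiffinfo(file);
[rx, ry] = projfwd(info, roi.Y(1:end-1), roi.X(1:end-1)); % lat comes from Y, lon from X
Afterwards rx/ry are in the image's map coordinates and can be passed to worldToIntrinsic as above.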
Then you can go and make your mask by:
mask = poly2mask(ix,iy,R.RasterSize(1),R.RasterSize(2));
You can use the mask on your original multilayer image before making any calculation by:
I(repmat(~mask,[1,1,4])) = nan; % for integer image classes the NaN is cast to 0
Or use it on a single layer (i.e. red) by:
red(~mask) = nan;
If the regions are very small, it could be beneficial (for memory and computational load) to convert a masked image to a sparse matrix, although I have not tested whether that makes any speed difference.
red(~mask) = 0;
sred = sparse(double(red));
Unfortunately, sparse matrices only support double (and logical) values, so your uint8 data needs to be converted first.
Generally, you should also crop the ROI out of the image. Look at the objects roi and R to find useful parameters and methods; I haven't done that here.
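A minimal cropping sketch (untested; note it leaves the referencing object R untouched, which you would still need to adjust before writing a GeoTIFF):
[row, col] = find(mask);
I = I(min(row):max(row), min(col):max(col), :);
mask = mask(min(row):max(row), min(col):max(col));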
Finally my version of your script, with some slight other changes:
file = 'doi1m2011_41111h4nw_usda.tif';
[I, R] = geotiffread(file);
outputdir = '';
% Read Region of Interest
roi = shaperead('study_area_shapefile/studyArea.shp');
% Remove trailing nan from shapefile
rx = roi.X(1:end-1);
ry = roi.Y(1:end-1);
% convert to image coordinates
[ix, iy] = R.worldToIntrinsic(rx,ry);
% make the mask
mask = poly2mask(ix,iy,R.RasterSize(1),R.RasterSize(2));
% mask sat-image
I(repmat(~mask,[1,1,4])) = 0;
% convert to sparse matrices
NIR = sparse(double(I(:,:,4)));
red = sparse(double(I(:,:,1)));
% Calculate NDVI
ndvi = (NIR - red) ./ (NIR + red);
% convert back to full matrices
ndvi = full(ndvi);
imshow(ndvi,'DisplayRange',[-1 1]);
% Stretch to 0 - 255 and convert to 8-bit unsigned integer
ndvi = (ndvi + 1) / 2 * 255; % [-1 1] -> [0 255]
ndvi = uint8(ndvi); % convert from double to uint8 (values are rounded)
% Write NDVI to .tif file (optional)
tiffdata = geotiffinfo(file);
outfilename = [outputdir 'ndvi_' 'temp' '.tif'];
geotiffwrite(outfilename, ndvi, R, 'GeoKeyDirectoryTag', tiffdata.GeoTIFFTags.GeoKeyDirectoryTag);
mapshow(outfilename);

There are three steps here, for which I will create 3 functions:
Compute the NDVI for the complete input image: ndvi = comp_ndvi(nir, red)
Compute the mask from the shapefile: mask = comp_mask(shape)
Combine the NDVI and the mask: output = combine_ndvi_mask(ndvi, mask)
You have the code for comp_ndvi() in your question. The code for combine_ndvi_mask() depends on what you want to do with the masked areas; if you want to make them white, it might look like:
function output = combine_ndvi_mask(ndvi, mask)
output = ndvi;
output(~mask) = 255;
end
In comp_mask() you will want to use poly2mask() to convert the polygon vertices into a raster mask. To help further, I need to know what you have already. Have you loaded the vertices into MATLAB? What have you tried with poly2mask?
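In case it helps, here is a comp_mask() sketch along the lines of the other answer, assuming the shapefile and the image share a coordinate system and that R is the spatial referencing object returned by geotiffread:
function mask = comp_mask(shapefile, R)
% rasterize the first polygon of the shapefile onto the image grid
roi = shaperead(shapefile);
[ix, iy] = R.worldToIntrinsic(roi(1).X(1:end-1), roi(1).Y(1:end-1));
mask = poly2mask(ix, iy, R.RasterSize(1), R.RasterSize(2));
end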

Related

Overlapping grayscale and RGB Images

I would like to overlay two images, one grayscale and one RGB. I want to impose the RGB image on top of the grayscale image, but only for pixels greater than a certain value. I tried using the double function in MATLAB, but this seems to change the color scheme and I cannot recover the original RGB colors. What should I do in order to retain the original RGB image instead of mapping it to one of the MATLAB colormaps? Below is my attempt at superimposing:
pixelvalues = double(imread('hello.png'));
PixelInt = mean(pixelvalues,3);
I1 = ind2rgb(Brightfield(:,:,1), gray(256)); %Brightfield
I2 = ind2rgb(PixelInt, jet(256)); %RGB Image
imshow(I2,[])
[r,c,d] = size(I2);
I1 = I1(1:r,1:c,1:d);
% Replacing those pixels below threshold with Brightfield Image
threshold = 70;
I2R = I2(:,:,1); I2G = I2(:,:,2); I2B = I2(:,:,3);
I1R = I1(:,:,1); I1G = I1(:,:,2); I1B = I1(:,:,3);
I2R(PixelInt<threshold) = I1R(PixelInt<threshold);
I2G(PixelInt<threshold) = I1G(PixelInt<threshold);
I2B(PixelInt<threshold) = I1B(PixelInt<threshold);
I2(:,:,1) = I2R; I2(:,:,2) = I2G; I2(:,:,3) = I2B;
h = figure;
imshow(I2,[])
Original RGB Image:
Brightfield:
Overlay:
Is the content of pixelvalues what you show in your first image? If so, that image does not use a jet colormap: it has pink and white values above the red values, whereas jet stops at dark red at its upper limit. When you take the mean of those values and then generate a new RGB image with ind2rgb using the jet colormap, you're creating an inherently different image. You probably want to use pixelvalues directly in generating your overlay, like so:
% Load/create your starting images:
pixelvalues = imread('hello.png'); % Color overlay
I1 = repmat(Brightfield(:, :, 1), [1 1 3]); % Grayscale underlay
[r, c, d] = size(pixelvalues);
I1 = I1(1:r, 1:c, 1:d);
% Create image mask:
PixelInt = mean(double(pixelvalues), 3);
threshold = 70;
mask = repmat((PixelInt > threshold), [1 1 3]);
% Combine images:
I1(mask) = pixelvalues(mask);
imshow(I1);
Note that you may need to do some type conversions when loading/creating the starting images. I'm assuming 'hello.png' is a uint8 RGB image and Brightfield is of type uint8. If I load your first image as pixelvalues and your second image as I1, I get the following when running the above code:
Create a mask and use it to combine the images:
onionOrig = imread('onion.png');
onionGray = rgb2gray(onionOrig);
onionMask = ~(onionOrig(:,:,1)<100 & onionOrig(:,:,2)<100 & onionOrig(:,:,3)<100);
onionMasked(:,:,1) = double(onionOrig(:,:,1)) .* onionMask + double(onionGray) .* ~onionMask;
onionMasked(:,:,2) = double(onionOrig(:,:,2)) .* onionMask + double(onionGray) .* ~onionMask;
onionMasked(:,:,3) = double(onionOrig(:,:,3)) .* onionMask + double(onionGray) .* ~onionMask;
onionFinal = uint8(onionMasked);
imshow(onionFinal)
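On R2016b or newer, implicit expansion lets the 2-D mask multiply the 3-D image directly, so the three per-channel lines collapse into one; a sketch of the same computation:
onionFinal = uint8(double(onionOrig) .* onionMask + double(onionGray) .* ~onionMask);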

Color only a segment of an image in Matlab

I'm trying to color only a segment of an image in Matlab. For example, I load an RGB image, then I obtain a mask with Otsu's method (graythresh). I want to keep the color only in the pixels that have value of 1 after applying im2bw with graythresh as the threshold. For example:
image = imread('peppers.png'); % note: this shadows MATLAB's built-in image() function
thr = graythresh(image);
bw = im2bw(image, thr);
With this code I obtain the following binary image:
My goal is to keep the color in the white pixels.
Thanks!
I have another suggestion on how to replace the pixels we don't care about. This works by creating linear indices for each of the slices where black pixels exist in the bw image. The summation with the result of find is done because bw is the size of just one "slice" of image and this is how we get the indices for the other 2 slices.
Starting MATLAB 2016b:
image(find(~bw)+[0 numel(bw)*[1 2]]) = NaN;
In older versions:
image(bsxfun(@plus,find(~bw),[0 numel(bw)*[1 2]])) = NaN;
Then imshow(image) gives:
Note that NaN gets converted to 0 for integer classes.
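A quick way to convince yourself of that conversion:
a = uint8([10 20 30]);
a(2) = NaN; % NaN saturates to 0 for integer classes
% a is now [10 0 30]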
Following the clarification that the other pixels should be kept in their gray version, see the below code:
% Load image:
img = imread('peppers.png');
% Create a grayscale version:
grayimg = rgb2gray(img);
% Segment image:
if ~verLessThan('matlab','9.0') && exist('imbinarize.m','file') == 2
% R2016a onward:
bw = imbinarize(grayimg);
% Alternatively, work on just one of the color channels, e.g. red:
% bw = imbinarize(img(:,:,1));
else
% Before R2016a:
thr = graythresh(grayimg);
bw = im2bw(grayimg, thr);
end
output_img = repmat(grayimg,[1 1 3]);
colorpix = bsxfun(@plus,find(bw),[0 numel(bw)*[1 2]]);
output_img(colorpix) = img(colorpix);
figure; imshow(output_img);
The result when binarizing using only the red channel:
Your question is missing "and replace the rest with black". Here are two ways:
A compact solution: use bsxfun:
newImage = bsxfun(@times, Image, cast(bw, 'like', Image));
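On R2016b and newer, implicit expansion makes the bsxfun call unnecessary; an equivalent sketch:
newImage = Image .* cast(bw, 'like', Image);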
Although I am happy with the previous one, you can also take a look at this step-by-step approach:
% separate the RGB layers:
R = image(:,:,1);
G = image(:,:,2);
B = image(:,:,3);
% change the values to zero or your desired color wherever bw is false:
R(~bw) = 0;
G(~bw) = 0;
B(~bw) = 0;
% concatenate the results:
newImage = cat(3, R, G, B);
Which can give you different replacements for the black region:
UPDATE:
According to the comments, the false area of bw should be replaced with a grayscale version of the same input. This is how to achieve it:
image = imread('peppers.png');
thr = graythresh(image);
bw = im2bw(image, thr);
gr = rgb2gray(image); % generate grayscale image from RGB
newImage = image; % initialize from the RGB image, otherwise newImage is undefined
newImage(repmat(~bw, 1, 1, 3)) = repmat(gr(~bw), 1, 1, 3); % substitute values
% figure; imshow(newImage)
With this result:

Plot over an image background in MATLAB

I'd like to plot a graph over an image. I followed this tutorial to Plot over an image background in MATLAB and it works fine:
% replace with an image of your choice
img = imread('myimage.png');
% set the range of the axes
% The image will be stretched to this.
min_x = 0;
max_x = 8;
min_y = 0;
max_y = 6;
% make data to plot - just a line.
x = min_x:max_x;
y = (6/8)*x;
imagesc([min_x max_x], [min_y max_y], img);
hold on;
plot(x,y,'b-*','linewidth',1.5);
But when I apply the procedure to my study case, it doesn't work. I'd like to do something like:
I = imread('img_png.png'); % here I load the image
DEM = GRIDobj('srtm_bigtujunga30m_utm11.tif');
FD = FLOWobj(DEM,'preprocess','c');
S = STREAMobj(FD,flowacc(FD)>1000);
% with the last 3 lines I calculated the stream network on a geographic area using the TopoToolBox
imagesc(I);
hold on
plot(S)
The aim is to plot the stream network over the satellite image of the same area.
The only difference between the two examples that keeps the code from working is the plot line: in the first case plot(x,y) works, while in the second plot(S) doesn't.
Thanks guys.
This is the satellite image, imagesc(I)
It is possible that the plot method of the STREAMobj performs its own custom plotting, including creating new figures or axes and toggling hold states. Because you can't easily control what that plot routine does, it's likely easier to flip the order of your plotting so that you plot your image after the toolbox plots the STREAMobj. This way you have complete control over how your image is added.
% Plot the STREAMobj
hlines = plot(S);
% Make sure we plot on the same axes
hax = ancestor(hlines, 'axes');
% Make sure that we can add more plot objects
hold(hax, 'on')
% Plot your image data on the same axes
imagesc(I, 'Parent', hax)
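Note that a newly added image is drawn on top of existing lines, so you may also need to push it to the back of the drawing order; a small addition, assuming you capture the handle:
himg = imagesc(I, 'Parent', hax);
uistack(himg, 'bottom'); % draw the image beneath the stream network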
Maybe I am preaching to the choir or overlooking something here, but the example you used actually mapped the image to the data range of the plot, hence the lines:
% set the range of the axes
% The image will be stretched to this.
min_x = 0;
max_x = 8;
min_y = 0;
max_y = 6;
imagesc([min_x max_x], [min_y max_y], img);
where you directly plot your image
imagesc(I);
If your data coordinates and your image coordinates are vastly different, you will only see one or the other.
Thanks guys, I solved in this way:
I = imread('orto.png'); % satellite image loading
DEM = GRIDobj('demF1.tif');
FD = FLOWobj(DEM,'preprocess','c');
S = STREAMobj(FD,flowacc(FD)>1000); % Stream network extraction
x = S.x; % [node attribute] x-coordinate vector
y = S.y; % [node attribute] y-coordinate vector
min_x = min(x);
max_x = max(x);
min_y = min(y);
max_y = max(y);
imagesc([min_x max_x], [min_y max_y], I);
hold on
plot(S);
Here's the resulting image: stream network over the satellite image
Actually, the stream network doesn't match the satellite image simply because I'm temporarily using a mismatched image and DEM.

How can I change the outer limits of a Circular mask to a different colour

I have the following function, which successfully creates a grey circular mask over the input image, so that the new image has a grey border around a circular region. Example: Grey circular mask.
All I want to do is make the mask a very specific green, but I haven't been successful.
Here is the code:
function [newIm] = myCircularMask(im)
%Setting variables
rad = size(im,1)/2.1; %Radius of the circle window
im = double(im);
[rows, cols, planes]= size(im);
newIm = zeros(rows, cols, planes);
%Generating hard-edged circular mask with 1 inside and 0 outside
M = rows;
[X,Y] = meshgrid(-M/2:1:(M-1)/2, -M/2:1:(M-1)/2);
mask = zeros(M,M);
mask(X.^2 + Y.^2 < rad^2) = 1;
% Soften edge of mask
gauss = fspecial('gaussian',[12 12],0.1);
mask = conv2(mask,gauss,'same');
% Multiply image by mask, i.e. x1 inside x0 outside
for k=1:planes
newIm(:,:,k) = im(:,:,k).*mask;
end
% Make mask either 0 inside or -127 outside
mask = (abs(mask-1)*127);
% now add mask to image
for k=1:planes
newIm(:,:,k) = newIm(:,:,k)+mask;
end
newIm = floor(newIm)/255;
The type of green I would like to use has RGB values [59 178 74].
I'm a beginner with MATLAB, so any help would be greatly appreciated.
Cheers!
Steve
After masking your image, create a color version of your mask:
% test with simple mask
mask = ones(10,10);
mask(5:7,5:7)=0;
% invert mask, multiply with rgb-values, make rgb-matrix:
r_green=59/255; g_green=178/255; b_green=74/255;
invmask=(1-mask); % use mask with ones/zeroes
rgbmask=cat(3,invmask*r_green,invmask*g_green,invmask*b_green);
Add this to your masked image.
Edit:
function [newIm] = myCircularMask(im)
%Setting variables
rad = size(im,1)/2.1; %Radius of the circle window
im = double(im);
[rows, cols, planes]= size(im);
newIm = zeros(rows, cols, planes);
%Generating hard-edged circular mask with 1 inside and 0 outside
M = rows;
[X,Y] = meshgrid(-M/2:1:(M-1)/2, -M/2:1:(M-1)/2);
mask = zeros(M,M);
mask(X.^2 + Y.^2 < rad^2) = 1;
% Soften edge of mask
gauss = fspecial('gaussian',[12 12],0.1);
mask = conv2(mask,gauss,'same');
% Multiply image by mask, i.e. x1 inside x0 outside
for k=1:planes
newIm(:,:,k) = im(:,:,k).*mask;
end
% Here follows the new code:
% invert mask, multiply with rgb-values, make rgb-matrix:
r_green=59/255; g_green=178/255; b_green=74/255;
invmask=(1-mask); % use mask with ones/zeroes
rgbmask=cat(3,invmask*r_green,invmask*g_green,invmask*b_green);
newIm=newIm+rgbmask;
Note that I haven't been able to test my suggestion, so there might be errors.
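A quick usage sketch (the file name is a placeholder):
im = imread('myPhoto.jpg'); % any RGB image
newIm = myCircularMask(im);
imshow(newIm);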

How to extract color shade from a given sample image to convert another image using color of sample image?

I have a sample image and a target image. I want to transfer the color shades of the sample image to the target image. Please tell me how to extract the color from the sample image.
Here the images:
input source image:
input map for desired output image
output image
You can use a technique called "histogram matching" (another description).
Basically, you use the histogram of your source image as a goal and transform the values of each input-map pixel to get the output histogram as close to the source as possible. You do this for each RGB channel of the image.
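For MATLAB users, a minimal sketch of the same idea uses imhistmatch from the Image Processing Toolbox (R2012b or newer), which matches a truecolor image channel by channel:
src = imread('source.jpg'); % image whose color shades we want
map = imread('tint_target.jpg'); % image to be recolored
out = imhistmatch(map, src); % adjust map's histogram towards src's
imwrite(out, 'histmatch_result.jpg');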
Here is my python code for that:
from scipy.misc import imsave, imread
import numpy as np
imsrc = imread("source.jpg")
imtint = imread("tint_target.jpg")
nbr_bins=255
imres = imsrc.copy()
for d in range(3):
    imhist,bins = np.histogram(imsrc[:,:,d].flatten(),nbr_bins,normed=True)
    tinthist,bins = np.histogram(imtint[:,:,d].flatten(),nbr_bins,normed=True)

    cdfsrc = imhist.cumsum() #cumulative distribution function
    cdfsrc = (255 * cdfsrc / cdfsrc[-1]).astype(np.uint8) #normalize

    cdftint = tinthist.cumsum() #cumulative distribution function
    cdftint = (255 * cdftint / cdftint[-1]).astype(np.uint8) #normalize

    im2 = np.interp(imsrc[:,:,d].flatten(),bins[:-1],cdfsrc)
    im3 = np.interp(im2,cdftint, bins[:-1]) # invert the tint CDF on the mapped values
    imres[:,:,d] = im3.reshape((imsrc.shape[0],imsrc.shape[1]))
imsave("histnormresult.jpg", imres)
The output for you samples will look like that:
You could also try making the same in HSV colorspace - it might give better results.
I think the hardest part is determining the dominant color of the first image. Just looking at it, with all the highlights and shadows, the best overall color will be the one with the highest combination of brightness and saturation. I start with a blurred image to reduce the effects of noise and other anomalies, then convert each pixel to the HSV color space to measure brightness and saturation. Here's how it looks in Python with PIL and colorsys:
import colorsys
from PIL import ImageFilter

# im1 is assumed to be a PIL image loaded elsewhere
blurred = im1.filter(ImageFilter.BLUR)
ld = blurred.load()
max_hsv = (0, 0, 0)
for y in range(blurred.size[1]):
    for x in range(blurred.size[0]):
        r, g, b = tuple(c / 255. for c in ld[x, y])
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        if s + v > max_hsv[1] + max_hsv[2]:
            max_hsv = h, s, v
r, g, b = tuple(int(c * 255) for c in colorsys.hsv_to_rgb(*max_hsv))
For your image I get a color of (210, 61, 74) which looks like:
From that point it's just a matter of transferring the hue and saturation to the other image.
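A hedged MATLAB sketch of that transfer, assuming domHSV holds the dominant color's [h s v] triplet found above and target is the image to recolor:
hsv = rgb2hsv(im2double(target));
hsv(:,:,1) = domHSV(1); % impose the dominant hue
hsv(:,:,2) = domHSV(2); % impose the dominant saturation
out = hsv2rgb(hsv); % brightness (V) stays from the target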
The histogram matching solutions above did not work for me. Here is my own, based on OpenCV:
import cv2
import numpy as np

def match_image_histograms(image, reference):
    chans1 = cv2.split(image)
    chans2 = cv2.split(reference)

    new_chans = []
    for ch1, ch2 in zip(chans1, chans2):
        hist1 = cv2.calcHist([ch1], [0], None, [256], [0, 256])
        hist1 /= hist1.sum()
        hist2 = cv2.calcHist([ch2], [0], None, [256], [0, 256])
        hist2 /= hist2.sum()
        lut = np.searchsorted(hist1.cumsum(), hist2.cumsum())
        new_chans.append(cv2.LUT(ch1, lut))
    return cv2.merge(new_chans).astype('uint8')
The approach:
- obtain average color from color map
- ignore saturated white/black colors
- convert light map to grayscale
- change dynamic range of light map to match your desired output (I use max dynamic range; you could compute the range of the color map and set it for the light map)
- multiply the light map by the average color
This is how it looks:
And this is the C++ source code
//picture pic0,pic1,pic2;
// pic0 - source color
// pic1 - source light map
// pic2 - output
int x,y,rr,gg,bb,i,i0,i1;
double r,g,b,a;
// init output as source light map in grayscale i=r+g+b
pic2=pic1;
pic2.rgb2i();
// change light map dynamic range to maximum
i0=pic2.p[0][0].dd; // min
i1=pic2.p[0][0].dd; // max
for (y=0;y<pic2.ys;y++)
for (x=0;x<pic2.xs;x++)
{
i=pic2.p[y][x].dd;
if (i0>i) i0=i;
if (i1<i) i1=i;
}
for (y=0;y<pic2.ys;y++)
for (x=0;x<pic2.xs;x++)
{
i=pic2.p[y][x].dd;
i=(i-i0)*767/(i1-i0);
pic2.p[y][x].dd=i;
}
// extract average color from color map (normalized to unit vector)
for (r=0.0,g=0.0,b=0.0,y=0;y<pic0.ys;y++)
for (x=0;x<pic0.xs;x++)
{
rr=BYTE(pic0.p[y][x].db[picture::_r]);
gg=BYTE(pic0.p[y][x].db[picture::_g]);
bb=BYTE(pic0.p[y][x].db[picture::_b]);
i=rr+gg+bb;
if (i<400) // ignore saturated colors (whitish), 3*255 = white
if (i>16) // ignore too dark colors (blackish), 0 = black
{
r+=rr;
g+=gg;
b+=bb;
}
}
a=1.0/sqrt((r*r)+(g*g)+(b*b)); r*=a; g*=a; b*=a;
// recolor output
for (y=0;y<pic2.ys;y++)
for (x=0;x<pic2.xs;x++)
{
a=DWORD(pic2.p[y][x].dd);
rr=r*a; if (rr>255) rr=255; pic2.p[y][x].db[picture::_r]=BYTE(rr);
gg=g*a; if (gg>255) gg=255; pic2.p[y][x].db[picture::_g]=BYTE(gg);
bb=b*a; if (bb>255) bb=255; pic2.p[y][x].db[picture::_b]=BYTE(bb);
}
I am using my own picture class, so here are some of its members:
xs,ys size of image in pixels
p[y][x].dd is pixel at (x,y) position as 32 bit integer type
p[y][x].db[4] is pixel access by color bands (r,g,b,a)
[notes]
If this does not meet your needs, then please specify more and add more images, because your current example is really not self-explanatory.
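For MATLAB users, a compact sketch of the same recipe (untested; needs R2016b+ for the implicit expansion, and the file names are placeholders):
colorMap = im2double(imread('source.jpg')); % color source
lightMap = im2double(imread('tint_target.jpg')); % light/structure source
L = rgb2gray(lightMap); % light map in grayscale
L = (L - min(L(:))) / (max(L(:)) - min(L(:))); % stretch to [0 1]
s = sum(colorMap, 3); % per-pixel channel sum, range [0 3]
valid = s > 16/255 & s < 400/255; % mirror the i>16 and i<400 tests above
avg = zeros(1,1,3);
for k = 1:3
    ch = colorMap(:,:,k);
    avg(k) = mean(ch(valid));
end
avg = avg / norm(avg(:)); % normalize to a unit vector
out = min(L .* avg * 3, 1); % ~767/255 scaling of the C++ code, clipped to [0 1]
imshow(out);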
Regarding the previous answer, one thing to be careful with:
once the CDF reaches its maximum (1, or 255 after the scaling used below), the interpolation is misled and will match your values incorrectly. To avoid this, provide the interpolation function with only the meaningful part of the CDF (before it reaches its maximum) and the corresponding bins. Here is the adapted answer:
from scipy.misc import imsave, imread
import numpy as np
imsrc = imread("source.jpg")
imtint = imread("tint_target.jpg")
nbr_bins=255
imres = imsrc.copy()
for d in range(3):
    imhist,bins = np.histogram(imsrc[:,:,d].flatten(),nbr_bins,normed=True)
    tinthist,bins = np.histogram(imtint[:,:,d].flatten(),nbr_bins,normed=True)

    cdfsrc = imhist.cumsum() #cumulative distribution function
    cdfsrc = (255 * cdfsrc / cdfsrc[-1]).astype(np.uint8) #normalize

    cdftint = tinthist.cumsum() #cumulative distribution function
    cdftint = (255 * cdftint / cdftint[-1]).astype(np.uint8) #normalize

    im2 = np.interp(imsrc[:,:,d].flatten(),bins[:-1],cdfsrc)

    # after the scaling above the CDF saturates at 255, not 1
    if (cdftint==255).sum()>0:
        idx_max = np.where(cdftint==255)[0][0]
        im3 = np.interp(im2,cdftint[:idx_max+1], bins[:idx_max+1])
    else:
        im3 = np.interp(im2,cdftint, bins[:-1])

    imres[:,:,d] = im3.reshape((imsrc.shape[0],imsrc.shape[1]))
imsave("histnormresult.jpg", imres)
Enjoy!
