Difference in raster pixel values between R and QGIS

I want to ask what probably is a basic question about the way in which R and QGIS import raster files.
I have a single-band raster. When I import it into R, using the "raster" function of the raster package, I get this range of pixel values:
class : RasterLayer
dimensions : 10980, 10980, 120560400 (nrow, ncol, ncell)
resolution : 10, 10 (x, y)
extent : 6e+05, 709800, 5590200, 5700000 (xmin, xmax, ymin, ymax)
coord. ref. : +proj=utm +zone=31 +datum=WGS84 +units=m +no_defs +ellps=WGS84 +towgs84=0,0,0
data source : /data/MTDA/CGS_S2_RADIOMETRY/2017/10/15/S2B_20171015T104525Z_31UFS_TOC_V100/S2B_20171015T104525Z_31UFS_TOC-B02_10M_V100.tif
names : S2B_20171015T104525Z_31UFS_TOC.B02_10M_V100
values : -32768, 32767 (min, max)
When I stack this layer in a raster brick, I get these min-max values:
class : RasterLayer
band : 2 (of 11 bands)
dimensions : 10980, 10980, 120560400 (nrow, ncol, ncell)
resolution : 10, 10 (x, y)
extent : 6e+05, 709800, 5590200, 5700000 (xmin, xmax, ymin, ymax)
coord. ref. : +proj=utm +zone=31 +datum=WGS84 +units=m +no_defs +ellps=WGS84 +towgs84=0,0,0
data source : /tmp/Rtmp882dZS/raster/r_tmp_2017-11-10_172819_11532_86514.grd
names : S2B_20171015T104525Z_31UFS_TOC.B02_10M_V100
values : -1129, 9994 (min, max)
However, if I load the same raster in QGIS, the min value is 228 and the max value is 907 (I calculated these values with the options "Extent: Full" and "Accuracy: Actual (slower)").
So, where do these differences come from? I do not understand exactly what R and QGIS are doing...

For the first object, the min and max values are not known as the file does not provide them (or not correctly). With RasterLayer r you can do
r <- setMinMax(r)
to see what they are. If they do not become the same as for the second layer you show, then you probably mixed something up. After clarifying these things, it might be useful to compare with QGIS. For that, you would probably need to provide an example file.

In the end, I found out what the difference is about!
When asking R, I get the real min/max values. QGIS, instead, estimates the min/max values with a cumulative count cut by default. When I set "Load min/max values" (in the raster Properties window) to "Min/Max", I got the same values that R showed.
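As a quick check, the actual range can also be computed in R (a minimal sketch; the file name below is just a placeholder for the band shown above):
library(raster)
r <- raster("B02_10M.tif")   # placeholder for the .tif shown in the question
r <- setMinMax(r)            # force computation of the actual cell min/max
minValue(r); maxValue(r)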

Related

Omega-k algorithm simulation in MATLAB

I want to simulate the omega-k algorithm to focus synthetic aperture radar raw data, based on Cumming's book, "Digital Processing of Synthetic Aperture Radar Data". First I simulated point-target raw data in stripmap mode and did everything mentioned in the book, but my target does not focus. To make sure my raw data is generated correctly, I focused it with the conventional RDA algorithm, and the point target focused at the true position, which means my raw-data simulation routine is OK.
Here is my MATLAB code for the omega-k algorithm:
%% __________________________________________________________________________
fr  = linspace(-fs/2, fs/2, nfftr);     % range frequency axis
faz = linspace(-PRF/2, PRF/2, nffta);   % azimuth frequency axis
fr_prime  = sqrt((f0+fr).^2 - (c*faz'/(2*vp)).^2) - f0;
Rref      = rs(ceil(Ns/2));             % reference range (middle range cell)
theta_ref = 4*pi*Rref/c*(fr_prime+f0) + pi*fr.^2/kr;
% 2D FFT
S_raw = fftshift(fft2(s_raw, nffta, nfftr));
% RFM (reference function multiply / bulk compression)
S_BC = S_raw .* exp(1j*theta_ref);
% Stolt interpolation, one azimuth frequency line at a time
S_int = zeros(Na, nfftr);
for idx = 1:Na
    S_int(idx,:) = interp1(fr_prime(idx,:)+f0, S_BC(idx,:), fr+f0, 'pchip');
end
S_c = S_int .* exp(-1j*4*pi*fr*Rref/c);
s_c = ifft2(S_c, Na, Nr);
%% __________________________________________________________________________
in this code:
f0 : center frequency
kr : Chirp Rate in Range
fs : Sampling frequency in range
vp : platform velocity
rs : range array (from near range to far range)
Rref : Reference range (here I take it as the middle range cell)
Ns : number of range cells
Na : number of samples in Azimuth
s_c : Focused Image
Three targets are positioned at [10, Ns/2, Ns-10] in range and Na/2 in azimuth.
Here are my results:
Data after bulk compression in the time domain
Data after Stolt interpolation in the time domain
I examined several interpolation methods like sinc interpolation, linear interpolation, pchip and others, but none of them worked for me.
I would appreciate anyone who could help me and tell me what my mistake is.
Thank you.
In the accurate version of omega-k, Cumming does not ask you to multiply by a matched filter again after the Stolt interpolation. The focusing should be complete with just a 2D IFFT.
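A minimal sketch of that change, reusing the variable names from the question's code (everything up to and including the Stolt interpolation stays the same):
% drop the extra phase multiplication; go straight to the 2D inverse FFT
s_c = ifft2(S_int, Na, Nr);   % focused image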

Best fitting rectangle with a variable number of small rectangles keeping aspect ratio

I need to find the optimal placement of a given number N of child rectangles while keeping the aspect ratio of the father rectangle.
Use case is the following:
- the father rectangle is a big picture, let's say 4000x3000 pixels (this one can be rescaled).
- child rectangles are 296x128 pixels (e-ink displays of users)
The objective is to show the big picture across the current number of displays (this number can vary from 1 to 100).
This is an example:
It can happen that the number of small rectangles will not fit the big rectangle's aspect ratio, for example when the number of small rectangles is odd. In this case I can allow a small number (max 5) of spare rectangles to be added in order to complete the big rectangle.
This seems to be a valid approach (Python + OpenCV):
import cv2
import imutils
def split_image(image, boards_no=25, boards_shape=(128, 296), additional=5):
    # find image aspect ratio
    aspect_ratio = image.shape[1] / image.shape[0]
    print("\nIMAGE INFO:", image.shape, aspect_ratio)
    # find all valid combinations of a, b that use the available boards (plus spares)
    valid_props = [(a, b)
                   for a in range(boards_no + additional + 1)
                   for b in range(boards_no + additional + 1)
                   if a * b in range(boards_no, boards_no + additional)]
    print("\nVALID COMBINATIONS", valid_props)
    # find the aspect ratio of every combination (horizontal and vertical orientation)
    aspect_ratio_all = [
        {
            'board_x': a,
            'board_y': b,
            'aspect_ratio': (a * boards_shape[1]) / (b * boards_shape[0]),
            'shape': (b * boards_shape[0], a * boards_shape[1]),
            'type': 'h'
        } for (a, b) in valid_props]
    aspect_ratio_all += [
        {
            'board_x': a,
            'board_y': b,
            'aspect_ratio': (a * boards_shape[0]) / (b * boards_shape[1]),
            'shape': (b * boards_shape[1], a * boards_shape[0]),
            'type': 'v'
        } for (a, b) in valid_props]
    # keep the combination whose aspect ratio is closest to the image's
    min_ratio_diff = min([abs(aspect_ratio - x['aspect_ratio']) for x in aspect_ratio_all])
    best_ratio = [x for x in aspect_ratio_all
                  if abs(aspect_ratio - x['aspect_ratio']) == min_ratio_diff][0]
    print("\nMOST SIMILAR ASPECT RATIO:", best_ratio)
    # resize the image, maximizing height or width
    resized_img = imutils.resize(image, height=best_ratio['shape'][0])
    border_width = int((best_ratio['shape'][1] - resized_img.shape[1]) / 2)
    border_height = 0
    if resized_img.shape[1] > best_ratio['shape'][1]:
        resized_img = imutils.resize(image, width=best_ratio['shape'][1])
        border_height = int((best_ratio['shape'][0] - resized_img.shape[0]) / 2)
        border_width = 0
    print("RESIZED SHAPE:", resized_img.shape, "BORDERS (H, W):", (border_height, border_width))
    # fill the border with black
    resized_img = cv2.copyMakeBorder(
        resized_img,
        top=border_height,
        bottom=border_height,
        left=border_width,
        right=border_width,
        borderType=cv2.BORDER_CONSTANT,
        value=[0, 0, 0]
    )
    # split into tiles
    M = resized_img.shape[0] // best_ratio['board_y']
    N = resized_img.shape[1] // best_ratio['board_x']
    return [resized_img[x:x + M, y:y + N]
            for x in range(0, resized_img.shape[0], M)
            for y in range(0, resized_img.shape[1], N)]
image = cv2.imread('image.jpeg')
tiles = split_image(image)
Our solutions will always be rectangles into which we fit the biggest picture that we can while keeping the aspect ratio correct. The question is how we grow them.
In your example a single display is 296 x 128 (which I assume is width and height). Our image scaled to 1 display is 170.7 x 128. (You can take out fractional pixels in your scaling.)
The rule is that, at every step, whichever direction is filled gets more displays so we can expand the picture. In the single-display solution we therefore go from a 1x1 rectangle to a 1x2 one and we now have 296 x 256. Our scaled image is now 296 x 222.
Our next solution will be a 2x2 grid. This gives us 592 x 256 and our scaled image is 341.3 x 256.
Next we get a 2x3 grid. This gives us 592 x 384 and our scaled image is now 512 x 384.
Since we are still maxing out the second dimension, we next go to 2x4. This gives us 592 x 512 and our scaled image is 592 x 444. And so on.
For your problem it will not take long to run through all of the sizes up to however many displays you have, and you just take the biggest rectangle that you can make from the list.
Important special case: if the display grid and the image have the same aspect ratio, you have to add to both dimensions, which gives you 1 x 1, 2 x 2, 3 x 3 and so on through the squares.
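Here is a minimal Python sketch of this growth rule (illustrative only; grow_grid, DISPLAY_W and DISPLAY_H are made-up names, and spare displays are not modelled):
DISPLAY_W, DISPLAY_H = 296, 128

def grow_grid(image_w, image_h, max_displays):
    # start from a single display and repeatedly add a row or column of displays
    # along whichever dimension the scaled picture currently fills
    cols, rows = 1, 1
    best = (cols, rows)
    while True:
        wall_w, wall_h = cols * DISPLAY_W, rows * DISPLAY_H
        scale = min(wall_w / image_w, wall_h / image_h)   # keep aspect ratio
        width_filled = image_w * scale >= wall_w - 1e-6
        height_filled = image_h * scale >= wall_h - 1e-6
        next_cols = cols + (1 if width_filled else 0)
        next_rows = rows + (1 if height_filled else 0)
        if next_cols * next_rows > max_displays:
            return best
        cols, rows = next_cols, next_rows
        best = (cols, rows)

print(grow_grid(4000, 3000, 25))   # a 4000 x 3000 picture on at most 25 displays -> (4, 6)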

Calculate 3D distance based on change in intensity

I have three sections (top, mid, bot) of grayscale images (3D). In each section, I have a point with coordinates (x,y) and intensity values [0-255]. The distance between each section is 20 pixels.
I created an illustration to show how those images were generated using a microscope:
Illustration (side view): the red line is the object of interest. The blue stars represent the dots which are visible in the top, mid, and bot sections. The (x,y) coordinates of these dots are known. The length of the object remains the same, but it can rotate in space and go 'out of focus' (the illustration shows a rotating line at time point 5). At time point 1, the red line is resting (in the 2D image: two dots separated by a distance equal to the length of the object).
I want to estimate the x,y,z-coordinates of the end points (represented as stars) by using the changes in intensity, the knowledge about the length of the object, and the information in the sections I have. Any help would be appreciated.
Here is an example of images:
Bot section
Mid section
Top section
My 3D PSF data:
https://drive.google.com/file/d/1qoyhWtLDD2fUy2zThYUgkYM3vMXxNh64/view?usp=sharing
Attempt so far:
I guess the correct approach would be to record three images with slightly different z-coordinates for your bot and your top frame, then do a 3D-deconvolution (using Richardson-Lucy or whatever algorithm).
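A rough MATLAB sketch of that idea (assumes the Image Processing Toolbox; imgstack and psf3d are hypothetical 3-D arrays holding the recorded z-slices and the measured PSF):
% Richardson-Lucy 3-D deconvolution (sketch only; variables are placeholders)
restored = deconvlucy(imgstack, psf3d, 20);   % 20 iterations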
However, a simpler approach is the one I have outlined in my comment. If you use the data for a publication, I strongly recommend emphasizing that this is just an estimation and including the steps of how you did it.
I'd suggest the following procedure:
Since I do not have your PSF data, I fake some by approximating the PSF as a 3D Gaussian. Of course, this is a strong simplification, but you should be able to get the idea behind it.
First, fit a Gaussian to the PSF along z:
[xg, yg, zg] = meshgrid(-32:32, -32:32, -32:32);
rg = sqrt(xg.^2+yg.^2);
psf = exp(-(rg/8).^2) .* exp(-(zg/16).^2);
% add some noise to make it a bit more realistic
psf = psf + randn(size(psf)) * 0.05;
% view psf:
%
subplot(1,3,1);
s = slice(xg,yg,zg, psf, 0,0,[]);
title('faked PSF');
for i = 1:2
    s(i).EdgeColor = 'none';
end
% data along z through PSF's center
z = reshape(psf(33,33,:),[65,1]);
subplot(1,3,2);
plot(-32:32, z);
title('PSF along z');
% Fit the data
% Generate a function for a Gaussian distribution plus some background
gauss_d = @(x0, sigma, bg, x) exp(-1*((x-x0)/(sigma)).^2) + bg;
ft = fit((-32:32)', z, gauss_d, ...
    'StartPoint', [0 16 0] ... % you may find proper start points by looking at your data
    );
subplot(1,3,3);
plot(-32:32, z, '.');
hold on;
plot(-32:.1:32, feval(ft, -32:.1:32), 'r-');
title('fit to z-profile');
The function that relates the intensity I to the z-coordinate is
gauss_d = @(x0, sigma, bg, x) exp(-1*((x-x0)/(sigma)).^2) + bg;
You can rearrange this formula for x. Because of the square root, there are two possible solutions:
% now make a function that returns the z-coordinate from the intensity
% value:
zfromI  = @(I) ft.sigma * sqrt(-1*log(I-ft.bg)) + ft.x0;
zfromI2 = @(I) ft.sigma * -sqrt(-1*log(I-ft.bg)) + ft.x0;
Note that the PSF I have faked is normalized to have one as its maximum value. If your PSF data is not normalized, you can divide the data by its maximum.
Now you can use zfromI or zfromI2 to get the z-coordinate for your intensity. Again, I should be normalized, that is, it should be the ratio of the intensity to the intensity of your reference spot:
zfromI(.7)
ans =
9.5469
>> zfromI2(.7)
ans =
-9.4644
Note that due to the random noise I have added, your results might look slightly different.

How to add a Gaussian shaped object to an image?

I am interested in adding a single Gaussian-shaped object to an existing image, something like in the attached image. The base image that I would like to add the object to is 8-bit unsigned with values ranging from 0-255. The bright object in the attached image is actually a tree represented by normalized difference vegetation index (NDVI) data. The attached script is what I have so far. How can I add a Gaussian-shaped object (i.e. a tree) with values ranging from 110-155 to an existing NDVI image?
Sample data available here which can be used with this script to calculate NDVI
file = 'F:\path\to\fourband\image.tif';
[I R] = geotiffread(file);
outputdir = 'F:\path\to\output\directory\'
%% Make NDVI calculations
NIR = im2single(I(:,:,4));
red = im2single(I(:,:,1));
ndvi = (NIR - red) ./ (NIR + red);
ndvi = double(ndvi);
%% Stretch NDVI to 0-255 and convert to 8-bit unsigned integer
ndvi = floor((ndvi + 1) * 128); % [-1 1] -> [0 256]
ndvi(ndvi < 0) = 0; % not really necessary, just in case & for symmetry
ndvi(ndvi > 255) = 255; % in case the original value was exactly 1
ndvi = uint8(ndvi); % change data type from double to uint8
%% Need to add a random tree in the image here
%% Write to geotiff
tiffdata = geotiffinfo(file);
outfilename = [outputdir 'ndvi_' '.tif'];
geotiffwrite(outfilename, ndvi, R, 'GeoKeyDirectoryTag', tiffdata.GeoTIFFTags.GeoKeyDirectoryTag)
Your post is asking how to do three things:
How do we generate a Gaussian shaped object?
How can we do this so that the values range between 110 - 155?
How do we place this in our image?
Let's answer each one separately, where the order of each question builds on the knowledge from the previous questions.
How do we generate a Gaussian shaped object?
You can use fspecial from the Image Processing Toolbox to generate a Gaussian for you:
mask = fspecial('gaussian', hsize, sigma);
hsize specifies the size of your Gaussian. You have not specified it here in your question, so I'm assuming you will want to play around with this yourself. This will produce a hsize x hsize Gaussian matrix. sigma is the standard deviation of your Gaussian distribution. Again, you have not specified what this is. sigma and hsize go hand in hand. Referring to my previous post on how to determine sigma, it is generally a good rule to set the standard deviation of your mask according to the 3-sigma rule. As such, once you set hsize, you can calculate sigma as:
sigma = (hsize-1) / 6;
As such, figure out what hsize is, then calculate your sigma. Afterwards, invoke fspecial as I did above. It's generally a good idea to make hsize an odd integer. The reason is that when we finally place this in your image, the syntax for doing so will allow your mask to be placed symmetrically. I'll talk about this when we get to the last question.
How can we do this so that the values range between 110 - 155?
We can do this by adjusting the values within mask so that the minimum is 110 while the maximum is 155. This can be done by:
%// Adjust so that values are between 0 and 1
maskAdjust = (mask - min(mask(:))) / (max(mask(:)) - min(mask(:)));
%//Scale by 45 so the range goes between 0 and 45
%//Cast to uint8 to make this compatible for your image
maskAdjust = uint8(45*maskAdjust);
%// Add 110 to every value so the range goes between 110 and 155
maskAdjust = maskAdjust + 110;
In general, if you want to adjust the values within your Gaussian mask so that it goes from [a,b], you would normalize between 0 and 1 first, then do:
maskAdjust = uint8((b-a)*maskAdjust) + a;
You'll notice that we cast this mask to uint8. The reason we do this is to make the mask compatible to be placed in your image.
How do we place this in our image?
All you have to do is figure out the row and column you would like the centre of the Gaussian mask to be placed. Let's assume these variables are stored in row and col. As such, assuming you want to place this in ndvi, all you have to do is the following:
hsizeHalf = floor(hsize/2); %// hsize being odd is important
%// Place Gaussian shape in our image
ndvi(row - hsizeHalf : row + hsizeHalf, col - hsizeHalf : col + hsizeHalf) = maskAdjust;
The reason why hsize should be odd is to allow a symmetric placement of the shape in the image. For example, if the mask size is 5 x 5, then the above syntax for ndvi simplifies to:
ndvi(row-2:row+2, col-2:col+2) = maskAdjust;
From the centre of the mask, it stretches 2 rows above and 2 rows below. The columns stretch from 2 columns to the left to 2 columns to the right. If the mask size was even, then we would have an ambiguous choice on how we should place the mask. If the mask size was 4 x 4 as an example, should we choose the second row, or third row as the centre axis? As such, to simplify things, make sure that the size of your mask is odd, or mod(hsize,2) == 1.
This should hopefully and adequately answer your questions. Good luck!

Calculate average gray value of a sub-image specified by row and column indexing in MATLAB

I have an image and I want to calculate the average gray value of different patches of the image.
I started by defining a patch using row and column indices. This is how I specify where my sub-image is located.
for x = 10 : 1 : 74
    for y = 30 : 1 : 94
        .........
    end
end
Now how do I calculate the average gray value of this sub-image? I know that all this means is finding mean(mean(image)). But since I have only the row and column positions, how can I apply the same concept?
Try this:
mean(mean(im(10:74,30:94)))
Assuming your image is some MxN matrix, why don't you create a submatrix and calculate the mean over that? For example:
subimage = image(10:74, 30:94);
mean_grey = mean(mean(subimage))
An alternative solution: convolve the image (I) with a flat kernel (h) (size of your 'sub-image') and take the value of the result at any index.
h = ones(a,b); % sub-image is size a x b
h = h / sum(h(:));
J = imfilter(I, h);
% J(x,y) will give you the average of a sub-image centered on (x,y)
Edge cases may cause strange behavior (sub-image out of image range), but you can supply a third argument to imfilter to address this.
