How gamma and beta get updated during the backward pass in a batch normalization layer - backpropagation

Could you please let me know whether it is possible to check how the gamma and beta of the batch normalization layer get updated during the backpropagation process in PyTorch or TensorFlow? I mean, can I print their values in each epoch?
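Yes. In PyTorch, gamma and beta are exposed as the batch-norm layer's weight and bias parameters, so they can be printed after each epoch like any other parameter. A minimal sketch (the toy model, shapes, and hyperparameters are placeholders, not anything from the question):

import torch
import torch.nn as nn

# Toy model; in PyTorch the BatchNorm gamma is exposed as .weight
# and beta as .bias, both ordinary learnable parameters.
model = nn.Sequential(
    nn.Linear(10, 4),
    nn.BatchNorm1d(4),
    nn.ReLU(),
    nn.Linear(4, 1),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x = torch.randn(32, 10)   # dummy inputs/targets just for illustration
y = torch.randn(32, 1)

for epoch in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()       # gradients for gamma/beta are computed here
    optimizer.step()      # gamma/beta are updated here
    bn = model[1]
    print(f"epoch {epoch}: gamma={bn.weight.data}, beta={bn.bias.data}")
    # bn.weight.grad and bn.bias.grad hold the gradients used in the update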

Related

How to set spacing in Frangi filter for 3D stack of DICOM data

I am using the Frangi filter for hepatic vessel segmentation.
The problem is that the data are not isotropic [1,1,1].
I can do resampling; it creates more slices, but it loses pixels and is not as precise.
I found that maybe I can change it directly in the Frangi function (the skimage function), in the script where the Hessian is computed. But even then I don't know which values I should set as the spacing.
Right now I have some results, but they are not correct, because I am computing on an image that is squeezed in the z-direction.
Thank you for your help.
By my reading of the code, it is currently not possible to use a different scale (sigma) for the different axes; the same sigma is assumed for each axis. It should be possible to improve this in a future version. You can create a feature request at https://github.com/scikit-image/scikit-image/issues/new/. I suggest that you link back to this question when creating it.
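In the meantime, a common workaround is to resample the volume to isotropic voxels before calling frangi, so that a single sigma is valid on every axis. A rough sketch, assuming a scipy/skimage stack; the spacing values here are made up and should be taken from the DICOM header (PixelSpacing and SliceThickness):

import numpy as np
from scipy import ndimage
from skimage.filters import frangi

# Hypothetical voxel spacing (z, y, x) in mm; read yours from the DICOM header.
spacing = np.array([2.5, 0.8, 0.8])

volume = np.random.rand(40, 256, 256)  # stand-in for the loaded stack

# Upsample the coarse axes so every voxel has the same physical size.
zoom_factors = spacing / spacing.min()
iso = ndimage.zoom(volume, zoom_factors, order=1)

# Now one sigma (in voxels) means the same physical scale on all axes.
vessels = frangi(iso, sigmas=range(1, 5), black_ridges=False)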

Convolution effect changed from Gimp version 2.8.22 to 2.10.18

I recently had the task of applying several convolution filters at university. While playing around with Gimp version 2.10.18, I noticed that the filters from the exercises did not produce the expected outcome.
I found out that the convolution behavior changed from Gimp 2.8.22 to 2.10.18 and wanted to ask if someone knows how to get the old behavior back.
Let me explain what should happen and what actually happens in 2.10.18:
My sample picture looks like this (these are the values in all its pixel rows):
90 90 150 210 210
I now apply the filter
0 0 0
0 2 0
0 0 0
with divisor 1 and offset 0.
The maths behind it, and Gimp 2.8, tell me that the outcome should be composed of 180 values on the left side and 255 values on the right side.
I don't understand what Gimp 2.10 does, but the outcome just has brighter values (90->125, 150->205, 210->255) instead of the expected change.
Is this a bug or am I somehow missing something? Thanks!
A big difference between 2.10 (high bit depth) and previous versions (8-bit) is that 2.10 works in "linear light". In 2.8, the 0..255 pixel values are not a linear representation of the color/luminosity but are gamma-corrected (so that there are more values for dark tones(*)). Most Gimp 2.8 tools work (incorrectly) directly on these gamma-corrected values. In Gimp 2.10, if you are in 8-bit (and, in the general case, using a gamma-corrected representation, though this is mostly relevant in 8-bit), the pixel data is converted to 32-bit floating-point linear, removing the gamma compensation; the required transformation is applied; and the data is then converted back to 8-bit, with the gamma compensation reinstated.
June 2021 edit: in 2.10, if you put the image in a high-precision mode and use the values that are the mathematical equivalents of 90/255, 150/255 and 210/255, you get a result that is equivalent to 180/255, which confirms that in 2.10 convolution operates on "linear light".
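This is easy to check numerically. Here is a quick sanity check in Python (my own sketch, not Gimp's actual code path), using the standard sRGB transfer curves; it reproduces exactly the values reported in the question (90->125, 150->205, 210->255):

import numpy as np

def srgb_to_linear(c):
    # Standard sRGB decoding, c in [0, 1]
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    # Standard sRGB encoding, c in [0, 1]
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

row = np.array([90, 90, 150, 210, 210]) / 255.0

# A kernel with a single 2 at the center just doubles each pixel,
# but Gimp 2.10 doubles the linear-light value, not the encoded one.
doubled = np.clip(srgb_to_linear(row) * 2, 0, 1)
print(np.round(linear_to_srgb(doubled) * 255))  # [125. 125. 205. 255. 255.]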
So
If you want the old behavior, use the old Gimp. But you have to keep in mind that the old behavior was incorrect, even if some workflows could take advantage of it.
If you wanted to see what a spatial convolution matrix can do, then use Gimp 2.10 in "linear light".
(*) Try this: open two images in Gimp, fill one with a checkerboard pattern and one with grey (128,128,128). Step back until the checkerboard becomes a uniform gray. You'll notice that the plain gray image is darker... so (128,128,128) is not the middle of the luminosity range.

Feature Scaling, how to standardize

Well, I have an image and a vector; the vector consists of three values: positionX, positionY and intensity (0-255).
How do I standardize this? Should standardization be done per pixel or per column? Also, once standardized, if I for example take the mean of 5 pixels, how do I get the original values back (de-standardize)?
Can you please elaborate on what you are trying to achieve and which platform you are using? In case you are using R, you can try a package called imager. Let me know if it helps.
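For what it's worth, standardization is normally done per column (per feature), and it is exactly invertible as long as you keep the mean and standard deviation you used. A minimal NumPy sketch with made-up data:

import numpy as np

# Toy data: one row per pixel, columns = [positionX, positionY, intensity]
data = np.array([[10, 20, 200],
                 [12, 22, 180],
                 [15, 25, 150]], dtype=float)

# Standardize each column to zero mean and unit variance
mean = data.mean(axis=0)
std = data.std(axis=0)
z = (data - mean) / std

# De-standardize: the inverse is exact because mean and std were kept
restored = z * std + mean
assert np.allclose(restored, data)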

Matlab removal of skull from MRI images in .png format

I have been given data to create spatial priors for GM, WM and CSF for a project involving brain segmentation using the level set method.
I am currently stuck on how to remove the skull from axial, coronal and sagittal views of the brain. Essentially I want to extract the brain and just have the GM, WM and CSF intact.
I have attempted using thresholding and regionprops in MATLAB, but they always leave a piece of the skull and then remove some of the GM etc.
Ideally I would like to make it a built-in part of my final piece of code.
Thanks in advance for any advice or guidance on this.
The image below is similar to the data I have, except in my case the skull is not perfectly connected.
How would I extract the CSF, GM and WM from this? I am confused as to how to threshold the image for each type of tissue to create a sort of statistical map.
Without seeing your image, it is hard to tell whether and how this extraction method fits your case. But basically, I would expect bwconncomp/bwlabel and ismember to work.
One example:
I = imread('mri.jpg');
I = rgb2gray(I);
subplot(1,2,1)
imshow(I)
% Binarize with Otsu's threshold
BW = im2bw(I, graythresh(I));
% Label connected components; the outer skull ring is typically component 1
[L, n] = bwlabel(BW);
% Keep every component except the first, i.e. drop the skull
mask = ismember(L, 2:n);
I1 = I .* uint8(mask);
subplot(1,2,2)
imshow(I1)
Result (left is the original image, and right is the one after skull removal):

Is there a standard sequence for gamma, brightness & contrast corrections?

In my application I'm doing gamma, brightness and contrast corrections defined by the user. Now I was wondering whether there is a standard order of doing this or not.
It may sound trivial but I couldn't find anything regarding this. I guess it's possible to get the same result regardless of the order but I just want to be sure in order to make it as intuitive as possible.
The most predictable effects can be achieved by using a color matrix. Applying the transformations sequentially might result in loss, as the color overflows produced during each step are irrecoverable.
Alternatively, the color transformations can be done at a higher precision than the source data; then the order is not important, since gamma, contrast and brightness are luminance-only transformations.
Edit: to clarify, the order is not important within a single combined transformation; this is not to be confused with multiple transformations applied in sequence.
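To illustrate the overflow point, here is a toy example of my own (just a contrast adjustment around mid-gray, not any particular library's pipeline): an 8-bit-style pipeline clips after every step, so a value pushed past 255 can never be recovered, while a higher-precision pipeline that clips only once at the end round-trips exactly.

import numpy as np

def contrast(x, k):
    # Simple contrast adjustment around mid-gray
    return (x - 127.5) * k + 127.5

pixel = np.array([200.0])

# 8-bit-style pipeline: clip after every step
a = np.clip(contrast(pixel, 2.0), 0, 255)  # 272.5 clips to 255
a = np.clip(contrast(a, 0.5), 0, 255)      # -> 191.25, the original 200 is lost

# High-precision pipeline: keep the intermediate overflow, clip once at the end
b = np.clip(contrast(contrast(pixel, 2.0), 0.5), 0, 255)  # -> 200.0 exactly
print(a, b)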
