What is the dff parameter in the Transformer model? - transformer-model

In the paper, the base model's network configuration is described as follows:
d_model: embedding size
h: attention head count
d_k: key matrix dimension
d_v: value matrix dimension
dff: 2048?
What's the dff?

According to this paper, d_ff is the inner-layer size of the position-wise feed-forward network, i.e. the upward projection size: each block projects from d_model up to d_ff, applies a ReLU, and projects back down to d_model. In the base model, d_model = 512 and d_ff = 2048.
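A minimal NumPy sketch of that feed-forward block (the sizes are the base-model values from the paper; the weight initialization and names here are just illustrative):

```python
import numpy as np

d_model, d_ff = 512, 2048          # base Transformer sizes

# Position-wise feed-forward network: FFN(x) = max(0, x W1 + b1) W2 + b2
W1 = np.random.randn(d_model, d_ff) * 0.02   # up-projection to d_ff
b1 = np.zeros(d_ff)
W2 = np.random.randn(d_ff, d_model) * 0.02   # down-projection back to d_model
b2 = np.zeros(d_model)

def ffn(x):
    """x: (seq_len, d_model) -> (seq_len, d_model)"""
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

x = np.random.randn(10, d_model)   # 10 positions
print(ffn(x).shape)                # (10, 512)
```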

Related

Mathematical Image representation

Let I : V -> R^3 be an image defined on a set of pixels V.
This is a common line that I come across in most research papers related to image processing. I am confused about the R^3 factor here. Does it represent the RGB model or something else?
Context : www.eecs.berkeley.edu/~carreira/papers/cvpr2010_2.pdf
Yes: when the pixel values are tuples, the image is multi-channel (multispectral). And in the particular case of three components, red/green/blue channels are the most likely interpretation.
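As a concrete illustration (a toy NumPy array standing in for an 8-bit RGB image), each pixel v in V maps to a 3-vector in R^3:

```python
import numpy as np

# A toy 4x4 RGB image: the set of pixels V is the 4x4 grid, and I maps
# each pixel to a triple (R, G, B), i.e. a point in R^3.
img = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)

v = (2, 1)          # one pixel in V
print(img[v])       # e.g. [ 17 203  88] -- the value I(v)
```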

Image reconstruction after applying dwt and reconstruction to save computational time

I have applied dwt2 to a 2D image, applied source and channel coding to the LL sub-band, and transmitted it.
Now I have a question about the receiver side. Do I have to apply source and channel coding to HL, LH and HH and transmit them as well in order to reconstruct the image on the other end (using idwt)? Is it possible to reconstruct the LL sub-band without the rest? I am asking this so as to save computational time. What do you suggest?
The low-pass and high-pass parts of a wavelet-decomposed signal are independent (in fact orthogonal to each other). For a 2D image you will have four sub-images after one level of decomposition, and none of them depends on the others.
So if the low-frequency, blurred image (the LL part) is all you want to recover on the receiver side, you won't need the other parts.
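A small PyWavelets sketch of that idea (the pywt package, the Haar wavelet and the random image are my assumptions; the receiver simply fills the missing detail sub-bands with zeros before the inverse transform):

```python
import numpy as np
import pywt

img = np.random.rand(256, 256)      # stand-in for the transmitted 2D image

# One level of 2D DWT: LL (approximation) plus three detail sub-bands
LL, details = pywt.dwt2(img, 'haar')

# Receiver only has LL: zero out the detail sub-bands and run the inverse
# transform. The result is a blurred, low-pass version of the original.
zeros = tuple(np.zeros_like(d) for d in details)
recon = pywt.idwt2((LL, zeros), 'haar')

print(recon.shape)                  # (256, 256)
```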

What is the base of the logarithm in Log Transformation in image processing?

I am reading Gonzalez's image processing book, and as you know the log transformation is defined in the book as follows:
s = c*log(1+r)
Now I have one question:
Is the logarithm base 10, or is it a natural logarithm whose base is Napier's number e?
The log transform is used to enhance dark pixels: the dark pixel values in an image are expanded relative to the higher pixel values. So the base can be any number, depending on the desired visual effect, since changing the base only changes the constant factor.
I think log10 is often used because it is related to the decibel scale in signal processing, such as in the definition of the signal-to-noise ratio.
If this is log() from math.h, then it is the natural logarithm.
That is, its base is e, which is approximately 2.71828.
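A quick NumPy check of the point above: if c is chosen to normalize the output range, base 10 and base e give exactly the same curve, because the change-of-base factor is absorbed into c.

```python
import numpy as np

r = np.linspace(0, 255, 256)             # input intensities

c10 = 255 / np.log10(1 + 255)            # scale so the output spans 0..255
ce  = 255 / np.log(1 + 255)

s_base10  = c10 * np.log10(1 + r)        # s = c * log10(1 + r)
s_natural = ce  * np.log(1 + r)          # s = c * ln(1 + r)

print(np.allclose(s_base10, s_natural))  # True: only c changes with the base
```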

How to do mask operations in OpenCV - frequency masking

I would like to create a frequency mask in OpenCV but have no idea how to go about it. The frequency mask will be an ideal band-pass filter, so the image filtering will be done in the frequency domain. For this example, let's say frequencies between 100 Hz and 200 Hz will be passed.
Does anyone know how to do this?
Thanks in advance!
Perform a DFT of your image.
Make a filter matrix the size of your image, containing a ring with inner radius R1 = wavelength_min and outer radius R2 = wavelength_max. The ring is filled with ones and the remaining elements are zeros.
Multiply the DFT of your image by this matrix.
Perform an inverse DFT of the resulting image.
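A sketch of those four steps with OpenCV and NumPy (the random stand-in image and the radii R1/R2 in pixels are my placeholders; to express a 100-200 Hz band you would convert those frequencies into pixel radii using your sampling rate and image size):

```python
import cv2
import numpy as np

img = np.random.rand(256, 256).astype(np.float32)   # stand-in for your image
rows, cols = img.shape

# 1. DFT of the image, shifted so the zero frequency sits at the centre
dft = cv2.dft(img, flags=cv2.DFT_COMPLEX_OUTPUT)
dft_shift = np.fft.fftshift(dft, axes=(0, 1))

# 2. Ring-shaped (ideal band-pass) mask: ones between radii R1 and R2
R1, R2 = 20, 60                                      # band limits in pixels
cy, cx = rows // 2, cols // 2
y, x = np.ogrid[:rows, :cols]
radius = np.sqrt((y - cy) ** 2 + (x - cx) ** 2)
mask = ((radius >= R1) & (radius <= R2)).astype(np.float32)
mask = mask[:, :, np.newaxis]                        # broadcast over re/im

# 3. Multiply the spectrum by the mask
filtered = dft_shift * mask

# 4. Inverse DFT of the masked spectrum
back = np.fft.ifftshift(filtered, axes=(0, 1))
result = cv2.idft(back, flags=cv2.DFT_SCALE | cv2.DFT_REAL_OUTPUT)
print(result.shape)                                  # (256, 256)
```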

Different methods of weight changes in the backpropagation algorithm?

I've been using this tutorial as my reference for coding backpropagation. But today I found another tutorial that uses the same reference as mine, yet takes a different approach to changing the synapse weights. What is the difference between the two approaches?
EDIT
Thank you Renan for your quick response.
The main difference is:
The first method changes the synapse weights after calculating the delta in each neuron (node).
In the second method, the delta is calculated after the synapse weights have been changed, based on the delta from the layer above.
Note: I'll edit this explanation if it's still not clear. Thanks.
Equal calculations
Since the delta for the current layer depends on the weights between the layer above and the current layer, both methods are correct.
But it would not be correct to adjust the input weights of a layer before calculating the delta for the layer below it!
The equation
Here you can see that the mathematical equation for the derivative of the error with respect to the weights depends on the weights between this layer and the layer above (using sigmoid activations).
O_i = the layer below # ex: input
O_k = the current layer # ex: hidden layer
O_o = the layer above # ex: output layer
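Written out, my reconstruction of the equation referred to above (assuming sigmoid activations, with w_ik the weight from O_i to O_k and w_ko the weight from O_k to O_o):

$$
\delta_k = O_k\,(1 - O_k) \sum_{o} w_{ko}\,\delta_o ,
\qquad
\frac{\partial E}{\partial w_{ik}} = \delta_k\, O_i
$$

Because δ_k is computed from the current (not yet updated) weights w_ko, those weights must not be adjusted before the deltas of the layer below have been calculated.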
