I am extracting the word embeddings vector from a word2vec model using model.wv. What is the range of values for each element in this vector?
import gensim
word2vec_model = gensim.models.Word2Vec.load("testModel")
word2vec_model.wv["increase"] #What is range of values for each vector element?
Can't seem to find this information in the documentation.
Every dimension of the vector is a 32-bit floating-point value.
There's no essential or enforced limit other than that, though the training process is such that individual dimensions tend not to be "very large" – often staying in the range between -1.0 and 1.0.
It's common (but not required or beneficial for all applications) to normalize word-vectors to have a magnitude of 1.0 before comparing them to other similarly-normalized word-vectors.
You can request such a unit-normalized version of a word-vector with the word_vec() method's use_norm parameter:
model.wv.word_vec(word, use_norm=True)
In such a unit-normed vector, no single dimension will be outside the range of -1.0 to 1.0.
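If you want to check the actual range in your own model, or reproduce the normalization by hand, a minimal NumPy sketch (reusing the word2vec_model loaded in the question) would be:
import numpy as np
vec = word2vec_model.wv["increase"]       # raw (unnormalized) vector, dtype float32
print(vec.min(), vec.max())               # raw components, typically (but not necessarily) within -1.0 to 1.0
unit = vec / np.linalg.norm(vec)          # manual unit-normalization, equivalent to word_vec(word, use_norm=True)
print(unit.min(), unit.max())             # every component of a unit-length vector lies within [-1.0, 1.0]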
I have searched a lot for the threshold values of the methods mentioned below.
methods = ['cv2.TM_CCOEFF', 'cv2.TM_CCOEFF_NORMED', 'cv2.TM_CCORR',
           'cv2.TM_CCORR_NORMED', 'cv2.TM_SQDIFF', 'cv2.TM_SQDIFF_NORMED']
I also tried to figure them out myself, but I could only find thresholds for the three methods whose maximum value is 1.0. The other methods' values were in the range of 10^5 and above. I would like to know the bounds of these methods.
Can somebody point me in the right direction? My goal is to loop through all the methods for template matching and pick the best outcome. I went through the documentation and source code, but had no luck.
These are the values I got; I understand that the *_NORMED methods produce values in the 0-1 range.
cv2.TM_CCOEFF -- 25349100.0
cv2.TM_CCOEFF_NORMED -- 0.31208357214927673
cv2.TM_CCORR -- 616707328.0
cv2.TM_CCORR_NORMED -- 0.9031367897987366
cv2.TM_SQDIFF -- 405656000.0
cv2.TM_SQDIFF_NORMED -- 0.737377941608429
As described in the OpenCV documentation, the matchTemplate result is a sum over pixels (of differences or products, depending on the method), so for the non-normalized methods the thresholds vary with the size of the template.
You can look at the formula for each method and calculate the thresholds for your template type, keeping in mind that the maximum difference between pixels is 255 for a CV_8UC1 image.
So let's say you have two grayscale images and the smaller one is 10x10.
In that case, for TM_SQDIFF the minimum distance would be 10 x 10 x 0^2 = 0 (the images are identical) and the maximum would be 10 x 10 x 255^2 = 6502500 (one image is completely black and the other completely white), which gives the bounds [0, 6502500].
Of course, it is possible to calculate the same for generic template dimensions A x B.
For TM_CCORR it would be A x B x max(T(x',y') * I(x+x',y+y')) = 65025 * A * B.
You can go on and calculate this for the remaining methods; just remember that if your images have a type other than CV_8UC (such as 32FC or 32SC), you need to replace 255 with the corresponding maximum value (max(float), max(int32)).
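Following that reasoning, one way (not the only one) to compare the non-normalized methods on the same footing as the *_NORMED ones is to divide each raw score by its theoretical maximum for your template size. A rough Python/OpenCV sketch, where img and template are hypothetical names for grayscale images you have already loaded:
import cv2
# img and template: assumed preloaded grayscale (CV_8UC1) images
h, w = template.shape[:2]
sqdiff_max = h * w * 255 ** 2                            # worst case for TM_SQDIFF: all-black vs. all-white
result = cv2.matchTemplate(img, template, cv2.TM_SQDIFF)
scaled = result / sqdiff_max                             # roughly comparable to the *_NORMED scores; 0 = perfect match
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(scaled)
print(min_val, min_loc)                                  # for the SQDIFF family the best match is the minimum
The same idea carries over to TM_CCORR with the 65025 * A * B bound derived above.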
I have two matrices, one containing 3D coordinates that are nominal positions per a CAD model and the other containing 3D coordinates of actual measured positions using a CMM. Every nominal point has a corresponding measurement, or in other words the two matrices are of equal length and width.
I'm not sure what the best way is to fit the measured points to the nominal points. I need a way of calculating the translation and rotation to apply to all of the measured points that produces the minimum distance between each nominal/measured pair of points while not exceeding allowed tolerances on maximum distance at any other point. This is similar to Registration of point clouds, but different in that each pair of nominal/measured points has a unique tolerance/limit on how far apart they are allowed to be; that limit is higher for some pairs and lower for others.
I'm programming in .NET and have looked into Point Cloud Library (PCL), OpenCV, Excel, and basic matrix operations as possible approaches.
This is a sample of the data
X Nom Y Nom Z Nom X Meas Y Meas Z Meas Upper Tol Lower Tol
118.81 2.24 -14.14 118.68 2.24 -14.14 1.00 -0.50
118.72 1.71 -17.19 118.52 1.70 -17.16 1.00 -0.50
115.36 1.53 -24.19 115.14 1.52 -23.98 0.50 -0.50
108.73 1.20 -27.75 108.66 1.20 -27.41 0.20 -0.20
Below is the type of matrix I need to calculate in order to best fit the measured points to the nominal points; I will multiply the measured point matrix by it to fit it to the nominal point matrix.
Transformation
0.999897324 -0.000587540 0.014317661
0.000632725 0.999994834 -0.003151567
-0.014315736 0.003160302 0.999892530
-0.000990993 0.001672040 0.001672040
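For illustration, a matrix of that shape would be applied to row-vector points augmented with a trailing 1, so that the top three rows act as the rotation and the bottom row as the translation (a guess at the intended convention; sketched in NumPy for brevity):
import numpy as np
T = np.array([[ 0.999897324, -0.000587540,  0.014317661],
              [ 0.000632725,  0.999994834, -0.003151567],
              [-0.014315736,  0.003160302,  0.999892530],
              [-0.000990993,  0.001672040,  0.001672040]])
measured = np.array([[118.68, 2.24, -14.14],      # sample measured rows from the table above
                     [118.52, 1.70, -17.16]])
homogeneous = np.hstack([measured, np.ones((len(measured), 1))])   # append a 1 to each point
fitted = homogeneous @ T                                           # rotated and translated points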
This is indeed a job for a rigid registration algorithm.
In order to handle your tolerances you have a couple of options:
Simple option: Run rigid registration, check afterwards if result is within tolerances
Bit harder option: offset your points in the CAD model where you have imbalanced tolerances; the rest is the same as the previous option.
Hardest option: what you probably want to do is apply the offset as in the second option and also add a weight function based on the measured position and the set tolerance. This weight function should affect the energy function in such a way that the individual residual vectors count for more when the tolerance is small and for less when the tolerance is large.
So now about implementation: for options 1 and 2 your fastest way to a result would probably be:
Use the PCL C++ version in a Visual Studio 2010 environment. There is lots of information about installing PCL with VS2010 and getting it running. PCL also has a nice ICP registration tutorial that should get you going.
Use VTK for Python; it has an ICP algorithm:
Installing VTK for Python
http://www.vtk.org/Wiki/VTK/Examples/Python/IterativeClosestPoints
If you really want option 3, you can either:
Add the weight function to the PCL library source code and compile it yourself, or
Build the complete ICP algorithm yourself in .NET:
http://www.math.tau.ac.il/~dcor/Graphics/adv-slides/ICP.ppt
Use Math.NET Numerics sparse matrix/vector algebra and solvers to create your own optimizer
Implement the Levenberg-Marquardt or Gauss-Newton optimizer from:
Methods for Non-Linear Least Squares Problems, K. Madsen et al., IMM, 2004
Generate your own function vector and Jacobian matrix (with the weight function)
Have quite a bit of patience to get it all working together :)
Post the result on Stack Overflow for the others who are waiting for an ICP implementation in C#/.NET
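As a starting point for option 3: since every measured point already has its nominal counterpart, the core computation is a single weighted rigid fit of paired points (the Kabsch algorithm with weights). Below is a sketch in Python/NumPy rather than .NET; the 1/tolerance weighting is just one plausible choice, not something prescribed above.
import numpy as np
def weighted_rigid_fit(measured, nominal, tolerances):
    # measured, nominal: (N, 3) arrays of paired points; tolerances: (N,) per-point limits (e.g. the upper tolerance)
    w = 1.0 / np.asarray(tolerances)              # assumed scheme: tighter tolerance -> larger weight
    w = w / w.sum()
    mc = (w[:, None] * measured).sum(axis=0)      # weighted centroids
    nc = (w[:, None] * nominal).sum(axis=0)
    H = (w[:, None, None] * (measured - mc)[:, :, None] * (nominal - nc)[:, None, :]).sum(axis=0)
    U, S, Vt = np.linalg.svd(H)                   # weighted cross-covariance and its SVD (Kabsch)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = nc - R @ mc                               # translation that maps measured onto nominal
    return R, t
A full ICP loop (re-pairing closest points on every iteration) would only be needed if the correspondences were unknown; here they are given, so a single weighted fit followed by a tolerance check may be all that is required.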
Any recommendation on how to implement efficient integral functions, like SumX and SumY, in GLSL shaders?
SumX(u) = Integration with respect to x = I(u0,y) + I(u1,y) +... + I(uN,y); u=normalized x coordinate
SumY(v) = Integration with respect to y = I(x,v0) + I(x,v1) +... + I(x,vN); v=normalized y coordinate
For instance, the 5th pixel of the first line would be the sum of the first five pixels on that line, and the last pixel would be the sum of all previous pixels on the line, including the last pixel itself.
What you are asking for is called a prefix sum, or a summed area table (SAT) in the 2D case (naming it so you can find online resources more easily).
Summed area tables can be efficiently implemented on the GPU by decomposing the work into several parallel prefix-sum passes [1], [2].
The prefix sum can be accelerated by using local memory to store intermediate partial sums (see example in OpenCL or example in CUDA, the same can in principle be done in an OpenGL fragment shader as well with image load-store, or in a compute shader: OpenGL Super Bible example, similar example to be found in OpenGL Insights around page 280).
Note that you may quickly run into precision issues, as the sum can get quite large for the rightmost (bottommost) pixels. Integer or fp16 render targets will most likely fail due to overflow or insufficient precision; fp32 will work most of the time.
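As a CPU-side reference for what those GPU passes ultimately compute (useful for validating a shader implementation), a summed area table is just a cumulative sum along both axes; a short NumPy sketch with a placeholder image:
import numpy as np
img = np.random.rand(480, 640).astype(np.float32)    # placeholder input image
sum_x = img.cumsum(axis=1)                 # SumX from the question: running sum along each row
sat = img.cumsum(axis=1).cumsum(axis=0)    # summed area table: sum of everything above and to the left, inclusive
def rect_sum(sat, y0, x0, y1, x1):
    # Sum over the rectangle with inclusive corners (y0, x0) and (y1, x1), using four table lookups
    total = sat[y1, x1]
    if y0 > 0: total -= sat[y0 - 1, x1]
    if x0 > 0: total -= sat[y1, x0 - 1]
    if y0 > 0 and x0 > 0: total += sat[y0 - 1, x0 - 1]
    return total
The float32 dtype above mirrors the precision caveat: the sums grow toward the bottom-right corner, so single precision is usually the minimum that holds up.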
I have googled around but haven't found an answer that suits me for OpenGL.
I want to construct a sparse matrix with a single diagonal and around 9 off-diagonals. These diagonals aren't necessarily next to the main diagonal, and they wrap around. Each diagonal is an image in row-major format, i.e. a vector of size NxM.
The size of the matrix is (NxM)x(NxM)
My question is as follows:
After some messing around with the math I have arrived at the basic unit of my operation: a pixel-by-pixel multiplication of two images (WITHOUT clamping the result, i.e. it can be above 1 or below 0), storing the resulting image, and then adding a bunch of the resulting images together (again without clamping).
How can I multiply and add images on a pixel-by-pixel basis in OpenGL? Is it easier in 1.1 or 2.0? Will the use of textures clamp the results to between 0 and 1? Will this make full use of the GPU cores?
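For reference, the arithmetic to be reproduced on the GPU is simply the following (a CPU sketch in NumPy, with hypothetical image stacks A and B standing in for the diagonals):
import numpy as np
A = np.random.randn(9, 480, 640).astype(np.float32)   # hypothetical stack of 9 images, values not limited to [0, 1]
B = np.random.randn(9, 480, 640).astype(np.float32)   # hypothetical stack of 9 matching images
products = A * B                  # pixel-by-pixel multiplication of each image pair, no clamping
result = products.sum(axis=0)     # pixel-by-pixel sum of all the product images, again no clamping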
In order to store values outside the [0, 1] range you would have to use floating-point textures. There is no support for these in OpenGL ES 1.1, and in OpenGL ES 2.0 they are an optional extension (see the other SO question).
If your implementation supports it, you could then write a fragment program to do the required math.
In OpenGL ES 1.1 you could use the glTexEnv call to set up how the pixels from different texture units are combined. You could then use "modulate" or "add" to multiply/add the values. The result would be clamped to the [0, 1] range, though.
For a thumbnail engine I would like to develop an algorithm that takes x random thumbnails (crops, no resizing) from an image, analyzes them for contrast, and chooses the one with the highest contrast. I'm working with PHP and Imagick, but I would be glad for some general tips on how to compute the contrast of an image.
It seems that many things are easier to compute than contrast, for example counting colors, computing luminosity, etc.
What are your experiences with the analysis of picture material?
I'd do it this way (a small Python sketch of the idea):
import numpy as np
def contrast(image):
    # Luminance per pixel as the average of the channels (for RGB input), or the pixel itself for grayscale
    luminance = image.reshape(image.shape[0], image.shape[1], -1).mean(axis=2).astype(np.uint8)
    L = np.bincount(luminance.ravel(), minlength=256)   # histogram of luminance values
    C = 10                           # threshold of your choice, drops sparsely populated (noisy) bins
    L[L < C] = 0
    nonzero = np.nonzero(L)[0]       # indices of the first and last remaining populated bins
    return nonzero[-1] - nonzero[0]  # contrast = spread between darkest and brightest remaining values
In looking for the image "with the highest contrast," you will need to be very careful in how you define contrast for the image. In the simplest way, contrast is the difference between the lowest intensity and the highest intensity in the image. That is not going to be very useful in your case.
I suggest you use a histogram approach to describe the contrast of a given image and then compare the properties of the histograms to determine the image with the highest contrast as you define it. You could use a variety of well known containers to represent the histogram in code, or construct a class to meet your specific needs. (I am not implying that you need to create a histogram in the form of a chart – just a statistical representation of the intensity values.) You could use the variance of each histogram directly as a measure of contrast, or use the standard deviation if that is easier to work with.
The key really lies in how you define the contrast of the image. In general, I would define a high-contrast image as one in which pixels take on all, or nearly all, of the possible intensity values. And I would further add that, under this definition, the intensity values of a high-contrast image will tend to be distributed across the range of possible values in a uniform way.
Using this approach, a low contrast image would tend to have relatively few discrete intensity values and they would tend to be closely grouped together rather than uniformly distributed. (As a general rule, they will also tend to be grouped toward the center of the range.)
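To make the histogram-based definition concrete, here is a small sketch (in Python/NumPy rather than PHP/Imagick, purely for illustration) that scores each candidate crop by the standard deviation of its luminance values, as suggested above, and keeps the highest-scoring one:
import numpy as np
def contrast_score(crop):
    # Luminance per pixel as the channel average; standard deviation of the distribution as the contrast measure
    luminance = crop.reshape(crop.shape[0], crop.shape[1], -1).mean(axis=2)
    return luminance.std()
def best_crop(crops):
    # crops: an iterable of image arrays (the x random thumbnails); returns the one with the highest score
    return max(crops, key=contrast_score)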