Let's say I have a random number in the range 0.0 to 1.0, and I want to reduce its range to, let's say, 0.25 to 0.75 while preserving its proportions.
What would be the pseudo code for this?
Examples:
0.0 in the [0.0, 1.0] range would be 0.25 in the [0.25, 0.75] range
0.5 would still be 0.5
Thanks.
You have a random number r.
First you shrink the range: r = r * (0.75-0.25).
Now it's in the range [0, 0.5]. Then you shift the range: r = r + 0.25.
Now it's in the range [0.25, 0.75].
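A minimal Python sketch of the shrink-then-shift step above, generalized to any target range [lo, hi] (the function name is mine):

def rescale(r, lo=0.25, hi=0.75):
    # shrink [0, 1] to a width of (hi - lo), then shift up by lo
    return r * (hi - lo) + lo

print(rescale(0.0))   # 0.25
print(rescale(0.5))   # 0.5
print(rescale(1.0))   # 0.75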
Can Counting Sort sort decimal values? I just started learning Java recently and I'm unsure. Does anyone know whether it can or not?
Counting Sort only works because you know exactly how many elements there are in the array, and because each index in the counting array represents the integer of that value. The array [1, 2, 0, 2, 1] can be represented as [1, 2, 2] in the middle stage of the sort: there is one 0, two 1s, and two 2s.
This is not possible in the same way with decimals. If you could ensure a certain level of precision, I suppose you could add a slot for each potential value. For example, if all the decimals were rounded to the nearest tenth, you would need 10 slots for each whole number, plus one slot for 0: 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0
// the original array
[0.1, 1.1, 0.7, 1.4, 0.5, 0.7]
// after the counting step
// 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9
[  0   1   0   0   0   1   0   2   0   0
// 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9
   0   1   0   0   1   0   0   0   0   0  ]
// when expanded to the sorted array
[0.1, 0.5, 0.7, 0.7, 1.1, 1.4]
So for a sort with a max value of N, the size of the counting array K would be N * 10, i.e. K = 10N. This is obviously not a true counting sort, since each value cannot be mapped so easily to an index, making the counting step more complex.
So while it is theoretically possible and would have the same time complexity, it would not be a "true" counting sort, and it would take up far too much memory to be practical, especially when sorting large number ranges with high precision. It is also less flexible than other sorting algorithms. You would be better off using almost any other sorting algorithm.
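A minimal Python sketch of the fixed-precision variant described above (my own illustration; it assumes non-negative inputs rounded to a known number of decimal places). Each value is scaled by a power of 10 so it maps to an integer index, counted, then expanded back:

def counting_sort_decimals(values, decimals=1):
    scale = 10 ** decimals
    keys = [round(v * scale) for v in values]   # map each decimal to an int index
    counts = [0] * (max(keys) + 1)
    for k in keys:
        counts[k] += 1
    out = []
    for k, c in enumerate(counts):              # expand counts back into values
        out.extend([k / scale] * c)
    return out

print(counting_sort_decimals([0.1, 1.1, 0.7, 1.4, 0.5, 0.7]))
# [0.1, 0.5, 0.7, 0.7, 1.1, 1.4]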
Transition matrix for a Markov chain:
0.5 0.3 0.0 0.0 0.2
0.0 0.5 0.0 0.0 0.5
0.0 0.4 0.4 0.2 0.0
0.3 0.0 0.2 0.0 0.5
0.5 0.2 0.0 0.0 0.3
This is a transition matrix with states {1,2,3,4,5}. States {1,2,5} are recurrent and states {3,4} are transient. How can I (without using the fundamental matrix trick):
Compute the expected number of steps needed to first return to state 1, conditioned on starting in state 1
Compute the expected number of steps needed to first reach any of the states {1,2,5}, conditioned on starting in state 3.
If you don't want to use the fundamental matrix, you can do two things:
Create a function that simulates the Markov chain until the stopping condition is met and returns the number of steps. Average over a large number of runs to estimate the expectation (see the sketch after this list).
Introduce dummy absorbing states in your transition matrix and repeatedly calculate p = pP, where p is a row vector with a 1 at the index of the starting state and 0 elsewhere. With some bookkeeping you can extract the expected values you want.
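A minimal Python sketch of the first (simulation) approach, using the matrix from the question with states renumbered to 0-based indices:

import random

P = [[0.5, 0.3, 0.0, 0.0, 0.2],
     [0.0, 0.5, 0.0, 0.0, 0.5],
     [0.0, 0.4, 0.4, 0.2, 0.0],
     [0.3, 0.0, 0.2, 0.0, 0.5],
     [0.5, 0.2, 0.0, 0.0, 0.3]]

def steps_until(start, targets, P):
    # run the chain from `start` until it first enters `targets`
    state, steps = start, 0
    while True:
        state = random.choices(range(len(P)), weights=P[state])[0]
        steps += 1
        if state in targets:
            return steps

runs = 100_000
# expected return time to state 1 (index 0), starting in state 1
print(sum(steps_until(0, {0}, P) for _ in range(runs)) / runs)
# expected time to reach {1,2,5} (indices {0,1,4}), starting in state 3 (index 2)
print(sum(steps_until(2, {0, 1, 4}, P) for _ in range(runs)) / runs)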
I am using AffineTransforms to rotate a volume, and I am now confused by the sign of the rotation angle. In a right-handed system, when looking down an axis, say the Z axis, rotating the XY plane counter-clockwise should correspond to positive angles. I define a rotation matrix r = [0.0 -1.0 0.0; 1.0 0.0 0.0; 0.0 0.0 1.0], which rotates 90 degrees counter-clockwise about the Z axis. Indeed, r * [1 0 0]' gives [0 1 0]', which rotates the X axis to the Y axis.
Now I define a volume v.
3×3×3 Array{Float64,3}:
[:, :, 1] =
0.0 0.0 0.0
0.0 0.0 0.0
0.0 0.0 0.0
[:, :, 2] =
0.0 0.0 0.0
1.0 0.0 0.0
0.0 0.0 0.0
[:, :, 3] =
0.0 0.0 0.0
0.0 0.0 0.0
0.0 0.0 0.0
Then I define tfm = AffineTransform(r, vec([0 0 0])), which is the same as tfm = tformrotate(vec([0 0 1]), π/2).
Then I call transform(v, tfm); the rotation center is the center of the input array. I got:
3×3×3 Array{Float64,3}:
[:, :, 1] =
0.0 0.0 0.0
0.0 0.0 0.0
0.0 0.0 0.0
[:, :, 2] =
0.0 1.0 0.0
0.0 0.0 0.0
0.0 0.0 0.0
[:, :, 3] =
0.0 0.0 0.0
0.0 0.0 0.0
0.0 0.0 0.0
This is surprising to me, because the output is a 90-degree rotation about the Z axis, but clockwise. It seems to me that this is actually a -90 degree rotation. Could somebody point out what I did wrong? Thanks.
Admittedly, this confused me too. I had to read the help for transform and TransformedArray again.
First, the print order of arrays is a bit confusing: the first index is the row index within each printed slice, but it is the X axis, as the dimensions of v are x, y, z in that order.
In the original v, we have v[2,1,2] == 1.0. But by default transform uses the center of the array as the origin, so (2,1,2) is (0,-1,0) relative to the center, i.e. a unit vector in the negative Y direction.
The array returned by transform has values that are evaluated at (x,y,z) by taking the value of the original v at tfm((x,y,z)) (see ?TransformedArray).
Specifically, transform(v,tfm)[1,2,2] is v at tfm((-1,0,0)), which is v at (0,-1,0) (because rotating (-1,0,0) counter-clockwise gives (0,-1,0)), which is v[2,1,2] in the uncentered indices. And indeed v[2,1,2] == 1.0, matching the output in the question.
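A 2D NumPy analogue (my own sketch, not the AffineTransforms API) of this "pull" convention: each output point p takes the value of the original array at tfm(p), so a counter-clockwise matrix produces a clockwise-looking result:

import numpy as np

r = np.array([[0.0, -1.0],
              [1.0,  0.0]])      # 90 degrees counter-clockwise

v = np.zeros((3, 3))
v[1, 0] = 1.0                    # marker at (0, -1) relative to the center

center = np.array([1.0, 1.0])
out = np.zeros_like(v)
for i in range(3):
    for j in range(3):
        p = np.array([i, j]) - center    # centered output coordinate
        q = r @ p + center               # sample the *input* at tfm(p)
        qi, qj = int(round(q[0])), int(round(q[1]))
        if 0 <= qi < 3 and 0 <= qj < 3:
            out[i, j] = v[qi, qj]

print(out)   # the marker lands at index [0, 1]: a clockwise-looking turn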
Coordinate transformations are always tricky, and it is easy to confuse a transformation with its inverse.
Hope this helps.
I read about bicubic interpolation on Wikipedia; the theory sounds good, but I don't know how to apply it in practice.
I have a small example like this one:
Original Image Matrix
1 2
3 4
If I want to double the size of the image, then the new matrix is:
x x x x
x x x x
x x x x
x x x x
Now, the fun part is how to transfer the old values from the original matrix to the new matrix. I intend to do it like this:
1 x 2 x
x x x x
3 x 4 x
x x x x
Then I apply bicubic interpolation to it (for the moment, just forget about using 16 neighboring pixels; I don't have enough space to demonstrate such a large matrix here).
Now my questions are:
1. Am I transferring the data from the old matrix to the new one correctly? If not, what should it look like?
2. What should the values of the x entries in the new matrix be? To me, the following seems reasonable, because at least we have some values to calculate with instead of x placeholders:
1 1 2 2
1 1 2 2
3 3 4 4
3 3 4 4
3. Will all of the pixels in the new matrix be interpolated? The pixels at the boundary do not have enough neighboring pixels to perform the calculation.
Thank you very much.
Interpolation means estimating a value for points that don't physically exist. You need to start with a coordinate system, so let's just use two incrementing integers for X position and Y position.
0, 0 1, 0
0, 1 1, 1
Your output requires 4x4 pixels which should be spaced at 0.5 intervals instead of the 1.0 intervals of the input:
-0.25,-0.25 0.25,-0.25 0.75,-0.25 1.25,-0.25
-0.25, 0.25 0.25, 0.25 0.75, 0.25 1.25, 0.25
-0.25, 0.75 0.25, 0.75 0.75, 0.75 1.25, 0.75
-0.25, 1.25 0.25, 1.25 0.75, 1.25 1.25, 1.25
None of the coordinates in the output exist in the input, so they'll all need to be interpolated.
The offset of the first coordinate, -0.25, is chosen so that the first and last coordinates are equal distances from the edges of the input; it is calculated as the difference between the output and input intervals divided by 2. For example, if you wanted a 10x zoom, the interval is 0.1, the initial offset is (0.1-1)/2 = -0.45, and the points would be (-0.45, -0.35, -0.25, -0.15, ... 1.35, 1.45).
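A minimal Python sketch of this coordinate generation (a hypothetical helper of mine, assuming a uniform scale factor; input pixels sit at 0, 1, ..., in_size-1):

def sample_coords(in_size, scale):
    # output pixels are spaced at 1/scale and offset so the first and last
    # samples sit at equal distances from the input edges
    interval = 1.0 / scale
    offset = (interval - 1.0) / 2.0
    return [offset + k * interval for k in range(int(in_size * scale))]

print(sample_coords(2, 2))    # [-0.25, 0.25, 0.75, 1.25]
print(sample_coords(2, 10))   # [-0.45, -0.35, ..., 1.45]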
The Bicubic algorithm will require data for points outside of the original image. The easiest solution comes when you use a premultiplied alpha representation for your pixels, then you can just use (0,0,0,0) as the value for anything outside the image boundaries.
This should be an easy one.
I have a list of numbers. How do I scale the list's values between -1.0 and 1.0, so that min = -1.0 and max = 1.0?
Find the min and the max
then scale each number x to 2 * (x - min) / (max - min) - 1.
Just to check --
min scales to -1
and max scales to 1
If it is a long list, precomputing c = 2 / (max - min) and scaling with c * (x - min) - 1 is a good idea.
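A minimal Python sketch of this formula with the precomputed constant (the function name is mine):

def scale_to_signed_range(xs):
    lo, hi = min(xs), max(xs)
    c = 2.0 / (hi - lo)              # assumes hi != lo
    return [c * (x - lo) - 1.0 for x in xs]

print(scale_to_signed_range([3.0, 5.0, 7.0]))   # [-1.0, 0.0, 1.0]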
This is a signed normalization:
1 - get the minimum and maximum values in the list (MinVal, MaxVal)
2 - convert each number using this expression:
signedNormal = (((originalNumber - MinVal) / (MaxVal - MinVal)) * 2.0) - 1.0
I deliberately made this inefficient in order to be clear; more efficient would be:
double min = myList.GetMinimum();
double max = myList.GetMaximum();
double signedRangeInverse = 1.0 / (max - min);   // assumes max != min
for (int i = 0; i < myList.NumberOfItems(); i++)
    myList[i] = ((myList[i] - min) * signedRangeInverse * 2.0) - 1.0;
There is no point in recalculating the range each time, and no point in dividing by the range; multiplication is faster.
If you want 0 to still equal 0 in the final result:
Find the number with the largest magnitude. This will either map to 1 or -1.
Work out what you need to multiply it by to make it 1 or -1.
Multiply all the numbers in the collection by that factor.
E.g.
[ -5, -3, -1, 0, 2, 4]
Number with largest magnitude is -5. We can get that to equal -1 by multiplying by 0.2 (-1 / -5). (Beware of divide by 0s if your numbers are all 0s.)
So multiply all the elements by 0.2. This would give:
[-1, -0.6, -0.2, 0, 0.4, 0.8]
Although note that
[ -5, -5, -5 ] -> [ -1, -1, -1 ]
and
[ 5, 5, 5 ] -> [ 1, 1, 1 ]
and
[ 0, 0, 0 ] -> [ 0, 0, 0 ]
That may or may not be what you want. Thanks to @Hammerite for prompting me on that one with his very helpful comment :)
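A minimal Python sketch of this zero-preserving scaling (the function name is mine; dividing by the peak magnitude is the same as multiplying by the factor described above):

def scale_preserving_zero(xs):
    peak = max(abs(x) for x in xs)
    if peak == 0:                    # all zeros: nothing to scale
        return list(xs)
    return [x / peak for x in xs]

print(scale_preserving_zero([-5, -3, -1, 0, 2, 4]))
# [-1.0, -0.6, -0.2, 0.0, 0.4, 0.8]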