About BSplines and Beziers

I'm doing a small drawing-test program and I'm drawing curves.
While it has been pretty easy with Beziers, I'm stuck with splines. As with Beziers, when I click in the window I add a knot, but honestly I don't get how to draw my curve from there: how do I compute coefficient values like I do with Beziers (see below)?
///> Coefficient calc in algorithm
///> dT Sampled in [0,1]
///> bla bla bla
vdCoeff[0] = ( 1 - dT ) * ( 1 - dT ) * ( 1 - dT );
vdCoeff[1] = 3 * dT * ( 1 - dT ) * ( 1 - dT );
vdCoeff[2] = 3 * dT * dT * ( 1 - dT );
vdCoeff[3] = dT * dT * dT;
///> bla bla bla
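For a uniform cubic B-spline you can sample each curve segment the same way, just with a different set of basis functions. The sketch below is my own (not from the original post); it assumes a uniform knot vector and reuses the vdCoeff/dT naming from the Bezier snippet above:
// Uniform cubic B-spline basis for one segment, dT sampled in [0,1].
// Segment i is drawn as the weighted sum of control points P[i-1]..P[i+2].
void BSplineCoeffs( double dT, double vdCoeff[4] )
{
    const double dU = 1.0 - dT;
    vdCoeff[0] = dU * dU * dU / 6.0;
    vdCoeff[1] = ( 3.0 * dT * dT * dT - 6.0 * dT * dT + 4.0 ) / 6.0;
    vdCoeff[2] = ( -3.0 * dT * dT * dT + 3.0 * dT * dT + 3.0 * dT + 1.0 ) / 6.0;
    vdCoeff[3] = dT * dT * dT / 6.0;
}
Unlike the Bezier case, the resulting segment only approximates the clicked points (it does not pass through them) unless you clamp or repeat knots at the ends.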

Related

Calculate % based on a combination of count and stored values - Power BI

I have a basic table that looks like this:
DayNo. Customer AgentsInvolved CallID
0 AAA 1 1858
0 AAA 3 1859
2 AAA 1 1860
0 BBB 2 1862
0 CCC 1 1863
0 DDD 3 1864
9 DDD 1 1865
9 DDD 4 1866
I need to find the % of customers who contacted us only once and spoke to only 1 agent. So from the above example, out of 4 distinct customers only customer CCC falls into this category (1 call, 1 agent involved).
So the Desired result would be: 1/4 or 25%
How can I create a Power BI measure to do this calc?
Try this measure:
Desired Result =
VAR summarizetable =
    SUMMARIZECOLUMNS (
        'table'[Customer],
        "Calls", COUNT ( 'table'[CallID] ),
        "Agents", SUM ( 'table'[AgentsInvolved] ),
        "Day", SUM ( 'table'[DayNo.] )
    )
RETURN
    COUNTROWS (
        FILTER ( summarizetable, [Calls] = 1 && [Agents] = 1 && [Day] = 0 )
    )
        / COUNTROWS ( summarizetable )
The summarized table created on the fly in VAR summarizetable looks like this (values derived from the sample data above):
Customer Calls Agents Day
AAA 3 5 2
BBB 1 2 0
CCC 1 1 0
DDD 3 8 18
Here's another approach:
Measure =
SUMX(
    VALUES(Table2[Customer]),
    CALCULATE(
        IF(
            DISTINCTCOUNT(Table2[CallID]) = 1 &&
            SUM(Table2[AgentsInvolved]) = 1,
            1,
            0
        ),
        Table2[DayNo.] = 0
    )
) /
DISTINCTCOUNT(Table2[Customer])
If you want to apply the DayNo. = 0 filter to the denominator as well, replace the last line with
CALCULATE(DISTINCTCOUNT(Table2[Customer]), Table2[DayNo.] = 0)
Desired Result =
VAR summarizetable =
    SUMMARIZECOLUMNS (
        'table'[AgentsInvolved],
        'table'[DayNo.],
        'table'[Customer],
        "Calls", COUNT ( 'table'[CallID] )
    )
RETURN
    COUNTROWS (
        FILTER ( summarizetable, [Calls] = 1 && 'table'[AgentsInvolved] = 1 && 'table'[DayNo.] = 0 )
    ) / DISTINCTCOUNT ( 'table'[Customer] )

Levenberg-Marquardt Optimization for Intensity-based Image Alignment Fails in MATLAB

I am trying to implement the intensity-based image alignment algorithm from the paper Efficient, Robust, and Fast Global Motion Estimation for Video Coding in MATLAB. The problem is that the Levenberg-Marquardt (LM) optimization doesn't work properly: the energy function cannot approach even a local minimum because the update term computed from the LM approximation equation is very tiny. I've been searching for a week but still could not figure out the problem. Here is my implementation.
I have two gray images I and T of equal size (W x H) and would like to estimate a similarity transformation F (scale + rotation + translation) that aligns I onto T. This can be obtained by minimizing the following energy function:
where the sum runs over the overlapping pixels and the residual is the intensity difference I(F(x, y, a)) - T(x, y).
According to LM, I can derive the gradient term of E with respect to each parameter a_i (i = 1..4) as follows (from equation (5) in the paper), and the terms of the Hessian matrix of E (from equation (4) in the paper).
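Since the equation images are missing, here is a rough reconstruction of what the code below computes (my own notation, not necessarily the paper's exact formulas):
E(a) = \frac{1}{2N} \sum_{x,y} \left[ I\!\left(F(x, y; a)\right) - T(x, y) \right]^2
\beta_i = -\frac{\partial E}{\partial a_i} \approx -\frac{1}{N} \sum_{x,y} r(x, y)\, \frac{\partial r(x, y)}{\partial a_i}, \qquad \alpha_{ij} \approx \frac{1}{N} \sum_{x,y} \frac{\partial r(x, y)}{\partial a_i}\, \frac{\partial r(x, y)}{\partial a_j}
where r(x, y) = I(F(x, y; a)) - T(x, y), N is the number of overlapping pixels, and \partial r / \partial a_i is approximated using the gradient of T at the warped coordinates (the Gauss-Newton approximation used inside LM).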
From the above equations, I derived the MATLAB procedure below to compute the beta and alpha terms.
%========== 1. Compute gradient of T using Sobel filter
gx = [-1 -2 -1; 0 0 0; 1 2 1] / 4;
gy = [-1 0 1; -2 0 2; -1 0 1] / 4;
Tx = conv2( T, gx, 'same' );
Ty = conv2( T, gy, 'same' );
%========== 2. Warp I using F to compute diff_img = I(F(x, y, a)) - T(x, y). F was previously initialized to a 3x3 identity matrix
tform = affine2d( F );
I_warp = imwarp( I, tform, 'linear', 'OutputView', imref2d( size( T ) ) );
diff_img = I_warp - T;
% create a mask for the overlapping region between the two aligned images
mask = ones( size( T ) );
mask_warp = imwarp( mask, tform, 'nearest', 'OutputView', imref2d( size( T ) ) );
overlap_area = sum( sum( mask_warp ) );
diff_img = diff_img .* mask_warp;
error = sum( sum( diff_img .^2 ) ) / 2 / overlap_area;
% create x, y grids
[y, x] = ndgrid( 0 : h-1, 0 : w-1 );
x_warp = imwarp( x, tform, 'nearest', 'OutputView', imref2d( size(T) ) );
y_warp = imwarp( y, tform, 'nearest', 'OutputView', imref2d( size(T) ) );
%======== compute beta_i = - dE/da_i (i=1,4)
sx = Tx .* diff_img;
sy = Ty .* diff_img;
beta_1 = sum( sum( x_warp .* sx + y_warp .* sy ) );
beta_2 = sum( sum( y_warp .* sx - x_warp .* sy ) );
beta_3 = sum( sum( sx ) );
beta_4 = sum( sum( sy ) );
beta = -[beta_1; beta_2; beta_3; beta_4] / overlap_area;
%======= compute alpha_ij = (dE/da_i) * (dE/da_j) i,j = 1,4
Sxx = (Tx .^ 2) .* mask_warp;
Syy = (Ty .^ 2) .* mask_warp;
Sxy = (Tx.* Ty) .* mask_warp;
xx = x_warp .^2;
yy = y_warp .^2;
xy = x_warp .* y_warp;
alpha_11 = sum( sum( xx .* Sxx + 2 * xy .* Sxy + yy .* Syy ) );
alpha_12 = sum( sum( (yy - xx) .* Sxy + xy .* (Sxx - Syy) ) );
alpha_13 = sum( sum( x_warp .* Sxx + y_warp .* Sxy ) );
alpha_14 = sum( sum( x_warp .* Sxy + y_warp .* Syy ) );
alpha_22 = sum( sum( yy .* Sxx - 2 * xy .* Sxy + xx .* Syy ) );
alpha_23 = sum( sum( y_warp .* Sxx - x_warp .* Sxy ) );
alpha_24 = sum( sum( y_warp .* Sxy - x_warp .* Syy ) );
alpha_33 = sum( sum( Sxx ) );
alpha_34 = sum( sum( Sxy ) );
alpha_44 = sum( sum( Syy ) );
alpha = [alpha_11 alpha_12 alpha_13 alpha_14;
alpha_12 alpha_22 alpha_23 alpha_24;
alpha_13 alpha_23 alpha_33 alpha_34;
alpha_14 alpha_24 alpha_34 alpha_44] / overlap_area;
% lamda was previously initialized to 0.0001
for i = 1 : 4
alpha(i, i) = alpha(i, i) * (lamda + 1);
end
%======== Find the update term: delta_a = alpha^(-1) * beta
delta_a = pinv( alpha ) * beta
% Or we can solve for delta_a using SVD
%[U, S, V] = svd( alpha );
%inv_S = S;
%for ii = 1 : size(S, 1)
% if S(ii, ii)
% inv_S(ii, ii) = 1 / S(ii, ii);
% end
%end
%delta_a = V * inv_S * U' * beta;
%======== Update a_i and new error value
a = [ F(1, 1)-1;
F(2, 1);
F(3, 1);
F(3, 2)];
new_a = a + delta_a;
new_F = [new_a(1)+1 -new_a(2) 0;
new_a(2) new_a(1)+1 0;
new_a(3) new_a(4) 1];
tform = affine2d( new_F );
mask_warp = imwarp( mask, tform, 'nearest', 'OutputView', imref2d(size(T)));
new_I_warp = imwarp( I, tform, 'linear', 'OutputView', imref2d(size(T)));
new_diff_img = (new_I_warp - T) .* mask_warp;
new_error = sum( sum( new_diff_img .^2 ) ) / 2/ sum( sum( mask_warp ) );
if new_error > error
    lamda = lamda * 10;
elseif new_error < error
    lamda = lamda / 10;
end
The above process is repeated until the iteration count reaches its limit or the new_error value drops below a predetermined threshold. However, at each iteration the update terms are so small that the parameters barely move. For example,
delta a = 1.0e-04 * [0.0011 -0.0002 0.2186 0.2079]
Does anybody know how to fix it? Any help would be highly appreciated.

Understanding HSL to RGB color space conversion algorithm

I've read the HSL to RGB algorithm on Wikipedia. I understand it and can convert using it. However, I came upon another algorithm here, and the math is "explained" here.
The algorithm is:
//H, S and L input range = 0 ÷ 1.0
//R, G and B output range = 0 ÷ 255
if ( S == 0 )
{
R = L * 255
G = L * 255
B = L * 255
}
else
{
if ( L < 0.5 ) var_2 = L * ( 1 + S )
else var_2 = ( L + S ) - ( S * L )
var_1 = 2 * L - var_2
R = 255 * Hue_2_RGB( var_1, var_2, H + ( 1 / 3 ) )
G = 255 * Hue_2_RGB( var_1, var_2, H )
B = 255 * Hue_2_RGB( var_1, var_2, H - ( 1 / 3 ) )
}
Hue_2_RGB( v1, v2, vH ) //Function Hue_2_RGB
{
if ( vH < 0 ) vH += 1
if( vH > 1 ) vH -= 1
if ( ( 6 * vH ) < 1 ) return ( v1 + ( v2 - v1 ) * 6 * vH )
if ( ( 2 * vH ) < 1 ) return ( v2 )
if ( ( 3 * vH ) < 2 ) return ( v1 + ( v2 - v1 ) * ( ( 2 / 3 ) - vH ) * 6)
return ( v1 )
}
I've tried following the math but I can't figure it out. How does it work?
The first part, if ( S == 0 ), handles the case where there is no Saturation, which means it's a shade of grey. You take the Luminance, set all three RGB channels to that grey-scale level, and you are done.
If this is not the case, then we need to perform the tricky part:
We shall use var_1 and var_2 as temporary values, only for making the code more readable.
So, if Luminance is smaller than 0.5 (50%), then var_2 = Luminance x (1.0 + Saturation).
If Luminance is equal to or larger than 0.5 (50%), then var_2 = Luminance + Saturation – Luminance x Saturation. That's the else part of:
if ( L < 0.5 ) var_2 = L * ( 1 + S )
else var_2 = ( L + S ) - ( S * L )
Then we do:
var_1 = 2 x Luminance – var_2
which is going to be useful later.
Now we need another temporary value for each color channel, as far as Hue is concerned. For Red, we add 0.333 to it (H + ( 1 / 3 ) in the code), for Green we do nothing, and for Blue we subtract 0.333 from it (H - ( 1 / 3 ) in the code). That temporary value is called vH (value of Hue) in Hue_2_RGB().
Now each color channel will be treated separately, hence the three function calls. There are four formulas that can be applied to a color channel. Every color channel should "use" only one formula.
Which one? It depends on the value of Hue (vH).
By the way, the value of vH must be normalized, thus if it's negative we add 1, or if it's greater than 1, we subtract 1 from it, so that vH lies in [0, 1].
If 6 x vH is smaller than 1, Color channel = var_1 + (var_2 – var_1) x 6 x vH
If 2 x vH is smaller than 1, Color channel = var_2
If 3 x vH is smaller than 2, Color channel = var_1 + (var_2 – var_1) x (0.666 – vH) x 6
Else, Color channel = var_1
For R = 255 * Hue_2_RGB( var_1, var_2, H + ( 1 / 3 ) ), the color channel would be Red, named R in the code.
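To tie the pieces together, here is a compact C++ transcription of the pseudocode above (my own sketch, not from the original answer); the fractions are written as 1.0 / 3.0 so they are not truncated by integer division:
// H, S, L in [0, 1]; R, G, B returned in [0, 255].
static double Hue2Rgb( double v1, double v2, double vH )
{
    if ( vH < 0.0 ) vH += 1.0;   // wrap hue into [0, 1]
    if ( vH > 1.0 ) vH -= 1.0;
    if ( 6.0 * vH < 1.0 ) return v1 + ( v2 - v1 ) * 6.0 * vH;
    if ( 2.0 * vH < 1.0 ) return v2;
    if ( 3.0 * vH < 2.0 ) return v1 + ( v2 - v1 ) * ( 2.0 / 3.0 - vH ) * 6.0;
    return v1;
}

void HslToRgb( double H, double S, double L, double& R, double& G, double& B )
{
    if ( S == 0.0 )              // no saturation: a shade of grey
    {
        R = G = B = L * 255.0;
        return;
    }
    const double var_2 = ( L < 0.5 ) ? L * ( 1.0 + S ) : ( L + S ) - ( S * L );
    const double var_1 = 2.0 * L - var_2;
    R = 255.0 * Hue2Rgb( var_1, var_2, H + 1.0 / 3.0 );
    G = 255.0 * Hue2Rgb( var_1, var_2, H );
    B = 255.0 * Hue2Rgb( var_1, var_2, H - 1.0 / 3.0 );
}
For example, H = 0, S = 1, L = 0.5 gives var_2 = 1 and var_1 = 0, which yields pure red (255, 0, 0).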

Run hour calculation of 3 machines (eliminate overlapping time)

I have a list of 3 machines' run hour details, from which I have to eliminate the overlapping time between the machines' run hours, as shown in the image. Please help with this; thanks in advance.
First, you need to calculate the overlap between each combination of two machines.
Overlap between machine A and B in column I: = IF( OR( B2 = 0; D2 = 0 ); 0; MAX( 0; MIN( C2; E2 ) - MAX( B2; D2 ) ) )
Overlap between machine B and C in column J: = IF( OR( B2 = 0; F2 = 0 ); 0; MAX( 0; MIN( C2; G2 ) - MAX( B2; F2 ) ) )
Overlap between machine A and C in column K: = IF( OR( D2 = 0; F2 = 0 ); 0; MAX( 0; MIN( E2; G2 ) - MAX( D2; F2 ) ) )
The IF( OR() ) statements are there to control for empty cells.
Now for the result, calculate the difference between the last end date and the earliest start date and subtract the overlap: = MAX( C2; E2; G2 ) - MIN( B2; D2; F2 ) - SUM( I2:K2 ).
Copy down and that's it. Obviously you could combine everything into one formula if you really wanted to, but that would be a very long formula and a bit of a mess.
PS: note that my machine uses semicolons instead of commas. Depending on your regional settings, you might have to replace the semicolons by commas.
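The key building block in the formulas above is the pairwise overlap MAX( 0; MIN( end1; end2 ) - MAX( start1; start2 ) ). As a language-neutral cross-check, here is that same expression as a tiny C++ helper (my own sketch, with hypothetical start/end parameters, times expressed in whatever unit the sheet uses, e.g. hours or fractional days):
#include <algorithm>

// Overlap of two intervals [aStart, aEnd] and [bStart, bEnd]; zero if they do not intersect.
double Overlap( double aStart, double aEnd, double bStart, double bEnd )
{
    return std::max( 0.0, std::min( aEnd, bEnd ) - std::max( aStart, bStart ) );
}
For example, Overlap( 8.0, 12.0, 10.0, 15.0 ) returns 2.0, i.e. two hours of double-counted run time between those two machines.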

How to calculate center of gravity in grid?

Given a grid (or table) with x*y cells, each cell contains a value. Most of these cells have a value of 0, but there may be a "hot spot" somewhere on this grid with a cell that has a high value. The neighbours of this cell then also have a value > 0; the farther away from the hot spot, the lower the value in the respective grid cell.
So this hot spot can be seen as the top of a hill, with values decreasing the farther we move away from it. At a certain distance the values drop to 0 again.
Now I need to determine the cell within the grid that represents the grid's center of gravity. In the simple example above this centroid would simply be the one cell with the highest value. However it's not always that simple:
the decreasing values of neighbour cells around the hot spot cell may not be equally distributed, or a "side of the hill" may fall down to 0 sooner than another side.
there is another hot spot/hill with values > 0 elsewhere within the grid.
I imagine this is a fairly typical problem. Unfortunately I am no math expert, so I don't know what to search for (at least I have not found an answer on Google).
Any ideas how I can solve this problem?
Thanks in advance.
You are looking for the "weighted mean" of the cell values. Assuming each cell has a value z(x,y), then you can do the following
zx = sum( z(x, y) ) over all values of y
zy = sum( z(x, y) ) over all values of x
meanX = sum( x * zx(x)) / sum ( zx(x) )
meanY = sum( y * zy(y)) / sum ( zy(y) )
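In formula form this is just the weighted mean of the coordinates, with the cell values as weights (restating the marginal sums above):
\bar{x} = \frac{\sum_{x}\sum_{y} x\, z(x, y)}{\sum_{x}\sum_{y} z(x, y)}, \qquad \bar{y} = \frac{\sum_{x}\sum_{y} y\, z(x, y)}{\sum_{x}\sum_{y} z(x, y)}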
I trust you can convert this into a language of your choice...
Example: if you know Matlab, then the above would be written as follows
zx = sum( Z, 1 );   % x profile: sum over the rows, one total per column
zy = sum( Z, 2 )';  % y profile: sum over the columns, one total per row (transposed to a row vector)
[ny, nx] = size(Z); % find out the dimensions of Z
meanX = sum((1:nx).*zx) / sum(zx);
meanY = sum((1:ny).*zy) / sum(zy);
This would give you the meanX in the range 1 .. nx : if it's right in the middle, the value would be (nx+1)/2. You can obviously scale this to your needs.
EDIT: one more time, in "almost real" code:
// array Z(N, M) contains values on an evenly spaced grid
// assume base 1 arrays
zx = zeros(M);  // one entry per column (x direction)
zy = zeros(N);  // one entry per row (y direction)
// create X profile:
for jj = 1 to M
    for ii = 1 to N
        zx(jj) = zx(jj) + Z(ii, jj);
    next ii
next jj
// create Y profile:
for ii = 1 to N
    for jj = 1 to M
        zy(ii) = zy(ii) + Z(ii, jj);
    next jj
next ii
xsum = 0;
zxsum = 0;
for jj = 1 to M
    zxsum += zx(jj);
    xsum += jj * zx(jj);
next jj
xmean = xsum / zxsum;
ysum = 0;
zysum = 0;
for ii = 1 to N
    zysum += zy(ii);
    ysum += ii * zy(ii);
next ii
ymean = ysum / zysum;
This Wikipedia entry may help; the section entitled "A system of particles" is all you need. Just understand that you need to do the calculation once for each dimension, of which you apparently have two.
And here is a complete Scala 2.10 program to generate a grid full of random integers (using dimensions specified on the command line) and find the center of gravity (where rows and columns are numbered starting at 1):
object Ctr extends App {
  val Array( nRows, nCols ) = args map (_.toInt)
  val grid = Array.fill( nRows, nCols )( util.Random.nextInt(10) )
  grid foreach ( row => println( row mkString "," ) )
  val sum = grid.map(_.sum).sum
  val xCtr = ( ( for ( i <- 0 until nRows; j <- 0 until nCols )
                   yield (j+1) * grid(i)(j) ).sum :Float ) / sum
  val yCtr = ( ( for ( i <- 0 until nRows; j <- 0 until nCols )
                   yield (i+1) * grid(i)(j) ).sum :Float ) / sum
  println( s"Center is ( $xCtr, $yCtr )" )
}
You could def a function to keep the calculations DRYer, but I wanted to keep it as obvious as possible. Anyway, here we run it a couple of times:
$ scala Ctr 3 3
4,1,9
3,5,1
9,5,0
Center is ( 1.8378378, 2.0 )
$ scala Ctr 6 9
5,1,1,0,0,4,5,4,6
9,1,0,7,2,7,5,6,7
1,2,6,6,1,8,2,4,6
1,3,9,8,2,9,3,6,7
0,7,1,7,6,6,2,6,1
3,9,6,4,3,2,5,7,1
Center is ( 5.2956524, 3.626087 )
