If there is an SVG rotation (a degrees) with the default pivot point (0,0), then I can calculate the rotation transform matrix as

| cos a   -sin a   0 |
| sin a    cos a   0 |
|   0        0     1 |

But if the pivot point is not (0,0), let's say (px, py), then how do I calculate the rotation transform matrix?
I got the answer.
Let the pivot point be (px, py) and the rotation be a degrees.
Then the net transform matrix will be

             | 1  0  px |     | cos a   -sin a   0 |
net_matrix = | 0  1  py |  X  | sin a    cos a   0 |
             | 0  0  1  |     |   0        0     1 |

                                         | 1  0  -px |
rotate_transform_matrix = net_matrix  X  | 0  1  -py |
                                         | 0  0   1  |
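As a quick sanity check, here is a minimal Python/NumPy sketch of this composition (the function and variable names are mine, not part of any SVG API):

import numpy as np

def rotate_about_pivot(a_deg, px, py):
    """Homogeneous 3x3 matrix for rotating by a_deg around (px, py)."""
    a = np.radians(a_deg)
    rotate = np.array([[np.cos(a), -np.sin(a), 0],
                       [np.sin(a),  np.cos(a), 0],
                       [0,          0,         1]])
    to_pivot   = np.array([[1, 0,  px], [0, 1,  py], [0, 0, 1]])
    from_pivot = np.array([[1, 0, -px], [0, 1, -py], [0, 0, 1]])
    # translate pivot to origin, rotate, translate back
    return to_pivot @ rotate @ from_pivot

# A point at the pivot must map to itself:
print(rotate_about_pivot(30, 50, 50) @ np.array([50, 50, 1]))  # -> [50. 50.  1.]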
You can use JavaScript to apply the rotation transform to an SVG element:
var rect = document.createElementNS("http://www.w3.org/2000/svg", "rect");
rect.setAttribute('transform', 'rotate(-30 50 50)');
// the element must be attached to a rendered <svg> before getCTM() returns a matrix
rect.getCTM();
to get the transform matrix.
Just multiplying out (and tidying up the result to use the same variable names as the W3C) in case anyone else reading this wants something explicit.
rotate(a, cx, cy)
is equivalent to
matrix(cos(a), sin(a), -sin(a), cos(a), cx*(1 - cos(a)) + cy*sin(a), cy*(1 - cos(a)) - cx*sin(a))
This uses mathematical notation, treating rotate and matrix as functions.
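Here is a small Python sketch of that equivalence (the function name svg_rotate_to_matrix is mine); it returns the six values in SVG's matrix(a, b, c, d, e, f) order:

import math

def svg_rotate_to_matrix(a_deg, cx, cy):
    """matrix(a, b, c, d, e, f) parameters equivalent to rotate(a_deg, cx, cy)."""
    a = math.radians(a_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return (cos_a, sin_a, -sin_a, cos_a,
            cx * (1 - cos_a) + cy * sin_a,
            cy * (1 - cos_a) - cx * sin_a)

print(svg_rotate_to_matrix(-30, 50, 50))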
For anyone who is interested in Swarnendu Paul's rotate_transform_matrix above, multiplying it out gives:

| cos a   -sin a   px * (1 - cos a) + py * sin a |
| sin a    cos a   py * (1 - cos a) - px * sin a |
|   0        0                   1               |
I used it for SVG matrix transforms.
Please help me solve this problem:
I have a vector A and I get vector B this way:
B = M1 * A * 0.5 + M2 * A * 0.5;
M1 is the rotation matrix for 0 degrees.
M2 is the rotation matrix for 45 degrees.
I need a way to compute A if B is known. For instance, if B == (0.8535, 0.3535), then A should be (1.0, 0.0). How can I get the inverted formula?
UPD: for weights 0.4/0.6 the resulting formula is:
A=(M1*0.4+M2*0.6)^-1 * B
Bring this equation into a single matrix-vector product
B = M1 * A * 0.5 + M2 * A * 0.5
B = (M1 * 0.5 + M2 * 0.5)*A
B = M*A
and invert M
A = inv(M)*B = M\B
For example
M1 = | 1  0 |     M2 = | 1/√2  -1/√2 |
     | 0  1 |          | 1/√2   1/√2 |
makes
M = | √2/4 + 1/2     -√2/4     |
    |    √2/4      √2/4 + 1/2  |
and the inverse
inv(M) = |   1    √2-1 |
         | 1-√2     1  |
you will find that
inv(M) * | 0.8535 |  =  | 0.999999 |
         | 0.3535 |     |  -3e-5   |
The above process is basic linear algebra; it works exactly because the associative and distributive properties still hold for these non-scalar quantities.
A = (M1 * 0.5 + M2 * 0.5)^-1 * B
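A minimal NumPy sketch of this inversion (variable names are mine):

import numpy as np

def rotation(deg):
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

M1, M2 = rotation(0), rotation(45)
M = 0.5 * M1 + 0.5 * M2          # B = M * A
B = np.array([0.8535, 0.3535])

A = np.linalg.solve(M, B)        # same as inv(M) @ B, but numerically safer
print(A)                         # approximately [1.0, 0.0]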
You know that the truth table for material implication is:
A | C | Y = A --> C
0 | 0 | 1
0 | 1 | 1
1 | 0 | 0
1 | 1 | 1
From this table we can deduce
A --> C = Y = ~A~C + ~AC + AC (where ~X stands for NOT X)
But it is also well known that
A --> C = ~(A~C)
I can't reduce the 1st expression (~A~C + ~AC + AC) to the 2nd (~(A~C)). Can you show me the steps by which the 2nd can be obtained from the 1st?
Thank you.
(~A~C + ~AC + AC)
(~A~C + ~AC) + AC      [associativity]
~A(~C + C) + AC        [distributivity]
~A(T) + AC             [complement: ~C + C = T]
~A + AC                [identity: ~A(T) = ~A]
~~(~A + AC)            [double negation]
~((~~A)~(AC))          [De Morgan]
~(A~(AC))              [double negation]
~(A(~A + ~C))          [De Morgan]
~(A~A + A~C)           [distributivity]
~(F + A~C)             [complement: A~A = F]
~(A~C)                 [identity: F + X = X]
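As a quick check, here is a small Python sketch that verifies the first and last expressions agree on all four assignments:

from itertools import product

for A, C in product([False, True], repeat=2):
    lhs = (not A and not C) or (not A and C) or (A and C)   # ~A~C + ~AC + AC
    rhs = not (A and not C)                                  # ~(A~C)
    implication = (not A) or C                               # A --> C
    assert lhs == rhs == implication
print("all four assignments agree")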
I am new to PyMC and am trying to set up a simple conditional probability model: P(has_diabetes | bmi, race). Race can take on 5 discrete values encoded as 0-4, and BMI can take on any positive real number. So far I have the parent variables set up like this:
p_race = [0.009149232914923292,
0.15656903765690378,
0.019637377963737795,
0.013947001394700141,
0.800697350069735]
race = pymc.Categorical('race', p_race)
bmi_alpha = pymc.Exponential('bmi_alpha', 1)
bmi_beta = pymc.Exponential('bmi_beta', 1)
bmi = pymc.Gamma('bmi', bmi_alpha, bmi_beta, value=bmis, observed=True)
I have observed data that looks like:
| bmi | race | has_diabetes |
| 21.7 | 1 | 0 |
| 45.3 | 4 | 1 |
| 18.9 | 2 | 0 |
| 26.6 | 0 | 0 |
| 35.1 | 4 | 0 |
I am attempting to model has_diabetes as:
has_diabetes = pymc.Bernoulli('has_diabetes', p_diabetes, value=data, observed=True)
My problem is that I am not sure how to construct the p_diabetes function, since it depends on the value of race and the continuous value of bmi.
You need to construct a deterministic function that generates p_diabetes as a function of your predictors. The safest way to do this is via a logit-linear transformation. For example:
intercept = pymc.Normal('intercept', 0, 0.01, value=0)
beta_race = pymc.Normal('beta_race', 0, 0.01, value=np.zeros(4))
beta_bmi = pymc.Normal('beta_bmi', 0, 0.01, value=0)

@pymc.deterministic
def p_diabetes(b0=intercept, b1=beta_race, b2=beta_bmi):
    # Prepend a zero for the baseline race category
    b1 = np.append(0, b1)
    # Logit-linear model
    return pymc.invlogit(b0 + b1[race] + b2*bmi)
I would make the baseline race be the largest group (it is assumed to be index 0 in this example).
Actually, it's not clear what the first part of the model above is for (specifically, why you are building models for the predictors), but perhaps I am missing something.
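For completeness, here is a hedged sketch of how the pieces might be tied together and sampled, assuming the classic PyMC 2.x API that the question's code appears to use (data is the observed has_diabetes column, as in the question; the iteration counts are arbitrary):

has_diabetes = pymc.Bernoulli('has_diabetes', p=p_diabetes,
                              value=data, observed=True)

# Collect the nodes defined above and run the sampler (PyMC 2.x API).
model = pymc.MCMC([race, bmi_alpha, bmi_beta, bmi,
                   intercept, beta_race, beta_bmi,
                   p_diabetes, has_diabetes])
model.sample(iter=20000, burn=10000)

print(model.stats()['beta_bmi']['mean'])  # posterior mean of the BMI coefficient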
I have the start index of my data within a matrix and the number of elements of my data,
and I need to find the number of rows that the data spans.
e.g. the matrix
 0               5
-------------------
|  |  |  |  |x |x |
-------------------
|x |x |x |x |x |x |
-------------------
|  |  |  |  |  |  |
-------------------
|  |  |  |  |  |  |
-------------------
My data is marked with x. I know the start index, 4, and the length of the data, 8.
I need to determine the number of rows this data spans: 2 in this case. (Just doing length/6 is off by one in many cases; surely there has to be a simple formula for this.)
If you only know offset (i.e. the index of the starting column), size (i.e. how many data elements), and cols (i.e. the number of columns), and you want to calculate how many rows your data will span, you can do
int get_spanned_rows(int offset, int size, int cols) {
    int spanned_rows = (offset + size) / cols;
    if ( ((offset + size) % cols) != 0 )
        spanned_rows++;
    return spanned_rows;
}
where % is the modulus (or remainder) operator.
int calc_rows(int start, int length) {
    int rows = 0;
    int x = 0;                        // elements sitting in partially filled rows
    if (start != 0) {                 // partially filled first row
        x = 6 - start;
        rows += 1;
    }
    if ((start + length) % 6 != 0) {  // partially filled last row
        x += (start + length) % 6;
        rows += 1;
    }
    rows += (length - x) / 6;         // remaining full rows
    return rows;
}
This calculates the number of rows by dividing by 6, after first subtracting the element counts of the partially filled first and last rows (each of which contributes one row on its own).
In case you only want the number of rows you can calculate
num_rows = (offset + size + cols - 1) / cols
which in this case is
num_rows = (4 + 8 + 6 - 1) / 6 = 17 / 6 = 2
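A quick Python sketch (the function names are mine) that checks this ceiling formula against a brute-force count of the distinct rows touched:

def spanned_rows(offset, size, cols):
    # ceil((offset + size) / cols) using integer arithmetic
    return (offset + size + cols - 1) // cols

def spanned_rows_brute(offset, size, cols):
    # count the distinct row indices the elements actually fall into
    return len({(offset + i) // cols for i in range(size)})

assert spanned_rows(4, 8, 6) == spanned_rows_brute(4, 8, 6) == 2
assert all(spanned_rows(o, s, 6) == spanned_rows_brute(o, s, 6)
           for o in range(6) for s in range(1, 30))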
This is a hard algorithms problem:
Divide the list into 2 parts whose sums are as close to each other as possible.
The list length is 1 <= n <= 100 and the weights of the numbers are 1 <= w <= 250, as given in the question.
For example: 23 65 134 32 95 123 34
1st sum = 256
2nd sum = 250
1st list = items 1, 2, 3, 7 (23 + 65 + 134 + 34)
2nd list = items 4, 5, 6 (32 + 95 + 123)
I have an algorithm, but it doesn't work for all inputs.
Initialize lists list1 = [], list2 = []
Sort the elements of the given list: [23 32 34 65 95 123 134]
Pop the last (largest) one
Insert it into the list whose sum differs less after the insertion
Walkthrough:
list1 = [], list2 = []
1. Select 134, insert into list1. list1 = [134]
2. Select 123, insert into list2, because inserting it into list1 would make the difference bigger.
3. Select 95, insert into list2, because sum(list2) + 95 - sum(list1) is smaller.
and so on...
You can reformulate this as the knapsack problem.
You have a list of items with total weight M that should be fitted into a bin that can hold maximum weight M/2. The items packed in the bin should weigh as much as possible, but not more than the bin holds.
For the case where all weights are non-negative, this problem is only weakly NP-complete and has pseudo-polynomial time solutions.
A description of dynamic programming solutions for this problem can be found on Wikipedia.
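A minimal Python sketch of that dynamic programming idea (just an illustration, not taken from the linked description): track which sums up to total/2 are reachable, then take the largest reachable one.

def best_partition_sum(weights):
    """Largest subset sum not exceeding half of the total (classic subset-sum DP)."""
    total = sum(weights)
    reachable = {0}
    for w in weights:
        reachable |= {s + w for s in reachable if s + w <= total // 2}
    best = max(reachable)
    return best, total - best   # the two part sums

print(best_partition_sum([4, 7, 2, 8]))  # -> (10, 11), matching the example below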
The problem is NP-complete, but there is a pseudo-polynomial algorithm for it. This is the 2-partition problem, and you can solve it the same way as the pseudo-polynomial time algorithm for the subset sum problem. If the input values are polynomially bounded in the input size, then this can be done in polynomial time.
In your case (weights <= 250) it is polynomial, because with n numbers of weight <= 250 the total sum is <= 250n, which is polynomial in n.
Let Sum = the sum of the weights. We create a two-dimensional boolean array A and construct it column by column:
A[i, j] = true if there is a subset containing item i that sums to j, i.e. j == weight[i], or j - weight[i] can be formed from the other items in the list.
Creating the array with this algorithm takes O(n^2 * Sum/2).
At the end we find the largest column index j <= Sum/2 that contains a true value.
Here is an example:
items: {0, 1, 2, 3}
weights: {4, 7, 2, 8}  =>  sum = 21, sum/2 = 10
items \ sums | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
----------------------------------------------------------
      0      | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0
      1      | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0
      2      | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1
      3      | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1
So because the entry at row 2, column 10 is true, the best partition has sums 10 and 11.
This is an algorithm I found here and edited a little bit to solve your problem:
int partition( vector<int> C ) {
    // compute the total sum
    int n = C.size();
    int N = 0;
    for( int i = 0; i < n; i++ ) N += C[i];
    // initialize the table of reachable sums
    vector<bool> T( N + 1 );
    T[0] = true;
    for( int i = 1; i <= N; i++ ) T[i] = false;
    // process the numbers one by one
    for( int i = 0; i < n; i++ )
        for( int j = N - C[i]; j >= 0; j-- )
            if( T[j] ) T[j + C[i]] = true;
    // return the largest reachable sum that does not exceed N/2
    for( int i = N/2; i >= 0; i-- )
        if( T[i] )
            return i;
    return 0;
}
I just returned the first i (scanning from N/2 down to 0) for which T[i] is true, instead of returning T[N/2]. For the example above (weights {4, 7, 2, 8}) this returns 10, so the best partition has sums 10 and 11.
Finding the actual subsets that give this value is not hard.
This problem is at least as hard as the NP-complete problem subset sum. Your algorithm is a greedy algorithm. This type of algorithm is fast and can generate an approximate solution quickly, but it is not guaranteed to find the exact solution to an NP-complete problem.
A brute force approach is probably the simplest way to solve your problem, although it will be too slow if there are too many elements.
Try every possible way of partitioning the elements into two sets and calculate the absolute difference in the sums.
Choose the partition for which the absolute difference is minimal.
Generating all the partitions can be done by considering the binary representation of each integer from 0 to 2^n - 1, where each binary digit determines whether the corresponding element is in the left or right partition.
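A small Python sketch of that bitmask enumeration (just an illustration; it is O(2^n), so it is only practical for small n):

def brute_force_partition(weights):
    n = len(weights)
    best_diff, best_mask = None, 0
    for mask in range(2 ** n):
        left = sum(w for i, w in enumerate(weights) if mask & (1 << i))
        diff = abs(sum(weights) - 2 * left)   # |right - left|
        if best_diff is None or diff < best_diff:
            best_diff, best_mask = diff, mask
    left  = [w for i, w in enumerate(weights) if best_mask & (1 << i)]
    right = [w for i, w in enumerate(weights) if not best_mask & (1 << i)]
    return left, right, best_diff

print(brute_force_partition([4, 7, 2, 8]))   # one optimal split, difference 1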
While trying to solve the same problem, I came up with the following idea, which seems too simple to be a real solution, but it appears to work and runs in roughly linear time. Could someone provide an example that shows it does not work, or explain why it is not a solution?
arr = [20, 10, 15, 6, 1, 17, 3, 9, 10, 2, 19]  # a list of numbers
g1 = []
g2 = []
# Greedy: take the numbers from largest to smallest and
# always add the next one to the group with the smaller sum.
for el in reversed(sorted(arr)):
    if sum(g1) > sum(g2):
        g2.append(el)
    else:
        g1.append(el)
print(f"{sum(g1)}: {g1}")
print(f"{sum(g2)}: {g2}")
TypeScript code:
import * as _ from 'lodash'

function partitionArray(numbers: number[]): {
  arr1: number[]
  arr2: number[]
  difference: number
} {
  // Sort descending, dropping any zeros (they do not affect the sums).
  let sortedArr: number[] = _.chain(numbers).without(0).sortBy((x) => x).value().reverse()
  let arr1: number[] = []
  let arr2: number[] = []
  // half of the total sum, i.e. the target for one part
  let half = _.sum(sortedArr) / 2
  let sum = 0
  // Greedily fill arr2 while it stays at or below half of the total;
  // anything that would overflow goes to arr1.
  _.each(sortedArr, (n) => {
    let ns = sum + n
    if (ns > half) {
      arr1.push(n)
    } else {
      sum += n
      arr2.push(n)
    }
  })
  return {
    arr1: arr1,
    arr2: arr2,
    difference: Math.abs(_.sum(arr1) - _.sum(arr2))
  }
}