So I am working on a problem which has two parts. I finished part one with the help of this useful forum: somebody had already tried the first part of the problem, and I took their code.
The problem:
Write a proc (here I named it "reduced") in Maple that calculates the reduced echelon form of a Matrix.
Write a proc that uses "reduced" to calculate the inverse of a Matrix.
The code for the first part (the code is tested and I claim that it runs correctly):
multiplikation:= proc(
m::posint,
a::depends(And(posint, satisfies(a-> a <= m))),
b::And({float, rational}, Not(identical(0,0.)))
)
Matrix((m,m), (i,j)-> `if`(i=j, `if`(i=a, b, 1), 0))
end proc:
addition:= proc(
m::posint,
a::depends(And(posint, satisfies(a-> a <= m))),
b::depends(And(posint, satisfies(b-> b <= m))),
c::And({float, rational}, Not(identical(0,0.)))
)
Matrix((m,m), (i,j)-> `if`(i=a and j=b, c, `if`(i=j, 1, 0)))
end proc:
perm:= proc(
m::posint,
a::depends(And(posint, satisfies(a-> a <= m))),
b::depends(And(posint, satisfies(b-> b <= m and a<>b)))
)
Matrix((m,m), (i,j)-> `if`({i,j}={a,b} or i=j and not i in {a,b}, 1, 0))
end proc:
and the main proc:
reduced:= proc(B::Matrix)
uses LA= LinearAlgebra;
local
M:= B, l:= 1, #l is current column.
m:= LA:-RowDimension(M), n:= LA:-ColumnDimension(M), i, j
;
for i to m do #going through every row item
#l needs to be less than column number n.
if n < l then return M end if;
j:= i; #Initialize current row number.
while M[j,l]=0 do #Search for 1st row item <> 0.
j:= j+1;
if m < j then #End of row: Go to next column.
j:= i;
l:= l+1;
if n < l then return M fi #end of column and row
end if
end do;
if j<>i then M:= perm(m,j,i).M end if; #Permute rows j and i
#Multiply row i with 1/M[i,l], if it's not 0.
if M[i,l] <> 0 then M:= multiplikation(m,i,1/M[i,l]).M fi;
#Subtract each row j with row i for M[j,l]-times.
for j to m do if j<>i then M:= addition(m,j,i,-M[j,l]).M fi od;
l:= l+1 #Increase l by 1; i also increases on the next iteration.
end do;
return M
end proc:
If you need any additional information about the code above, I will explain more.
For the second part I am thinking of using the Gauss-Jordan algorithm, but I have a problem:
I cannot use the identity matrix as a parameter in "reduced", because it has 0s in its rows and columns.
Do you have any idea how I could implement the Gauss-Jordan algorithm with the help of my proc reduced?
The stated goal was to utilize the reduced procedure.
One way to do that is to augment the input Matrix by the identity Matrix, reduce that, and then return the right half of the augmented Matrix.
The steps that transform the input Matrix into the identity Matrix also transform the identity Matrix into the inverse (of the input Matrix).
For example, using your procedures,
inv := proc(B::Matrix(square))
local augmented,m;
uses LinearAlgebra;
m := RowDimension(B);
augmented := <<B|IdentityMatrix(m)>>;
return reduced(augmented)[..,m+1..-1];
end proc:
MM := LinearAlgebra:-RandomMatrix(3,generator=1..5);
      [1  4  2]
MM := [1  5  3]
      [2  3  5]
ans := inv(MM);
       [ 8/3  -7/3   1/3]
ans := [ 1/6   1/6  -1/6]
       [-7/6   5/6   1/6]
ans.MM, MM.ans;
[1 0 0]  [1 0 0]
[0 1 0], [0 1 0]
[0 0 1]  [0 0 1]
P.S. You might also want to consider what your reduced procedure does when the Matrix is not invertible.
I came across a question and am unable to find a feasible solution.
Image Quantization
Given a grayscale image, where each pixel's color ranges from 0 to 255, compress the range of values to a given number of quantum values.
The goal is to do that with the minimum sum of costs, where the cost of a pixel is defined as the absolute difference between its color and the closest quantum value to it.
Example
There are 3 rows and 3 columns, image = [[7,2,8], [8,2,3], [9,8,255]], and quantums = 3 (the number of quantum values). The optimal quantum values are (2, 8, 255), leading to the minimum sum of costs |7-8| + |2-2| + |8-8| + |8-8| + |2-2| + |3-2| + |9-8| + |8-8| + |255-255| = 1+0+0+0+0+1+1+0+0 = 3.
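For a quick sanity check of the cost definition, here is a small Python snippet (the names image and quantum_values are mine, just for illustration):

image = [[7, 2, 8], [8, 2, 3], [9, 8, 255]]
quantum_values = [2, 8, 255]
# each pixel pays the distance to its closest quantum value
cost = sum(min(abs(p - q) for q in quantum_values)
           for row in image for p in row)
print(cost)  # 3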
Function description
Complete the solve function provided in the editor. This function takes the following 4 parameters and returns the minimum sum of costs.
n Represents the number of rows in the image
m Represents the number of columns in the image
image Represents the image
quantums Represents the number of quantum values.
Output:
Print a single integer, the minimum sum of costs.
Constraints:
1 <= n, m <= 100
0 <= image[i][j] <= 255
1 <= quantums <= 256
Sample Input 1
3
3
7 2 8
8 2 3
9 8 255
10
Sample output 1
0
Explanation
The optimal quantum values are {0,1,2,3,4,5,7,8,9,255}, leading to the minimum sum of costs |7-7| + |2-2| + |8-8| + |8-8| + |2-2| + |3-3| + |9-9| + |8-8| + |255-255| = 0+0+0+0+0+0+0+0+0 = 0.
Can anyone help me reach the solution?
Clearly, if we have at least as many quantums available as distinct pixel values, we can return 0, since we can assign a quantum to each distinct value. Now consider setting a single quantum at the lowest value of the sorted, grouped list.
M = [
[7, 2, 8],
[8, 2, 3],
[9, 8, 255]
]
[(2, 2), (3, 1), (7, 1), (8, 3), (9, 1), (255, 1)]
quantum = 2
We record the required sum of differences:
0 + 0 + 1 + 5 + 6 + 6 + 6 + 7 + 253 = 284
Now to update by incrementing the quantum by 1, we observe that each element's cost moves by exactly 1, so all we need is the count of affected elements on each side.
Increment the quantum from 2 to 3:
quantum = 3
1 + 1 + 0 + 4 + 5 + 5 + 5 + 6 + 252 = 279
or
284 + 2 * 1 - 7 * 1
= 284 + 2 - 7
= 279
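That update rule is easy to check directly; a tiny Python sketch (the variable names are mine):

pixels = sorted([7, 2, 8, 8, 2, 3, 9, 8, 255])

def cost(q):                       # total cost with a single quantum at q
    return sum(abs(p - q) for p in pixels)

q = 2
# moving the quantum from q to q+1 adds 1 for every pixel <= q
# and saves 1 for every pixel >= q+1
delta = sum(p <= q for p in pixels) - sum(p >= q + 1 for p in pixels)
print(cost(q), cost(q + 1), cost(q) + delta)  # 284 279 279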
Consider traversing from the left with a single quantum, calculating only the effect on pixels in the sorted, grouped list that are on the left side of the quantum value.
To only update the left side when adding a quantum, we have:
left[k][q] = min over p < q of ( left[k-1][p] + effect(A, p, q) )
where effect is the effect on the elements in A (the sorted, grouped list) as we reduce p incrementally, updating the effect on the pixels in the range [p, q] according to whether they are closer to p or q. As we increase q for each round of k, we can keep the relevant place in the sorted, grouped pixel list with a pointer that moves incrementally.
If we have a solution for
left[k][q]
where it is the best for pixels on the left side of q when including k quantums with the rightmost quantum set as the number q, then the complete candidate solution would be given by:
left[k][q] + effect(A, q, list_end)
where there is no quantum between q and list_end
Time complexity would be O(n + k * q * q) = O(n + quantums * 256^2), where n is the number of elements in the input matrix.
Python code:
def f(M, quantums):
pixel_freq = [0] * 256
for row in M:
for colour in row:
pixel_freq[colour] += 1
    # dp[k][q] stores the best solution
    # considering only the cost to the left
    # of q, with k quantums placed and the
    # rightmost one at value q
dp = [[0] * 256 for _ in range(quantums + 1)]
pixel_count = pixel_freq[0]
for q in range(1, 256):
dp[1][q] = dp[1][q-1] + pixel_count
pixel_count += pixel_freq[q]
predecessor = [[None] * 256 for _ in range(quantums + 1)]
# Main iteration, where the full
# candidate includes both right and
# left effects while incrementing the
# number of quantums.
for k in range(2, quantums + 1):
for q in range(k - 1, 256):
# Adding a quantum to the right
# of the rightmost doesn't change
# the left cost already calculated
# for the rightmost.
best_left = dp[k-1][q-1]
predecessor[k][q] = q - 1
q_effect = 0
p_effect = 0
p_count = 0
for p in range(q - 2, k - 3, -1):
r_idx = p + (q - p) // 2
# When the distance between p
# and q is even, we reassign
# one pixel frequency to q
if (q - p - 1) % 2 == 0:
r_freq = pixel_freq[r_idx + 1]
q_effect += (q - r_idx - 1) * r_freq
p_count -= r_freq
p_effect -= r_freq * (r_idx - p)
# Either way, we add one pixel frequency
# to p_count and recalculate
p_count += pixel_freq[p + 1]
p_effect += p_count
effect = dp[k-1][p] + p_effect + q_effect
if effect < best_left:
best_left = effect
predecessor[k][q] = p
dp[k][q] = best_left
# Records the cost only on the right
# of the rightmost quantum
# for candidate solutions.
right_side_effect = 0
pixel_count = pixel_freq[255]
best = dp[quantums][255]
best_quantum = 255
for q in range(254, quantums-1, -1):
right_side_effect += pixel_count
pixel_count += pixel_freq[q]
candidate = dp[quantums][q] + right_side_effect
if candidate < best:
best = candidate
best_quantum = q
quantum_list = [best_quantum]
prev_quantum = best_quantum
    for i in range(quantums, 1, -1):
prev_quantum = predecessor[i][prev_quantum]
quantum_list.append(prev_quantum)
return best, list(reversed(quantum_list))
Output:
M = [
[7, 2, 8],
[8, 2, 3],
[9, 8, 255]
]
k = 3
print(f(M, k)) # (3, [2, 8, 255])
M = [
[7, 2, 8],
[8, 2, 3],
[9, 8, 255]
]
k = 10
print(f(M, k)) # (0, [2, 3, 7, 8, 9, 251, 252, 253, 254, 255])
I would propose the following:
step 0
Input is:
image = 7 2 8
8 2 3
9 8 255
quantums = 3
step 1
Then you can calculate a histogram from the input image. Since your image is grayscale, it can contain only values from 0 to 255.
That means your histogram array has length 256.
hist = [0] * 256              # init the histogram array
for row in image:             # iterate over image
    for color in row:
        hist[color] += 1      # and increment histogram values
hist:
value   0  0  2  1  0  0  0  1  3  1  0  . . .  1
-------------------------------------------------
color   0  1  2  3  4  5  6  7  8  9  10 . . . 255
How to read the histogram:
color 3 has 1 occurrence
color 8 has 3 occurrences
With this approach, we have reduced our problem from N (the number of pixels) to 256 (the histogram size).
Time complexity of this step is O(N)
step 2
Once we have the histogram in place, we can calculate its local maximums and keep the quantums biggest ones. In our case, we can find 3 local maximums.
For the sake of simplicity, I will not write the pseudocode; there are numerous examples on the internet. Just google 'find local maximum/extrema in array'.
It is important that you end up with the 3 biggest local maximums. In our case they are:
hist:
value   0  0  2  1  0  0  0  1  3  1  0  . . .  1
-------------------------------------------------
color   0  1  2  3  4  5  6  7  8  9  10 . . . 255
              ^                 ^               ^
These values (2, 8, 255) are your tops of the mountains.
Time complexity of this step is O(quantums)
I could explain why it is not O(1) or O(256), since you can find local maximums in a single pass. If needed I will add a comment.
step 3
Once you have your tops of the mountains, you want to isolate each mountain so that it has the maximum possible surface.
You will do that by finding the minimum value between two tops.
In our case it is:
value   0  0  2  1  0  0  0  1  3  1  0  . . .  1
-------------------------------------------------
color   0  1  2  3  4  5  6  7  8  9  10 . . . 255
(picture the histogram as three mountains, with valleys of zeros between the peaks at 2, 8, and 255)
So our goal is to find the minimums between these index values:
from 0 to 2 (not needed, first mountain start from beginning)
from 2 to 8 (to see where first mountain ends, and second one starts)
from 8 to 255 (to see where second one ends, and third starts)
from 255 to end (just noted, also not needed, last mountain always reaches the end)
There are multiple candidates (multiple zeros), and it is not important which one you choose as the minimum; the final surface of the mountain is always the same.
Let's say that our algorithm returns two minimums. We will use them in the next step.
min_1_2 = 6
min_2_3 = 254
Time complexity of this step is O(256). You need just a single pass over the histogram to calculate all minimums (actually you will do multiple smaller iterations, but in total you visit each element only once).
Some would consider this O(1).
Step 4
Calculate the median of each mountain.
This can be the tricky one. Why? Because we want to calculate the median using the original values (colors) and not the counters (occurrences).
There is also a formula that can give us a good estimate, and it can be computed quite fast, looking only at histogram values (https://medium.com/analytics-vidhya/descriptive-statistics-iii-c36ecb06a9ae).
If that is not precise enough, then another option is to "unwrap" the counted values; then we can sort these "raw" pixels and easily find the median.
In our case, those medians are 2, 8, 255.
Time complexity of this step is O(n log n) if we have to sort the whole original image. If the approximation works fine, then the time complexity of this step is almost constant.
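The exact median can also be read straight off the histogram with a cumulative scan, without unwrapping the pixels; a small sketch (the helper name hist_median is mine, START inclusive and END exclusive):

def hist_median(hist, start, end):
    total = sum(hist[start:end])
    running = 0
    for color in range(start, end):
        running += hist[color]
        if 2 * running > total:    # first color past the halfway point
            return color
    return start                   # fallback for an empty slice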
step 5
This is final step.
You now know the start and end of the "mountain".
You also know the median that belongs to that "mountain"
Again, you can iterate over each mountain and calculate the DIFF.
diff = 0
medians = [2, 8, 255]          # one median per mountain
bounds = [0, 6, 254, 256]      # first mountain:  colors 0..5
                               # second mountain: colors 6..253
                               # third mountain:  colors 254..255
for i, median in enumerate(medians):
    for color in range(bounds[i], bounds[i + 1]):
        diff += abs(color - median) * hist[color]
Time complexity of this step is again O(256), and it can be considered constant time, O(1).
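Putting the five steps together, a rough end-to-end sketch in Python (the function and variable names are mine, and the peak selection is simplified to "the quantums biggest histogram bins that are local maximums", so treat this as an illustration of the pipeline rather than a reference implementation):

def quantize_cost_estimate(image, quantums):
    # step 1: histogram
    hist = [0] * 256
    for row in image:
        for color in row:
            hist[color] += 1
    # step 2: local maximums, keeping the `quantums` biggest
    peaks = [c for c in range(256)
             if hist[c] > 0
             and (c == 0 or hist[c] >= hist[c - 1])
             and (c == 255 or hist[c] >= hist[c + 1])]
    peaks = sorted(sorted(peaks, key=lambda c: -hist[c])[:quantums])
    # step 3: split at a minimum between consecutive peaks
    bounds = [0]
    for a, b in zip(peaks, peaks[1:]):
        bounds.append(min(range(a, b + 1), key=lambda c: hist[c]))
    bounds.append(256)
    # steps 4 and 5: median of each mountain, then sum the diffs
    total = 0
    for lo, hi in zip(bounds, bounds[1:]):
        pixels = [c for c in range(lo, hi) for _ in range(hist[c])]
        if not pixels:
            continue
        median = pixels[len(pixels) // 2]
        total += sum(abs(c - median) * hist[c] for c in range(lo, hi))
    return total

image = [[7, 2, 8], [8, 2, 3], [9, 8, 255]]
print(quantize_cost_estimate(image, 3))  # 3 on this example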
For example, an n*n matrix
A_n = [2 1 1 ... 1]
      [1 2 1 ... 1]
      [. . .      ]
      [1 1 1 ... 2]
has determinant n+1. Can I compute this result with software such as Mathematica or Maple?
This
f[n_]:=Det[Table[1,{n},{n}]+IdentityMatrix[n]];
f[12]
returns 13 and seems to work for any modest sized matrix.
Since Maple may not easily provide a structure representing an abstract Vector or Matrix of indeterminate size, you might try to recast the problem in terms of some known properties of determinants.
For example, using Sylvester's determinant theorem you could utilize the n x 1 Matrix V(n) having all entries identically 1.
So det(A(n)) = det(Id(n) + V(n).Transpose(V(n))) = det(Id(1) + Transpose(V(n)).V(n)) = 1 + add(1, i=1..n) = 1 + n
Another way might be to consider a minor expansion along the nth row. When the nth row and nth column are removed from the initial Matrix, denoted A(n), you obtain A(n-1). Removing the nth row and the jth column (j in 1..n-1) produces a Matrix with determinant (-1)^j; I do not show that here, but note that any of those minors is a single row exchange away from an (n-1)x(n-1) Matrix with A(n-2) in the upper left and 1s elsewhere.
In consequence Q(n), the determinant of A(n), is given by Q(n) = 2*Q(n-1) - (n-1), I believe, with Q(1) = 2. Using Maple's rsolve command,
rsolve({Q(n)=2*Q(n-1)-(n-1), Q(1)=2}, Q(n));
n + 1
Another approach might be to consider row-reduction along rows 2..n. You should be able to produce a new Matrix (having the same determinant) whose diagonal entries are: 2, (i+1)/i, i=2..n. The determinant is thus a conveniently telescoping product.
simplify( 2*product((i+1)/i, i=2..n) );
n + 1
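For what it's worth, both of those computations can be cross-checked in Python with sympy (assuming sympy is available; this just mirrors the rsolve call and a few specific-size determinants):

import sympy as sp

n = sp.symbols('n', integer=True, positive=True)
Q = sp.Function('Q')
# the recurrence Q(n) = 2*Q(n-1) - (n-1) with Q(1) = 2, as above
print(sp.rsolve(Q(n) - 2*Q(n-1) + (n - 1), Q(n), {Q(1): 2}))  # n + 1

# direct numeric check of det(all-ones + identity) for small sizes
for m in range(1, 7):
    assert (sp.ones(m, m) + sp.eye(m)).det() == m + 1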
Of course it is easy enough to deal with such Matrices for size n taken at a specific value,
H:=n->1+Matrix(n,n,1):
with(LinearAlgebra):
seq(Determinant(H(i)),i=1..5);
2, 3, 4, 5, 6
U7:=LUDecomposition(H(6),output=U);
      [2  1    1    1    1    1  ]
      [0  3/2  1/2  1/2  1/2  1/2]
U7 := [0  0    4/3  1/3  1/3  1/3]
      [0  0    0    5/4  1/4  1/4]
      [0  0    0    0    6/5  1/5]
      [0  0    0    0    0    7/6]
# The product of the main diagonal entries telescopes.
mul(U7[i,i],i=1..6);
7
Suppose we're given two series of integers, X[..] and Y[..], which have the same length. We can choose any position i of the series X[] and do the operation: X[i] = X[i] + 3, X[i + 2] = X[i + 2] + 2, X[i + 4] = X[i + 4] + 1.
After manipulating the series any number of times, is it possible to arrive at the same series as Y[..]?
I am thinking of implementing it by brute force and normal combinational matching after manipulation. Is there any other process which can make it faster?
Given two series,
X [1, 2, 3, 4, 5, 6, 8]
Y [1, 5, 6, 6, 7, 7, 9]
if i = 2 then
X [1, 5, 3, 6, 5, 7, 8]
Y [1, 5, 6, 6, 7, 7, 9]
and if i = 3 then
X [1, 5, 6, 6, 7, 7, 9]
Y [1, 5, 6, 6, 7, 7, 9]
which matches the series.
You can see that for every index p the resulting cell can be represented as
Y[p] = X[p] + F[p-4] + 2 * F[p-2] + 3 * F[p]
where F[p] is the number of operations applied at the p-th index.
So you have a system of linear equations, one equation per index p, in the unknowns F[p].
This is a banded, lower-triangular (sparse) system, so it can be solved quickly by forward substitution instead of the usual full Gaussian elimination.
The system might be inconsistent, or force some F[p] to be negative or fractional - in all of those cases there is no solution.
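A minimal sketch of that forward substitution (0-indexed; I am assuming that an operation near the end of the array simply drops the effects that would fall past the end):

def solve_counts(X, Y):
    n = len(X)
    F = [0] * n                    # F[p] = number of operations at p
    for p in range(n):
        rhs = Y[p] - X[p]
        if p >= 2: rhs -= 2 * F[p - 2]
        if p >= 4: rhs -= F[p - 4]
        if rhs < 0 or rhs % 3:
            return None            # inconsistent: no solution
        F[p] = rhs // 3
    return F

print(solve_counts([1, 2, 3, 4, 5, 6, 8], [1, 5, 6, 6, 7, 7, 9]))
# [0, 1, 1, 0, 0, 0, 0]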
Since an operation at index i modifies only the elements at indices i, i + 2 and i + 4 - that is, no index smaller than i - we can build a greedy algorithm which iterates over the array X from left to right and at every index i compares the value with array Y.
Case X[i] > Y[i]: Then it's not possible to update X[i] to Y[i], hence return not possible.
Case X[i] == Y[i]: Then continue iterating over the next element at i + 1
Case X[i] < Y[i]: If (Y[i] - X[i]) mod 3 != 0, then return not possible, else compute m = (Y[i] - X[i])/3 and increment X[i] by 3 * m, X[i + 2] by 2 * m and X[i + 4] by m and continue iterating.
If we reach the end of array X, then it means it's possible to construct array Y from X using these operations.
Overall time complexity of the solution is O(n).
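A short sketch of this greedy check (0-indexed, with the same assumption as above that effects past the end of the array are dropped):

def can_transform(X, Y):
    X = list(X)                     # work on a copy
    for i in range(len(X)):
        if X[i] > Y[i] or (Y[i] - X[i]) % 3 != 0:
            return False
        m = (Y[i] - X[i]) // 3      # operations needed at index i
        X[i] += 3 * m
        if i + 2 < len(X): X[i + 2] += 2 * m
        if i + 4 < len(X): X[i + 4] += m
    return True

print(can_transform([1, 2, 3, 4, 5, 6, 8], [1, 5, 6, 6, 7, 7, 9]))  # True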
Start with an array of integers so that the sum of the values is some positive integer S. The following routine always terminates in the same number of steps with the same results. Why is this?
Start with an array x = [x_0, x_1, ..., x_(N-1)] such that all x_i's are integers. While there is a negative entry, do the following:
Choose any index i such that x_i < 0.
Add x_i (a negative number) to x_((i-1) mod N).
Add x_i (a negative number) to x_((i+1) mod N).
Replace x_i with -x_i (a positive number).
This process maintains the property that x_0 + x_1 + ... + x_(N-1) = S. For any given starting array x, no matter which index is chosen at any step, the number of times one goes through these steps is the same, as is the resulting vector. It is not even obvious (to me, at least) that this process terminates in finite time, let alone has this nice invariant property.
EXAMPLE:
Take x = [4, -1, -2]; flipping x_1 to start, the result is
[4, -1, -2]
[3, 1, -3]
[0, -2, 3]
[-2, 2, 1]
[2, 0, -1]
[1, -1, 1]
[0, 1, 0]
On the other hand, flipping x_2 to start gives
[4, -1, -2]
[2, -3, 2]
[-1, 3, -1]
[1, 2, -2]
[-1, 0, 2]
[1, -1, 1]
[0, 1, 0]
and the final way gives this solution with the arrays reversed from the third one down, if you choose x_2 instead of x_0 to flip at the third array. In all cases, 6 steps lead to [0, 1, 0].
I have an argument for why this is true, but it seems to me to be overly complicated (it has to do with Coxeter groups). Does anyone have a more direct way to think about why this happens? Even finding a reason why this should terminate would be great.
Bonus points to anyone who finds a way to determine the number of steps for a given array (without going through the process).
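For anyone who wants to experiment, here is a quick Python simulation sketch (it picks the flipped index at random each step):

import random

def flip_until_done(x):
    x = list(x)
    n = len(x)
    steps = 0
    while any(v < 0 for v in x):
        i = random.choice([j for j in range(n) if x[j] < 0])
        x[(i - 1) % n] += x[i]      # add the negative value to the left
        x[(i + 1) % n] += x[i]      # ... and to the right neighbor
        x[i] = -x[i]                # then flip the sign
        steps += 1
    return steps, x

print(flip_until_done([4, -1, -2]))  # (6, [0, 1, 0]) every time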
I think the easiest way to see why the output vector and the number of steps are the same no matter what index you choose at each step is to look at the problem as a bunch of matrix and vector multiplications.
For the case where x has 3 components, think of x as a 3x1 vector: x = [x_0 x_1 x_2]' (where ' is the transpose operation). Each iteration of the loop will choose to flip one of x_0,x_1,x_2, and the operation it performs on x is identical to multiplication by one of the following matrices:
      -1 0 0        1  1 0        1 0  1
s_0 =  1 1 0  s_1 = 0 -1 0  s_2 = 0 1  1
       1 0 1        0  1 1        0 0 -1
where multiplication by s_0 is the operation performed if the index i=0, s_1 corresponds to i=1, and s_2 corresponds to i=2. With this view, you can interpret the algorithm as multiplying x by the corresponding s_i matrix at each iteration. So in the first example where x_1 is flipped at the start, the algorithm computes: s_1*s_2*s_0*s_1*s_2*s_1*[4 -1 -2]' = [0 1 0]'
The fact that the index you choose doesn't affect the final output vector arises from two interesting properties of the s matrices. First, s_i*s_(i-1)*s_i = s_(i-1)*s_i*s_(i-1), where i-1 is computed modulo n, the number of matrices. This property is the only one needed to see why you get the same result in the examples with 3 elements:
s_1*s_2*s_0*s_1*s_2*s_1 = s_1*s_2*s_0*(s_1*s_2*s_1) = s_1*s_2*s_0*(s_2*s_1*s_2), which corresponds to choosing x_2 at the start, and lastly:
s_1*s_2*s_0*s_2*s_1*s_2 = s_1*(s_2*s_0*s_2)*s_1*s_2 = s_1*(s_0*s_2*s_0)*s_1*s_2, which corresponds to choosing to flip x_2 at the start, but then choosing to flip x_0 in the third iteration.
The second property only applies when x has 4 or more elements. It is s_i*s_k = s_k*s_i whenever k <= i-2, where i-2 is again computed modulo n. This property is apparent when you consider the form of the matrices when x has 4 elements:
      -1 0 0 0        1  1 0 0        1 0  0 0        1 0 0  1
s_0 =  1 1 0 0  s_1 = 0 -1 0 0  s_2 = 0 1  1 0  s_3 = 0 1 0  0
       0 0 1 0        0  1 1 0        0 0 -1 0        0 0 1  1
       1 0 0 1        0  0 0 1        0 0  1 1        0 0 0 -1
The second property essentially says that you can exchange the order in which non-conflicting flips occur. For example, in a 4 element vector, if you first flipped x_1 and then flipped x_3, this has the same effect as first flipping x_3 and then flipping x_1.
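Both properties, and the product from the first example, are easy to verify numerically; here is a small NumPy check for the 3-element case:

import numpy as np

s0 = np.array([[-1, 0, 0], [1, 1, 0], [1, 0, 1]])
s1 = np.array([[1, 1, 0], [0, -1, 0], [0, 1, 1]])
s2 = np.array([[1, 0, 1], [0, 1, 1], [0, 0, -1]])

# braid relation s_i*s_(i-1)*s_i = s_(i-1)*s_i*s_(i-1), indices mod 3
assert np.array_equal(s1 @ s2 @ s1, s2 @ s1 @ s2)
assert np.array_equal(s0 @ s2 @ s0, s2 @ s0 @ s2)

# the first example's product applied to [4, -1, -2]
x = np.array([4, -1, -2])
print(s1 @ s2 @ s0 @ s1 @ s2 @ s1 @ x)  # [0 1 0]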
I picture pushing the negative value(s) out in two directions until they dampen. Since addition is commutative, it doesn't matter what order you process the elements.
Here is an observation for when N is divisible by 3... Probably not useful, but I feel like writing it down.
Let w (complex) be a primitive cube root of 1; that is, w^3 = 1 and 1 + w + w^2 = 0. For example, w = cos(2pi/3) + i*sin(2pi/3).
Consider the sum x_0 + x_1*w + x_2*w^2 + x_3 + x_4*w + x_5*w^2 + .... That is, multiply each element of the sequence by consecutive powers of w and add them all up.
Something moderately interesting happens to this sum on each step.
Consider three consecutive numbers [a, -b, c] from the sequence, with b positive. Suppose these elements line up with the powers of w such that these three numbers contribute a - b*w + c*w^2 to the sum.
Now perform the step on the middle element.
After the step, these numbers contribute (a-b) + b*w + (c-b)*w^2 to the sum.
But since 1 + w + w^2 = 0, b + b*w + b*w^2 = 0 too. So we can add this to the previous expression to get a + 2*b*w + c*w^2, which is very similar to what we had before the step.
In other words, the step merely added 3*b*w to the sum.
If the three consecutive numbers had lined up with powers of w to contribute (say) a*w - b*w^2 + c, it turns out that the step will add 3*b*w^2.
In other words, no matter how the powers of w line up with the three numbers, the step increases the sum by 3*b, 3*b*w, or 3*b*w^2.
Unfortunately, since w^2 = -(w+1), this does not actually yield a steadily increasing function. So, as I said, probably not useful. But it still seems like a reasonable strategy is to seek a "signature" for each position that changes monotonically with each step...
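To see the observation in action, here is a tiny Python demo that tracks the w-weighted sum through the first example's steps (successive sums should differ by 3*b, 3*b*w, or 3*b*w^2):

import cmath

w = cmath.exp(2j * cmath.pi / 3)    # a primitive cube root of 1
states = [[4, -1, -2], [3, 1, -3], [0, -2, 3], [-2, 2, 1],
          [2, 0, -1], [1, -1, 1], [0, 1, 0]]
prev = None
for x in states:
    s = sum(v * w**k for k, v in enumerate(x))
    if prev is not None:
        print(s - prev)             # always 3*b times a power of w
    prev = s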