Cost of a call to the calculatematrix method

In terms of API usage, is there a cost difference between calling calculatematrix n times with a 1 x n request and calling it once with an n x n request?
For example, if I need a matrix of 3 origins and 3 destinations, is there a difference in cost between calling calculatematrix 3 times with 1 origin and 3 destinations each, and calling it once with 3 origins and 3 destinations?
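To make the two patterns concrete, here is a hypothetical sketch (Python; client.calculate_matrix is a stand-in for whatever wraps the real endpoint, not the actual API). Both variants compute the same 3 x 3 = 9 matrix elements, so a per-element billing model should price them the same, while a per-request model would favor the single call; the provider's documentation is the authority here.

def matrix_one_call(client, origins, destinations):
    # 1 request covering the full matrix:
    # len(origins) * len(destinations) elements
    return client.calculate_matrix(origins, destinations)

def matrix_per_origin(client, origins, destinations):
    # len(origins) requests, each a 1 x len(destinations) slice;
    # same total number of elements, but more request overhead
    return [client.calculate_matrix([o], destinations) for o in origins]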

Related

PRAM CREW algorithm for counting odd numbers

I am trying to solve the following task:
Develop a CREW PRAM algorithm for counting the odd numbers in a sequence of integers x_1, x_2, ..., x_n.
n is the number of processors, the complexity should be O(log n), and log_2 n is a natural number.
My solution so far:
Input: A := {x_1, x_2, ..., x_n}    Output: oddCount
begin
1. global_read(A(n),a)
2. if(a mod 2 != 0) then
oddCount += 1
The problem is that under CREW I am not allowed to issue multiple concurrent writes: oddCount += 1 reads oddCount and then writes oddCount + 1, so several processors would write the same cell at once.
Do I have to do something like this instead?
Input: A := {x_1, x_2, ..., x_n}    Output: oddCount
begin
1. global_read(A(n),a)
2. if(a mod 2 != 0) then
global_write(1, B(n))
3. if(n = A.length - 1) then
for i = 0 to B.length do
oddCount += B(i)
So first each processor determines whether its number is odd or even, and the last processor calculates the sum? But how would this affect the complexity, and is there a better solution?
Thanks to libik I came to this solution (n starts at 0):
Input: A := {x_1, x_2, ..., x_n}    Output: A(0) = number of odd numbers
begin
1. if(A(n) mod 2 != 0) then
A(n) = 1
else
A(n) = 0
2. for i = 1 to log_2(n) do
if (n*(2^i)+2^(i-1) < A.length)
A(n*(2^i)) += A(n*(2^i) + (2^(i-1)))
end
i = 1 --> A(n * 2): 0 2 4 6 8 10 ... A(n*2 + 2^0): 1 3 5 7 ...
i = 2 --> A(n * 4): 0 4 8 12 16 ... A(n*4 + 2^1): 2 6 10 14 18 ...
i = 3 --> A(n * 8): 0 8 16 24 32 ... A(n*8 + 2^2): 4 12 20 28 36 ...
So the first if is one step and the for loop contributes log_2(n) more, so overall there are O(log n) steps. The solution ends up in A(0).
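To check the indexing, here is the pseudocode above as a small sequential simulation (Python; the inner loop body is what each processor would do in parallel, so every round is one O(1) PRAM step and no two processors ever write the same cell):

import math

def count_odds_pram(A):
    A = [x % 2 for x in A]                 # step 1: every processor writes 0 or 1
    rounds = math.ceil(math.log2(len(A))) if len(A) > 1 else 0
    for i in range(1, rounds + 1):         # step 2: log_2(n) combining rounds
        stride = 2 ** i
        for n in range(0, len(A), stride): # disjoint cells -> CREW-safe writes
            partner = n + stride // 2
            if partner < len(A):
                A[n] += A[partner]
    return A[0]                            # the count accumulates in A(0)

print(count_odds_pram([3, 8, 5, 7, 2, 10, 1, 4]))  # -> 4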
Your solution is O(n), since there is a for loop that has to go through all the numbers (which means you don't utilize the multiple processors at all).
CREW means you cannot write into the same cell concurrently (in your example, cell = processor memory), but you can write into multiple different cells at once.
So how to do it as fast as possible?
At initialization, every processor starts with 1 or 0 (whether it holds an odd number or not).
In the first round, just sum the neighbours: x_2 with x_1, then x_4 with x_3, etc.
This is done in O(1), as every second processor p_x looks at processor p_{x+1} in parallel and adds its 0 or 1 (odd number there or not).
Then processors p_1, p_3, p_5, p_7, ... hold the partial sums. Do the same again, but now p_1 looks at p_3, p_5 looks at p_7, and in general p_x looks at p_{x+2}.
After that, the partial sums sit only in processors p_1, p_5, p_9, etc.
Repeat the process. Each step halves the number of active processors, so you need log_2(n) steps.
If this were a real-life example, the cost of synchronization would often be counted as well. After each step, all processors have to synchronize so they know they can start the next step (you run the described code on each processor, but how do you know you may already add the number from processor p_x? Only after p_x has finished its work).
You need either some kind of "clock" or explicit synchronization.
In that case the final complexity would be log(n) * k, where k is the cost of one synchronization.
That cost depends on the machine, or on the definition. One way to notify all processors that you have finished is essentially the same tree scheme described here for counting the odd numbers; then k = log(n), which results in log^2(n) overall.

Bipartite Matching to form an Array

I am given the numbers from 1 to N, and M relationships of the form (a, b), meaning numbers a and b may be adjacent.
We have to form a valid array. An array is valid if, for every two consecutive indices, the pair (A[i], A[i+1]) is one of the M relationships.
We have to construct a valid array of size N; it is guaranteed that one exists.
My solution: build a bipartite graph from the relationships and take a matching (Match[a] = b meaning b can follow a), but there is a loophole in this. For example:
let N=6
M=6
1 2
2 3
1 3
4 5
5 6
3 4
So Bipartite Matching gives this:
Match[1]=2
Match[2]=3
Match[3]=1 // here the matching forms a loop
Match[4]=5
Match[5]=6
So how do I print a valid array of size N? Since N can be very large, many loops can be formed. Is there any other solution?
Another Example:
let N=6
M=6
1 3
3 5
2 5
5 1
4 2
6 4
The matching will form a loop 1 -> 3 -> 5 -> 1, yet a valid array exists:
1 3 5 2 4 6
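To pin down the definition, here is a small validity checker (a sketch in Python, not a construction algorithm); it confirms that 1 3 5 2 4 6 is valid for the second example.

def is_valid(A, relations):
    # every adjacent pair of A must be one of the M relationships,
    # in either order
    allowed = {frozenset(r) for r in relations}
    return all(frozenset(p) in allowed for p in zip(A, A[1:]))

relations = [(1, 3), (3, 5), (2, 5), (5, 1), (4, 2), (6, 4)]
print(is_valid([1, 3, 5, 2, 4, 6], relations))  # -> True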

Algorithm for reading matrices

I need an algorithm that processes an n x m matrix and scales.
E.g. I have a time series of 3 seconds containing the values 2, 1, 4.
I need to decompose it into a matrix with one column per element of the series (3) and one row per possible value (4, the maximum value), filling each column with ones from the top. The resulting matrix would look like this:
1 1 1
1 0 1
0 0 1
0 0 1
Is this a bad solution, or is it really just a data-representation problem?
The question is:
do I need to distribute the information of each element across several rows of the matrix without losing the values?
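A minimal sketch of the decomposition as described (Python; unary_matrix is my name for it, not an established routine): each element becomes a column of ones filled from the top, and summing column j recovers the original value, so no information is lost.

def unary_matrix(series, max_value=None):
    # one column per element of the series, one row per possible value;
    # column j gets series[j] ones from the top, zeros below
    m = max(series) if max_value is None else max_value
    return [[1 if v > row else 0 for v in series] for row in range(m)]

for row in unary_matrix([2, 1, 4]):
    print(*row)
# 1 1 1
# 1 0 1
# 0 0 1
# 0 0 1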

Converting a number into a special base system

I want to convert a number in base 10 into a special base form like this:
A*2^2 + B*3^1 + C*2^0
A can take on values of [0,1]
B can take on values of [0,1,2]
C can take on values of [0,1]
For example, the number 8 would be
1*2^2 + 1*3 + 1.
It is guaranteed that the given number can be converted to this specialized base system.
I know how to convert from this base system back to base-10, but I do not know how to convert from base-10 to this specialized base system.
In short: treat every base number (2^2, 3^1, 2^0 in your example) as the weight of an item, and the whole number as the capacity of a bag. The problem asks for a combination of these items that fills the bag exactly.
In general this problem is NP-complete: it is essentially the subset sum problem, which in turn is a special case of the knapsack problem.
Despite this, it can be solved by a pseudo-polynomial-time algorithm using dynamic programming in O(nW) time, where n is the number of bases and W is the number to decompose. The details can be found on this Wikipedia page: http://en.wikipedia.org/wiki/Knapsack_problem#Dynamic_programming and this SO page: What's it called when I want to choose items to fill container as full as possible - and what algorithm should I use?
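Here is a sketch of that dynamic program, generalized to digits with individual ranges (Python; decompose and its argument layout are my own, not a library API):

def decompose(W, weights, digit_max):
    # dynamic programming over reachable totals: for each place value,
    # try every allowed digit, keeping one digit list per sum <= W
    reachable = {0: []}                      # sum -> digits chosen so far
    for weight, dmax in zip(weights, digit_max):
        nxt = {}
        for s, digits in reachable.items():
            for d in range(dmax + 1):
                t = s + d * weight
                if t <= W and t not in nxt:
                    nxt[t] = digits + [d]
        reachable = nxt
    return reachable.get(W)                  # None if W is not representable

# the "special base" from the question: weights 2^2, 3^1, 2^0
print(decompose(8, [4, 3, 1], [1, 2, 1]))    # -> [1, 1, 1]
print(decompose(2, [4, 3, 1], [1, 2, 1]))    # -> None (2 is not representable)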
Simplifying your "special base":
X = A * 4 + B * 3 + C
A ∈ {0,1}
B ∈ {0,1,2}
C ∈ {0,1}
Obviously the largest number that can be represented is 4 + 2 * 3 + 1 = 11
To figure out how to get the values of A, B, C you can do one of two things:
There are only 12 possible inputs: create a lookup table. Ugly, but quick.
Use some algorithm. A bit trickier.
Let's look at (1) first:
A B C X
0 0 0 0
0 0 1 1
0 1 0 3
0 1 1 4
0 2 0 6
0 2 1 7
1 0 0 4
1 0 1 5
1 1 0 7
1 1 1 8
1 2 0 10
1 2 1 11
Notice that 2 and 9 cannot be expressed in this system, while 4 and 7 each occur twice. Having multiple possible representations for a given input is a hint that there is no really robust algorithm (other than a lookup table) to achieve what you want. So your tables might look like this:
int A[] = {0,0,-1,0,0,1,0,1,1,-1,1,1};
int B[] = {0,0,-1,1,1,0,2,1,1,-1,2,2};
int C[] = {0,1,-1,0,1,1,0,0,1,-1,0,1};
Then look up A, B and C. If A[X] < 0, there is no solution.
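As runnable code, the lookup-table approach might look like this (a Python mirror of the tables above, with a self-check):

A = [0, 0, -1, 0, 0, 1, 0, 1, 1, -1, 1, 1]
B = [0, 0, -1, 1, 1, 0, 2, 1, 1, -1, 2, 2]
C = [0, 1, -1, 0, 1, 1, 0, 0, 1, -1, 0, 1]

def decode(x):
    if not (0 <= x <= 11) or A[x] < 0:
        return None                          # 2 and 9 have no representation
    return A[x], B[x], C[x]

# self-check: every decoded triple must satisfy X = A*4 + B*3 + C
for x in range(12):
    d = decode(x)
    assert d is None or d[0] * 4 + d[1] * 3 + d[2] == x
print(decode(8))  # -> (1, 1, 1)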

Minimize maximum absolute difference in pairs of numbers

The problem statement:
Given n variables and k pairs. Each variable is assigned a value from 1 to n, all values distinct. Each pair p contains 2 variables; let abs(p) be the absolute difference between the values of the 2 variables in p. Define the upper bound of the differences as U = max(abs(p) over every p).
Find an assignment that minimizes U.
Limits:
n <= 100
k <= 1000
Each variable appears at least 2 times in the list of pairs.
A problem instance:
Input
n=9, k=12
1 2 (meaning pair x1 x2)
1 3
1 4
1 5
2 3
2 6
3 5
3 7
3 8
3 9
6 9
8 9
Output:
1 2 5 4 3 6 7 8 9
(meaning x1=1,x2=2,x3=5,...)
Explanation: the assignment x1=1, x2=2, x3=3, ... results in U=6 (the pair 3 9 has the greatest absolute difference). The output assignment achieves U=4, the minimum value (changed pairs: 3 7 => 5 7, 3 8 => 5 8, etc., while 3 5 is unchanged; in this assignment, abs(p) <= 4 for every pair).
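A few lines to check the example (Python; x[i-1] holds the value assigned to variable i):

pairs = [(1, 2), (1, 3), (1, 4), (1, 5), (2, 3), (2, 6),
         (3, 5), (3, 7), (3, 8), (3, 9), (6, 9), (8, 9)]

def U(x):
    # largest absolute difference over the given pairs
    return max(abs(x[a - 1] - x[b - 1]) for a, b in pairs)

print(U([1, 2, 3, 4, 5, 6, 7, 8, 9]))  # default assignment -> 6
print(U([1, 2, 5, 4, 3, 6, 7, 8, 9]))  # output assignment  -> 4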
There is an important point: to achieve the best assignment, the variables in the pairs with the greatest absolute difference must be changed.
Based on this, I thought of a greedy algorithm:
1) Assign every x its default value (x(i) = i).
2) Locate the pairs with the largest abs and the x(i)'s contained in them.
3) For every i, j: calculate U, swap the values of x(i) and x(j), and calculate U'. If U' < U, keep the swap and repeat step 3. If U' >= U for every i, j, stop and output the assignment.
However, this method has a major pitfall. If the optimal assignment needs a cyclic move of values, say x(a) <= x(b), x(b) <= x(c), x(c) <= x(a), we have to realize it in 2 swaps, e.g. x(a) <=> x(b) and then x(b) <=> x(c); after the first swap, the intermediate assignment can make some abs(p) larger than U, so that swap is rejected and the search gets stuck.
Is there any efficient algorithm to solve this problem?
This looks like the graph bandwidth problem (http://en.wikipedia.org/wiki/Graph_bandwidth), which is NP-complete even for special cases. When people need to do this in practice, to turn a sparse matrix into a banded diagonal matrix, they typically run the Cuthill-McKee algorithm (http://en.wikipedia.org/wiki/Cuthill-McKee_algorithm).
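For illustration, here is a minimal (reverse) Cuthill-McKee sketch in Python; the function name is mine and this is a heuristic, not an exact bandwidth minimizer. On the example instance above it happens to reach the optimal U = 4.

from collections import deque

def cuthill_mckee_assignment(n, pairs):
    # BFS from a low-degree vertex, visiting neighbours in order of degree,
    # then number the variables by their position in the reversed visit order
    adj = {v: set() for v in range(1, n + 1)}
    for a, b in pairs:
        adj[a].add(b); adj[b].add(a)
    order, seen = [], set()
    for start in sorted(adj, key=lambda v: len(adj[v])):
        if start in seen:
            continue
        seen.add(start)
        queue = deque([start])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in sorted(adj[v] - seen, key=lambda u: len(adj[u])):
                seen.add(w)
                queue.append(w)
    order.reverse()                          # "reverse" Cuthill-McKee
    value = {v: i + 1 for i, v in enumerate(order)}
    return [value[v] for v in range(1, n + 1)]

pairs = [(1, 2), (1, 3), (1, 4), (1, 5), (2, 3), (2, 6),
         (3, 5), (3, 7), (3, 8), (3, 9), (6, 9), (8, 9)]
x = cuthill_mckee_assignment(9, pairs)
print(x, max(abs(x[a - 1] - x[b - 1]) for a, b in pairs))
# -> an assignment with U = 4 on this instance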
