I have been trying for a day now to find an algorithm that swaps two indices of a symmetric matrix so that the result is still a symmetric matrix.
Let's say I have the following matrix:
0 1 2 3
1 0 4 5
2 4 0 6
3 5 6 0
Let's say I want to swap row 1 and row 3 (where row 0 is the first row). Just swapping the rows results in:
0 1 2 3
3 5 6 0
2 4 0 6
1 0 4 5
But this matrix is not symmetric anymore. What I really want is the following matrix as a result:
0 3 2 1
3 0 6 5
2 6 0 4
1 5 4 0
But I am not able to find a suitable algorithm, and that really drives me crazy, because it looks like an easy task.
Does anybody know?
UPDATE
Phylogenesis gave a really simple answer and I feel silly that I could not think of it myself. But here is a follow-up task:
Let's say I store this matrix as a two-dimensional array. To save memory, I do not store the redundant values, and I also leave out the diagonal, which always contains 0. My array looks like this:
[ [1, 2, 3], [4, 5], [6] ]
My goal is to transform that array to:
[ [3, 2, 1], [6, 5], [4] ]
How can I swap the rows and then the columns in an efficient way using the given array?
It is simple!
As you are currently doing, swap row 1 with row 3. Then swap column 1 with column 3.
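In code, that can look like the following minimal Python/NumPy sketch (the function name swap_symmetric is mine, not from the original post):

import numpy as np

def swap_symmetric(m, i, j):
    # Swap rows i and j, then columns i and j; symmetry is preserved.
    m[[i, j], :] = m[[j, i], :]   # fancy indexing copies the right-hand side, so this is safe
    m[:, [i, j]] = m[:, [j, i]]
    return m

m = np.array([[0, 1, 2, 3],
              [1, 0, 4, 5],
              [2, 4, 0, 6],
              [3, 5, 6, 0]])
print(swap_symmetric(m, 1, 3))   # prints the symmetric result shown in the question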
I read that Shell sort is an improved version of insertion sort, but some sources online describe it in terms of shifting and others in terms of swapping. Which one is correct?
For example: [5 3 2 10 0]
If we take the gap to be 2, then as a first step we compare 5 and 2. The result will be:
[2 3 5 10 0] by swapping and [2 5 3 10 0] by shifting. Which one is the Shell sort algorithm?
The main principle in Shell sort is that, with the chosen gap, we look at the data as a collection of interleaved, shorter arrays. Each of those shorter arrays has its first entry at an index less than the gap. These shorter arrays are sorted independently. Once that is done, the gap is reduced.
In the example, there are two interleaved arrays, which we can picture like this:
interleaved array: 5 2 0
interleaved array: 3 10
The first algorithm would fit under Shell sort. But the second one does not sort the interleaved arrays independently, as such rotations (shifts) move values from one interleaved array to another:
interleaved array: 5 🠔 2 0
⬊ ⬈
interleaved array: 3 10
...resulting in:
interleaved array: 2 3 0
interleaved array: 5 10
Unless other precautions are taken, the second algorithm does not ensure that a rotation improves the situation. For instance, if the input is [3 1 2 4] and the gap is 2, then the comparison of 3 and 2 leads to a rotation, and we get [2 3 1 4]. But now we still have two values in the first interleaved array that are not in order (2 is greater than 1).
Shifting?
Shifting does not happen the way you depicted it (crossing multiple interleaved arrays), but within one interleaved array it is commonly done, just like in insertion sort. Applying that to your example:
interleaved array: 5 2 0
interleaved array: 3 10
The value 2 is picked up and preceding values are shifted forward within the same interleaved array until the right slot is found for the picked up value. In this case only one value is shifted (5), which makes it a swap:
interleaved array: 2 5 0
interleaved array: 3 10
Now 0 is picked up, and two values are shifted (2 and 5):
interleaved array: 0 2 5
interleaved array: 3 10
Now the first interleaved array is sorted. The second interleaved array happens to be sorted already. Then the gap is reduced to 1:
array: 0 3 2 10 5
Here 2 is picked up and one value (3) is shifted:
array: 0 2 3 10 5
Finally 5 is picked up and one value (10) is shifted:
array: 0 2 3 5 10
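For reference, here is a minimal Python sketch of Shell sort done with shifting inside each interleaved array; the gap sequence n//2, n//4, ..., 1 used here is just one common choice, not the only one:

def shell_sort(a):
    n = len(a)
    gap = n // 2
    while gap > 0:
        # Gapped insertion sort: entries with the same index modulo gap form one
        # interleaved array, and each of them is sorted by shifting within itself.
        for i in range(gap, n):
            picked = a[i]                      # value that is picked up
            j = i
            while j >= gap and a[j - gap] > picked:
                a[j] = a[j - gap]              # shift a value forward by one gap
                j -= gap
            a[j] = picked                      # drop the picked-up value into its slot
        gap //= 2
    return a

print(shell_sort([5, 3, 2, 10, 0]))   # [0, 2, 3, 5, 10]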
I'm solving a reverse 0/1 knapsack problem, i.e. I'm trying to recreate the list of weights and values of all the items using only the DP-table.
I have this table:
        [0] [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12]
[0]      0   0   0   0   0   0   0   0   0   0    0    0    0
[1]      0   4   4   4   4   4   4   4   4   4    4    4    4
[2]      0   4   4   4   6  10  10  10  10  10   10   10   10
I don't understand how row [2] is possible.
[0] - it is clear that if we do not put anything in the knapsack, the total value is 0.
[1] - in row [1] I see that [1][1] = 4, and I hope I am right to conclude that the first item has weight = 1 and value = 4. Since this row only considers one item, that is the only value we can hope for in this row.
[2] - when we reach [2][4], we have 6, and 6 > [2-1][4], so I assume that we use 2 items here: the old one with weight = 1 and value = 4, and a new one with weight = 4 - 1 = 3 and value = 6 - 4 = 2.
Question: How is it possible to have [2][5] = 10? We can't introduce more than one new item per row, as I understand this chart. If we have two items in use here, shouldn't we have 6 for all the elements in row [2] from [2][4] to the end of the row?
This seems possible if you have two items, one with weight 1 and value 4 and one with weight 4, value 6.
How? When you're at index (2, 4) you have a weight capacity of 4 for the first time in the row that considers item 2 (weight 4, value 6). This lets you take the item with value 6 instead of the weight 1, value 4 item you previously took at index (2, 3), effectively building from the subproblem at index (1, 0).
Now, when you're at index (2, 5) with a weight capacity of 5, the total value of 10 is possible because you can take both items. That's the best you can do for the rest of the row.
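You can check this by rebuilding the table. Here is a small Python sketch (mine, not part of the original post) for the two items (weight 1, value 4) and (weight 4, value 6):

def knapsack_table(items, capacity):
    # dp[i][w] = best value using the first i items with weight limit w
    dp = [[0] * (capacity + 1) for _ in range(len(items) + 1)]
    for i, (weight, value) in enumerate(items, start=1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]                      # skip item i
            if w >= weight:                              # or take it, if it fits
                dp[i][w] = max(dp[i][w], dp[i - 1][w - weight] + value)
    return dp

for row in knapsack_table([(1, 4), (4, 6)], 12):
    print(row)
# [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
# [0, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4]
# [0, 4, 4, 4, 6, 10, 10, 10, 10, 10, 10, 10, 10]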
See also How to find which elements are in the bag, using Knapsack Algorithm [and not only the bag's value]?
I'm working on a dice game for school and I have trouble figuring out how to calculate the result automatically. (We don't have to do it automatically, so I could just let the player choose which dice to use and then check that the choices are valid.) But now that I have started to think about it, I can't stop...
The problem is as follows:
I have six dice; they are normal dice with values 1-6.
In this example I have already rolled the dice and they have the following values:
[2, 2, 2, 1, 1, 1]
But I don't know how to calculate the combinations so that as many dice combinations as possible whose values add up to 3 (in this example) are used.
The values should be added together (for example, a die with value 1 and another die with value 2 together make 3). There are different rounds in the game where the aim is to reach different target values, which can be a combination (sum) of die values. For example,
dice values: [2, 2, 2, 2, 2, 2]
could give the user a total of 12 points if 4 is the goal for the current round:
2 + 2 = 4
2 + 2 = 4
2 + 2 = 4
If the goal of the round were instead 6, then it would be
2 + 2 + 2 = 6
2 + 2 + 2 = 6
instead, which would give the player 12 points (6 + 6).
[1, 3, 6, 6, 6, 6]
with the goal of 3 would only use the die with value 3 and discard the rest, since there is no way to add the others up to get three.
Going back to the first example, [2, 2, 2, 1, 1, 1] with a goal of 3,
2 + 1 = 3
2 + 1 = 3
2 + 1 = 3
would give the user 9 points.
But if it were calculated the wrong way and the ones were used up together (1 + 1 + 1) instead of each 1 being paired with a 2, it would only give the player 3 points and the twos couldn't be used.
Another example is:
[1, 2, 3, 4, 5, 6]
where all combinations that sum to 6 give the user points:
[6], [5, 1], [4, 2]
The user gets 18 points (3 * 6).
[1, 2, 3], [6]
The user gets 12 points (2 * 6). (Here the user gets six points less because 1 + 2 + 3 was added up instead of grouping the dice as in the line above.)
A die can have a value between 1 and 6.
I haven't really done much more than think about it, and I'm pretty sure that I could write something right now, but it would be a solution that scales really badly if, for example, I wanted to use 8 dice instead, and every time I start programming it I start to think that there has to be a better/easier way of doing it... Does anyone have a suggestion on where to start? I tried searching for an answer and I'm sure it's out there, but I have trouble formulating a query that gives me relevant results...
With problems that look confusing like this, it is a really good idea to start with some worked examples. We have 6 dice, each with range 1 to 6. The possible combinations we could make are therefore:
target = 2
1-die combination: 2
2-dice combination: 1+1
target = 3
1-die combination: 3
2-dice combination: 2+1
3-dice combination: 1+1+1
target = 4
1-die combination: 4
2-dice combinations: 3+1
                     2+2
3-dice combination: 2+1+1
4-dice combination: 1+1+1+1
target = 5
1-die combination: 5
2-dice combinations: 4+1
                     3+2
3-dice combinations: 3+1+1
                     2+2+1
4-dice combination: 2+1+1+1
5-dice combination: 1+1+1+1+1
See the pattern? Hint: we go backwards from the target down to 1 for the first number we can add, and then, given this first number and the size of the combination, there is a limit to how big the subsequent numbers can be!
There is a finite list of possible combinations. You can start by looking for 1-die combinations that score, and remove those dice from the dice available. Then move on to look for 2-dice combinations, and so on (see the sketch below).
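Here is a minimal Python sketch of that idea (names and structure are mine, not a definitive implementation). It tries every group of dice that sums to the target, starting with the smallest groups, and recursively scores the remaining dice, so it returns the best total over all groupings. Unlike a single greedy pass, it considers every grouping, which guarantees the best score but is exponential, so it is only practical for a handful of dice (6 or 8 is fine):

from itertools import combinations

def best_score(dice, target):
    # Best total score: every disjoint group of dice summing to target is worth target points.
    best = 0
    for size in range(1, len(dice) + 1):              # smallest groups first
        for idxs in combinations(range(len(dice)), size):
            if sum(dice[i] for i in idxs) == target:
                rest = [d for i, d in enumerate(dice) if i not in idxs]
                best = max(best, target + best_score(rest, target))
    return best

print(best_score([2, 2, 2, 1, 1, 1], 3))   # 9
print(best_score([1, 2, 3, 4, 5, 6], 6))   # 18
print(best_score([2, 2, 2, 2, 2, 2], 4))   # 12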
If you want to read more about this sub-field of mathematics, the term you need to look for is "Combinatorics". Have fun!
I am looking for a general function to tile or repeat matrices along an arbitrary number of dimensions an arbitrary number of times. Python and Matlab have these features in NumPy's tile and Matlab's repmat functions. Julia's repmat function only seems to support up to 2-dimensional arrays.
The function should look like repmatnd(a, (n1, n2, ..., nk)), where a is an array of arbitrary dimension and the second argument is a tuple specifying the number of times the array is repeated along each of the k dimensions.
Any idea how to tile a Julia array along more than 2 dimensions? In Python I would use np.tile and in MATLAB repmat, but the repmat function in Julia only supports 2 dimensions.
For instance,
x = [1 2 3]
repmatnd(x, (3, 1, 3))
Would result in:
1 2 3
1 2 3
1 2 3
1 2 3
1 2 3
1 2 3
1 2 3
1 2 3
1 2 3
And for
x = [1 2 3; 1 2 3; 1 2 3]
repmatnd(x, (1, 1, 3))
would result in the same thing as before. I imagine the Julia developers will implement something like this in the standard library, but until then, it would be nice to have a fix.
Use repeat:
julia> X = [1 2 3]
1x3 Array{Int64,2}:
1 2 3
julia> repeat(X, outer = [3, 1, 3])
3x3x3 Array{Int64,3}:
[:, :, 1] =
1 2 3
1 2 3
1 2 3
[:, :, 2] =
1 2 3
1 2 3
1 2 3
[:, :, 3] =
1 2 3
1 2 3
1 2 3
I've been searching for an algorithm to find all possible matrices of dimension 'n' that can be obtained from two arrays: one with the sums of the rows and another with the sums of the columns of a matrix. For example, if I have the following matrix of dimension 7:
matriz= [ 1 0 0 1 1 1 0
1 0 1 0 1 0 0
0 0 1 0 1 0 0
1 0 0 1 1 0 1
0 1 1 0 1 0 1
1 1 1 0 0 0 1
0 0 1 0 1 0 1 ]
The sums of the columns are:
col= [4 2 5 2 6 1 4]
The sums of the rows are:
row = [4 3 2 4 4 4 3]
Now, I want to obtain all possible matrices of "ones and zeros" whose column and row sums fulfil the conditions given by "col" and "row" respectively.
I would appreciate ideas that can help solve this problem.
One obvious way is to brute-force a solution: for the first row, generate all the possibilities that have the right sum, then for each of these, generate all the possibilities for the second row, and so on. Once you have generated all the rows, you check whether the column sums are right. But this will take a lot of time. My math might be rusty at this time of the day, but I believe the number of distinct possibilities for a row of length n of which k bits are 1 is given by the binomial coefficient, or nchoosek(n,k) in MATLAB. To determine the total number of possibilities, you have to multiply these numbers over all rows:
>> n = 7;
>> row= [4 3 2 4 4 4 3];
>> prod(arrayfun(@(k) nchoosek(n, k), row))
ans =
3.8604e+10
This is a lot of possibilities to check! Doing the same for the columns gives
>> col= [4 2 5 2 6 1 4];
>> prod(arrayfun(@(k) nchoosek(n, k), col))
ans =
555891525
Still a large number, but 'only' a factor 70 smaller.
It might be possible to improve this brute-force method a little by checking whether the later rows are already constrained by the previous rows. If, in your example, for a particular combination of the first two rows both rows have a 1 in the second column, then the rest of this column must be all 0, since the sum must be 2. This reduces the number of possibilities for the remaining rows a bit. Implementing such checks might complicate things a bit, but they might make the difference between a calculation that takes 2 days and one that takes just 1 hour.
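As an illustration of that pruning idea, here is a small Python sketch (my own, intended as a starting point rather than a definitive implementation). It builds the matrix row by row and abandons a branch as soon as a partial column sum exceeds its target or can no longer reach it with the rows that remain:

from itertools import combinations

def all_matrices(row_sums, col_sums):
    n = len(col_sums)
    solutions = []

    def fill(r, current_cols, matrix):
        if r == len(row_sums):
            if current_cols == col_sums:
                solutions.append([line[:] for line in matrix])
            return
        rows_after_this = len(row_sums) - r - 1
        for ones in combinations(range(n), row_sums[r]):    # every row with the right sum
            row = [1 if j in ones else 0 for j in range(n)]
            new_cols = [current_cols[j] + row[j] for j in range(n)]
            # prune: a column is already over its target, or cannot reach it anymore
            if all(new_cols[j] <= col_sums[j] and
                   col_sums[j] - new_cols[j] <= rows_after_this for j in range(n)):
                matrix.append(row)
                fill(r + 1, new_cols, matrix)
                matrix.pop()

    fill(0, [0] * n, [])
    return solutions

# tiny example: 3x3 matrices with row sums [1, 2, 1] and column sums [2, 1, 1]
for m in all_matrices([1, 2, 1], [2, 1, 1]):
    print(m)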
An optimized version of this might alternatively generate rows and columns, starting with those for which the number of possibilities is the lowest. I don't know if there is a more elegant solution than this brute-force method; I would be interested to hear one.