Find the first missing integer in a sequence of integers:
[4,5,1,2,6,7] missing is 3
Then when there are repeated integers:
[1,2,2,2,5,8,9] still missing 3
When you also have negative numbers:
[-2,0,1,2] missing -1
[1,2,3,4,5] missing 6 or 0
Can anyone help me find a good algorithm to cover all these cases? I have an algorithm which covers the first 2 cases, but I'm not sure how to cover all the cases in an effective manner.
What I consider the classic O(n) solution for this problem relies on the fact that the array can contain at most N unique numbers, where N is the input's length. Therefore the range for our record is restricted to N.
Since you seem to allow the expected sequence to start anywhere, including at negative numbers, we can start by iterating once over the array and recording the lowest number seen, L. Now use L as an offset so that 0 + L equals the first number we expect to be present.
Initialise an array, record, of length (N + 1) and set each entry to false. Iterate over the input and for each entry, A[i], if (A[i] - L) is not greater than N, set record[A[i] - L] to true. For example:
[-2, 0, 1, 2] ->
N = 4
L = -2
-2 -> -2 - (-2) = 0
-> record[0] = true
0 -> 0 - (-2) = 2
-> record[2] = true
1 -> 1 - (-2) = 3
-> record[3] = true
2 -> 2 - (-2) = 4
-> record[4] = true
record -> [true, false, true, true, true]
Now iterate over the record. Output the first entry at index i that is set to false as i + L. In our example above, this would be:
record[1] is false
output: 1 + (-2) -> -1
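A minimal Python sketch of this approach (the function name is mine):

def first_missing(a):
    n = len(a)
    lo = min(a)                       # L, the offset
    record = [False] * (n + 1)
    for x in a:
        if x - lo <= n:               # ignore values beyond the record's range
            record[x - lo] = True
    # the first unseen slot gives the answer
    for i, seen in enumerate(record):
        if not seen:
            return i + lo

# first_missing([-2, 0, 1, 2]) -> -1; first_missing([1, 2, 3, 4, 5]) -> 6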
#include <stdio.h>
#include <limits.h>
int main(void)
{
    int n;
    scanf("%d", &n);
    int a[n], i = 0;
    //Reading elements
    for (i = 0; i < n; i++) {
        scanf("%d", &a[i]);
    }
    int min = INT_MAX, max = INT_MIN;
    //Finding the minimum and maximum of the given elements
    for (i = 0; i < n; i++) {
        if (a[i] > max)
            max = a[i];
        if (a[i] < min)
            min = a[i];
    }
    //Values map to indexes 0..(max-min), so the count array needs max-min+1 slots
    int len = max - min + 1, diff = 0 - min;
    int miss = max + 1; //if no gap is found, the next number after max is missing
    int b[len];
    //Creating a new array and assigning 0
    for (i = 0; i < len; i++)
        b[i] = 0;
    //The corresponding index value is incremented based on the given numbers
    for (i = 0; i < n; i++) {
        b[a[i] + diff]++;
    }
    //Finding the missed value
    for (i = 0; i < len; i++) {
        if (b[i] == 0) {
            miss = i - diff;
            break;
        }
    }
    printf("%d\n", miss);
    return 0;
}
Code Explanation:
1. Find the minimum and maximum of the given numbers.
2. Create a count array of size (maximum - minimum + 1), initialized to 0, which maintains the count of the given numbers.
3. Now, by iterating, increment the index corresponding to each given element by 1.
4. Finally, iterate through the count array and find the first missing number; if there is no gap, the answer is maximum + 1.
This might help you in solving your problem. Correct me if I'm wrong.
I think it is easy to solve this sort of problem using a data structure like TreeMap in Java, e.g.:
treeMap.put(array[i], treeMap.get(array[i]) == null ? 1 : treeMap.get(array[i]) + 1);
So you are putting a key and a value into the TreeMap; the key represents the number itself, e.g. 1, 2, 3..., and the value represents the number of occurrences.
Thus, by taking advantage of this data structure (it sorts the elements for us), you can loop through it and check which key is missing from the sequence, e.g.:
for key in treeMap
    if (key > currentIndex) // this is the missing number
    if (loop-completed-without-missing-key) // it's not in the array
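Python has no TreeMap, but sorting the keys of a dict plays the same role; a rough sketch of the idea (names are mine):

from collections import Counter

def find_missing(nums):
    counts = Counter(nums)            # key -> number of occurrences
    keys = sorted(counts)             # a TreeMap would keep these sorted for us
    for expected, key in enumerate(keys, start=keys[0]):
        if key != expected:           # gap: this is the missing number
            return expected
    return keys[-1] + 1               # loop completed: it's not in the array

# find_missing([1, 2, 2, 2, 5, 8, 9]) -> 3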
Add the numbers to a running array and keep them sorted.
You may also have optional minimum and maximum bounds for the array (to handle your third case, "6 is missing even if not in the array").
On examination of a new number:
- try inserting it in the sorting array.
- already present: discard
- below minimum or above maximum: nullify minimum or maximum accordingly
- otherwise add in proper position.
To handle an array: sort it, compare first and last elements to expected minimum / maximum. Nullify minimum if greater than first element, nullify maximum if smaller than last element.
There might be a special case if minimum and maximum are both below the first element or both above the last:
min=5 max=8 array = [ 10, 11, 13 ]
Here 5, 6, 7, 8 and 12 are missing, but what about 9? Should it be considered missing?
When checking for missing numbers include:
- if minimum is not null, all numbers from minimum to first element.
- if maximum is not null, all numbers from last element to maximum.
- if (last - first) + 1 = number of elements, no numbers are missing
(total numbers examined minus array size is duplicate count)
- otherwise walk the array and report all missing numbers: when
checking array[i], if array[i]-array[i-1] != 1 you have a gap.
only "first" missing
You still have to manage the whole array even if you're only interested in one missing number, for if you discarded part of the array and the missing number then arrived, the new missing number might well have been in the discarded part of the array.
However, you might keep track of what the smallest missing number is, and recalculate it at a cost of O(log n) only when/if it arrives; then you'd be able to tell which it is in O(1) time. To quickly zero in on that missing number, note that there is a gap between arr[i] and arr[j] iff arr[j] - arr[i] > j - i.
So you can use the bisection method: start with i = first, j = last; if gap(i,j), then c = ceil((i+j)/2). If gap(i,c), then j = c, else i = c, and repeat until j - i = 1. At that point arr[i] + 1 is your smallest missing number.
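A sketch of that bisection in Python, assuming arr is sorted and duplicate-free:

def smallest_missing(arr):
    # gap(i, j) iff arr[j] - arr[i] > j - i
    i, j = 0, len(arr) - 1
    if arr[j] - arr[i] == j - i:      # no gap inside the array
        return arr[j] + 1
    while j - i > 1:
        c = (i + j + 1) // 2          # ceil((i + j) / 2)
        if arr[c] - arr[i] > c - i:   # gap in the lower half
            j = c
        else:                         # otherwise it must be in the upper half
            i = c
    return arr[i] + 1

# smallest_missing([1, 2, 4, 5]) -> 3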
Suppose you have an array containing indexes of set bits for a really big number (up to 10000 bits, which can't be represented with primitive types). Example: array: [0, 2, 3, 6, 10] corresponds to 10001001101.
In other words: num = pow(A[0]) + pow(A[1]) + ... + pow(A[n-1]), where pow(x) = 2^x.
Suppose now that you have a number K = 3 * num.
How do you find the number of bits set in K?
I believe it is possible to do it in O(n), where n is the number of elements in the array.
My idea is to iterate through the items in the array, simulating an add between the number and itself shifted 1 position to the left. However I couldn't find a clean logic for it.
Working from LSB to MSB, you have runs of set bits separated by runs of cleared bits: gaps.
(Without a carry in,) Each single 1 will turn into 11 - the "more significant 1" in a place where there was a zero before: no carry out.
If you prepend 10 to a longer run of 1s and clear the least-significant-but-one bit, you get the binary representation of three times the original number - with exactly the same number of bits set. And a 1 where there might be the next, err, one as well as 1: a carry out. Any carry in gets absorbed: it just "exchanges" both least significant bits.
With the 11 that represents three times a single 1, a carry in causes a carry out - and clears two bits (try with 101011 * 3).
Gaps longer than one "absorb" carries.
Assume input indices in ascending order, presort, or surprise me with an O(n) algorithm. I'd be tempted to code a state machine using labels.
As I'm practising python:
def popcount3num(indices):
    '''
    given, in (strictly) ascending order, the indices of bits set
    in the binary representation of a natural number num,
    return the number of bits set in 3*num
    '''
    prev = -3               # position of last bit handled
    count = carry = 0       # carry is for bit position handled + 2
    for i in indices:
        # print(i, count, carry)
        if prev == i-1:     # in a run of set bits
            if not carry:
                carry = 1   # "move bit from count to carry"
                count -= 1
            else:
                count += 1  # tally
        else:               # gap
            if not carry:
                # without a carry in, every run produces
                # two bits at least, including carry out
                count += 2
            elif prev < i-2:
                # if the carry was for a lower position, just tally
                count += 3
                carry = 0
            else:
                # with a carry in, a lone one will change nothing
                # a run will add just as many as without carry
                pass
        prev = i
    return count + carry
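For example, popcount3num([0, 1, 3, 5]) returns 2, matching bin(0b101011 * 3).count('1') for the 101011 example above.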
From this test:
I observe:
1. If sizeof(arrN)==1, then sizeof(arrRes)==2.
2. If 'v' exists in both arrN and arr2N, then eliminate it and repeat with 'v+1' until 'v+x' doesn't exist or is the last value. Then increase that 'v+x'.
Example N=3: '1' exists in both, eliminate; '2' only in one, but is last; increment '2'.
3. After applying 2), if a value exists in arrN but not in arr2N, then it will exist in arrRes and the size increases by 1. The same holds for a value in arr2N but not in arrN. Example N=9.
N=11 is a complex example. Detail:
1. '0' in arrN, not in arr2N, so put it in arrRes.
2. '1' in both; skip '1' and test 1+1=2.
3. '2' in both (well, really only in arr2N; we built the fake '2' from the other arr); skip and test '3'.
4. '3' built in arr2N exists in arrN. Skip and test '4'.
5. '4' exists and is the last. Increase it.
Final result [0,5]: '0' from step 1 and '5' from step 5.
When drawing at random from a set of values in succession, where a drawn value is allowed to be drawn again, a given value has (of course) a small chance of being drawn twice (or more) in immediate succession. That causes an issue for the purposes of a given application, and we would like to eliminate this chance. Any algorithmic ideas on how to do so (simple/efficient)?
Ideally we would like to set a threshold, say, as a percentage of the size of the data set:
Say the size of the set of values is N=100 and the threshold is T=10%; then if a given value is drawn in the current draw, it is guaranteed not to show up again in the next N*T=10 draws.
Obviously this restriction introduces bias into the random selection. We don't mind that a proposed algorithm introduces further bias into the randomness of selection; what really matters for this application is that the selection is just random enough to appear so to a human observer.
As an implementation detail, the values are stored as database records, so database table flags/values can be used, or maybe external memory structures. Answers about the abstract case are welcome too.
Edit:
I just hit this other SO question here, which has good overlap with my own. Going through the good points there.
Here's an implementation that does the whole process in O(1) (per element drawn) without any bias:
The idea is to treat the last K elements of the array A (which contains all the values) like a queue. We draw a value from the first N-K values in A, which is the random value, and swap it with the element at position N-Pointer, where Pointer represents the head of the queue; Pointer counts down from K and resets to K after it crosses K elements.
To eliminate any bias in the first K draws, the random value is drawn between 1 and N-Pointer instead of 1 and N-K, so this virtual queue grows at each draw until reaching size K (e.g. after 3 draws the drawable values appear in A between indexes 1 and N-3, and the suspended values appear in indexes N-2 to N).
All operations are O(1) for drawing a single element and there's no bias throughout the entire process.
void DrawNumbers(val[] A, int K)
{
    N = A.size;
    random Rnd = new random;
    int Drawn_Index;
    int Count_To_K = 1;
    int Pointer = K;
    while (stop_drawing_condition)
    {
        if (Count_To_K <= K)
        {
            Drawn_Index = Rnd.NextInteger(1, N - Pointer);
            Count_To_K++;
        }
        else
        {
            Drawn_Index = Rnd.NextInteger(1, N - K);
        }
        Print("drawn value is: " + A[Drawn_Index]);
        Swap(A[Drawn_Index], A[N - Pointer]);
        Pointer--;
        if (Pointer < 1) Pointer = K;
    }
}
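For reference, a runnable Python sketch of the same idea, shifted to 0-based indexing; the warm-up is handled by growing the eligible range, and all names are mine:

import random

def no_repeat_draws(values, k):
    # Generator: yields random draws, never repeating any of the last k.
    # The last k slots of the list act as the suspension queue.
    a = list(values)
    n = len(a)
    t = 0                              # number of draws made so far
    while True:
        s = min(t, k)                  # how many values are currently suspended
        idx = random.randrange(n - s)  # draw uniformly from the eligible slots
        yield a[idx]
        tail = n - 1 - (t % k)         # queue slot that is overwritten next
        # park the drawn value in the queue; the value it displaces
        # (the oldest suspended one, in steady state) re-enters the pool
        a[idx], a[tail] = a[tail], a[idx]
        t += 1

# usage: g = no_repeat_draws(range(100), 10); next(g)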
My previous suggestion, using a list and an actual queue, depends on the remove method of the list, which I believe can be at best O(log N) by using an array to implement a self-balancing binary tree, as the list has to have direct access to indexes.
void DrawNumbers(list N, int K)
{
    queue Suspended_Values = new queue;
    random Rnd = new random;
    int Drawn_Index;
    while (stop_drawing_condition)
    {
        if (Suspended_Values.count == K)
            N.add(Suspended_Values.Dequeue());
        Drawn_Index = Rnd.NextInteger(1, N.size); // random integer between 1 and the number of values in N
        Print("drawn value is: " + N[Drawn_Index]);
        Suspended_Values.Enqueue(N[Drawn_Index]);
        N.Remove(Drawn_Index);
    }
}
I assume you have an array, A, that contains the items you want to draw. At each time period you randomly select an item from A.
You want to prevent any given item, i, from being drawn again within some k iterations.
Let's say that your threshold is 10% of A.
So create a queue, call it drawn, that can hold threshold items. Also create a hash table that contains the drawn items. Call the hash table hash.
Then:
do
{
    i = Get random item from A
    if (i in hash)
    {
        // we have drawn this item recently. Don't draw it.
        continue;
    }
    draw(i);
    if (drawn.count == k)
    {
        // remove oldest item from queue
        temp = drawn.dequeue();
        // and from the hash table
        hash.remove(temp);
    }
    // add new item to queue and hash table
    drawn.enqueue(i);
    hash.add(i);
} while (forever);
The hash table exists solely to increase lookup speed. You could do without the hash table if you're willing to do a sequential search of the queue to determine if an item has been drawn recently.
Say you have n items in your list, and you don't want any of the k last items to be selected.
Select at random from an array of size n-k, and use a queue of size k to stick the items you don't want to draw (adding to the front and removing from the back).
All operations are O(1).
---- clarification ----
Given n items, and a goal of not redrawing any of the last k draws, create an array and a queue as follows.
Create an array A of size n-k, and put n-k of your items in it (chosen at random, or seeded however you like).
Create a queue (linked list) Q and populate it with the remaining k items, again in random order or whatever order you like.
Now, each time you want to select an item at random:
Choose a random index from your array, call this i.
Give A[i] to whoever is asking for it, and add it to the front of Q.
Remove the element from the back of Q, and store it in A[i].
Everything is O(1) after the array and linked list are created, which is a one-time O(n) operation.
Now, you might wonder, what do we do if we want to change n (i.e. add or remove an element).
Each time we add an element, we either want to grow the size of A or of Q, depending on our logic for deciding what k is (i.e. fixed value, fixed fraction of n, whatever...).
If Q increases then the result is trivial, we just append the new element to Q. In this case I'd probably append it to the end of Q so that it gets in play ASAP. You could also put it in A, kicking some element out of A and appending it to the end of Q.
If A increases, you can use a standard technique for increasing arrays in amortized constant time. E.g., each time A fills up, we double it in size, and keep track of the number of cells of A that are live. (look up 'Dynamic Arrays' in Wikipedia if this is unfamiliar).
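A small Python sketch of this scheme (names are mine):

import random
from collections import deque

def make_drawer(items, k):
    items = list(items)
    random.shuffle(items)
    a = items[:-k]                    # the drawable pool, size n-k
    q = deque(items[-k:])             # the k most recently drawn / suspended
    def draw():
        i = random.randrange(len(a))  # choose a random index in the array
        drawn = a[i]
        q.appendleft(drawn)           # add the drawn item to the front of Q
        a[i] = q.pop()                # the back of Q re-enters the pool at slot i
        return drawn
    return draw

# draw = make_drawer(range(100), 10); draw() never repeats any of the last 10 results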
Set-based approach:
If the threshold is low (say below 40%), the suggested approach is:
Have a set and a queue of the last N*T generated values.
When generating a value, keep regenerating it until it's not contained in the set.
When pushing to the queue, pop the oldest value and remove it from the set.
Pseudo-code:
generateNextValue:
    // once we've generated more than N*T elements,
    // we need to start removing old elements
    if queue.size >= N*T
        element = queue.pop
        set.remove(element)
    // keep trying to generate random values until one is not contained in the set
    do
        value = getRandomValue()
    while set.contains(value)
    set.add(value)
    queue.push(value)
    return value
If the threshold is high, you can just turn the above on its head:
Have the set represent all values not in the last N*T generated values.
Invert all set operations (replace all set adds with removes and vice versa and replace the contains with !contains).
Pseudo-code:
generateNextValue:
if queue.size >= N*T
element = queue.pop
set.add(element)
// we can now just get a random value from the set, as it contains all candidates,
// rather than generating random values until we find one that works
value = getRandomValueFromSet()
//do
// value = getRandomValue()
//while !set.contains(value)
set.remove(value)
queue.push(value)
return value
Shuffle-based approach (somewhat more complicated than the above):
If the threshold is high, the above may take long, as it could keep generating values that already exist.
In this case, some shuffle-based approach may be a better idea.
Shuffle the data.
Repeatedly process the first element.
When doing so, remove it and insert it back at a random position in the range [N*T, N].
Example:
Let's say N*T = 5 and all possible values are [1,2,3,4,5,6,7,8,9,10].
Then we first shuffle, giving us, let's say, [4,3,8,9,2,6,7,1,10,5].
Then we remove 4 and insert it back in some index in the range [5,10] (say at index 5).
Then we have [3,8,9,2,4,6,7,1,10,5].
And continue removing the next element and insert it back, as required.
Implementation:
An array is fine if we don't care about efficiency a whole lot - getting one element will cost O(n) time.
To make this efficient we need to use an ordered data structure that supports efficient random position inserts and first position removals. The first thing that comes to mind is a (self-balancing) binary search tree, ordered by index.
We won't be storing the actual index, the index will be implicitly defined by the structure of the tree.
At each node we will have a count of children (+ 1 for itself) (which needs to be updated on insert / remove).
An insert can be done as follows (ignoring the self-balancing part for the moment):
// calling function
insert(node, value)
    insert(node, N*T, value)

insert(node, offset, value)
    // node.left / node.right can be defined as 0 if the child doesn't exist
    leftCount = node.left.count - offset
    rightCount = node.right.count
    // Since we're here, it means we're inserting in this subtree,
    // thus update the count
    node.count++
    // Nodes to the left are within N*T, so simply go right
    // leftCount is the difference between N*T and the number of nodes on the left,
    // so this needs to be the new offset (and +1 for the current node)
    if leftCount < 0
        insert(node.right, -leftCount+1, value)
    else
        // generate a random number,
        // on [0, leftCount), insert to the left
        // on [leftCount, leftCount], insert at the current node
        // on (leftCount, leftCount + rightCount], insert to the right
        sum = leftCount + rightCount + 1
        random = getRandomNumberInRange(0, sum)
        if random < leftCount
            insert(node.left, offset, value)
        else if random == leftCount
            // we don't actually want to update the count here
            node.count--
            newNode = new Node(value)
            newNode.count = node.count + 1
            // TODO: swap node and newNode's data so that node's parent will now point to newNode
            newNode.right = node
            newNode.left = null
        else
            insert(node.right, -leftCount+1, value)
To visualize inserting at the current node:
If we have something like:

    4
   /
  1
 / \
2   3

And we want to insert 5 where 1 is now, it will do this:

    4
   /
  5
   \
    1
   / \
  2   3
Note that when a red-black tree, for example, performs operations to keep itself balanced, none of these involve comparisons, so it doesn't need to know the order (i.e. index) of any already-inserted elements. But it will have to update the counts appropriately.
The overall efficiency will be O(log n) to get one element.
I'd put all "values" into a "list" of size N, then shuffle the list and retrieve values from the top of the list. Then you "insert" the retrieved value at a random position with any index >= N*T.
Unfortunately I'm not truly a math-guy :( So I simply tried it (in VB, so please take it as pseudocode ;) )
Public Class BiasedRandom
    Private prng As New Random
    Private offset As Integer
    Private l As New List(Of Integer)

    Public Sub New(ByVal size As Integer, ByVal threshold As Double)
        If threshold <= 0 OrElse threshold >= 1 OrElse size < 1 Then Throw New System.ArgumentException("Check your params!")
        offset = size * threshold
        ' initial fill
        For i = 0 To size - 1
            l.Add(i)
        Next
        ' shuffle "Algorithm p"
        For i = size - 1 To 1 Step -1
            Dim j = prng.Next(0, i + 1)
            Dim tmp = l(i)
            l(i) = l(j)
            l(j) = tmp
        Next
    End Sub

    Public Function NextValue() As Integer
        Dim tmp = l(0)
        l.RemoveAt(0)
        l.Insert(prng.Next(offset, l.Count + 1), tmp)
        Return tmp
    End Function
End Class
Then a simple check:
Public Class Form1
    Dim z As Integer = 10
    Dim k As BiasedRandom

    Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
        k = New BiasedRandom(z, 0.5)
    End Sub

    Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
        Dim j(z - 1) As Integer
        For i = 1 To 10 * 1000 * 1000
            j(k.NextValue) += 1
        Next
        Stop
    End Sub
End Class
And when I check out the distribution, it looks okay enough for the naked eye ;)
EDIT:
After thinking about RonTeller's argumentation, I have to admit that he is right. I don't think that there is a performance-friendly way to achieve the desired behaviour while keeping the random order no more biased than required.
I came to the following idea:
Given a list (array, whatever) like this:
0123456789 ' not shuffled to make clear what I mean
We return the first element, which is 0. This one must not come up again for 4 (as an example) more draws, but we also want to avoid a strong bias. Why not simply put it at the end of the list and then shuffle the "tail" of the list, i.e. the last 6 elements?
1234695807
We now return the 1 and repeat the above steps.
2340519786
And so on and so on. Since removing and inserting is kind of unnecessary work, one could use a simple array and a "pointer" to the actual element. I have changed the code from above to give a sample. It's slower than the first one, but should avoid the mentioned bias.
Public Function NextValue() As Integer
    Static current As Integer = 0
    ' only shuffling a part of the list
    For i = current + l.Count - 1 To current + 1 + offset Step -1
        Dim j = prng.Next(current + offset, i + 1)
        Dim tmp = l(i Mod l.Count)
        l(i Mod l.Count) = l(j Mod l.Count)
        l(j Mod l.Count) = tmp
    Next
    current += 1
    Return l((current - 1) Mod l.Count)
End Function
EDIT 2:
Finally (hopefully), I think the solution is quite simple. The below code assumes that there is an array of N elements called TheArray which contains the elements in random order (could be rewritten to work with sorted array). The value DelaySize determines how long a value should be suspended after it has been drawn.
Public Function NextValue() As Integer
    Static current As Integer = 0
    Dim SelectIndex As Integer = prng.Next(0, TheArray.Count - DelaySize)
    Dim ReturnValue = TheArray(SelectIndex)
    TheArray(SelectIndex) = TheArray(TheArray.Count - 1 - current Mod DelaySize)
    TheArray(TheArray.Count - 1 - current Mod DelaySize) = ReturnValue
    current += 1
    Return ReturnValue
End Function
I need to find a method to determine how many items should appear per column in a multiple-column list to achieve the most visual balance. Here are my criteria:
1. The list should only be split into multiple columns if the item count is greater than 10.
2. If multiple columns are required, they should contain no fewer than 5 items (except for the last column, in case of a remainder) and no more than 10.
3. If all columns cannot contain an equal number of items:
   - All but the last column should be equal in number.
   - The number of items in each column should be optimized to achieve the smallest difference between the last column and the other column(s).
Well, your requirements and your examples appear a bit contradictory. For instance, your second example could be divided into two columns with 11 items in each and satisfy your criteria. Let's assume that for rule #2 you meant that there should be <= 10 items per column.
In addition, I think you need to add another rule to make the requirements sensible:
The number of columns must not be greater than what is required to accommodate the overflow.
Otherwise, you will often end up with degenerate solutions where you have far more columns than you need. For example, in the case of 26 items you probably don't want 13 columns of 2 items each.
If that's the case, here's a simple calculation that should work well and is easy to understand:
int numberOfColumns = CEILING(numberOfItems / 10);
int numberOfItemsPerColumn = CEILING(numberOfItems / numberOfColumns);
Now you'll create N-1 columns of items (having numberOfItemsPerColumn each) and the overflow will go in the last column. By this definition, the overflow is minimized in the last column.
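To make the arithmetic concrete, a quick sketch in Python (the function name is mine):

import math

def columns_for(number_of_items):
    number_of_columns = math.ceil(number_of_items / 10)
    items_per_column = math.ceil(number_of_items / number_of_columns)
    last_column = number_of_items - items_per_column * (number_of_columns - 1)
    return number_of_columns, items_per_column, last_column

# columns_for(26) -> (3, 9, 8): two columns of 9 items and a last column of 8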
If you want to automatically determine the appropriate number of columns, and have no restrictions on its limits, I would suggest the following:
Calculate the square root of the total number of items. That would make a square layout.
Divide that number by 1.618, and assign the result to the total number of rows.
Multiply that same number by 1.618, and assign the result to the total number of columns.
All columns but the rightmost one will have the same number of items.
By the way, the constant 1.618 is the Golden Ratio. It will achieve a more pleasant layout than a square one.
Divide and multiply the other way round for vertical displays.
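A minimal Python sketch of this recipe (rounding choices are my own):

import math

def golden_layout(total_items):
    side = math.sqrt(total_items)   # side of a square layout
    rows = math.ceil(side / 1.618)  # fewer rows
    cols = math.ceil(side * 1.618)  # more columns
    return rows, cols

# golden_layout(100) -> (7, 17): wider than tall; swap the two for vertical displays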
Hope this algorithm helps anyone with a similar problem.
Here's what you're trying to solve:
minimize y - z where n = xy + z and 5 <= y <= 10 and 0 <= z <= y
where you have n items split into x full columns of y items and one remainder column of z items.
There is almost certainly a smart way of doing this, but given these constraints a brute-force implementation exploring all 6 + 7 + 8 + 9 + 10 + 11 = 51 possible combinations for y and z would take no time at all (only assignments where (n - z) mod y = 0 are solutions).
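A tiny brute-force sketch of that search in Python (the helper name is mine):

def best_split(n):
    # try every column size y and remainder z allowed by the constraints
    best = None
    for y in range(5, 11):
        for z in range(0, y + 1):
            if (n - z) % y == 0:              # n = x*y + z for some whole x
                if best is None or y - z < best[0]:
                    best = (y - z, y, z)
    return best[1:] if best else None         # (items per full column, remainder)

# best_split(26) -> (9, 8): columns of 9 with a remainder column of 8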
I think a brute force solution is easy, given the constraint on the number of items per column: let v be the number of items per column (except the last one); then v belongs to [5,10] and can thus take a whopping 6 different values.
Evaluating 6 values is easy enough. A Python one-liner (or not so far off) to prove it:
# compute the difference between the number of items for the normal columns
# and for the last column; smaller is better
def helper(n, v):
    modulo = n % v
    if modulo == 0: return 0
    else: return v - modulo

# values can only be in [5,10]
# we compute the difference with the last column for each
# build a list of tuples (difference, - number of items)
# (because the greater the value the better, it means fewer columns)
# extract the min automatically (in case of equality, less is privileged)
# and then pick the number of items from the tuple and re-invert it
def compute(n): return -min([(helper(n, v), -v) for v in [5, 6, 7, 8, 9, 10]])[1]
For 77 this yields 7, meaning 7 items per column.
For 22 this yields 8, meaning 8 items per column.
I have got a square matrix consisting of elements that are either 1 or 0. An ith row toggle toggles all the ith row elements (1 becomes 0 and vice versa) and a jth column toggle toggles all the jth column elements. I have got another square matrix of similar size. I want to change the initial matrix to the final matrix using the minimum number of toggles. For example
|0 0 1|
|1 1 1|
|1 0 1|
to
|1 1 1|
|1 1 0|
|1 0 0|
would require a toggle of the first row and of the last column.
What will be the correct algorithm for this?
In general, the problem will not have a solution. To see this, note that transforming matrix A to matrix B is equivalent to transforming the matrix A - B (computed using binary arithmetic, so that 0 - 1 = 1) to the zero matrix. Look at the matrix A - B, and apply column toggles (if necessary) so that the first row becomes all 0's or all 1's. At this point, you're done with column toggles -- if you toggle one column, you have to toggle them all to get the first row correct. If even one row is a mixture of 0's and 1's at this point, the problem cannot be solved. If each row is now all 0's or all 1's, the problem is solvable by toggling the appropriate rows to reach the zero matrix.
To get the minimum, compare the number of toggles needed when the first row is turned to 0's vs. 1's. In the OP's example, the candidates would be toggling column 3 and row 1, or toggling columns 1 and 2 and rows 2 and 3. In fact, you can simplify this by looking at the first solution and seeing if the number of toggles is smaller or larger than N -- if larger than N, then toggle the opposite rows and columns.
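A compact Python sketch of this two-case check (names are mine; it returns the rows and columns to toggle, or None if no solution exists):

def min_toggles(a, b):
    n = len(a)
    # work on M = A xor B; we now need to toggle M into the zero matrix
    m = [[x ^ y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
    best = None
    for target in (0, 1):
        # toggle exactly the columns whose first-row entry differs from target
        cols = {j for j in range(n) if m[0][j] != target}
        rows = []
        ok = True
        for i, row in enumerate(m):
            vals = {v ^ (j in cols) for j, v in enumerate(row)}
            if len(vals) > 1:            # mixed row: this target fails
                ok = False
                break
            if vals == {1}:              # all-ones row must be toggled
                rows.append(i)
        if ok and (best is None or
                   len(rows) + len(cols) < len(best[0]) + len(best[1])):
            best = (rows, sorted(cols))
    return best

# min_toggles([[0,0,1],[1,1,1],[1,0,1]],
#             [[1,1,1],[1,1,0],[1,0,0]]) -> ([0], [2]): first row, last column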
It's not always possible. If you start with a 2x2 matrix with an even number of 1s you can never arrive at a final matrix with an odd number of 1s.
Algorithm
Simplify the problem from "Try to transform A into B" into "Try to transform M into 0", where M = A xor B. Now all the positions which must be toggled have a 1 in them.
Consider an arbitrary position in M. It is affected by exactly one column toggle and exactly one row toggle. If its initial value is V, the presence of the column toggle is C, and the presence of the row toggle is R, then the final value F is V xor C xor R. That's a very simple relationship, and it makes the problem trivial to solve.
Notice that, for each position, R = F xor V xor C = 0 xor V xor C = V xor C. If we set C then we force the value of R, and vice versa. That's awesome, because it means if I set the value of any row toggle then I will force all of the column toggles. Any one of those column toggles will force all of the row toggles. If the result is the 0 matrix, then we have a solution. We only need to try two cases!
Pseudo-code
function solve(Matrix M) as bool possible, bool[] rowToggles, bool[] colToggles:
    for var b in {true, false}
        colToggles = array from c in M.colRange select b xor M(0, c)
        rowToggles = array from r in M.rowRange select colToggles[0] xor M(r, 0)
        if none from c in M.colRange, r in M.rowRange
                where colToggles[c] xor rowToggles[r] xor M(r, c) != 0 then
            return true, rowToggles, colToggles
        end if
    next var
    return false, null, null
end function
Analysis
The analysis is trivial. We try two cases, within which we run along a row, then a column, then all cells. Therefore if there are r rows and c columns, meaning the matrix has size n = c * r, then the time complexity is O(2 * (c + r + c * r)) = O(c * r) = O(n). The only space we use is what is required for storing the outputs = O(c + r).
Therefore the algorithm takes time linear in the size of the matrix, and uses space linear in the size of the output. It is asymptotically optimal for obvious reasons.
I came up with a brute force algorithm.
The algorithm is based on 2 conjectures:
(so it may not work for all matrices - I'll verify them later)
The minimum (number of toggles) solution will contain a specific row or column only once.
In whatever order we apply the steps to convert the matrix, we get the same result.
The algorithm:
Let's say we have the matrix m = [[1,0], [0,1]].
m: 1 0
0 1
We generate a list of all row and column numbers,
like this: ['r0', 'r1', 'c0', 'c1']
Now we brute force, aka examine, every possible combination of steps.
For example, we start with the 1-step solutions:
ksubsets = [['r0'], ['r1'], ['c0'], ['c1']]
If no element is a solution, then we proceed with the 2-step solutions:
ksubsets = [['r0', 'r1'], ['r0', 'c0'], ['r0', 'c1'], ['r1', 'c0'], ['r1', 'c1'], ['c0', 'c1']]
etc...
A ksubsets element (combo) is a list of toggle steps to apply in a matrix.
Python implementation (tested on version 2.5)
# Recursive definition (+ is the join of sets)
# S = {a1, a2, a3, ..., aN}
#
# ksubsets(S, k) = {
#   {{a1}+ksubsets({a2,...,aN}, k-1)} +
#   {{a2}+ksubsets({a3,...,aN}, k-1)} +
#   {{a3}+ksubsets({a4,...,aN}, k-1)} +
#   ... }
# example: ksubsets([1,2,3], 2) = [[1, 2], [1, 3], [2, 3]]
def ksubsets(s, k):
    if k == 1: return [[e] for e in s]
    ksubs = []
    ss = s[:]
    for e in s:
        if len(ss) < k: break
        ss.remove(e)
        for x in ksubsets(ss, k-1):
            l = [e]
            l.extend(x)
            ksubs.append(l)
    return ksubs

def toggle_row(m, r):
    for i in range(len(m[r])):
        m[r][i] = m[r][i] ^ 1

def toggle_col(m, i):
    for row in m:
        row[i] = row[i] ^ 1

def toggle_matrix(m, combos):
    # example of combos: ['r0', 'r1', 'c3', 'c4']
    # 'r0' toggles row 0, 'c3' toggles column 3, etc.
    import copy
    k = copy.deepcopy(m)
    for combo in combos:
        if combo[0] == 'r':
            toggle_row(k, int(combo[1:]))
        else:
            toggle_col(k, int(combo[1:]))
    return k

def conversion_steps(sM, tM):
    # Brute force algorithm.
    # Returns the minimum list of steps to convert sM into tM.
    rows = len(sM)
    cols = len(sM[0])
    combos = ['r'+str(i) for i in range(rows)] + \
             ['c'+str(i) for i in range(cols)]
    for n in range(0, rows + cols - 1):
        for combo in ksubsets(combos, n + 1):
            if toggle_matrix(sM, combo) == tM:
                return combo
    return []
Example:
m: 0 0 0
0 0 0
0 0 0
k: 1 1 0
1 1 0
0 0 1
>>> m = [[0,0,0],[0,0,0],[0,0,0]]
>>> k = [[1,1,0],[1,1,0],[0,0,1]]
>>> conversion_steps(m, k)
['r0', 'r1', 'c2']
>>>
If you can only toggle the rows, and not the columns, then there will only be a subset of matrices that you can convert into the final result. If this is the case, then it would be very simple:
for every row, i:
    if matrix1[i] == matrix2[i]
        continue;
    else
        toggle matrix1[i];
        if matrix1[i] == matrix2[i]
            continue
        else
            die("cannot make similar");
This is a state space search problem. You are searching for the optimum path from a starting state to a destination state. In this particular case, "optimum" is defined as "minimum number of operations".
The state space is the set of binary matrices generatable from the starting position by row and column toggle operations.
ASSUMING that the destination is in the state space (NOT a valid assumption in some cases: see Henrik's answer), I'd try throwing a classic heuristic search (probably A*, since it is about the best of the breed) algorithm at the problem and see what happened.
The first, most obvious heuristic is "number of correct elements".
Any decent Artificial Intelligence textbook will discuss search and the A* algorithm.
You can represent your matrix as a nonnegative integer, with each cell in the matrix corresponding to exactly one bit in the integer. On a system that supports 64-bit long long unsigned ints, this lets you play with anything up to 8x8. You can then use exclusive-OR operations on the number to implement the row and column toggle operations.
CAUTION: the raw total state space size is 2^(N^2), where N is the number of rows (or columns). For a 4x4 matrix, that's 2^16 = 65536 possible states.
Rather than look at this as a matrix problem, take the 9 bits from each array, load each of them into 2-byte size types (16 bits, which is probably the source of the arrays in the first place), then do a single XOR between the two.
(the bit order would be different depending on your type of CPU)
The first array would become: 0000000001111101
The second array would become: 0000000111110101
A single XOR would produce the output. No loops required. All you'd have to do is 'unpack' the result back into an array, if you still wanted to. You can read the bits without resorting to that, though.
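To make it concrete, a minimal Python sketch using the two packed values above:

a = 0b0000000001111101      # the first array, packed into an integer
b = 0b0000000111110101      # the second array, packed into an integer
diff = a ^ b                # one XOR, no loops
print(format(diff, '016b')) # -> 0000000110001000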
I think brute force is not necessary.
The problem can be rephrased in terms of a group. The matrices over the field with 2 elements constitute a commutative group with respect to addition.
As pointed out before, the question whether A can be toggled into B is equivalent to see if A-B can be toggled into 0. Note that toggling of row i is done by adding a matrix with only ones in the row i and zeros otherwise, while the toggling of column j is done by adding a matrix with only ones in column j and zeros otherwise.
This means that A-B can be toggled to the zero matrix if and only if A-B is contained in the subgroup generated by the toggling matrices.
Since addition is commutative, we may assume the toggling of columns takes place first, and we can apply the approach of Marius first to the columns and then to the rows.
In particular, the toggling of the columns must make every row either all ones or all zeros. There are two possibilities:
Toggle columns such that every 1 in the first row becomes zero. If after this there is a row in which both ones and zeros occur, there is no solution. Otherwise apply the same approach for the rows (see below).
Toggle columns such that every 0 in the first row becomes 1. If after this there is a row in which both ones and zeros occur, there is no solution. Otherwise apply the same approach for the rows (see below).
Once the columns have been toggled successfully, in the sense that each row contains only ones or only zeros, there are two possibilities:
Toggle rows such that every 1 in the first column becomes zero.
Toggle rows such that every 0 in the first column becomes one.
Of course, in the step for the rows, we take the possibility which results in fewer toggles, i.e. we count the ones in the first column and then decide how to toggle.
In total, only 2 cases have to be considered, namely how the columns are toggled; for the row step, the toggling can be decided by counting, to minimize the number of toggles in the second step.