Importance of operation order in backtracking algorithms

How important is the order of operations in each recursive step of a backtracking algorithm to the efficiency of that particular algorithm?
For example, in the Knight's Tour problem:
The knight is placed on the first block of an empty board and, moving
according to the rules of chess, must visit each square exactly once.
In each step there are 8 possible (in general) ways to move.
int xMove[8] = { 2, 1, -1, -2, -2, -1, 1, 2 };
int yMove[8] = { 1, 2, 2, 1, -1, -2, -2, -1 };
If I change this order like...
int xMove[8] = { -2, -2, 2, 2, -1, -1, 1, 1 };
int yMove[8] = { -1, 1, -1, 1, -2, 2, -2, 2 };
Now, for an n*n board:
Up to n = 6, the two move orders show no visible difference in execution time.
But for n >= 7, the first move order's execution time is much less than the second one's.
In such cases it is not feasible to generate all O(m!) move orders and test the algorithm on each. So how do I determine the performance of such an algorithm for a specific move order? Or rather, how could I find a move order (or a set of move orders) for which the algorithm is more efficient in terms of execution time?

This is an interesting problem from a math/CS perspective. There definitely exists a permutation (or set of permutations) that is most efficient for a given n. I don't know whether there is a permutation that is most efficient across all n; I would guess not. There could be a permutation that is better 'on average' (however you define that) across all n.
If I were tasked with finding an efficient permutation, I might try the following: generate a fixed number x of random move orders and measure their efficiency. Then, for each of those move orders, randomly create a fixed number of permutations near the original and compute their efficiencies. Now you have many more permutations than you started with; take the top x performers and repeat. This yields some locally optimal move orders, but I don't know whether it leads to the globally optimal one(s).
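To make "measure their efficiency" concrete, here is a minimal, hypothetical harness (my code, names are mine) that runs the same plain backtracking solver under any move order you hand it, so different orders can be timed against each other:

```python
import time

def knights_tour(n, moves, start=(0, 0)):
    """Plain backtracking Knight's Tour; `moves` is the order in which
    the 8 knight moves are tried at every square."""
    board = [[-1] * n for _ in range(n)]
    board[start[0]][start[1]] = 0

    def solve(x, y, step):
        if step == n * n:
            return True
        for dx, dy in moves:
            nx, ny = x + dx, y + dy
            if 0 <= nx < n and 0 <= ny < n and board[nx][ny] == -1:
                board[nx][ny] = step
                if solve(nx, ny, step + 1):
                    return True
                board[nx][ny] = -1   # undo: backtrack
        return False

    return board if solve(start[0], start[1], 1) else None

# The two move orders from the question, as (dx, dy) pairs.
order_a = [(2, 1), (1, 2), (-1, 2), (-2, 1), (-2, -1), (-1, -2), (1, -2), (2, -1)]
order_b = [(-2, -1), (-2, 1), (2, -1), (2, 1), (-1, -2), (-1, 2), (1, -2), (1, 2)]

# Time the same solver under each order, e.g.:
# t = time.perf_counter(); knights_tour(n, order_a); time.perf_counter() - t
```

On small boards every order finishes quickly; per the question, the gap between orders only becomes visible as n grows.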

Related

How can I sort a vector of boolean vectors in this way? ('ranking analysis')

We need to sort a large number of vectors (an array of arrays) containing only true and false (1's and 0's), all the same size.
We have the rules that 1 + 1 = 1 (true + true = true) and 1 + 0 = 1 and 0 + 0 = 0.
The first vector is the one with the most 1's.
The second vector is the one that adds the most new 1's on top of those already covered by the first vector.
The third vector is the one that adds the most new 1's on top of those covered by the previous two vectors.
And so on.
For example, let's say we have these 3 vectors:
a. (0, 1, 0, 0, 1, 1, 0)
b. (1, 0, 1, 1, 0, 1, 1)
c. (0, 1, 1, 1, 0, 1, 0)
The first one in our sort is b because it has the most 1's.
The next one is a. Even though c has more 1's than a, a adds more new 1's on top of the ones we had in b.
By now, the sum of a + b is (1, 1, 1, 1, 1, 1, 1), so the last one is c, because it brings nothing new to the sorting.
If two vectors bring the same number of extra 1's, their order doesn't really matter. I believe there are multiple possible results for this kind of sorting, and they are all equally good.
We call this a 'ranking analysis' here, but we don't have a clear term for this kind of sort, and Google doesn't yield very useful info on it.
The easiest method is to just take them one by one, which is O(n^2). However, we are working with big data, and the software we already have for this is too slow, so we need something really optimized.
How can we achieve this? Programming language doesn't matter, we can use anything. Can this be parallelized (run it on multiple CPU's to speed up the process)? Any sources or ideas are welcome.
Edit: I checked; apparently we have a case where the length of these vectors is 103, so they can be longer than 64 slots.
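The greedy rule being described is essentially a greedy maximum-coverage ordering, and the per-comparison cost of the quadratic pass can be cut sharply with bitsets. A minimal sketch (function name is mine), using Python's arbitrary-precision ints so the 103-bit case is no problem:

```python
def rank_vectors(vectors):
    """Greedy 'ranking analysis': repeatedly pick the vector that adds the
    most 1-positions not yet covered. Vectors are packed into Python ints,
    so each candidate check is a single AND plus a popcount."""
    bits = [sum(b << j for j, b in enumerate(v)) for v in vectors]
    remaining = list(range(len(vectors)))
    covered, order = 0, []
    while remaining:
        # Pick the vector contributing the most bits not yet covered.
        best = max(remaining, key=lambda i: bin(bits[i] & ~covered).count("1"))
        order.append(best)
        covered |= bits[best]
        remaining.remove(best)
    return order
```

This is still O(n^2) selections in the worst case, but each check touches only length/64 machine words, and the inner `max` parallelizes trivially: split `remaining` into chunks, find each chunk's best on a separate CPU, then compare the chunk winners.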

Maximum sum of n intervals in a sequence

I'm doing some programming "kata": skill-building exercises for programming (a term borrowed from martial arts). I want to learn how to solve algorithm problems like these more quickly, so I need to build up my knowledge of the patterns. Eventually I want to solve them in increasingly efficient time complexities (O(n), O(n^2), etc.), but for now I'm fine with finding a solution of any efficiency to start.
The problem:
Given arr[10] = [4, 5, 0, 2, 5, 6, 4, 0, 3, 5]
Given various segment lengths, for example one 3-length segment and two 2-length segments, find the optimal positions of (or the maximum total sum covered by) the segments, without the segments overlapping.
For example, the solution for this array and these segments is 32:
{4 5} 0 2 {5 6 4} 0 {3 5}
What I have tried before posting on stackoverflow.com:
I've read through:
Algorithm to find maximum coverage of non-overlapping sequences. (I.e., the Weighted Interval Scheduling Prob.)
algorithm to find longest non-overlapping sequences
and I've watched MIT opencourseware and read about general steps for solving complex problems with dynamic programming, and completed a dynamic programming tutorial for finding Fibonacci numbers with memoization. I thought I could apply memoization to this problem, but I haven't found a way yet.
The theme of dynamic programming is to break the problem down into sub-problems which can be iterated to find the optimal solution.
What I have come up with (in an OO way) is
foreach (segment) {
    find the greatest-sum interval with the length of this segment
}
This produces incorrect results, because the segments will not always fit with this approach. For example:
Given arr[7] = [0, 3, 5, 5, 5, 1, 0] and two 3-length segments,
The first segment will take 5, 5, 5, leaving no room for the second segment. Ideally I should memoize this scenario and try the algorithm again, this time avoiding 5, 5, 5, as a first pick. Is this the right path?
How can I approach this in a "dynamic programming" way?
If you place the first segment, you get two smaller sub-arrays: placing one or both of the two remaining segments into one of these sub-arrays is a sub-problem of just the same form as the original one.
So this suggests a recursion: you place the first segment, then try out the various combinations of assigning remaining segments to sub-arrays, and maximize over those combinations. Then you memoize: the sub-problems all take an array and a list of segment sizes, just like the original problem.
I'm not sure this is the best algorithm but it is the one suggested by a "direct" dynamic programming approach.
EDIT: In more detail:
The arguments to the valuation function should have two parts: one is a pair of numbers which represent the sub-array being analysed (initially [0,6] in this example) and the second is a multi-set of numbers representing the lengths of the segments to be allocated ({3,3} in this example). Then in pseudo-code you do something like this:
valuation(array_ends, the_segments):
    if the_segments is empty:
        return 0
    if sum of the_segments > array_ends[1] - array_ends[0]:
        return -infinity
    segment_length = length of chosen segment from the_segments
    remaining_segments = the_segments with chosen segment removed
    best_option = 0
    for segment_placement = array_ends[0] to array_ends[1] - segment_length:
        value1 = value of placing the chosen segment at segment_placement
        new_array1 = [array_ends[0], segment_placement]
        new_array2 = [segment_placement + segment_length, array_ends[1]]
        for each partition of remaining_segments into seg1 and seg2:
            sub_value1 = valuation(new_array1, seg1)
            sub_value2 = valuation(new_array2, seg2)
            if value1 + sub_value1 + sub_value2 > best_option:
                best_option = value1 + sub_value1 + sub_value2
    return best_option
This code (modulo off-by-one errors and typos) calculates the valuation, but it calls the valuation function more than once with the same arguments. So the idea of memoization is to cache those results and avoid re-traversing equivalent parts of the tree. We can do this just by wrapping the valuation function:
memoized_valuation(args):
    if args in memo_dictionary:
        return memo_dictionary[args]
    else:
        result = valuation(args)
        memo_dictionary[args] = result
        return result
Of course, you need to change the recursive call now to call memoized_valuation.
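Putting the pieces together, here is a compact runnable sketch of this recursion in Python, using functools.lru_cache as the memo dictionary (names and structure are mine):

```python
from functools import lru_cache

def max_segment_sum(arr, segments):
    """Place the given segment lengths without overlap, maximizing the sum."""
    prefix = [0]
    for v in arr:
        prefix.append(prefix[-1] + v)

    @lru_cache(maxsize=None)
    def best(lo, hi, segs):                 # segs: sorted tuple of lengths
        if not segs:
            return 0
        if sum(segs) > hi - lo:
            return float("-inf")            # segments cannot fit here
        length, rest = segs[0], segs[1:]
        result = float("-inf")
        for start in range(lo, hi - length + 1):
            placed = prefix[start + length] - prefix[start]
            # Try every split of the remaining segments between the
            # sub-array left of the placement and the one right of it.
            for mask in range(1 << len(rest)):
                left = tuple(s for i, s in enumerate(rest) if mask >> i & 1)
                right = tuple(s for i, s in enumerate(rest) if not mask >> i & 1)
                result = max(result, placed + best(lo, start, left)
                                            + best(start + length, hi, right))
        return result

    return best(0, len(arr), tuple(sorted(segments)))
```

Keeping `segs` as a sorted tuple makes the memo key canonical, so equivalent sub-problems (same sub-array, same multi-set of lengths) hit the cache.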

Can a Neural Network Find the i-th Permutation of a fixed size list?

Briefly
Can a neural network emulate factorial decomposition (or some other method) to produce a list permutation, given the permutation's unique index?
Application
I have a list of 10 things, and what they are is irrelevant. What I care about is that my 10 things can be placed into 3628800 (or 10!) unique orders, because then I can express any list order of my 10 things using an unsigned integer and factorial decomposition:
Order 0: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
Order 1: 0, 1, 2, 3, 4, 5, 6, 7, 9, 8
Order ....
Order 3628799: 9, 8, 7, 6, 5, 4, 3, 2, 1, 0
This allows for the parallel distribution of analysis on different list orders of my 10 things.
A common example being the travelling salesman problem:
1. I give 500 different computers each a range of unsigned integers:
0 -> 7257 for computer 0,
7257 -> 14516 for computer 1,
etc.
2. Each computer first calculates the list order from its unsigned integer
index by using factorial decomposition.
ie. Order 1 -> 0, 1, 2, 3, 4, 5, 6, 7, 9, 8
3. The distance between the cities placed in the order described is calculated.
4. The shortest distances from each computer is collected, and the shortest
of those is taken. Leaving us with a single unsigned integer index that
describes the shortest possible permutation of cities.
The same process can be used to attack virtually any boundable error surface, given enough computational power (which is often far more than is feasible).
Recursive Algorithmic Solution
We can calculate the N-th permutation of any fixed size list (granted we will need large integer support for bigger lists) using factorial decomposition (outlined here in php), and provided here in javascript for clarity:
function ithPermutationOfNElements (n, i)
{
    var j, k = 0;
    var fact = [];
    var perm = [];

    // compute factorial numbers
    fact[k] = 1;
    while (++k < n)
        fact[k] = fact[k - 1] * k;

    // compute factorial code
    for (k = 0; k < n; ++k)
    {
        perm[k] = Math.floor(i / fact[n - 1 - k]);
        i = i % fact[n - 1 - k];
    }

    // readjust values to obtain the permutation
    // start from the end and check if preceding values are lower
    for (k = n - 1; k > 0; --k)
        for (j = k - 1; j >= 0; --j)
            if (perm[j] <= perm[k])
                perm[k]++;

    return perm;
}
console.log(ithPermutationOfNElements(4, 23)); // [ 3, 2, 1, 0 ]
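For comparison, the same decomposition ports directly to Python (my port, not from the question); it produces permutations in lexicographic order, consistent with the Order 0, Order 1, ... listing above:

```python
def ith_permutation(n, i):
    """Return the i-th (0-based, lexicographic) permutation of 0..n-1
    via factorial decomposition, mirroring the JavaScript version."""
    fact = [1] * n
    for k in range(1, n):
        fact[k] = fact[k - 1] * k
    # Compute the factorial code (the digits of i in the factorial base).
    perm = []
    for k in range(n):
        perm.append(i // fact[n - 1 - k])
        i %= fact[n - 1 - k]
    # Readjust: bump each value once for every earlier value <= it.
    for k in range(n - 1, 0, -1):
        for j in range(k - 1, -1, -1):
            if perm[j] <= perm[k]:
                perm[k] += 1
    return perm
```

Python's arbitrary-precision ints also cover the "large integer support for bigger lists" caveat for free.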
Neural Network Solution?
Can any neural network architecture and training combination emulate this function, given i as its only input neuron and n output neurons representing each element of the permutation?
A neuron can operate as a logic gate, and thus a neural network can perform any calculation a computer can. However, in that sense it simply emulates logic gates inefficiently, so it is not a good solution for this problem.
In general, neural networks are good with 'real' or 'natural' data. They also generally operate on floats, not integers. So if there is a pattern to be learnt, an NN might learn it, but the answer you get will be something like 0.783267. You could then denormalize this to 89743, but it's unlikely to be exactly right. For your requirement, an answer one integer off is completely wrong.
By contrast, for a face-recognition NN returning 0.787 or 0.786 for a particular image, both could be considered correct.
Your problem is better suited to a traditional, procedural code solution, with only one correct answer for each input. Generally in AI, you are looking for a correct answer within a certain range or probability distribution.
Regarding implementing algorithms with NNs:
You can have many neurons acting as logic gates, so you get neuron NAND gates / flip-flops etc. acting as adders, multipliers, latches and so on, until you have essentially built a Turing machine, but explicitly, using high-level code. It will in no way resemble a normal NN as used by the majority of the AI world. Furthermore, you already have a perfectly good Turing machine right in front of you.
Here is the code for a neural network AND gate in Matlab. No training is required; I've used configure instead of train and just set the weights manually. By making the other logic gate types the same way, you could build an entire Turing machine.
and = feedforwardnet(1);
truthTable = [0 0 1 1; 0 1 0 1];
and_out = [0 0 0 1];
and = configure(and, truthTable, and_out);
vals = [-2 -2 -1 2 0];
and.IW{1} = vals(1:2); % input1 to hidden, input2 to hidden
and.LW{2,1} = vals(3); % hidden to output
and.b{1} = vals(4); % bias to hidden
and.b{2} = vals(5); % bias to output
y = sim(and, truthTable)
round (y)
mse = mean ((y - and_out) .^ 2)
y =
0.0000 0.0180 0.0180 0.9820
ans =
0 0 0 1
mse =
2.4263e-04
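Stripped to its core, the same idea is a single neuron with hand-set weights; a toy Python sketch (weights chosen by hand, as in the Matlab example, not trained):

```python
def perceptron_and(x1, x2):
    """Single neuron computing AND with hand-picked weights and bias:
    the weighted sum exceeds 0 only when both inputs are 1."""
    w1, w2, b = 2.0, 2.0, -3.0
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0
```

Swapping the sign conventions gives NAND, and NAND gates compose into any Boolean circuit, which is the point being made above.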
A little-known fact is that recurrent neural nets are Turing complete, and can thus perform any computation a computer can (see the result by Siegelmann).
This neither means (a) that you can easily find the necessary weights with a learning algorithm, nor (b) that a feed-forward net, which is probably what you are looking at, can do it.
Nevertheless, this seems like a task you don't want to do with a neural net.
Generally, neural networks are universal function approximators, so in theory yes.
More specifically, notice that for a particular (fixed) i value, the neural network that solves it is trivial (in fact, it requires no hidden nodes or activation functions; it is a linear problem).
As a brute-force, naive solution to the general problem with unfixed i: a neural network large enough to encode all 10! possible linear encodings, with a hidden layer essentially acting as a mux driven by the i input, would solve the problem. More efficient networks probably exist, and it would be interesting to try a recurrent architecture on this problem.
In any case, while a solution certainly exists, the better question is whether this is a good way to solve it. If a problem boils down to some simple pseudocode, I would avoid a neural network implementation unless it is for academic purposes.
I think it might be possible. Input your data and a single number (starting at 0 or 1), and have it produce a single number representing the element index (round it). Add that element to your list. Then do it again, except increase the number you feed to the neural network by 1 (i.e., the position in the list that you want to find).
A recursive neural network would be ideal, but I'm still not certain whether the underlying function can be learned or approximated effectively. I think it could be.

How to determine best combinations from 2 lists

I'm looking for a way to make the best possible combination of people in groups. Let me sketch the situation.
Say we have persons A, B, C and D. Furthermore, we have groups 1, 2, 3, 4 and 5. Both are examples and can be fewer or more. Each person rates every other person, so for example A rates B a 3, C a 2, and so on. Each person also rates each group (say ratings are 0-5). Now I need some sort of algorithm to distribute these people evenly over the groups while keeping them as happy as possible (as in: they should be in a highly rated group, with highly rated people). I know it's not possible for everyone to be in their best group (the one they rated a 5), but I need the best possible solution for the people as a whole.
I think this is a difficult question, and I would be happy if someone could direct me to more information about these types of problems, or help me with the algorithm I'm looking for.
Thanks!
EDIT:
I see a lot of great answers, but this problem is too big for me to solve correctly on my own. However, the answers posted so far give me a great starting point to look further into the subject. Thanks a lot already!
After establishing that this is an NP-hard problem, I would suggest a heuristic solution using artificial intelligence tools.
A possible approach is steepest-ascent hill climbing [SAHC].
First, we define our utility function (call it u). It can be the sum of the total happiness in all groups.
Next, we define our 'world': S is the set of all possible partitions.
For each legal partition s in S, we define:
next(s) = {all partitions reachable from s by moving one person to a different group}
All we have to do now is run SAHC with random restarts:
1. best <- -INFINITY
2. while there is more time:
3.     choose a random partition as the starting point; denote it s
4.     NEXT <- next(s)
5.     if max{ u(s') : s' in NEXT } <= u(s):  // s is the top of the hill
5.1.       if u(s) > best: best <- u(s)  // if s is better than the previous result, store it
5.2.       go to 2  // restart the hill climbing from a different random point
6.     else:
6.1.       s <- argmax{ u(s') : s' in NEXT }
6.2.       go to 4
7. return best  // when out of time, return the best solution found so far
It is an anytime algorithm, meaning it will get a better result the more time you give it to run, and eventually [at time infinity] it will find the optimal result.
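A compact Python sketch of this procedure (representation and names are mine): a partition is a list mapping person index to group index, and u is passed in as `utility`; the even-distribution requirement can be folded into the utility as a penalty term.

```python
import random

def sahc(num_people, num_groups, utility, restarts=20):
    """Steepest-ascent hill climbing with random restarts."""
    best_val, best_part = float("-inf"), None
    for _ in range(restarts):
        part = [random.randrange(num_groups) for _ in range(num_people)]
        val = utility(part)
        while True:
            # next(s): every way of moving one person to another group.
            neighbours = []
            for p in range(num_people):
                for g in range(num_groups):
                    if g != part[p]:
                        moved = part[:]
                        moved[p] = g
                        neighbours.append(moved)
            if not neighbours:
                break
            climbed = max(neighbours, key=utility)
            if utility(climbed) <= val:    # top of the hill: restart
                break
            part, val = climbed, utility(climbed)
        if val > best_val:
            best_val, best_part = val, part
    return best_part, best_val
```

Replacing the fixed `restarts` with a wall-clock deadline gives the anytime behaviour described above.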
The problem is NP-hard: you can reduce from Maximum Triangle Packing, that is, finding at least k vertex-disjoint triangles in a graph, to the version where there are k groups of size 3, no one cares about which group he is in, and likes everyone for 0 or for 1. So even this very special case is hard.
To solve it, I would try using an ILP: have binary variables g_ik indicating that person i is in group k, with constraints to ensure a person is only in one group and a group has an appropriate size. Further, binary variables t_ijk that indicate that persons i and j are together in group k (ensured by t_ijk <= 0.5 g_ik + 0.5 g_jk) and binary variables t_ij that indicate that i and j are together in any group (ensured by t_ij <= sum_k t_ijk). You can then maximize the happiness function under these constraints.
This ILP has very many variables, but modern solvers are pretty good and this approach is very easy to implement.
This is an example of an optimization problem. It is a very well-studied type of problem with very good methods for solving it. Read Programming Collective Intelligence, which explains it much better than I can.
Basically, there are three parts to any kind of optimization problem:
1. The input to the problem-solving function.
2. The solution output by the problem-solving function.
3. A scoring function that evaluates how optimal the solution is by scoring it.
Now the problem can be stated as finding the solution that produces the highest score. To do that, you first need to come up with a format for representing a possible solution that the scoring function can then score. Assuming 6 persons (0-5) and 3 groups (0-2), this Python data structure would work and would be a possible solution:
output = [
[0, 1],
[2, 3],
[4, 5]
]
Persons 0 and 1 are put in group 0, persons 2 and 3 in group 1, and so on. To score this solution, we need to know the input and the rules for calculating the output. The input could be represented by this data structure:
input = [
[0, 4, 1, 3, 4, 1, 3, 1, 3],
[5, 0, 1, 2, 1, 5, 5, 2, 4],
[4, 1, 0, 1, 3, 2, 1, 1, 1],
[2, 4, 1, 0, 5, 4, 2, 3, 4],
[5, 5, 5, 5, 0, 5, 5, 5, 5],
[1, 2, 1, 4, 3, 0, 4, 5, 1]
]
Each inner list represents the ratings one person gave. For example, in the first row, person 0 gave rating 0 to person 0 (you can't rate yourself), 4 to person 1, 1 to person 2, 3 to person 3, 4 to person 4 and 1 to person 5. Then he or she rated groups 0-2 with 3, 1 and 3 respectively.
So the above is an example of a valid solution for the given input. How do we score it? That's not specified in the question, only that the "best" combination is desired, so I'll arbitrarily decide that the score of a solution is the sum of each person's happiness. Each person's happiness is determined by adding his or her rating of the group to the average of his or her ratings of the other people in the group, excluding the person themselves.
Here is the scoring function:
N_GROUPS = 3
N_PERSONS = 6

def score_solution(input, output):
    tot_score = 0
    for person, ratings in enumerate(input):
        # Check what group the person is a member of.
        for group, members in enumerate(output):
            if person in members:
                # Check what rating the person gave the group.
                group_rating = ratings[N_PERSONS + group]
                # Check what ratings the person gave the others.
                others = list(members)
                others.remove(person)
                if not others:
                    # Protect against division by zero.
                    person_rating = 0
                else:
                    person_ratings = [ratings[o] for o in others]
                    person_rating = sum(person_ratings) / float(len(person_ratings))
                tot_score += group_rating + person_rating
    return tot_score
It should return a score of 37.0 for the given solution. Now what we'll do is generate valid outputs, keeping track of the best one, until we are satisfied:
from random import choice

def gen_solution():
    groups = [[] for x in range(N_GROUPS)]
    for person in range(N_PERSONS):
        choice(groups).append(person)
    return groups

# Generate 10000 solutions.
solutions = [gen_solution() for x in range(10000)]
# Score them.
solutions = [(score_solution(input, sol), sol) for sol in solutions]
# Sort by score, take the best.
best_score, best_solution = sorted(solutions)[-1]
print('The best solution is %s with score %.2f' % (best_solution, best_score))
Running this on my computer produces:
The best solution is [[0, 1], [3, 5], [2, 4]] with score 47.00
Obviously, you may think it is a really stupid idea to just randomly generate solutions and throw them at the problem, and it is. There are much more sophisticated methods of generating solutions, such as simulated annealing or genetic optimization, but they all build on the same framework as given above.
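As an illustration, a generic simulated-annealing skeleton that plugs into the same representation/neighbour/score framework (all names are mine; the temperature schedule is an arbitrary choice):

```python
import math
import random

def anneal(initial, neighbour, score, t0=10.0, cooling=0.995, steps=5000):
    """Maximize `score` over solutions produced by `initial` and `neighbour`."""
    current = initial()
    current_score = score(current)
    best, best_score = current, current_score
    t = t0
    for _ in range(steps):
        cand = neighbour(current)
        cand_score = score(cand)
        # Always accept improvements; accept worse candidates with
        # probability exp(delta / t), which shrinks as t cools.
        if cand_score >= current_score or \
                random.random() < math.exp((cand_score - current_score) / t):
            current, current_score = cand, cand_score
        if current_score > best_score:
            best, best_score = current, current_score
        t *= cooling
    return best, best_score
```

For the grouping problem, `initial` would be `gen_solution`, `neighbour` would move one person between groups, and `score` would be `score_solution` with the input fixed.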

Algorithm for merging sets that share at least 2 elements

Given a list of sets:
S_1 : [ 1, 2, 3, 4 ]
S_2 : [ 3, 4, 5, 6, 7 ]
S_3 : [ 8, 9, 10, 11 ]
S_4 : [ 1, 8, 12, 13 ]
S_5 : [ 6, 7, 14, 15, 16, 17 ]
What is the most efficient way to merge all sets that share at least 2 elements? I suppose this is similar to a connected-components problem. So the result would be:
[ 1, 2, 3, 4, 5, 6, 7, 14, 15, 16, 17] (S_1 UNION S_2 UNION S_5)
[ 8, 9, 10, 11 ]
[ 1, 8, 12, 13 ] (S_4 shares 1 with S_1 and 8 with S_3, but is not merged, because it shares only one element with each)
The naive implementation is O(N^2), where N is the number of sets, which is unworkable for us. This would need to be efficient for millions of sets.
Let there be a list of many sets, named S.
Perform a pass through all elements of S to determine the range (LOW .. HIGH).
Create a 2-D array of pointers to Set, of dimensions (LOW .. HIGH, LOW .. HIGH), named M.
do
    Init all elements of M to NULL.
    Iterate through S, processing one set at a time, named Si.
        Enumerate all pairs (P1, P2) in Si, where P1 <= P2.
        For each pair, examine M(P1, P2):
            if M(P1, P2) is NULL
                continue with the next pair
            otherwise
                merge Si into the set pointed to by M(P1, P2)
                remove Si from S, as it has been merged
                move on to processing set S(i + 1)
        If Si was not merged,
            enumerate the pairs of Si again and
            for each pair, make M(P1, P2) point to Si.
while at least one set was merged during the pass
My head is saying this is about Order (2N ln N).
Take that with a grain of salt.
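The multi-pass scheme above can be sketched in Python with a dictionary keyed on element pairs in place of the 2-D array M, plus union-find inside each pass (my naming, not production code):

```python
from itertools import combinations

def merge_sets(sets):
    """Merge all sets sharing >= 2 elements; passes repeat because a
    merge can create new shared pairs (the do/while above)."""
    sets = [set(s) for s in sets]
    while True:
        parent = list(range(len(sets)))      # union-find over set indices

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        seen = {}                            # element pair -> set index
        for idx, s in enumerate(sets):
            for pair in combinations(sorted(s), 2):
                if pair in seen:
                    ri, rj = find(seen[pair]), find(idx)
                    if ri != rj:
                        parent[rj] = ri      # pair seen before: merge
                else:
                    seen[pair] = idx

        merged = {}
        for idx, s in enumerate(sets):
            merged.setdefault(find(idx), set()).update(s)
        result = list(merged.values())
        if len(result) == len(sets):         # no merges this pass: done
            return result
        sets = result
```

Each pass costs roughly the sum of |Si|^2 pair insertions, which stays near-linear in the total input size when individual sets are small.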
If you can order the elements in the sets, you can look into using merge sort on the sets. The only modification needed is to check for duplicates during the merge phase; if one is found, just discard the duplicate. Since merge sort is O(n log n), this offers improved speed compared to the naive O(n^2) algorithm.
However, to really be effective, you should maintain each set in sorted order, so that you can skip the sort phase and go straight to the merge phase.
I don't see how this can be done in less than O(n^2).
Every set needs to be compared to every other one to see whether they share 2 or more elements. That's n(n-1)/2 comparisons, hence O(n^2), even if the check for shared elements takes constant time.
In sorting, the naive implementation is O(n^2), but you can take advantage of the transitive nature of ordered comparison (so, for example, nothing in the lower partition of quicksort needs to be compared to anything in the upper partition, as everything has already been compared to the pivot). This is what makes sorting O(n log n).
That doesn't apply here. So unless there's something special about the sets that lets us skip comparisons based on the results of previous comparisons, it's going to be O(n^2) in general.
Paul.
One side note: It depends on how often this occurs. If most pairs of sets do share at least two elements, it might be most efficient to build the new set at the same time as you are stepping through the comparison, and throw it away if they don't match the condition. If most pairs do not share at least two elements, then deferring the building of the new set until confirmation of the condition might be more efficient.
If your elements are numerical in nature, or can be naturally ordered (i.e., you can assign a value such as 1, 2, 42, etc.), I would suggest using a radix sort on the merged sets, then making a second pass to pick out the unique elements.
This algorithm should be O(n), and you can optimize the radix sort quite a bit using bitwise shift operators and bit masks. I have done something similar for a project I was working on, and it works like a charm.
