Your favourite algorithm and the lesson it taught you [closed]

What algorithm taught you the most about programming or a specific language feature?
We have all had those moments where all of a sudden we know, just know, we have learned an important lesson for the future based on finally understanding an algorithm written by a programmer a couple of steps up the evolutionary ladder. Whose ideas and code had the magic touch on you?

General algorithms:
Quicksort (and its average-case complexity analysis), shows that randomizing your input can be a good thing!;
balanced trees (AVL trees for example), a neat way to balance search/insertion costs;
Dijkstra and Ford-Fulkerson algorithms on graphs (I like the fact that the second one has many applications);
the LZ* family of compression algorithms (LZW for example), data compression sounded kind of magic to me until I discovered it (a long time ago :) );
the FFT, ubiquitous (re-used in so many other algorithms);
the simplex algorithm, ubiquitous as well.
Numerical related:
Euclid's algorithm to compute the gcd of two integers: one of the first algorithms, simple and elegant, powerful, has lots of generalizations (a short sketch follows at the end of this answer);
fast multiplication of large integers (for example, FFT-based methods built on Cooley-Tukey);
Newton iterations to invert / find a root, a very powerful meta-algorithm.
Number theory-related:
AGM-related algorithms (examples): these lead to very simple and elegant algorithms to compute pi (and much more!), though the theory is quite profound (Gauss introduced elliptic functions and modular forms from it, so you can say that it gave birth to algebraic geometry...);
the number field sieve (for integer factorization): very complicated, but quite a nice theoretical result (this also goes for the AKS algorithm, which proved that PRIMES is in P).
I also enjoyed studying quantum computing (the Shor and Deutsch-Jozsa algorithms, for example): this teaches you to think outside the box.
As you can see, I'm a bit biased towards maths-oriented algorithms :)
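For Euclid's algorithm mentioned above, here is a minimal sketch (in Python, chosen just for brevity; the names are mine):
def gcd(a, b):
    # Euclid: the gcd is unchanged when the larger number is replaced
    # by its remainder modulo the smaller one.
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # 21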

"To iterate is human, to recurse divine" - quoted in 1989 at college.
P.S. Posted by Woodgnome while waiting for invite to join

Floyd-Warshall all-pairs shortest paths algorithm
procedure FloydWarshall()
    for k := 1 to n
        for i := 1 to n
            for j := 1 to n
                path[i][j] := min(path[i][j], path[i][k] + path[k][j])
Here's why it's cool: when you first learn about the shortest-path problem in your graph theory course, you probably start with Dijkstra's algorithm, which solves single-source shortest paths. It's quite complicated at first, but then you get over it and fully understand it.
Then the teacher says "Now we want to solve the same problem but for ALL sources". You think to yourself, "Oh god, this is going to be a much harder problem! It's going to be at least N times more complicated than Dijkstra's algorithm!!!".
Then the teacher gives you Floyd-Warshall. And your mind explodes. Then you start to tear up at how beautifully simple the algorithm is. It's just a triply-nested loop. It only uses a simple array for its data structure.
The most eye-opening part for me is the following realization: say you have a solution for problem A. Then you have a bigger "superproblem" B which contains problem A. The solution to problem B may in fact be simpler than the solution to problem A.

This one might sound trivial but it was a revelation for me at the time.
I was in my very first programming class (VB6) and the Prof had just taught us about random numbers. He gave the following instructions: "Create a virtual lottery machine. Imagine a glass ball full of 100 ping pong balls marked 0 to 99. Pick them randomly and display their number until they have all been selected, no duplicates."
Everyone else wrote their program like this: pick a ball, put its number into an "already selected" list and then pick another ball. Check to see if it's already selected; if so, pick another ball, if not, put its number on the "already selected" list, etc....
Of course by the end they were making hundreds of comparisons to find the few balls that had not already been picked. It was like throwing the balls back into the jar after selecting them. My revelation was to throw balls away after picking.
I know this sounds mind-numbingly obvious but this was the moment that the "programming switch" got flipped in my head. This was the moment that programming went from trying to learn a strange foreign language to trying to figure out an enjoyable puzzle. And once I made that mental connection between programming and fun there was really no stopping me.
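That "throw the ball away after picking it" idea is essentially a shuffle. A rough sketch of it in Python (the original class used VB6; this code is only illustrative):
import random

balls = list(range(100))               # the jar: 100 ping pong balls marked 0 to 99
while balls:
    i = random.randrange(len(balls))   # pick a ball at random
    print(balls.pop(i))                # remove it from the jar, so it can never be re-picked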

Huffman coding would be mine. I had originally made my own dumb version by minimizing the number of bits needed to encode text from 8 down to fewer, but had not thought about a variable number of bits depending on frequency. Then I found Huffman coding described in a magazine article and it opened up lots of new possibilities.
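For readers who have not seen it, here is one compact (toy) way the frequency-based idea can be expressed in Python, using a heap to repeatedly merge the two least frequent nodes; the function and variable names are mine:
import heapq
from collections import Counter

def huffman_code(text):
    # Each heap entry is [weight, [symbol, code], [symbol, code], ...]
    heap = [[weight, [symbol, ""]] for symbol, weight in Counter(text).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)           # the two least frequent subtrees
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]        # lighter subtree gets a leading 0
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]        # heavier subtree gets a leading 1
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {symbol: code for symbol, code in heap[0][1:]}

print(huffman_code("abracadabra"))  # frequent letters get shorter codes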

Quicksort. It showed me that recursion can be powerful and useful.

Bresenham's line drawing algorithm got me interested in realtime graphics rendering. This can be used to render filled polygons, like triangles, for things like 3D model rendering.
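As a reminder of what the algorithm looks like, here is a sketch of the integer-error formulation in Python (covers all octants; the function name is mine):
def bresenham(x0, y0, x1, y1):
    # Yield the integer points of the line from (x0, y0) to (x1, y1)
    # using only integer additions and comparisons.
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        yield x0, y0
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

print(list(bresenham(0, 0, 6, 3)))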

Recursive Descent Parsing - I remember being very impressed how such simple code could do something so seemingly complex.
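To see just how little code it takes, here is a sketch of a recursive descent evaluator for +, * and parentheses in Python (the grammar and helper names are mine, not from the original post):
import re

def parse(text):
    # Grammar: expr -> term ('+' term)* ; term -> factor ('*' factor)* ;
    #          factor -> NUMBER | '(' expr ')'
    tokens = re.findall(r"\d+|[+*()]", text)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat():
        nonlocal pos
        pos += 1
        return tokens[pos - 1]

    def expr():
        value = term()
        while peek() == "+":
            eat()
            value += term()
        return value

    def term():
        value = factor()
        while peek() == "*":
            eat()
            value *= factor()
        return value

    def factor():
        if peek() == "(":
            eat()               # '('
            value = expr()
            eat()               # ')'
            return value
        return int(eat())

    return expr()

print(parse("2*(3+4)+1"))  # 15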

Quicksort in Haskell:
qsort [] = []
qsort (x:xs) = qsort (filter (< x) xs) ++ [x] ++ qsort (filter (>= x) xs)
Although I couldn't write Haskell at the time, I did understand this code and, with it, recursion and the quicksort algorithm. It just clicked, and there it was...

The iterative algorithm for Fibonacci, because for me it nailed down the fact that the most elegant code (in this case, the recursive version) is not necessarily the most efficient.
To elaborate: the "fib(10) = fib(9) + fib(8)" approach means that fib(9) will be evaluated as fib(8) + fib(7), so fib(8) (and therefore fib(7), fib(6), ...) ends up being evaluated twice.
The iterative method (curr = prev1 + prev2 in a for loop) does not tree out this way, nor does it take as much memory, since it uses only 3 transient variables instead of n frames on the recursion stack.
I tend to strive for simple, elegant code when I'm programming, but this is the algorithm that helped me realize that this isn't the end-all-be-all for writing good software, and that ultimately the end users don't care how your code looks.
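For reference, the iterative version being described, sketched in Python:
def fib(n):
    # O(n) time, O(1) extra space: only two running values are kept.
    prev, curr = 0, 1
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev

print(fib(10))  # 55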

For some reason I like the Schwartzian transform
@sorted = map  { $_->[0] }
          sort { $a->[1] cmp $b->[1] }
          map  { [$_, foo($_)] }
          @unsorted;
Where foo($_) represents a compute-intensive expression that takes $_ (each item of the list in turn) and produces the corresponding value that is to be compared in its stead.
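The same decorate-sort-undecorate idea, sketched in Python (foo and unsorted stand in for the expensive key function and the input list from the Perl snippet above):
decorated    = [(foo(item), item) for item in unsorted]   # map: attach the precomputed key
decorated.sort(key=lambda pair: pair[0])                  # sort on that key only
sorted_items = [item for key, item in decorated]          # map: strip the key off again
In practice, Python's sorted(unsorted, key=foo) performs the same decoration internally, calling foo once per element.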

Minimax taught me that chess programs aren't smart, they can just think more moves ahead than you can.

I don't know if this qualifies as an algorithm, or just a classic hack. In either case, it helped to get me to start thinking outside the box.
Swap 2 integers without using an intermediate variable (in C++)
void InPlaceSwap(int& a, int& b) {
    a ^= b;
    b ^= a;   // b now holds the original value of a
    a ^= b;   // a now holds the original value of b
}
(Note that it breaks if a and b refer to the same variable: the first XOR zeroes both.)

Quicksort: Until I got to college, I had never questioned whether brute force Bubble Sort was the most efficient way to sort. It just seemed intuitively obvious. But being exposed to non-obvious solutions like Quicksort taught me to look past the obvious solutions to see if something better is available.

For me it's the weak-heapsort algorithm, because it shows (1) how much a wisely chosen data structure (and the algorithms working on it) can influence performance and (2) that fascinating things can be discovered even in old, well-known things. (That weak-heapsort is the best of all the heapsort variants was only proven eight years later.)

This is a slow one :)
I learned lots about both C and computers in general by understanding Duff's Device and XOR swaps
EDIT:
@Jason Z, that's my XOR swap :) cool, isn't it?

For some reason Bubble Sort has always stood out to me. Not because it's elegant or good, just because it had/has a goofy name, I suppose.

(In reply to the iterative Fibonacci answer above:)
You know that Fibonacci has a closed-form solution that allows direct computation of the result in a fixed number of steps, right? Namely, (phi^n - (1 - phi)^n) / sqrt(5). It always strikes me as somewhat remarkable that this should yield an integer, but it does.
phi is the golden ratio, of course; (1 + sqrt(5)) / 2.

I don't have a favourite -- there are so many beautiful ones to pick from -- but one I've always found intriguing is the Bailey–Borwein–Plouffe (BBP) formula, which enables you to calculate an arbitrary digit of pi without knowledge about the preceding digits.
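Summing the series directly (without the digit-extraction trick) already shows how fast it converges; a small sketch in Python:
def bbp_pi(terms=12):
    # The BBP series: pi = sum over k of 16^-k * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6)).
    # The digit-extraction trick reuses these same terms with modular arithmetic
    # to jump straight to the n-th hexadecimal digit.
    return sum(1 / 16**k * (4/(8*k+1) - 2/(8*k+4) - 1/(8*k+5) - 1/(8*k+6))
               for k in range(terms))

print(bbp_pi())  # accurate to double precision after about a dozen terms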

RSA introduced me to the world of modular arithmetic, which can be used to solve a surprising number of interesting problems!
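A toy illustration of the modular arithmetic involved, in Python with deliberately tiny primes (real keys are hundreds of digits long; pow(e, -1, phi) needs Python 3.8+):
p, q = 61, 53
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent: the modular inverse of e

message = 42
cipher = pow(message, e, n)           # encrypt: m^e mod n
assert pow(cipher, d, n) == message   # decrypt: c^d mod n recovers m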

Hasn't taught me much, but the Johnson–Trotter Algorithm never fails to blow my mind.

Binary decision diagrams, though formally not an algorithm but a data structure, lead to elegant and minimal solutions for various sorts of (boolean) logic problems. They were invented and developed to minimise the gate count in chip design, and can be viewed as one of the foundations of the silicon revolution. The resulting algorithms are amazingly simple.
What they taught me:
a compact representation of any problem is important; small is beautiful
a small set of constraints/reductions applied recursively can be used to accomplish this
for problems with symmetries, transformation to a canonical form should be the first step to consider
not every piece of literature gets read: Knuth only found out about BDDs several years after their invention/introduction (and then spent almost a year investigating them)

For me, the simple swap in Kelly & Pohl's A Book on C to demonstrate call-by-reference flipped me out when I first saw it. I looked at that, and pointers snapped into place. Verbatim:
void swap(int *p, int *q)
{
    int temp;

    temp = *p;
    *p = *q;
    *q = temp;
}

The Towers of Hanoi algorithm is one of the most beautiful algorithms. It shows how you can use recursion to solve a problem in a much more elegant fashion than the iterative method.
Alternatively, the recursive algorithms for the Fibonacci series and for calculating powers of a number demonstrate the reverse situation: recursion used for its own sake rather than because it provides good value.
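The recursive Hanoi solution in its entirety, sketched in Python:
def hanoi(n, source, target, spare):
    # Move n disks from source to target: move n-1 out of the way,
    # move the largest disk, then move the n-1 back on top of it.
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)
    print(f"move disk {n} from {source} to {target}")
    hanoi(n - 1, spare, target, source)

hanoi(3, "A", "C", "B")  # 7 moves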

An algorithm that generates a list of primes by comparing each number to the current list of primes, adding it if it's not found, and returning the list of primes at the end. Mind-bending in several ways, not the least of which being the idea of using the partially-completed output as the primary search criteria.
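A sketch of that approach in Python (trial division against the growing output list; the names are mine):
def primes_up_to(limit):
    primes = []
    for n in range(2, limit + 1):
        # n is prime if no previously found prime up to sqrt(n) divides it
        if all(n % p != 0 for p in primes if p * p <= n):
            primes.append(n)
    return primes

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]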

Storing two pointers in a single word for a doubly linked list taught me the lesson that you can do very bad things in C indeed (things a conservative GC will have lots of trouble with).

The most proud I've been of a solution was writing something very similar to the DisplayTag package. It taught me a lot about code design, maintainability, and reuse. I wrote it well before DisplayTag, and it was sunk into an NDA agreement, so I couldn't open source it, but I can still speak gushingly about that one in job interviews.

Map/Reduce. Two simple concepts that fit together to make a load of data-processing tasks easier to parallelize.
Oh... and it's only the basis of massively-parallel indexing:
http://labs.google.com/papers/mapreduce.html
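The classic word-count shape of map/reduce, sketched as a single-process Python toy (the real framework distributes the map calls, the grouping, and the reduce calls across machines):
from collections import defaultdict

def map_phase(document):
    for word in document.split():
        yield word, 1                      # emit (key, value) pairs

def reduce_phase(word, counts):
    return word, sum(counts)               # fold all values for one key

documents = ["the quick brown fox", "the lazy dog", "the fox"]

groups = defaultdict(list)                 # the 'shuffle' step: group values by key
for doc in documents:
    for key, value in map_phase(doc):
        groups[key].append(value)

counts = dict(reduce_phase(k, v) for k, v in groups.items())
print(counts["the"])  # 3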

Not my favorite, but the Miller-Rabin algorithm for testing primality showed me that being right almost all the time is good enough almost all the time. (i.e. Don't mistrust a probabilistic algorithm just because it has a probability of being wrong.)
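A sketch of the test in Python (the standard random-base formulation; a composite passes a single round with probability at most 1/4, so the error shrinks to 4^-rounds):
import random

def is_probable_prime(n, rounds=20):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2**s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)     # random witness candidate
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                   # a proves n composite
    return True

print(is_probable_prime(2**61 - 1))  # True (a Mersenne prime)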

@Krishna Kumar
The bitwise solution is even more fun than the recursive solution.

Related

Is it recommended to use a recursive algorithm to calculate the sum of n cubes, in terms of time and space efficiency?

Is it recommended to use a recursive algorithm to calculate the sum of n cubes, in terms of time and space efficiency, compared to a non-recursive one?
What exactly do you mean? Summing the first n cubes is best done by computing (n^2*(n + 1)^2)/4, but if you're given a list of numbers to sum their cubes, that's not much of an option.
If you are in a language that does tail call optimization, a tail call recursive implementation is certainly recommended. If you're not, it may still be worth writing the recursive function if that is easier for you to reason about (a very important aspect of organizing code!). But do keep in mind that a recursion of depth n will take, depending on your language, compiler, etc., anywhere from 4*n to at least several 100*n bytes of memory, and stack space isn't unlimited.
I'd go for a loop in most languages: for large n because it is more resource-efficient, and for small n because I find it easier to read than a recursive version. But that's tied to my personal background and experience, and what is easier for you and whoever else needs to see your code may be completely different.
It is dependent on what you want to accomplish. If you want it to be dependent on previous outcomes, you can make it recursive. Otherwise I would suggest to make it non-recursive.
Most compiled languages perform tail-recursion removal, and for a simple case like this it will not be a problem. People with a maths background find it much easier to write in functional languages, and recursion comes more naturally to them. However, you can write this very efficiently:
var sumOf0To10Cubes = Enumerable.Range(0, 11).Select(o => Math.Pow(o, 3)).Sum();
Note that math people prefer:
Sum[x^3, {x, 0, 10}]

What's the worst-case valid sudoku puzzle for simple backtracking brute force algorithm?

The "simple/naive backtracking brute force algorithm", "Straightforward Depth-First Search" for sudoku is commonly known and implemented.
and no different implementation seems to exist.
(when i first wrote this question.. i wanted to mean we could completely standardize it, but the wording is bad..)
This guy has described the algorithm well i think: https://stackoverflow.com/a/2075498/3547717
Edit: so let me specify it more precisely with pseudocode...
var field[9][9]
set the givens in 'field'

if brute(first empty cell) = true then
    output solution
else
    output no solution
end if

function brute(cx, cy)
    for n = 1 to 9
        if (n is not present in row cy) and (n is not present in column cx) and (n is not present in block (cx div 3, cy div 3)) then
            let field[cx][cy] = n
            if (cx, cy) is the last empty cell then
                return true
            elseif brute(next empty cell) = true then
                return true
            end if
            let field[cx][cy] = empty
        end if
    next n
    return false
end function
I want to find the puzzle that requires most time. We may call it "hardest" for this particular "standardized" algorithm, but this one is not like those questions asking for "Hardest sudoku".
In fact, a "hard" puzzle under this definition may turn super easy when simply rotated or flipped.
According to the rule "for each cell, try numbers 1 to 9", the search tries from 1 upwards, so we can make it do more work by choosing the given numbers appropriately; and this way there won't be a permutation problem.
The sudoku puzzle must be valid, i.e. it should have exactly 1 solution. Some guy got a puzzle requiring 1439 seconds, but it's not valid because of having no solution.
I define the time required (or, say, the time complexity) as the number of times the recursive function is entered. (In my implementation it's slightly different from the pseudocode above, because of the last entrance, ensuring a unique solution, etc.)
Is there any good way to construct such a puzzle, or do we have to use approximate approaches like heuristic algorithms that find inexact solutions?
I've implemented backtracking with both the naive strategy (what I referred to as "simple" above; it's unique) and Peter Norvig's "least candidates first" strategy (my implementation is deterministic, but not unique; as Peter also mentioned, the order of a Python dict changes the result a lot in case of a tie on the number of candidates).
https://github.com/farteryhr/labs/blob/master/sudoku.c
The no-solution one:
.....5.8....6.1.43..........1.5........1.6...3.......553.....61........4.........
takes 60 seconds on my laptop to reach the no-solution conclusion, entering the recursive function 2549798781 times (called "cycles" below). With my implementation of LCF, it takes 78308087 cycles in 30 seconds to conclude. That's because finding the cell with the fewest candidates needs more operations; a single cycle of the LCF strategy uses about 16x more time.
The topmost one on the Hardest list:
4.....8.5.3..........7......2.....6.....8.4......1.......6.3.7.5..2.....1.4......
takes 3.0s, found the solution at cycle 9727397, and 142738236 cycles for ensuring unique solution. (my LCF: 981/7216 in 0.004s)
Many in the "hard" list are still easy for naive, though a larger portion of them needs 10^7 to 10^9 cycles.
On Wikipedia: Sudoku solving algorithms (Original) it's stated that such puzzles against backtracking algorithm can be constructed, by making as many empty grids at the beginning as possible and the permutation of the top row 987654321.
Well the test..
..............3.85..1.2.......5.7.....4...1...9.......5......73..2.1........4...9
takes 1.4s: 69175317 cycles to find the solution, 69207227 cycles to ensure a unique solution. Not as good as the hard one provided by Peter, but OK, and the search ends almost immediately after finding the solution. That's probably the effect of the first row being lexicographically large. (my LCF: 29206/46160 in 0.023s)
Yes these are obvious, I'm just asking for better ways...
There are also other ways of measuring the difficulty of Sudoku (through solving)
Sudoku Analyst will get stuck with the multiple-solution puzzle given by Peter (naive 419195/419256, LCF 2529478/2529482, yes, there are some puzzles that make LCF do worse):
.....6....59.....82....8....45........3........6..3.54...325..6..................
This one is easy for both naive backtracking (10008/76703) and LCF backtracking (313/1144), but also gets Sudoku Analyst stuck.
..53.....8......2..7..1.5..4....53...1..7...6..32...8..6.5....9..4....3......97..
Another update:
The most difficult Sudoku puzzles are quickly solved by a straightforward depth-first search algorithm
Ha, finally someone also looking for it, and a super tough one is given! The following valid puzzle:
9..8...........5............2..1...3.1.....6....4...7.7.86.........3.1..4.....2..
In this paper, the algorithm is named SDFS, Straightforward Depth-First Search. The number of cycles stated by the author is 1553023932/1884424814, and with my implementation it's 1305263522/1584688020. Yes, there will be some difference in precisely where the counter is incremented, but the basic behavior matches. On repl.it's server, it took 97s to find the answer and 119s to finish the search.
You can easily generate a worst case by recording the time / number of operations your code takes to solve hard sudoku puzzles. You can either use a random generator that generates valid sudoku puzzles, or you can take hard sudoku puzzles from the internet and run your code against them to measure the time/number of operations. Once you run your code against 10000 such cases, the slowest 5 (and the unsolved ones) would be the worst cases for your solution.

recursion versus iteration

Is it correct to say that everywhere recursion is used, a for loop could be used? And if recursion is usually slower, what is the technical reason for ever using it over for loop iteration?
And if it is always possible to convert a recursion into a for loop, is there a rule-of-thumb way to do it?
Recursion is usually much slower because all function calls must be stored in a stack to allow the return back to the caller functions. In many cases, memory has to be allocated and copied to implement scope isolation.
Some optimizations, like tail call optimization, make recursions faster but aren't always possible, and aren't implemented in all languages.
The main reasons to use recursion are
that it's more intuitive in many cases when it mimics our approach of the problem
that some data structures like trees are easier to explore using recursion (or would need stacks in any case)
Of course every recursion can be modeled as a kind of loop: that's what the CPU will ultimately do. And the recursion itself, more directly, means putting the function calls and scopes on a stack. But changing your recursive algorithm to a looping one might need a lot of work and make your code less maintainable: as with every optimization, it should only be attempted when some profiling or evidence shows it to be necessary.
Is it correct to say that everywhere recursion is used a for loop could be used?
Yes, because recursion in most CPUs is modeled with loops and a stack data structure.
And if recursion is usually slower what is the technical reason for using it?
It is not "usually slower": it's recursion that is applied incorrectly that's slower. On top of that, modern compilers are good at converting some recursions to loops without even asking.
And if it is always possible to convert a recursion into a for loop, is there a rule-of-thumb way to do it?
Write iterative programs for algorithms best understood when explained iteratively; write recursive programs for algorithms best explained recursively.
For example, searching binary trees, running quicksort, and parsing expressions in many programming languages are often explained recursively. These are best coded recursively as well. On the other hand, computing factorials and calculating Fibonacci numbers are much easier to explain in terms of iteration. Using recursion for them is like swatting flies with a sledgehammer: it is not a good idea, even when the sledgehammer does a really good job at it+.
+ I borrowed the sledgehammer analogy from Dijkstra's "Discipline of Programming".
Question :
And if recursion is usually slower, what is the technical reason for ever using it over for loop iteration?
Answer :
Because some algorithms are hard to solve iteratively. Try to solve depth-first search both recursively and iteratively. You will get the idea that it is plain hard to solve DFS with iteration (a short sketch comparing the two follows this answer).
Another good thing to try out: write merge sort iteratively. It will take you quite some time.
Question :
Is it correct to say that everywhere recursion is used a for loop could be used?
Answer :
Yes. This thread has a very good answer for this.
Question :
And if it is always possible to convert a recursion into a for loop, is there a rule-of-thumb way to do it?
Answer :
Trust me. Try to write your own version of depth-first search iteratively. You will notice that some problems are easier to solve recursively.
Hint: recursion is good when you are solving a problem that can be solved by the divide-and-conquer technique.
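To illustrate the DFS point above, a small Python sketch of the same traversal written both ways (the iterative one needs its own explicit stack):
def dfs_recursive(graph, node, visited=None):
    # Depth-first traversal, the natural recursive formulation.
    if visited is None:
        visited = set()
    visited.add(node)
    for neighbour in graph[node]:
        if neighbour not in visited:
            dfs_recursive(graph, neighbour, visited)
    return visited

def dfs_iterative(graph, start):
    # The same traversal with an explicit stack replacing the call stack.
    visited, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        stack.extend(graph[node])
    return visited

graph = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
assert dfs_recursive(graph, "a") == dfs_iterative(graph, "a") == {"a", "b", "c", "d"}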
Besides being slower, recursion can also result in stack overflow errors depending on how deep it goes.
To write an equivalent method using iteration, we must explicitly use a stack. The fact that the iterative version requires a stack for its solution indicates that the problem is difficult enough that it can benefit from recursion. As a general rule, recursion is most suitable for problems that cannot be solved with a fixed amount of memory and consequently require a stack when solved iteratively.
Having said that, recursion and iteration can produce the same outcome while following different patterns. Deciding which method works better is case by case; best practice is to choose based on the pattern the problem follows.
For example, to find the nth triangular number of the triangular sequence: 1 3 6 10 15 …
A program that finds the nth triangular number, using an iterative algorithm:
//Triangular.java
import java.util.*;

class Triangular {
    public static int iterativeTriangular(int n) {
        int sum = 0;
        for (int i = 1; i <= n; i++)
            sum += i;
        return sum;
    }

    public static void main(String args[]) {
        Scanner stdin = new Scanner(System.in);
        System.out.print("Please enter a number: ");
        int n = stdin.nextInt();
        System.out.println("The " + n + "-th triangular number is: " +
                           iterativeTriangular(n));
    }
}
Using a recursive algorithm:
//Triangular.java
import java.util.*;

class Triangular {
    public static int recursiveTriangular(int n) {
        if (n <= 1)
            return n;   // base case (also avoids infinite recursion for n <= 0)
        return recursiveTriangular(n - 1) + n;
    }

    public static void main(String args[]) {
        Scanner stdin = new Scanner(System.in);
        System.out.print("Please enter a number: ");
        int n = stdin.nextInt();
        System.out.println("The " + n + "-th triangular number is: " +
                           recursiveTriangular(n));
    }
}
Yes, as said by Thanakron Tandavas,
Recursion is good when you are solving a problem that can be solved by divide and conquer technique.
For example: Towers of Hanoi
N rings of increasing size
3 poles
The rings start stacked on pole 1. The goal is to move the rings so that they are stacked on pole 3... but:
you can only move one ring at a time, and
you can't put a larger ring on top of a smaller one.
The iterative solution is "powerful yet ugly"; the recursive solution is "elegant".
I seem to remember my computer science professor saying back in the day that all problems that have recursive solutions also have iterative solutions. He said that a recursive solution is usually slower, but that recursion is frequently used when it is easier to reason about and code than an iterative solution.
However, in the case of more advanced recursive solutions, I don't believe it will always be possible to implement them using a simple for loop.
Most of the answers seem to assume that iterative = for loop. If your for loop is unrestricted (a la C, you can do whatever you want with your loop counter), then that is correct. If it's a real for loop (say as in Python or most functional languages where you cannot manually modify the loop counter), then it is not correct.
All (computable) functions can be implemented both recursively and using while loops (or conditional jumps, which are basically the same thing). If you truly restrict yourself to for loops, you will only get a subset of those functions (the primitive recursive ones, if your elementary operations are reasonable). Granted, it's a pretty large subset which happens to contain every single function you're likely to encounter in practice.
What is much more important is that a lot of functions are very easy to implement recursively and awfully hard to implement iteratively (manually managing your call stack does not count).
Recursion + memoization can lead to a more efficient solution compared with a pure iterative approach, e.g. check this:
http://jsperf.com/fibonacci-memoized-vs-iterative-for-large-n
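A minimal Python sketch of what that benchmark compares: the recursive definition kept intact, with memoization removing the exponential blow-up:
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each fib(k) is computed once and cached, so the recursion does O(n) work.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(80))  # 23416728348467685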
Short answer: the trade-off is that recursion is faster and for loops take up less memory, in almost all cases. However, there are usually ways to change the for loop or the recursion to make it run faster.

What is a "naive" algorithm, and what is a "closed - form" solution?

I have a few questions regarding the semantics of terminology used when describing algorithms.
Firstly, what is meant by a 'naive' algorithm? How does this differ from other solutions to a given problem? What other forms can solutions take?
Secondly, I have heard much reference to having a 'closed-form' solution. I have no idea what this means either - but it often appears when trying to solve recurrence relations...
Thanks for your time
A Naive algorithm is usually the most obvious solution when one is asked a problem. It may not be a smart algorithm but will probably get the job done (...eventually.)
E.g. searching for an element in a sorted array: a naive algorithm would be to use linear search; a not-so-naive solution would be to use binary search.
A better example would be substring search, where the naive algorithm is far less efficient than the Boyer–Moore or Knuth–Morris–Pratt algorithms.
A closed-form solution is a simple solution that works instantly, without any loops, recursion, etc.
Eg:
Iterative algorithm for the sum of the integers from 1 to n:
s = 0
for i in 1 to n
    s = s + i
end for
print s
Closed form (for the same problem):
s = n * (n + 1) / 2
A naive algorithm is a very simple algorithm, one with very simple rules. Sometimes it's the first one that comes to mind. It may be stupid and very slow, and it may not even solve the problem. It may sometimes be the best possible. Here's an example of a problem and "naive" algorithms:
Problem: You are in a (2-dimensional) maze. Find your way out. (meaning: to a spot with an "EXIT" sign :)
Naive algorithm 1: Start walking and take the right-hand branch at every intersection you meet (until you find "EXIT").
Naive algorithm 2: Start walking and choose a random one in every intersection you meet (until you find "EXIT").
Algorithm 1 will not even get you out of some mazes!
Algorithm 2 will get you out of all mazes (although this is rather hard to prove).
Closed form means you can give a single expression as the solution, one that solves the problem without recurrence/recursion. Note that it is not always possible to find such a closed form.
Naive means just what it says: a first, simple solution to the problem that solves it, but maybe not very time-/space-efficiently. What one really considers 'naive' depends on the speaker, the context, and the weather of the next day. Often it is used to distinguish a very sophisticated solution (one that uses some kind of trick) from the obvious implementation.

Find the priority function / alphabet order for extreme higher order elements relation

This question is an extension to the following one. The difference is that now our function to optimize will have higher order relations between elements:
We have an array of elements a1, a2, ..., aN over an alphabet E, assuming N >> |E|.
For each symbol of the alphabet we define a unique integer priority = V(sym). Let's define V{i} := V(symbol(ai)) for simplicity.
The task is to find a priority function V for which:
Count(i)->MIN | V{i} > V{i+1} <= V{i+2}
In other words, I need to find the priorities / permutation of the alphabet for which the number of positions i, satisfying the condition V{i}>V{i+1}<=V{i+2}, is minimum.
Maximum required abstraction (low priority for me). I guess once the solution model for the initial question is extended to cover the first part of this one, extending it farther (see below) will be easier.
Given a matrix of signs B of size MxK (basically B[i,j] is from the set {<,>,<=,>=}), find the priority function V for which:
Sum(for all j in range [1,M]) {Count(i)}->EXTREMUM | V{i} B[j,1] V{i+1} B[j,2] ... B[j,K] V{i+K}
As an example, find the priority function V, for which the number of i, satisfying V{i}<V{i+1}<V{i+2} or V{i}>V{i+1}>V{i+2}, is minimum.
My intuition is that all variations on this problem will prove to be NP-hard. So I'd begin looking for heuristics that produce reasonable answers. This may involve some trial and error.
A simplistic approach is to write down a possible permutation, and then try possible swaps until you've arrived at a local minimum. Try several times, and pick the best answer.
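A rough sketch of that simplistic swap search in Python, using the objective from the question (the count of positions with V{i} > V{i+1} <= V{i+2}); all names here are mine and this is only a starting-point heuristic, not a finished solution:
import random

def cost(priority, seq):
    # Number of positions i with V{i} > V{i+1} <= V{i+2}, the quantity to minimize.
    V = [priority[s] for s in seq]
    return sum(1 for i in range(len(V) - 2) if V[i] > V[i + 1] <= V[i + 2])

def local_search(seq, alphabet, iters=10000):
    # Start from an arbitrary priority assignment and keep swapping two symbols'
    # priorities whenever the swap does not make the objective worse.
    priority = {sym: i for i, sym in enumerate(alphabet)}
    best = cost(priority, seq)
    for _ in range(iters):
        a, b = random.sample(alphabet, 2)
        priority[a], priority[b] = priority[b], priority[a]
        c = cost(priority, seq)
        if c <= best:
            best = c
        else:
            priority[a], priority[b] = priority[b], priority[a]  # undo the swap
    return priority, best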
Simulated annealing provides a more sophisticated version of this approach, see http://en.wikipedia.org/wiki/Simulated_annealing for a description. It may take some experimentation to find a set of parameters that seems to converge relatively well.
Another idea is to look for a genetic algorithm. Based on a quick Google search it looks like the standard way to do this is to try to turn an NP-complete problem into a SAT problem, and then use a genetic algorithm on that problem. This approach would require turning this into a SAT problem in some reasonable way. Unfortunately it is not obvious to me how one would go about doing this reduction. Indeed in the first version that you had, your problem was closely connected to a classic NP-hard problem. The fact that it is labeled NP-hard rather than NP-complete is evidence that people haven't found a good way to transform it into a SAT problem. So if it isn't obvious how to turn the simple version into a SAT problem, then you are unlikely to convert the hard problem either.
But you could still try some variation on genetic algorithms. Mutation is pretty simple, just swap some elements around. One way to combine elements would be to take 3 permutations and use quicksort to find the combination as follows: take a random pivot, and then use "majority wins" to bucket elements into bigger and smaller. Sort each half in the same way.
I'm sorry that I can't just give you an approach and say, "This should work." You've got what looks like an open-ended research project, and the best I can do is give you some ideas about things you can try that might work reasonably well.
