How does this work? Weird Towers of Hanoi Solution - algorithm

I was lost on the internet when I discovered this unusual, iterative solution to the towers of Hanoi:
for (int x = 1; x < (1 << nDisks); x++)
{
    FromPole = (x & x-1) % 3;
    ToPole = ((x | x-1) + 1) % 3;
    moveDisk(FromPole, ToPole);
}
This post also has similar Delphi code in one of the answers.
However, for the life of me, I can't seem to find a good explanation for why this works.
Can anyone help me understand it?

The recursive solution to Towers of Hanoi works like this: to move N disks from peg A to peg C, you first move N-1 disks from A to B, then you move the bottom disk from A to C, and then you move the N-1 disks from B to C. In essence,
hanoi(from, to, spare, N):
    hanoi(from, spare, to, N-1)
    moveDisk(from, to)
    hanoi(spare, to, from, N-1)
Clearly hanoi( _ , _ , _ , 1) takes one move, and hanoi ( _ , _ , _ , k) takes as many moves as 2 * hanoi( _ , _ , _ , k-1) + 1. So the solution length grows in the sequence 1, 3, 7, 15, ... This is the same sequence as (1 << k) - 1, which explains the length of the loop in the algorithm you posted.
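For concreteness, here is a small Python sketch of that recursion (my own, not from the question); it also confirms that the number of moves for k disks is (1 << k) - 1:

def hanoi(frm, to, spare, n, moves):
    # Move n disks from `frm` to `to` via `spare`, collecting the moves.
    if n == 0:
        return
    hanoi(frm, spare, to, n - 1, moves)
    moves.append((frm, to))
    hanoi(spare, to, frm, n - 1, moves)

for k in range(1, 6):
    moves = []
    hanoi(0, 2, 1, k, moves)
    assert len(moves) == (1 << k) - 1   # 1, 3, 7, 15, 31, ...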
If you look at the solutions themselves, for N = 1 you get
FROM TO
; hanoi(0, 2, 1, 1)
0 2 ; movedisk
For N = 2 you get
FROM TO
; hanoi(0, 2, 1, 2)
; hanoi(0, 1, 2, 1)
0 1 ; movedisk
0 2 ; movedisk
; hanoi(1, 2, 0, 1)
1 2 ; movedisk
And for N = 3 you get
FROM TO
; hanoi(0, 2, 1, 3)
; hanoi(0, 1, 2, 2)
; hanoi(0, 2, 1, 1)
0 2 ; movedisk
0 1 ; movedisk
; hanoi(2, 1, 0, 1)
2 1 ; movedisk
0 2 ; movedisk ***
; hanoi(1, 2, 0, 2)
; hanoi(1, 0, 2, 1)
1 0 ; movedisk
1 2 ; movedisk
; hanoi(0, 2, 1, 1)
0 2 ; movedisk
Because of the recursive nature of the solution, the FROM and TO columns follow a recursive logic: if you take the middle entry of the columns, the parts above and below it are copies of each other, but with the peg numbers permuted. This is an obvious consequence of the algorithm itself, which does not perform any arithmetic on the peg numbers but only permutes them. In the case N=3 above, the middle row is at x=4 (marked with three stars).
Now, the expression (X & (X-1)) clears the lowest set bit of X, so it maps the numbers from 1 to 7 like this:
1 -> 0
2 -> 0
3 -> 2
4 -> 0 (***)
5 -> 4 % 3 = 1
6 -> 4 % 3 = 1
7 -> 6 % 3 = 0
The trick is that because the middle row is always at an exact power of two and thus has exactly one bit set, the part after the middle row equals the part before it when you add the middle row value (4 in this case) to the rows (i.e. 4=0+4, 6=2+6). This implements the "copy" property, the addition of the middle row implements the permutation part. The expression (X | (X-1)) + 1 sets the lowest zero bit which has ones to its right, and clears these ones, so it has similar properties as expected:
1 -> 2
2 -> 4 % 3 = 1
3 -> 4 % 3 = 1
4 -> 8 (***) % 3 = 2
5 -> 6 % 3 = 0
6 -> 8 % 3 = 2
7 -> 8 % 3 = 2
As to why these sequences actually produce the correct peg numbers, let's consider the FROM column. The recursive solution starts with hanoi(0, 2, 1, N), so at the middle row x = 2 ** (N-1) you must have movedisk(0, 2). Now by the recursion rule, at x = 2 ** (N-2) you need movedisk(0, 1), and at x = 2 ** (N-1) + 2 ** (N-2) you need movedisk(1, 2). This creates the "0,0,1" pattern for the FROM pegs, which is visible with different permutations in the table above (check rows 2, 4 and 6 for 0,0,1; rows 1, 2, 3 for 0,0,2; and rows 5, 6, 7 for 1,1,0, all permuted versions of the same pattern).
Now, of all the functions that have this property of creating copies of themselves around powers of two, but with offsets, the authors have selected those that produce, modulo 3, the correct permutations. This isn't an overly difficult task because there are only 6 possible permutations of the three integers 0..2, and the permutations progress in a logical order in the algorithm. (X|(X-1))+1 is not necessarily deeply linked with the Hanoi problem, and it doesn't need to be; it's enough that it has the copying property and that it happens to produce the correct permutations in the correct order.
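A quick way to convince yourself of all this (a sketch of my own, not part of the original answer) is to generate the moves recursively and compare them against the two bit formulas; note that the recursion's target peg has to be 2^N % 3, as discussed in the next answer:

def hanoi_moves(frm, to, spare, n, out):
    # Standard recursive solution, recording (from, to) pairs.
    if n == 0:
        return
    hanoi_moves(frm, spare, to, n - 1, out)
    out.append((frm, to))
    hanoi_moves(spare, to, frm, n - 1, out)

for n_disks in range(1, 9):
    expected = []
    hanoi_moves(0, (1 << n_disks) % 3, (1 << (n_disks - 1)) % 3, n_disks, expected)
    actual = [((x & (x - 1)) % 3, ((x | (x - 1)) + 1) % 3)
              for x in range(1, 1 << n_disks)]
    assert actual == expected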

antti.huima's solution is essentially correct, but I wanted something more rigorous, and it was too big to fit in a comment. Here goes:
First note: at the middle step x = 2^(N-1) of this algorithm, the "from" peg is 0, and the "to" peg is 2^N % 3. This leaves 2^(N-1) % 3 for the "spare" peg.
This is also true for the last step of the algorithm, so we see that actually the authors' algorithm is a slight "cheat": they're moving the disks from peg 0 to peg 2^N % 3, rather than a fixed, pre-specified "to" peg. This could be changed with not much work.
The original Hanoi algorithm is:
hanoi(from, to, spare, N):
    hanoi(from, spare, to, N-1)
    move(from, to)
    hanoi(spare, to, from, N-1)
Plugging in "from" = 0, "to" = 2^N % 3, "spare" = 2^(N-1) % 3, we get (suppressing the % 3's):
hanoi(0, 2**N, 2**(N-1), N):
    (a) hanoi(0, 2**(N-1), 2**N, N-1)
    (b) move(0, 2**N)
    (c) hanoi(2**(N-1), 2**N, 0, N-1)
The fundamental observation here is:
In line (c), the pegs are exactly the pegs of hanoi(0, 2^(N-1), 2^N, N-1) shifted by 2^(N-1) % 3, i.e. they are exactly the pegs of line (a) with this amount added to them.
I claim that it follows that when we run line (c), the "from" and "to" pegs are the corresponding pegs of line (a) shifted by 2^(N-1) % 3. This follows from the easy, more general lemma that in hanoi(a+x, b+x, c+x, N), the "from" and "to" pegs are shifted exactly x from those in hanoi(a, b, c, N).
Now consider the functions
f(x) = (x & (x-1)) % 3
g(x) = ((x | (x-1)) + 1) % 3
To prove that the given algorithm works, we only have to show that:
f(2^(N-1)) == 0 and g(2^(N-1)) == 2^N % 3
for 0 < i < 2^(N-1), we have f(2^(N-1) + i) == (f(i) + 2^(N-1)) % 3 and g(2^(N-1) + i) == (g(i) + 2^(N-1)) % 3, which is exactly the "shifted copy" property described above.
Both of these are easy to show.
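They are also easy to check numerically; a quick sketch of my own:

def f(x): return (x & (x - 1)) % 3
def g(x): return ((x | (x - 1)) + 1) % 3

for N in range(1, 12):
    half = 1 << (N - 1)               # 2^(N-1)
    assert f(half) == 0 and g(half) == (2 * half) % 3
    for i in range(1, half):
        assert f(half + i) == (f(i) + half) % 3
        assert g(half + i) == (g(i) + half) % 3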

This isn't directly answering the question, but it was too long to put in a comment.
I had always done this by analyzing the size of disk you should move next. If you look at the disks moved, it comes out to:
1 disk : 1
2 disks : 1 2 1
3 disks : 1 2 1 3 1 2 1
4 disks : 1 2 1 3 1 2 1 4 1 2 1 3 1 2 1
Odd-sized discs always move in the opposite direction of even-sized ones, cycling through the pegs in order (0, 1, 2, repeat) or (2, 1, 0, repeat).
If you take a look at the pattern, the disc to move is given by the highest set bit of the XOR of the number of moves made so far and that number plus one.
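A one-liner illustrating this (my own sketch; disc sizes are 1-indexed to match the table above):

def disc_to_move(moves_made):
    # Highest set bit of moves_made XOR (moves_made + 1), as a 1-indexed disc size.
    return (moves_made ^ (moves_made + 1)).bit_length()

print([disc_to_move(m) for m in range(15)])
# [1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1]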

Related

Implementation: Algorithm for a special distribution Problem

We are given a number x, and a set of n coins with denominations v1, v2, …, vn.
The coins are to be divided between Alice and Bob, with the restriction that each person's coins must add up to at least x.
For example, if x = 1, n = 2, and v1 = v2 = 2, then there are two possible distributions: one where Alice gets coin #1 and Bob gets coin #2, and one with the reverse. (These distributions are considered distinct even though both coins have the same denomination.)
I'm interested in counting the possible distributions. I'm pretty sure this can be done in O(nx) time and O(n+x) space using dynamic programming; but I don't see how.
Count the ways for one person to get less than x, double it, and subtract that from double the total number of ways to divide the collection in two (the Stirling number of the second kind, {n, 2}).
For example,
{2, 3, 3, 5}, x = 5
i matrix
0 2: 1
1 3: 1 (adding to 2 is too much)
2 3: 2
3 N/A (≥ x)
3 ways for one person to get less than 5.
Total ways to partition a set of 4 items into 2 is {4, 2} = 7.
2 * 7 - 2 * 3 = 8
The Python code below uses MBo's routine. If you like this answer, please consider up-voting that answer.
# Stirling Algorithm
# Cod3d by EXTR3ME
# https://extr3metech.wordpress.com
def stirling(n, k):
    # Stirling number of the second kind, S(n, k)
    if n <= 0:
        return 1
    elif k <= 0:
        return 0
    elif n == k:
        return 1
    elif n < k:
        return 0
    else:
        return k * stirling(n - 1, k) + stirling(n - 1, k - 1)

def f(coins, x):
    # a[i] = number of subsets of the coins seen so far that sum to i (i < x)
    a = [1] + (x - 1) * [0]
    # Code by MBo
    # https://stackoverflow.com/a/53418438/2034787
    for c in coins:
        for i in range(x - 1, c - 1, -1):
            if a[i - c] > 0:
                a[i] = a[i] + a[i - c]
    return 2 * (stirling(len(coins), 2) - sum(a) + 1)

print(f([2, 3, 3, 5], 5))     # 8
print(f([1, 2, 3, 4, 4], 5))  # 16
print f([1,2,3,4,4], 5) # 16
If the sum of all coins is S, then the first person can get any amount of money in the range x..S-x.
Make an array A of length S-x+1 and fill it with the number of ways to compose each value from the given coins (a kind of Coin Change problem).
To provide uniqueness (don't count C1+C2 and C2+C1 as two variants), treat the coins one at a time and fill the array in reverse direction:
A[0] = 1
for C in Coins:
    for i = S-x downto C:
        if A[i - C] > 0:
            A[i] = A[i] + A[i - C]
            // we can compose value i as (i-C) and C
Then sum the entries of A in the range x..S-x.
Example for coins 2, 3, 3, 5 and x=5.
S = 13, S-x = 8
Array state after using coins in order:
idx:   0  1  2  3  4  5  6  7  8
       1  .  1  .  .  .  .  .  .    (after coin 2)
       1  .  1  1  .  1  .  .  .    (after coin 3)
       1  .  1  2  .  2  1  .  1    (after coin 3')
       1  .  1  2  .  3  1  1  3    (after coin 5)
So there are 8 variants to distribute these coins. Quick check (3' denotes the second coin 3):
2 3    | 3' 5
2 3'   | 3 5
2 3 3' | 5
2 5    | 3 3'
3 3'   | 2 5
3 5    | 2 3'
3' 5   | 2 3
5      | 2 3 3'
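A direct Python translation of this approach (my own sketch, not MBo's code) reproduces the count:

def count_splits(coins, x):
    S = sum(coins)
    if S < 2 * x:
        return 0
    A = [0] * (S - x + 1)   # A[i] = number of subsets of the coins seen so far summing to i
    A[0] = 1
    for c in coins:
        for i in range(S - x, c - 1, -1):   # reverse, so each coin is used at most once
            if A[i - c] > 0:
                A[i] += A[i - c]
    # The first person takes a subset summing to x..S-x; the second person takes the rest.
    return sum(A[x:S - x + 1])

print(count_splits([2, 3, 3, 5], 5))     # 8
print(count_splits([1, 2, 3, 4, 4], 5))  # 16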
You can also solve it in O(|A| * x^2) time and memory by adding memoization to this DP:
solve(A, pos, sum1, sum2):
    if (pos == A.length) return sum1 == x && sum2 == x
    return solve(A, pos + 1, min(sum1 + A[pos], x), sum2) +
           solve(A, pos + 1, sum1, min(sum2 + A[pos], x))

print(solve(A, 0, 0, 0))
So, depending on whether x^2 < sum or not, you could use this or the answer provided by MBo (in terms of time complexity). If you care more about space, this is better only when |A| * x^2 < sum - x.
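For what it's worth, a memoized Python version of that pseudocode might look like this (my own sketch, capping each running sum at x so the state space stays at |A| * x^2):

from functools import lru_cache

def count_distributions(coins, x):
    coins = tuple(coins)

    @lru_cache(maxsize=None)
    def solve(pos, sum1, sum2):
        # sum1 and sum2 are capped at x, so reaching x means "at least x".
        if pos == len(coins):
            return 1 if (sum1 == x and sum2 == x) else 0
        c = coins[pos]
        return (solve(pos + 1, min(sum1 + c, x), sum2) +
                solve(pos + 1, sum1, min(sum2 + c, x)))

    return solve(0, 0, 0)

print(count_distributions([2, 3, 3, 5], 5))     # 8
print(count_distributions([1, 2, 3, 4, 4], 5))  # 16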

algorithmic puzzle for calculating the number of combinations of numbers that sum to a fixed result

This is a puzzle I have been thinking about since last night. I have come up with a solution, but it's not efficient, so I want to see if there is a better idea.
The puzzle is this:
given positive integers N and T, you will need to have:
for i in [1, T], A[i] from { -1, 0, 1 }, such that SUM(A) == N
additionally, every prefix sum of A shall stay in [0, N], and once the prefix sum PSUM[A, t] == N, it's necessary to have A[i] == 0 for i in [t + 1, T]
here prefix sum PSUM is defined to be: PSUM[A, t] = SUM(A[i] for i in [1, t])
the puzzle asks how many such A's exist given fixed N and T
for example, when N = 2, T = 4, the following A's work:
1 1 0 0
1 -1 1 1
0 1 1 0
but the following don't:
-1 1 1 1 # prefix sum -1
1 1 -1 1 # non-0 following a prefix sum == N
1 1 1 -1 # prefix sum > N
The following Python code verifies the rule, given N as expect and an instance of A as seq (some people may find it easier to read code than a literal description):
def verify(expect, seq):
    s = 0
    for j, i in enumerate(seq):
        s += i
        if s < 0:
            return False
        if s == expect:
            break
    else:
        return s == expect
    for k in range(j + 1, len(seq)):
        if seq[k] != 0:
            return False
    return True
I have coded up my solution, but it's too slow. Here is my approach:
I decompose the problem into two parts: a part without -1 in it (only {0, 1}) and a part with -1.
So if SOLVE(N, T) is the correct answer, I define a function SOLVE'(N, T, B), where a positive B allows the prefix sums to lie in the interval [-B, N] instead of [0, N].
So in fact SOLVE(N, T) == SOLVE'(N, T, 0).
so I soon realized the solution is actually:
have the prefix of A be some valid {0, 1} combination with positive length l, containing o ones
at position l + 1, start adding one or more -1s and use B to track their number; the maximum is B + o or the number of slots remaining in A, whichever is less
recursively call SOLVE'(N, T, B)
in the previous N = 2, T = 4 example, one of the search cases goes like this:
let the prefix of A be [1], so we have A = [1, -, -, -].
start adding -1s; here I will add only one: A = [1, -1, -, -].
recursively call SOLVE'; here I will call SOLVE'(2, 2, 0) to solve the last two spots. It returns [1, 1] only, so this branch yields the combination [1, -1, 1, 1].
but this algorithm is too slow.
I am wondering how I can optimize it, or whether there is a different way to look at this problem that can boost the performance. (I just need the idea, not an implementation.)
EDIT:
Some sample values are:
T N SOLVE(N, T)
3 2 3
4 2 7
5 2 15
6 2 31
7 2 63
8 2 127
9 2 255
10 2 511
11 2 1023
12 2 2047
13 2 4095
3 3 1
4 3 4
5 3 12
6 3 32
7 3 81
8 3 200
9 3 488
10 3 1184
11 3 2865
12 3 6924
13 3 16724
4 4 1
5 4 5
6 4 18
An exponential-time brute force over all choices looks like this in general (in Python, using the verify function above):
import itertools
choices = [-1, 0, 1]
print(len([l for l in itertools.product(*([choices] * t)) if verify(n, l)]))
An observation: assuming that n is at least 1, every solution to your stated problem ends in something of the form [1, 0, ..., 0]: i.e., a single 1 followed by zero or more 0s. The portion of the solution prior to that point is a walk that lies entirely in [0, n-1], starts at 0, ends at n-1, and takes fewer than t steps.
Therefore you can reduce your original problem to a slightly simpler one, namely that of determining how many t-step walks there are in [0, n] that start at 0 and end at n (where each step can be 0, +1 or -1, as before).
The following code solves the simpler problem. It uses the lru_cache decorator to cache intermediate results; this is in the standard library in Python 3, or there's a recipe you can download for Python 2.
from functools import lru_cache

@lru_cache()
def walks(k, n, t):
    """
    Return the number of length-t walks in [0, n]
    that start at 0 and end at k. Each step
    in the walk adds -1, 0 or 1 to the current total.
    Inputs should satisfy 0 <= k <= n and 0 <= t.
    """
    if t == 0:
        # If no steps allowed, we can only get to 0,
        # and then only in one way.
        return k == 0
    else:
        # Count the walks ending in 0.
        total = walks(k, n, t-1)
        if 0 < k:
            # ... plus the walks ending in 1.
            total += walks(k-1, n, t-1)
        if k < n:
            # ... plus the walks ending in -1.
            total += walks(k+1, n, t-1)
        return total
Now we can use this function to solve your problem.
def solve(n, t):
    """
    Find number of solutions to the original problem.
    """
    # All solutions stick at n once they get there.
    # Therefore it's enough to find all walks
    # that lie in [0, n-1] and take us to n-1 in
    # fewer than t steps.
    return sum(walks(n-1, n-1, i) for i in range(t))
Result and timings on my machine for solve(10, 100):
In [1]: solve(10, 100)
Out[1]: 250639233987229485923025924628548154758061157
In [2]: %timeit solve(10, 100)
1000 loops, best of 3: 964 µs per loop

How many permutations of a given array result in BST's of height 2?

A BST is generated (by successive insertion of nodes) from each permutation of keys from the set {1,2,3,4,5,6,7}. How many permutations determine trees of height two?
I've been stuck on this simple question for quite some time. Any hints, anyone?
By the way the answer is 80.
Consider: how could the tree have height 2?
-It needs to have 4 as root, 2 as the left child, 6 right child, etc.
How come 4 is the root?
-It needs to be inserted first. So we have fixed one number; the other 6 can still move around in the permutation.
And?
-After the first insert there are still 6 places left, 3 for the left and 3 for the right subtrees. That's 6 choose 3 = 20 choices.
Now what?
-For the left and right subtrees, their roots need to be inserted first, then the children's order does not affect the tree - 2, 1, 3 and 2, 3, 1 gives the same tree. That's 2 for each subtree, and 2 * 2 = 4 for the left and right subtrees.
So?
In conclusion: C(6, 3) * 2 * 2 = 20 * 2 * 2 = 80.
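A brute-force check of that count (my own sketch): insert every permutation of 1..7 into a BST and count the trees of height 2 (counting edges, so the perfectly balanced 7-node tree has height 2):

from itertools import permutations

def bst_height(perm):
    # Build a BST by successive insertion and return its height in edges.
    children = {perm[0]: [None, None]}
    root, height = perm[0], 0
    for key in perm[1:]:
        node, depth = root, 0
        while True:
            side = 0 if key < node else 1
            nxt = children[node][side]
            if nxt is None:
                children[node][side] = key
                children[key] = [None, None]
                height = max(height, depth + 1)
                break
            node, depth = nxt, depth + 1
    return height

print(sum(bst_height(p) == 2 for p in permutations(range(1, 8))))  # 80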
Note that there is only one possible shape for this tree - it has to be perfectly balanced. It therefore has to be this tree:
     4
   /   \
  2     6
 / \   / \
1   3 5   7
This requires 4 to be inserted first. After that, the insertions need to build up the subtrees holding 1, 2, 3 and 5, 6, 7 in the proper order. This means that we will need to insert 2 before 1 and 3, and we need to insert 6 before 5 and 7. It doesn't matter what relative order we insert 1 and 3 in, as long as they're after the 2, and similarly it doesn't matter what relative order we put 5 and 7 in, as long as they're after 6. You can therefore think of what we need to insert as 2 X X and 6 Y Y, where the X's are the children of 2 and the Y's are the children of 6. We can then find all possible ways to get back the above tree by finding all interleaves of the sequences 2 X X and 6 Y Y, then multiplying by four (two possible orders for the X's, 1 and 3, times two possible orders for the Y's, 5 and 7).
So how many ways are there to interleave? Well, you can think of this as the number of ways to permute the sequence L L L R R R, since each permutation of L L L R R R tells us whether to take the next item from the Left sequence or the Right sequence. There are 6! / (3! 3!) = 20 ways to do this. Since each of those twenty interleaves gives four possible insertion sequences, there end up being a total of 20 × 4 = 80 possible ways to do this.
Hope this helps!
I've created a table for the number of permutations possible with 1 - 12 elements, with heights up to 12, and included the per-root break down for anybody trying to check that their manual process (described in other answers) is matching with the actual values.
http://www.asmatteringofit.com/blog/2014/6/14/permutations-of-a-binary-search-tree-of-height-x
Here is some C++ code supporting the accepted answer; I haven't shown the obvious ncr(i, j) (binomial coefficient) function. Hope someone will find it useful.
// Number of insertion orders of n distinct keys that produce
// a BST of height exactly h (a single node has height 0).
int solve(int n, int h) {
    if (n <= 1)
        return (h == 0);
    int ans = 0;
    for (int i = 0; i < n; i++) {   // i keys go into the left subtree
        int res = 0;
        for (int j = 0; j < h - 1; j++) {
            res = res + solve(i, j) * solve(n - i - 1, h - 1);   // left shorter, right has height h-1
            res = res + solve(n - i - 1, j) * solve(i, h - 1);   // right shorter, left has height h-1
        }
        res = res + solve(i, h - 1) * solve(n - i - 1, h - 1);   // both subtrees have height h-1
        ans = ans + ncr(n - 1, i) * res;   // interleave the two insertion sequences
    }
    return ans;
}
The tree must have 4 as the root, with 2 and 6 as the left and right child, respectively. There is only one choice for the root, so the insertion must start with 4; once the root is inserted, however, there are many possible insertion orders. There are 2 choices for the second insertion: 2 or 6. If we choose 2 for the second insertion, there are three cases for where 6 goes. If 6 is the third insertion (4, 2, 6, -, -, -, -), there are 4! = 24 choices for the rest of the insertions. If 6 is the fourth insertion (4, 2, -, 6, -, -, -), there are 2 choices for the third insertion (1 or 3) and 3! choices for the rest, so 2 * 3! = 12. The last case is 6 as the fifth insertion (4, 2, -, -, 6, -, -); there are 2 choices for the third and fourth insertions ((1 and 3) or (3 and 1)) as well as for the last two insertions ((5 and 7) or (7 and 5)), so there are 4 choices. In total, if 2 is the second insertion we have 24 + 12 + 4 = 40 choices for the rest of the insertions. Similarly, there are 40 choices if the second insertion is 6, so the total number of different insertion orders is 80.

How to iterate through array combinations with constant sum efficiently?

I have an array and its length is X. Each element of the array has range 1 .. L. I want to iterate efficiently through all array combinations that have sum L.
Correct solutions for: L = 4 and X = 2
1 3
3 1
2 2
Correct solutions for: L = 5 and X = 3
1 1 3
1 3 1
3 1 1
1 2 2
2 1 2
2 2 1
The naive implementation is (no wonder) too slow for my problem (X is up to 8 in my case and L is up to 128).
Could anybody tell me what this problem is called, or where to find a fast algorithm for it?
Thanks!
If I understand correctly, you're given two numbers 1 ≤ X ≤ L and you want to generate all sequences of positive integers of length X that sum to L.
(Note: this is similar to the integer partition problem, but not the same, because you consider 1,2,2 to be a different sequence from 2,1,2, whereas in the integer partition problem we ignore the order, so that these are considered to be the same partition.)
The sequences that you are looking for correspond to the combinations of X − 1 items out of L − 1. For, if we put the numbers 1 to L − 1 in order, and pick X − 1 of them, then the lengths of intervals between the chosen numbers are positive integers that sum to L.
For example, suppose that L is 16 and X is 5. Then choose 4 numbers from 1 to 15 inclusive, say 3, 7, 8 and 14.
Add 0 at the beginning and 16 at the end, giving 0, 3, 7, 8, 14, 16, and the intervals between consecutive numbers are 3, 4, 1, 6, 2,
and 3 + 4 + 1 + 6 + 2 = 16 as required.
So generate the combinations of X − 1 items out of L − 1, and for each one, convert it to a partition by finding the intervals. For example, in Python you could write:
from itertools import combinations

def partitions(n, t):
    """
    Generate the sequences of `n` positive integers that sum to `t`.
    """
    assert(1 <= n <= t)
    def intervals(c):
        last = 0
        for i in c:
            yield i - last
            last = i
        yield t - last
    for c in combinations(range(1, t), n - 1):
        yield tuple(intervals(c))
>>> list(partitions(2, 4))
[(1, 3), (2, 2), (3, 1)]
>>> list(partitions(3, 5))
[(1, 1, 3), (1, 2, 2), (1, 3, 1), (2, 1, 2), (2, 2, 1), (3, 1, 1)]
There are (L − 1)! / (X − 1)!(L − X)! combinations of X − 1 items out of L − 1, so the runtime of this algorithm (and the size of its output) is exponential in L. However, if you don't count the output, it only needs O(L) space.
With L = 128 and X = 8, there are 89,356,415,775 partitions, so it'll take a while to output them all!
(Maybe if you explain why you are computing these partitions, we might be able to suggest some way of meeting your requirements without having to actually produce them all.)
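For reference, the 89,356,415,775 figure is just the binomial coefficient C(L − 1, X − 1); a quick check of my own in Python:

from math import comb   # Python 3.8+
print(comb(127, 7))      # 89356415775 sequences for L = 128, X = 8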

Hanoi configuration at a certain move

I am interested in finding how many disks are on each peg at a given move in the towers of Hanoi puzzle. For example, given n = 3 disks we have this sequence of configurations for optimally solving the puzzle:
0 1 2
0. 3 0 0
1. 2 0 1 (move 0 -> 2)
2. 1 1 1 (move 0 -> 1)
3. 1 2 0 (move 2 -> 1)
4. 0 2 1 (move 0 -> 2)
5. 1 1 1 (move 1 -> 0)
6. 1 0 2 (move 1 -> 2)
7. 0 0 3 (move 0 -> 2)
So given move number 5, I want to return 1 1 1, given move number 6, I want 1 0 2 etc.
This can easily be done by using the classical algorithm and stopping it after a certain number of moves, but I want something more efficient. The wikipedia page I linked to above gives an algorithm under the Binary solutions section. I think this is wrong however. I also do not understand how they calculate n.
If you follow their example and convert the disk positions it returns to what I want, it gives 4 0 4 for n = 8 disks and move number 216. Using the classical algorithm however, I get 4 2 2.
There is also an efficient algorithm implemented in C here that also gives 4 2 2 as the answer, but it lacks documentation and I don't have access to the paper it's based on.
The algorithm in the previous link seems correct, but can anyone explain how exactly it works?
A few related questions that I'm also interested in:
Is the wikipedia algorithm really wrong, or am I missing something? And how do they calculate n?
I only want to know how many disks are on each peg at a certain move, not on what peg each disk is on, which is what the literature seems to be more concerned about. Is there a simpler way to solve my problem?
1) If your algo says Wikipedia is broken I'd guess you are right...
2) As for calculating the number of disks on each peg, it is pretty straightforward to do with a recursive algorithm:
(Untested, unelegant and possibly full of +-1 errors code follows:)
function hanoi(n, nsteps, begin, middle, end, nb, nm, ne)
    // n = number of disks to move from begin to end
    // nsteps = number of steps to move
    // begin, middle, end = indexes of the pegs
    // nb, nm, ne = number of disks currently on each of the pegs
    if (nsteps == 0) return (begin, middle, end, nb, nm, ne)
    // else:
    // hanoi goes like
    //   a) h(n-1, begin, end, middle) | 2^(n-1) steps
    //   b) move 1 from begin -> end   | 1 step
    //   c) h(n-1, middle, begin, end) | 2^(n-1) steps
    // Since we know how the pile will look like after a), b) and c)
    // we can skip those steps if nsteps is large...
    if (nsteps <= 2^(n-1)) {
        return hanoi(n-1, nsteps, begin, end, middle, nb, ne, nm);
    }
    nb -= n;
    nm += (n-1);
    ne += 1;
    nsteps -= (2^(n-1) + 1);
    // we are now between b) and c)
    return hanoi(n-1, nsteps, middle, begin, end, nm, nb, ne);

function h(n, nsteps)
    return hanoi(n, nsteps, 1, 2, 3, n, 0, 0)
If you want efficiency, you should try to convert this to an iterative form (shouldn't be hard; you don't need to maintain a stack anyway) and find a way to better represent the state of the program, instead of using 6+ variables willy-nilly.
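An iterative version along those lines might look like this in Python (my own sketch, using 0-indexed pegs and a tower moving from peg 0 to peg 2; at each level it either descends into the first half or skips past it, so it runs in O(n) steps):

def hanoi_counts(n, k):
    # Number of disks on pegs (0, 1, 2) after k moves of the optimal
    # n-disk solution, for 0 <= k <= 2**n - 1.
    counts = [0, 0, 0]
    frm, to, via = 0, 2, 1
    while n > 0:
        if k == 0:
            counts[frm] += n          # the remaining n disks have not moved yet
            break
        half = 1 << (n - 1)
        if k < half:
            counts[frm] += 1          # largest disk of this level has not moved
            frm, to, via = frm, via, to
        else:
            counts[to] += 1           # largest disk already sits on its target
            k -= half                 # skip the first half and the big move
            frm, to, via = via, to, frm
        n -= 1
    return counts

print([hanoi_counts(3, k) for k in range(8)])
# [[3, 0, 0], [2, 0, 1], [1, 1, 1], [1, 2, 0], [0, 2, 1], [1, 1, 1], [1, 0, 2], [0, 0, 3]]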
You can make use of the fact that the position at powers of two is easily known. For a tower of size T, we have:
Time          Heights
2^T - 1       | { 0,   0,   T   }
2^(T-1)       | { 0,   T-1, 1   }
2^(T-1) - 1   | { 1,   T-1, 0   }
2^(T-2)       | { 1,   1,   T-2 }
2^(T-2) - 1   | { 2,   0,   T-2 }
2^(T-3)       | { 2,   T-3, 1   }
2^(T-3) - 1   | { 3,   T-3, 0   }
...
0             | { T,   0,   0   }
It is easy to find out between which of those levels your move k falls; simply look at log2(k).
Next, notice that between 2^(a-1) and 2^a - 1, there are T-a disks which stay in the same place (the heaviest ones). All the other disks will move, however, since at this stage the algorithm is moving the subtower of size a. Hence, use an iterative approach.
It might be a bit tricky to get the book-keeping right, but here you have the ingredients to find the heights for any k, descending one level per disk, so in O(T) time (logarithmic in the total number of moves).
Cheers
If you look at the first few moves of the puzzle, you'll see an important pattern. Each move (i - j) below means on turn i, move disc j. Discs are 0-indexed, where 0 is the smallest disc.
1 - 0
2 - 1
3 - 0
4 - 2
5 - 0
6 - 1
7 - 0
8 - 3
9 - 0
10 - 1
11 - 0
12 - 2
13 - 0
14 - 1
15 - 0
Disc 0 is moved every 2 turns, starting on turn 1. Disc 1 is moved every 4 turns, starting on turn 2 ... Disc i is moved every 2^(i+1) turns, starting on turn 2^i.
So, in constant time we can determine how many times a given disc has moved, given m:
moves = (m + 2^i) / (2^(i+1)) [integer division]
The next thing to note is that each disc moves in a cyclic pattern. Namely, the odd-numbered discs move to the left each time they move (2, 3, 1, 2, 3, 1...) and the even-numbered discs move to the right (1, 3, 2, 1, 3, 2...)
So once you know how many times a disc has moved, you can easily determine which peg it ends on by taking mod 3 (and doing a little bit of figuring).
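Putting those pieces together, the "little bit of figuring" might look like this (my own sketch, assuming pegs 0, 1, 2 and a tower of n discs moving from peg 0 to peg 2; whether a disc cycles 0 -> 1 -> 2 or 0 -> 2 -> 1 depends on the parity of n - i):

def disc_position(i, m, n):
    # Peg (0, 1 or 2) holding disc i (0 = smallest) after m moves of the
    # optimal n-disc solution that moves the tower from peg 0 to peg 2.
    times_moved = (m + (1 << i)) // (1 << (i + 1))
    step = 2 if (n - i) % 2 == 1 else 1   # cycling direction of this disc
    return (times_moved * step) % 3

def counts_at_move(n, m):
    counts = [0, 0, 0]
    for i in range(n):
        counts[disc_position(i, m, n)] += 1
    return counts

print(counts_at_move(3, 5))    # [1, 1, 1]
print(counts_at_move(3, 6))    # [1, 0, 2]
print(counts_at_move(8, 216))  # [4, 2, 2], matching the figures in the question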
