What is the algorithm for going around anticlockwise?

I was able to make an algorithm for going around clockwise: 1 -> 2 -> 3 -> 4 -> 1 -> 2 -> 3 ...
It's like this:
(i % 4) + 1
Now I need to do the opposite thing: 4 -> 3 -> 2 -> 1 -> 4 -> 3 -> 2 ...
Can you please help me, this is making me mad. :D
Thanks!

Try the following formula for anticlockwise values:
4-(i % 4)

How about
5 - ((i % 4) + 1)
seems to do the trick. May not be optimal but it works

Take a look at your "clockwise" series - you'll note the "counterclockwise" series is always five minus the corresponding clockwise value. Thus,
5 - ((i % 4) + 1)
should work. Simplifying gives 4 - (i % 4), and since % typically binds tighter than subtraction, that form can be written without parentheses as 4 - i % 4 in most languages.

I'm surprised it wasn't mentioned here, but there's no real reason to subtract anything (I personally find it less readable that way) - an addition before the modulo does the same, assuming 0-based values 0..3:
(i + 3) % 4
In general -
(i + (N-1)) % N
Or for going back any arbitrary step m -
(i + (N-m)) % N
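As a sanity check, the formulas above can be put side by side in a short Python sketch (note that the (i + 3) % 4 form assumes 0-based values 0..3 rather than 1..4):

```python
def clockwise(i):
    # 1 -> 2 -> 3 -> 4 -> 1 -> ... for i = 0, 1, 2, ...
    return (i % 4) + 1

def anticlockwise(i):
    # 4 -> 3 -> 2 -> 1 -> 4 -> ... for i = 0, 1, 2, ...
    return 4 - (i % 4)

def step_back(i, m=1, N=4):
    # 0-based variant: move m positions backwards on a ring of size N
    return (i + (N - m)) % N

print([clockwise(i) for i in range(6)])      # [1, 2, 3, 4, 1, 2]
print([anticlockwise(i) for i in range(6)])  # [4, 3, 2, 1, 4, 3]
```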


Count the total number of ways to reach the nth stair using steps of 1, 2 or 3, where the step of 3 can be taken only once

For any given value n we have to find the number of ways to reach the top using steps of 1, 2, or 3, where the 3-step can be used only once.
For example, if n = 7, possible ways include
[1,1,1,1,1,1,1]
[1,1,1,1,1,2]
etc., but we cannot have [3,3,1] or [1,3,3].
I have managed to solve the general case (without the constraint of using 3 only once) with dynamic programming, as it forms a sort of Fibonacci-like series:
def countWays(n):
    res = [0] * (n + 1)
    res[0] = 1
    res[1] = 1
    res[2] = 2
    for i in range(3, n + 1):
        res[i] = res[i - 1] + res[i - 2] + res[i - 3]
    return res[n]
how do I figure out the rest of it?
Let res0[n] be the number of ways to reach n steps without using a 3-step, and let res1[n] be the number of ways to reach n steps after having used a 3-step.
res0[i] and res1[i] are easily calculated from the previous values, in a manner similar to your existing code.
This is an example of a pretty common technique that is often called "graph layering". See, for example: Shortest path in a maze with health loss
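A minimal sketch of that two-layer DP, with res0/res1 as defined above (combining them into one count_ways function is my choice, not part of the original answer):

```python
def count_ways(n):
    # res0[i]: ways to reach step i without having used the 3-step
    # res1[i]: ways to reach step i having used the single 3-step
    res0 = [0] * (n + 1)
    res1 = [0] * (n + 1)
    res0[0] = 1
    for i in range(1, n + 1):
        res0[i] = res0[i - 1] + (res0[i - 2] if i >= 2 else 0)
        res1[i] = (res1[i - 1]
                   + (res1[i - 2] if i >= 2 else 0)
                   + (res0[i - 3] if i >= 3 else 0))  # spend the 3-step here
    return res0[n] + res1[n]

print(count_ways(7))  # 41
```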
Let us first ignore the three-step here and imagine that we can only use steps of one and two. Then, for a given number n, we can solve this with n steps of 1 (one solution); or with n-2 steps of 1 and one step of 2 (n-1 solutions); or with n-4 steps of 1 and two steps of 2, which has (n-2)(n-3)/2 solutions; and so on.
The number of ways to do that is related to the Fibonacci sequence. It is clear that the number of ways to construct 0 is one: just the empty list []. It is furthermore clear that the number of ways to construct 1 is one as well: the list [1]. Now we can prove that the number of ways W(n) to construct n is the sum of the ways W(n-1) to construct n-1 plus the ways W(n-2) to construct n-2. The proof is that we can append a 1 to each way to construct n-1, and append a 2 to each way to construct n-2. There are no other options, since otherwise we would introduce duplicates.
The number of ways W(n) is thus the Fibonacci number F(n+1). We can thus implement a Fibonacci function with caching like:
cache = [0, 1, 1, 2]

def fib(n):
    for i in range(len(cache), n + 1):
        cache.append(cache[i - 2] + cache[i - 1])
    return cache[n]
So how can we fix this for the single step of three? We can use a divide-and-conquer method. If we use a step of three, it means that we have:
1 2 1 … 1 2 3 2 1 2 2 1 2 … 1
\____ ____/ \_______ _____/
v v
sum is m sum is n-m-3
So we can iterate over m, and each time multiply the number of ways to construct the left part (fib(m+1)) by the number of ways to construct the right part (fib(n-m-3+1) = fib(n-m-2)); here m ranges from 0 to n-3 (both inclusive):
def count_ways(n):
    total = 0
    for m in range(0, n - 2):
        total += fib(m + 1) * fib(n - m - 2)
    return total + fib(n + 1)
or more compact:
def count_ways(n):
    return fib(n + 1) + sum(fib(m + 1) * fib(n - m - 2) for m in range(0, n - 2))
This gives us:
>>> count_ways(0) # ()
1
>>> count_ways(1) # (1)
1
>>> count_ways(2) # (2) (1 1)
2
>>> count_ways(3) # (3) (2 1) (1 2) (1 1 1)
4
>>> count_ways(4) # (3 1) (1 3) (2 2) (2 1 1) (1 2 1) (1 1 2) (1 1 1 1)
7

Bellman-Ford algorithm proof of correctness

I'm trying to learn the Bellman-Ford algorithm, but I'm stuck on the proof of correctness.
I have read Wikipedia, but I simply can't understand the proof, and I did not find anything on YouTube that's helpful.
I hope one of you can explain it briefly. The page "Bellman-ford correctness can we do better" does not answer my question.
Thank you.
Let's see the problem from the perspective of dynamic programming of a graph with no negative cycle.
We can visualize the memoization table of the dynamic programming as follows:
The columns represent nodes and the rows represent update steps (node 0 is the source node; step 0 is the initialization), and the arrows directing from one box in a step to another in the next step are the min-updates.
We choose one path from all shortest paths and illustrate why the algorithm is correct. Let's choose 0 -> 3 -> 2 -> 4 -> 5; it is the shortest path from 0 to 5 (we could choose any other one as well). We can prove the correctness by induction. The base case is the source 0, and obviously the distance between 0 and itself should be 0, the shortest. We assume 0 -> 3 -> 2 is the shortest path between 0 and 2, and we are going to prove that 0 -> 3 -> 2 -> 4 is the shortest path between 0 and 4 after the third iteration.
First, we prove that after the third iteration node 4 must be fixed/tightened. If node 4 were not fixed, there would be at least one path other than 0 -> 3 -> 2 -> 4 reaching 4, and that path would be shorter than 0 -> 3 -> 2 -> 4, which contradicts our assumption that 0 -> 3 -> 2 -> 4 -> 5 is the shortest path between 0 and 5. So after the third iteration, 2 and 4 are connected.
Second, we prove that the relaxed value is the shortest. It can be neither greater nor smaller, because it lies on the shortest path we chose.
Let's see a graph with a negative cycle.
And here is its memoization table:
Let's prove that at the |V|-th iteration (here |V|, the number of vertices, is 6) the updates do not stop.
Assume, for contradiction, that the updates stopped even though there is a negative cycle. Consider the cycle 3 -> 2 -> 4 -> 5 -> 3. Since no edge can be relaxed any further, the following must hold:
dist(2) <= dist(3) + w(3, 2)
dist(4) <= dist(2) + w(2, 4)
dist(5) <= dist(4) + w(4, 5)
dist(3) <= dist(5) + w(5, 3)
And we can obtain the following inequality from the above four inequalities by summing up the left-hand sides and the right-hand sides:
dist(2) + dist(4) + dist(5) + dist(3) <= dist(3) + dist(2) + dist(4) + dist(5) + w(3, 2) + w(2, 4) + w(4, 5) + w(5, 3)
We subtract the distances from both sides and obtain that:
w(3, 2) + w(2, 4) + w(4, 5) + w(5, 3) >= 0, which contradicts our claim that 3 -> 2 -> 4 -> 5 -> 3 is a negative cycle.
So we are certain that at the |V|-th step and after it, the updates never stop.
My code is here on Gist.
Reference:
dynamic programming - bellman-ford algorithm
Lecture 14: Bellman-Ford Algorithm
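For concreteness, the two phases in the proof, |V| - 1 rounds of relaxation followed by a |V|-th round used only to detect a reachable negative cycle, can be sketched in Python (edge-list representation; the interface is illustrative):

```python
def bellman_ford(n, edges, source):
    # n vertices 0..n-1, edges as (u, v, w) triples
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):          # |V| - 1 rounds of relaxation
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:           # the |V|-th round: detection only
        if dist[u] + w < dist[v]:
            return None             # negative cycle reachable from source
    return dist

edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1)]
print(bellman_ford(4, edges, 0))  # [0, 3, 1, 4]
```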

Number of steps taken to split a number

I cannot get my head around this:
Say I have the number 9. I want to know the minimum number of steps needed to split it so that no part is greater than 3.
I always thought that the most efficient way is to halve it on every loop.
So, 9 -> 4,5 -> 2,2,5 -> 2,2,2,3: 3 steps in total. However, I just realised a smarter way: 9 -> 3,6 -> 3,3,3, which is only 2 steps...
After some research, the number of steps is in fact floor((n-1)/target), where target = 3 in my example.
Can someone please explain this behaviour to me?
If we want to cut a stick of length L into pieces of size no greater than S, we need ceiling(L/S) pieces. Each time we make a new cut, we increase the number of pieces by 1. It doesn't matter what order we make the cuts in, only where. For example, if we want to break a stick of length 10 into pieces of size 2 or less:
-------------------
0 1 2 3 4 5 6 7 8 9 10
we should cut it in the following places:
---|---|---|---|---
0 1 2 3 4 5 6 7 8 9 10
and any order of cuts is fine, as long as these are the cuts that are made. On the other hand, if we start by breaking it in half:
---------|---------
0 1 2 3 4 5 6 7 8 9 10
we have made a cut that isn't part of the optimal solution, and we have wasted our time.
I really like user2357112's explanation of why cutting in half is not the right first step, but I also like algebra, and you can prove that ceil(n / target) - 1 is optimal using induction.
Let's prove first that you can always do it in ceil(n / target) - 1 steps.
If n <= target, obviously no steps are required, so the formula works. Suppose n > target. Split n into target and n - target (1 step). By induction, n - target can be split in ceil((n - target)/target) - 1 steps. Therefore the total number of steps is
1 + ceil((n - target) / target) - 1
= 1 + (ceil(n / target) - 1) - 1
= ceil(n / target) - 1.
Now let's prove that you can't do it in fewer than ceil(n / target) - 1 steps. This is obvious if n <= target. Suppose n > target and the first step is n -> a + b. By induction, a requires at least ceil(a / target) - 1 steps and b requires at least ceil(b / target) - 1 steps. The minimum number of steps required is therefore at least
1 + ceil(a / target) - 1 + ceil(b / target) - 1
>= ceil((a + b) / target) - 1 using ceil(x) + ceil(y) >= ceil(x + y)
= ceil(n / target) - 1 using a + b = n
Every n can be thought of as a priority queue of floor(n/target) elements of value target placed first on the queue, plus one element whose value is n % target. Every time you remove an element from the queue, you place it back on the queue. Remove all but the last element: you have clearly removed floor((n-1)/target) elements. If the last element is less than or equal to the target, we are done; if it were greater than the target, we would have a contradiction. So, after floor((n-1)/target) steps we have a queue consisting only of elements less than or equal to target.
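The closed form from the induction proof, ceil(n / target) - 1, sketches into a one-liner (function name is mine):

```python
import math

def min_splits(n, target):
    # we need ceil(n / target) pieces, and each split adds exactly one piece
    return math.ceil(n / target) - 1

print(min_splits(9, 3))  # 2, via 9 -> 3,6 -> 3,3,3
```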

How many different possible ways can persons be seated at a round table?

I am developing an algorithm and am looking at the maximum possible number of iterations before arriving at a conclusion.
In the real world, it is similar to the classical round-table seating problem. Can you please tell me the maximum number of ways n persons can be seated at a round table without repetitions?
Thanks
Let's trace through the solution to this problem.
First, let's see how many ways we can arrange n people in a line. There are n different people we can choose to put at the front of the line. Any of the n - 1 who remain can be put in the second position. Any of the n - 2 who remain can be put into the third position, etc. More generally, we get the formula
Num arrangements = n x (n - 1) x (n - 2) x ... x 1 = n!
So there are n! different ways of permuting people in a line. More generally, there are n! different ways to reorder n unique elements.
Now, what happens when we arrange people in a ring? For each linear permutation, we can convert that arrangement into a ring arrangement by connecting the two ends. For example, with three people, there are six ways to order them in a line:
1 2 3
1 3 2
2 1 3
2 3 1
3 1 2
3 2 1
These map to the following rings:
1
1 2 3 -> / \
3---2
1
1 3 2 -> / \
2---3
2
2 1 3 -> / \
3---1
2
2 3 1 -> / \
1---3
3
3 1 2 -> / \
2---1
3
3 2 1 -> / \
1---2
However, we can't conclude from this that the number of seating arrangements is n!, because we've created the same seating arrangement multiple times here. As a trick, let's suppose that we always write the cycle out so that 1 is at the top of the cycle. Then we've generated the following cycles:
1
1 2 3 -> / \
3---2
1
1 3 2 -> / \
2---3
1
2 1 3 -> / \
2---3
1
2 3 1 -> / \
3---2
1
3 1 2 -> / \
3---2
1
3 2 1 -> / \
2---3
Notice that we've generated the following:
1 1
/ \ x3 / \ x3
2---3 3---2
So really, there are only two different arrangements; we've just generated each of them three times.
The reason for this is that because the ring has no definitive start and end point, we end up generating multiple rotations of each of the different arrangements. In particular, if there are n people to seat, we end up generating n different copies of each arrangement, one with each of the guests up at the top. Consequently, to get the total number of arrangements, we need to count each distinct ring only once. Since there are n copies of each ring, the total number is given by
n! / n = (n - 1)!
So there are (n - 1)! different ways to seat people in a ring.
Hope this helps!
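The n!/n argument can be verified by brute force for small n, by rotating every linear permutation so that person 0 sits at the top and counting distinct results (a quick sketch, not part of the original answer):

```python
from itertools import permutations
from math import factorial

def circular_arrangements(n):
    # canonicalize each permutation by rotating person 0 to the front;
    # two linear orders describe the same ring iff they share this form
    seen = set()
    for perm in permutations(range(n)):
        i = perm.index(0)
        seen.add(perm[i:] + perm[:i])
    return len(seen)

print(circular_arrangements(4))  # 6, i.e. (4 - 1)!
```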
Classic permutation problem. Break it into two parts:
1) All possible orderings (n!)
2) Divide by n, the number of starting locations (since they don't matter)
I get (n-1)! possibilities. Am I missing anything here? (I don't do stats much, so I'm kinda rusty.)

How can I take the modulus of two very large numbers?

I need an algorithm for A mod B, where
A is a very big integer containing only the digit 1 (e.g. 1111, 1111111111111111), and
B is a very big integer (e.g. 1231, 1231231823127312918923).
By big, I mean up to 1000 digits.
To compute a number mod n, given a function that returns the quotient and remainder when dividing by (n+1): start by adding one to the number. Then, as long as the number is bigger than n, iterate:
number = (number div (n+1)) + (number mod (n+1))
Finally, subtract one at the end. An alternative to adding one at the beginning and subtracting one at the end is to check whether the result equals n and return zero if so.
For example, given a function to divide by ten, one can compute 12345678 mod 9 thusly:
12345679 -> 1234567 + 9
1234576 -> 123457 + 6
123463 -> 12346 + 3
12349 -> 1234 + 9
1243 -> 124 + 3
127 -> 12 + 7
19 -> 1 + 9
10 -> 1 + 0
Subtract 1, and the result is zero.
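The procedure sketches directly into Python, with divmod standing in for the divide-by-(n+1) helper:

```python
def mod_small(x, n):
    # compute x mod n using only division by n + 1;
    # x = q*(n+1) + r implies x ≡ q + r (mod n), so the residue is preserved
    x += 1
    while x > n:
        q, r = divmod(x, n + 1)
        x = q + r
    return x - 1

print(mod_small(12345678, 9))  # 0
```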
1000 digits isn't really big; use any big-integer library to get fairly fast results.
If you really worry about performance: A can be written as 111...1 = (10^n - 1)/9 for some n, so computing A mod B can be reduced to computing ((10^n - 1) mod (9*B)) / 9, and you can do that faster with modular exponentiation.
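That reduction can be sketched using Python's built-in three-argument pow for modular exponentiation (the function name is mine):

```python
def ones_mod(n, B):
    # A = 111...1 (n ones) = (10**n - 1) // 9, so
    # A mod B = ((10**n - 1) mod (9*B)) // 9
    m = 9 * B
    return ((pow(10, n, m) - 1) % m) // 9

print(ones_mod(4, 7))  # 5, since 1111 = 158*7 + 5
```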
Try Montgomery reduction on how to find modulo on large numbers - http://en.wikipedia.org/wiki/Montgomery_reduction
1) Just find a language or package that does arbitrary-precision arithmetic - in my case I'd try java.math.BigInteger.
2) If you are doing this yourself, you can avoid having to do division by using doubling and subtraction. E.g. 10 mod 3 = 10 - 3 - 3 - 3 = 1 (repeatedly subtracting 3 until you can't any more) - which is incredibly slow, so double 3 until it is just smaller than 10 (e.g. to 6), subtract to leave 4, and repeat.
