I have a program (a fractal) that draws lines in an interlaced order. Originally, given H lines to draw, it determines an interlace factor N and draws every Nth line: first the lines starting at line 0, then those starting at line 1, and so on.
For example, if H = 10 and N = 3, it draws them in order:
0, 3, 6, 9,
1, 4, 7,
2, 5, 8.
However, I didn't like the way the bands would gradually thicken, leaving large areas between them undrawn for a long time. So the method was enhanced to recursively draw midpoint lines in each group instead of the immediately subsequent lines, for example:
0, (32) # S (step size) = 32
8, (24) # S = 16
4, (12) # S = 8
2, 6, (10) # S = 4
1, 3, 5, 7, 9. # S = 2
(The numbers in parentheses are out of range and not drawn.) The algorithm's pretty simple:
Set S to a power of 2 greater than H*2 (for H = 10 above, S = 32); set F = 0.
While S > 1:
    Draw frame F.
    Set F = F + S.
    If F >= H, then set S = S / 2; set F = S / 2.
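As a reference, here is a minimal Python sketch of that procedure (the function name is mine), collecting the drawing order for H = 10:

def interlaced_order(H):
    S = 1
    while S <= H * 2:          # smallest power of 2 greater than H*2
        S *= 2
    F = 0
    order = []
    while S > 1:
        order.append(F)        # "draw" frame F
        F += S
        if F >= H:
            S //= 2
            F = S // 2
    return order

print(interlaced_order(10))    # [0, 8, 4, 2, 6, 1, 3, 5, 7, 9]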
When the odd numbered frames are drawn on the last step size, they are drawn in simple order just as in the initial (annoying) method. The same goes for every fourth frame, etc. It's not as bad because some intermediate frames have already been drawn.
But the same permutation could recursively be applied to the elements for each step size. In the example above, the last line would change to:
1, # the 0th element, S' = 16
9, # 4th, S' = 8
5, # 2nd, S' = 4
3, 7. # 1st and 3rd, S' = 2
The previous lines have too few elements for the recursion to take effect. But with enough lines, some groups might require multiple levels of recursion. Any step size with 3 or more corresponding elements can be recursively permuted.
Question 1. Is there a common name for this permutation on N elements, that I could use to find additional material on it? I am also interested in any similar examples that may exist. I would be surprised if I'm the first person to want to do this.
Question 2. Are there some techniques I could use to compute it? I'm working in C, but I'm more interested in the algorithm level at this stage; I'm happy to read code in other languages (within reason).
I have not yet tackled the implementation. I expect I will precompute the permutation first (unlike the algorithm for the previous method, above). But I'm also interested in whether there is a simple way to get the next frame to draw without having to precompute anything, similar in complexity to the previous method.
It sounds as though you're trying to construct one-dimensional low-discrepancy sequences. Your permutation can be computed by reversing the binary representation of the index.
def rev(num_bits, i):
    # Reverse the lowest num_bits bits of i.
    j = 0
    for k in xrange(num_bits):
        j = (j << 1) | (i & 1)
        i >>= 1
    return j
Example usage:
>>> [rev(4,i) for i in xrange(16)]
[0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15]
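Filtering out the indices that are out of range reproduces, for H = 10, the fully recursive order sketched in the question (0, 8, 4, 2, 6, then 1, 9, 5, 3, 7):
>>> [f for f in (rev(4, i) for i in xrange(16)) if f < 10]
[0, 8, 4, 2, 6, 1, 9, 5, 3, 7]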
A variant that works on general n:
def rev(n, i):
    # Generalized bit reversal: each bit of i picks either the first
    # or the second half of the remaining range 0..n-1.
    j = 0
    while n >= 2:
        m = i & 1
        if m:
            j += (n + 1) >> 1
        n = (n + 1 - m) >> 1
        i >>= 1
    return j
>>> [rev(10,i) for i in xrange(10)]
[0, 5, 3, 8, 2, 7, 4, 9, 1, 6]
Related
I am doing some problems on LeetCode and came up with a solution to the array-rotation problem (rotate an array by k steps, in place). It works fine except for some cases: it fails whenever k is bigger than the length of the array, and it doesn't make sense to me how I can rotate elements if k is bigger than the length of the array.
If you have any idea how to improve this solution I would be grateful. Here is my code:
class Solution:
    def rotate(self, nums: List[int], k: int) -> None:
        """
        Do not return anything, modify nums in-place instead.
        """
        if len(nums) > k:
            self.swap(nums, 0, len(nums) - 1)
            self.swap(nums, 0, k - 1)
            self.swap(nums, k, len(nums) - 1)

    def swap(self, nums, start, end):
        while start < end:
            nums[start], nums[end] = nums[end], nums[start]
            start += 1
            end -= 1
In order to understand why this doesn't work for the cases where k is larger than the array length, let me try to explain some of the logic behind rotating by such values of k.
The modulo operator, %, will be useful. For example, if an array is 5 elements long and you want to rotate by 5, you end up with the same array, so you'd really want to rotate by 0. This is where the % operator comes into play: 5 % 5 = 0. If we want to rotate an array of length 5 by 7 spots, we would end up with the same thing as rotating the array by 2, and it turns out that 7 % 5 = 2. Do you see where I am going with this?
This also holds true if the value of k is less than the length of the array. Say we want to rotate an array of length 5 by 3: 3 % 5 = 3.
So for any rotation amount k and array length L, the effective rotation amount n is n = k % L.
You should modify your code at the beginning of your rotate method to adjust the rotation amount:
k = k % len(nums)
and use this value to rotate by the correct amount.
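A minimal sketch of the adjusted method, keeping the original three-reversal approach (names follow the posted code; not run against the LeetCode judge):

from typing import List

class Solution:
    def rotate(self, nums: List[int], k: int) -> None:
        k = k % len(nums)                     # rotating by len(nums) is a no-op
        self.swap(nums, 0, len(nums) - 1)     # reverse the whole array
        self.swap(nums, 0, k - 1)             # reverse the first k elements
        self.swap(nums, k, len(nums) - 1)     # reverse the remaining elements

    def swap(self, nums, start, end):
        while start < end:
            nums[start], nums[end] = nums[end], nums[start]
            start += 1
            end -= 1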
By far the fastest and cleanest solution is:
def rotate_right(items, shift):
    shift = -shift % len(items)
    return items[shift:] + items[:shift]
ll = [i + 1 for i in range(7)]
# [1, 2, 3, 4, 5, 6, 7]
rotate_right(ll, 3)
# [5, 6, 7, 1, 2, 3, 4]
rotate_right([1, 2], 3)
# [2, 1]
of course, short of using numpy.roll() or itertools.cycle().
I'm trying to figure out how to solve a problem that seems like a tricky variation of a common algorithmic problem but requires additional logic to handle specific requirements.
Given a list of coins and an amount, I need to count the total number of ways to extract the given amount using an unlimited supply of the available coins (this is the classical change-making problem, https://en.wikipedia.org/wiki/Change-making_problem, easily solved using dynamic programming) that also satisfy some additional requirements:
the extracted coins must be splittable into two sets of equal size (but not necessarily of equal sum)
the order of elements inside each set doesn't matter, but the order of the two sets does.
Examples
Amount of 6 euros and coins [1, 2]: solutions are 4
[(1,1), (2,2)]
[(1,1,1), (1,1,1)]
[(2,2), (1,1)]
[(1,2), (1,2)]
Amount of 8 euros and coins [1, 2, 6]: solutions are 7
[(1,1,2), (1,1,2)]
[(1,2,2), (1,1,1)]
[(1,1,1,1), (1,1,1,1)]
[(2), (6)]
[(1,1,1), (1,2,2)]
[(2,2), (2,2)]
[(6), (2)]
So far I have tried different approaches, but the only way I found was to collect all the possible solutions (using dynamic programming) and then filter out non-splittable solutions (those with an odd number of coins) and duplicates. I'm quite sure there is a combinatorial way to calculate the total number of duplicates, but I can't figure it out.
(The following method first enumerates partitions. My other answer generates the assignments in a bottom-up fashion.) If you'd like to count splits of the coin exchange according to coin count, and exclude redundant assignments of coins to each party (for example, where splitting 1 + 2 + 2 + 1 into two parts of equal cardinality is only either (1,1) | (2,2), (2,2) | (1,1) or (1,2) | (1,2) and element order in each part does not matter), we could rely on enumeration of partitions where order is disregarded.
However, we would need to know the multiset of elements in each partition (or an aggregate of similar ones) in order to count the possibilities of dividing them in two. For example, to count the ways to split 1 + 2 + 2 + 1, we would first count how many of each coin we have:
Python code:
def partitions_with_even_number_of_parts_as_multiset(n, coins):
    results = []
    # m: number of coin types still usable, n: remaining amount,
    # s: count of each coin used so far, p: parity flag (True = odd number of parts so far)
    def C(m, n, s, p):
        if n < 0 or m <= 0:
            return
        if n == 0:
            if not p:
                results.append(s)
            return
        C(m - 1, n, s, p)                    # never use coin m-1 again
        _s = s[:]
        _s[m - 1] += 1
        C(m, n - coins[m - 1], _s, not p)    # use one more of coin m-1
    C(len(coins), n, [0] * len(coins), False)
    return results
Output:
=> partitions_with_even_number_of_parts_as_multiset(6, [1,2,6])
=> [[6, 0, 0], [2, 2, 0]]   (the second entry, [2, 2, 0], represents two 1's and two 2's)
Now since we are counting the ways to choose half of these, we need to find the coefficient of x^2 in the polynomial multiplication
(x^2 + x + 1) * (x^2 + x + 1) = x^4 + 2x^3 + 3x^2 + 2x + 1, whose x^2 term has coefficient 3,
which represents the three ways to choose two from the multiset count [2,2]:
2,0 => 1,1
0,2 => 2,2
1,1 => 1,2
In Python, we can use numpy.polymul to multiply polynomial coefficients. Then we lookup the appropriate coefficient in the result.
For example:
import numpy

def count_split_partitions_by_multiset_count(multiset):
    # (c + 1) * [1] is the polynomial 1 + x + ... + x^c
    coefficients = (multiset[0] + 1) * [1]
    for i in xrange(1, len(multiset)):
        coefficients = numpy.polymul(coefficients, (multiset[i] + 1) * [1])
    return coefficients[sum(multiset) / 2]
Output:
=> count_split_partitions_by_multiset_count([2,2,0])
=> 3
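Putting the two functions together (this combining helper is my addition, not part of the original answer) gives the total count; for the question's second example it agrees with the expected 7:

def count_split_partitions(n, coins):
    # Sum the split counts over every partition with an even number of parts.
    return sum(count_split_partitions_by_multiset_count(m)
               for m in partitions_with_even_number_of_parts_as_multiset(n, coins))

print count_split_partitions(8, [1, 2, 6])   # => 7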
Here is a table implementation and a little elaboration on algrid's beautiful answer. This produces an answer for f(500, [1, 2, 6, 12, 24, 48, 60]) in about 2 seconds.
The simple declaration of C(n, k, S) = sum(C(n - s_i, k - 1, S[i:])) means adding all the ways to get to the current sum, n using k coins. Then if we split n into all ways it can be partitioned in two, we can just add all the ways each of those parts can be made from the same number, k, of coins.
The beauty of fixing the subset of coins we choose from to a diminishing list means that any arbitrary combination of coins will only be counted once - it will be counted in the calculation where the leftmost coin in the combination is the first coin in our diminishing subset (assuming we order them in the same way). For example, the arbitrary subset [6, 24, 48], taken from [1, 2, 6, 12, 24, 48, 60], would only be counted in the summation for the subset [6, 12, 24, 48, 60] since the next subset, [12, 24, 48, 60] would not include 6 and the previous subset [2, 6, 12, 24, 48, 60] has at least one 2 coin.
Python code:
import time

def f(n, coins):
    t0 = time.time()
    min_coins = min(coins)
    # m[_n][k][i] = number of ways to make _n using exactly k coins drawn from coins[0..i]
    m = [[[0] * len(coins) for k in xrange(n / min_coins + 1)] for _n in xrange(n + 1)]
    # Initialize base case
    for i in xrange(len(coins)):
        m[0][0][i] = 1
    for i in xrange(len(coins)):
        for _i in xrange(i + 1):
            for _n in xrange(coins[_i], n + 1):
                for k in xrange(1, _n / min_coins + 1):
                    m[_n][k][i] += m[_n - coins[_i]][k - 1][_i]
    result = 0
    for a in xrange(1, n + 1):
        b = n - a
        for k in xrange(1, n / min_coins + 1):
            result = result + m[a][k][len(coins) - 1] * m[b][k][len(coins) - 1]
    total_time = time.time() - t0
    return (result, total_time)

print f(500, [1, 2, 6, 12, 24, 48, 60])
I'm looking for a solution to my problem. Say I have a number X; now I want to generate 20 random numbers whose sum equals X, but I want those random numbers to have some entropy in them. So for example, if X = 50, the algorithm should generate
3
11
0
6
19
7
etc. The sum of the given numbers should equal 50.
Is there any simple way to do that?
Thanks
Simple way:
Generate a random number between 1 and X: say R1.
Subtract R1 from X, then generate a random number between 1 and (X - R1): say R2. Repeat the process until all the Ri add up to X, i.e. (X - Rn) is zero. Note that the range shrinks each time, so the later numbers tend to be smaller than the first. If you want the final sequence to look more random, simply permute the resulting Ri numbers. For example, if for X = 50 you generate an array like 22, 11, 9, 5, 2, 1, permute it to get something like 9, 22, 2, 11, 1, 5. You can also put a limit on how large any random number can be.
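A minimal Python sketch of this idea (the helper name is mine; note that the count of numbers produced varies rather than being fixed at 20):

import random

def random_parts(x):
    # Split x into random positive parts by repeatedly carving off a random piece.
    parts = []
    remaining = x
    while remaining > 0:
        r = random.randint(1, remaining)   # each piece is between 1 and what is left
        parts.append(r)
        remaining -= r
    random.shuffle(parts)                  # permute so the sizes look less ordered
    return parts

print(random_parts(50))   # e.g. [9, 22, 2, 11, 1, 5]; always sums to 50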
One fairly straightforward way to get k random values that sum to N is to create an array of size k+1, add values 0 and N, and fill the rest of the array with k-1 randomly generated values between 1 and N-1. Then sort the array and take the differences between successive pairs.
Here's an implementation in Ruby:
def sum_k_values_to_n(k = 20, n = 50)
  a = Array.new(k + 1) { 1 + rand(n - 1) }
  a[0] = 0
  a[-1] = n
  a.sort!
  (1..(a.length - 1)).collect { |i| a[i] - a[i-1] }
end
p sum_k_values_to_n(3, 10) # produces, e.g., [2, 3, 5]
p sum_k_values_to_n # produces, e.g., [5, 2, 3, 1, 6, 0, 4, 4, 5, 0, 2, 1, 0, 5, 7, 2, 1, 1, 0, 1]
I am trying to solve the following problem. Given an array of real numbers [7, 2, 4, 8, 1, 1, 6, 7, 4, 3, 1], for every element I need to find the most recent previous bigger element in the array.
For example, there is nothing bigger than the first element (7), so it gets NaN. For the second element (2), 7 is bigger. So in the end the answer looks like:
[NaN, 7, 7, NaN, 8, 8, 8, 8, 7, 4, 3]. Of course I can just check all the previous elements for every element, but this is quadratic in the number of elements of the array.
Another approach I had was to maintain a sorted list of the previous elements and then select the first element bigger than the current one. This sounds log-linear to me (I am not sure). Is there any better way to approach this problem?
Here's one way to do it:

create a stack which is initially empty
for each number N in the array
{
    while the stack is not empty
    {
        if the top item on the stack T is greater than N
        {
            output T (leaving it on the stack)
            break
        }
        else
        {
            pop T off of the stack
        }
    }
    if the stack is empty
    {
        output NAN
    }
    push N onto the stack
}
Taking your sample array [7, 2, 4, 8, 1, 1, 6, 7, 4, 3, 1], here's how the algorithm would solve it.
stack        N   output
-            7   NAN
7            2   7
7 2          4   7
7 4          8   NAN
8            1   8
8 1          1   8
8 1          6   8
8 6          7   8
8 7          4   7
8 7 4        3   4
8 7 4 3      1   3
The theory is that the stack doesn't need to keep small numbers since they will never be part of the output. For example, in the sequence 7, 2, 4, the 2 is not needed, because any number less than 2 will also be less than 4. Hence the stack only needs to keep the 7 and the 4.
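As a sketch, here is a direct Python translation of the pseudocode above (the function name is mine):

def most_recent_bigger(a):
    stack = []                         # candidates, kept in decreasing order
    out = []
    for n in a:
        while stack and stack[-1] <= n:
            stack.pop()                # smaller items can never be the answer again
        out.append(stack[-1] if stack else float('nan'))
        stack.append(n)
    return out

print(most_recent_bigger([7, 2, 4, 8, 1, 1, 6, 7, 4, 3, 1]))
# [nan, 7, 7, nan, 8, 8, 8, 8, 7, 4, 3]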
Complexity Analysis
The time complexity of the algorithm can be shown to be O(n) as follows:
- there are exactly n pushes (each number in the input array is pushed onto the stack once and only once)
- there are at most n pops (once a number is popped from the stack, it is discarded)
- there are at most n failed comparisons (since the number is popped and discarded after a failed comparison)
- there are at most n successful comparisons (since the algorithm moves to the next number in the input array after a successful comparison)
- there are exactly n output operations (since the algorithm generates one output for each number in the input array)
Hence we conclude that the algorithm executes at most 5n operations to complete the task, which is a time complexity of O(n).
We can keep, for each array element, the index of its most recent bigger element. When we process a new element x, we check the previous element y. If y is greater, then we have found what we want. If not, we check the index of the most recent bigger element of y. We continue until we find the needed element and its index. Using Python:
a = [7, 2, 4, 8, 1, 1, 6, 7, 4, 3, 1]
idx, result = [], []
for i, v in enumerate(a, -1):
    while i >= 0 and v >= a[i]:
        i = idx[i]
    idx.append(i)
    result.append(a[i] if i >= 0 else None)
Result:
[None, 7, 7, None, 8, 8, 8, 8, 7, 4, 3]
The algorithm is linear. When an index j is checked unsuccessfully while we are looking for the most recent bigger element of an index i > j, then from then on idx[i] points to an index smaller than j, so j will never be checked again.
Why not just define a variable 'current_largest' and iterate through your array from left to right? At each element, current_largest is the largest previous value, and if the current element is larger, set current_largest to the current element. Then move to the next element.
EDIT:
I just re-read your question and I may have misunderstood it. Do you want to find ALL larger previous elements?
EDIT2:
It seems to me like the current-largest method will work. You just need to record current_largest before you assign it a new value. For example, in Python:
current_largest = 0
for current_element in elements:       # elements is the input array
    print("Largest previous is " + str(current_largest))
    if current_element > current_largest:
        current_largest = current_element
If you want an array of these, then just push the value to an array in place of the print statement.
As per my best understanding of your question, below is a solution.
var item = document.getElementById("myButton");
item.addEventListener("click", myFunction);

function myFunction() {
    var myItems = [7, 2, 4, 8, 1, 1, 6, 7, 4, 3, 1];
    var previousItem;
    var currentItem;
    var currentLargest;
    for (var i = 0; i < myItems.length; i++) {
        currentItem = myItems[i];
        if (i == 0) {
            previousItem = myItems[0];
            currentItem = myItems[0];
            myItems[i] = NaN;
        }
        else {
            if (currentItem < previousItem) {
                myItems[i] = previousItem;
                currentLargest = previousItem;
            }
            if (currentItem > currentLargest) {
                currentLargest = currentItem;
                myItems[i] = NaN;
            }
            else {
                myItems[i] = currentLargest;
            }
            previousItem = currentItem;
        }
    }
    var stringItems = myItems.join(",");
    document.getElementById("arrayAnswer").innerHTML = stringItems;
}
Given the list of numbers
1 15 2 5 10
I need to obtain
1 2 5 10 15
The only operation I can do is "move the number X to position Y".
In the above example I only need to do "move the number 15 to position 5".
I would like to minimize the number of operations but I can't find/remember a classical algorithm for that, given the operation available.
Some background:
I'm interacting with an API for a kanban-like service.
I have about 600 cards and some actions on our bug-tracker can imply a reordering of these 600 cards in the kanban (multiple cards can move at the same time if the priority of a project is changed)
I can do it in 600 calls to the API but I'm trying to reduce that number as much as possible.
Lemma: The minimum number of (delete element, insert element) pairs you can perform to sort a list L (in increasing order) is:
S_min(L) = |L| - |LIC(L)|
Where LIC(L) is the Longest Increasing Subsequence.
Thus, you have to:
Establish the LIC of your list.
Remove the elements not in it and insert them back at the appropriate position (using binary search).
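For illustration, here is a minimal sketch (mine, not the original answer's code) that only computes S_min, using the usual O(n log n) patience-sorting trick for the LIS length:

from bisect import bisect_right

def min_moves_to_sort(lst):
    # S_min(L) = |L| - |LIC(L)|; tails[k] is the smallest possible tail
    # of a non-decreasing subsequence of length k+1 seen so far.
    tails = []
    for x in lst:
        pos = bisect_right(tails, x)   # bisect_right so equal elements are allowed
        if pos == len(tails):
            tails.append(x)
        else:
            tails[pos] = x
    return len(lst) - len(tails)

print(min_moves_to_sort([1, 15, 2, 5, 10]))   # 1: only the 15 has to move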
Proof:
By induction.
For a list of size 1, the longest increasing subsequence is of length... 1! The list is already sorted so the number of (del,ins) pairs required is
|L| - |LIC(L)| = 1 - 1 = 0
Now let L_n be a list of length n, 1 ≤ n. Let L_(n+1) be the list obtained by adding an element e_(n+1) to the left of L_n.
This element may or may not influence the Longest Increasing Subsequence. Let's try to see how...
Let i_(n,1) and i_(n,2) be the first two elements of LIC(L_n) (*):
If e_(n+1) > i_(n,2), then LIC(L_(n+1)) = LIC(L_n)
If e_(n+1) ≤ i_(n,1), then LIC(L_(n+1)) = e_(n+1) || LIC(L_n)
Else, LIC(L_(n+1)) = LIC(L_n) - i_(n,1) + e_(n+1). We keep the LIC with the highest first element. This is done by removing i_(n,1) from the LIC and replacing it with e_(n+1).
In the first case, we delete e_(n+1); we are then left with sorting L_n, which by the induction hypothesis requires |L_n| - |LIC(L_n)| = n - |LIC(L_n)| (deletion, insertion) pairs. We then have to insert e_(n+1) at the appropriate position. Thus:
S_min(L_(n+1)) = 1 + S_min(L_n)
S_min(L_(n+1)) = 1 + n - |LIC(L_n)|
S_min(L_(n+1)) = |L_(n+1)| - |LIC(L_(n+1))|   (since LIC(L_(n+1)) = LIC(L_n) in this case)
In the second case, we ignore e_(n+1). We begin by deleting the elements not in LIC(L_n). These elements have to be inserted again! There are
S_min(L_n) = |L_n| - |LIC(L_n)|
such elements.
Now, we just have to take care to insert them in the right order (relative to e_(n+1)). In the end, it requires:
S_min(L_(n+1)) = |L_n| - |LIC(L_n)|
S_min(L_(n+1)) = |L_n| + 1 - (|LIC(L_n)| + 1)
Since we have |LIC(L_(n+1))| = |LIC(L_n)| + 1 and |L_(n+1)| = |L_n| + 1, we get in the end:
S_min(L_(n+1)) = |L_(n+1)| - |LIC(L_(n+1))|
The last case can be proved by considering the list L'_n obtained by removing i_(n,1) from L_(n+1). In that case LIC(L'_n) = LIC(L_(n+1)) and thus:
|LIC(L'_n)| = |LIC(L_n)|   (1)
From there, we can sort L'_n (which takes |L'_n| - |LIC(L'_n)| pairs by the induction hypothesis) and then re-insert i_(n,1), which costs one more pair. The previous equality (1) leads to the result.
(*): If |LIC(L_n)| < 2, then i_(n,2) doesn't exist. Just ignore the comparisons with it. In that case, only case 2 and case 3 apply... The result is still valid.
One possible solution is to find the longest increasing subsequence and move only elements that aren't inside it.
I can't prove it's optimal, but it is easy to prove it is correct and better than N swaps.
Here is a proof-of-concept in Python 2. I implemented it as an O(n^2) algorithm, but I'm pretty sure it can be reduced to O(n log n).
from operator import itemgetter

def LIS(V):
    T = [1]*(len(V))
    P = [-1]*(len(V))
    for i, v in enumerate(V):
        for j in xrange(i-1, -1, -1):
            if T[j]+1 > T[i] and V[j] <= V[i]:
                T[i] = T[j] + 1
                P[i] = j
    i, _ = max(enumerate(T), key=itemgetter(1))
    while i != -1:
        yield i
        i = P[i]

def complement(L, n):
    for a, b in zip(L, L[1:]+[n]):
        for i in range(a+1, b):
            yield i

def find_moves(V):
    n = len(V)
    L = list(LIS(V))[::-1]
    SV = sorted(range(n), key=lambda i: V[i])
    moves = [(x, SV.index(x)) for x in complement(L, n)]
    while len(moves):
        a, b = moves.pop()
        yield a, b
        moves = [(x-(x>a)+(x>b), y) for x, y in moves]

def make_and_print_moves(V):
    print 'Initial array:', V
    for a, b in find_moves(V):
        x = V.pop(a)
        V.insert(b, x)
        print 'Move {} to {}. Result: {}'.format(a, b, V)
    print '***'

make_and_print_moves([1, 15, 2, 5, 10])
make_and_print_moves([4, 3, 2, 1])
make_and_print_moves([1, 2, 4, 3])
It outputs something like:
Initial array: [1, 15, 2, 5, 10]
Move 1 to 4. Result: [1, 2, 5, 10, 15]
***
Initial array: [4, 3, 2, 1]
Move 3 to 0. Result: [1, 4, 3, 2]
Move 3 to 1. Result: [1, 2, 4, 3]
Move 3 to 2. Result: [1, 2, 3, 4]
***
Initial array: [1, 2, 4, 3]
Move 3 to 2. Result: [1, 2, 3, 4]
***