Matching between two series after manipulation - algorithm

Suppose we're given two series of integers, X[..] and Y[..], of the same length. We can choose any position i of series X[] and perform the operation
X[i] = X[i] + 3, X[i + 2] = X[i + 2] + 2, X[i + 4] = X[i + 4] + 1.
After applying this operation any number of times, is it possible to obtain the same series as Y[..]?
I am thinking of implementing it by brute force, matching the series after each manipulation. Is there another approach that would be faster?
Given two series (using 1-based indices),
X = [1, 2, 3, 4, 5, 6, 8]
Y = [1, 5, 6, 6, 7, 7, 9]
if i = 2 then
X = [1, 5, 3, 6, 5, 7, 8]
Y = [1, 5, 6, 6, 7, 7, 9]
and if i = 3 then
X = [1, 5, 6, 6, 7, 7, 9]
Y = [1, 5, 6, 6, 7, 7, 9]
and the series match.

You can see that for every index p the resulting value can be represented as
Y[p] = X[p] + F[p-4] + 2 * F[p-2] + 3 * F[p]
where F[p] is the number of operations applied at the p-th index.
So you have a system of n linear equations in n unknowns F[p].
This is a sparse, banded (in fact lower-triangular) system, so it can be solved quickly by forward substitution, or with the usual Gaussian elimination.
The system might be inconsistent - in that case there are no solutions.
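For reference, here is a minimal sketch of this idea in Python (the function name solve_series is mine, not from the answer). Because the system is lower triangular, forward substitution gives each F[p] directly; besides mere consistency, each F[p] must also come out as a non-negative integer, since an operation can only be applied a whole number of times:
def solve_series(X, Y):
    n = len(X)
    F = [0] * n
    for p in range(n):
        rhs = Y[p] - X[p]
        if p >= 2:
            rhs -= 2 * F[p - 2]
        if p >= 4:
            rhs -= F[p - 4]
        if rhs < 0 or rhs % 3 != 0:
            return None        # inconsistent: X cannot be turned into Y
        F[p] = rhs // 3
    return F

print(solve_series([1, 2, 3, 4, 5, 6, 8], [1, 5, 6, 6, 7, 7, 9]))
# [0, 1, 1, 0, 0, 0, 0]: one operation at index 1 and one at index 2 (0-based)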

Since an operation at index i modifies only the elements at indices i, i + 2 and i + 4, that is, only indices >= i, we can build a greedy algorithm that iterates over array X from left to right and at every index i compares the value with array Y.
Case X[i] > Y[i]: it's not possible to raise X[i] to Y[i], hence return "not possible".
Case X[i] == Y[i]: continue iterating with the next element at i + 1.
Case X[i] < Y[i]: if (Y[i] - X[i]) mod 3 != 0, return "not possible"; else compute m = (Y[i] - X[i]) / 3, increment X[i] by 3 * m, X[i + 2] by 2 * m and X[i + 4] by m, and continue iterating.
If we reach the end of array X, then it's possible to construct array Y from X using these operations.
The overall time complexity of the solution is O(n).
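A minimal sketch of this greedy pass, assuming 0-based indexing and that updates falling past the end of the array are simply ignored (the helper name can_transform is mine):
def can_transform(X, Y):
    X = list(X)                      # work on a copy
    n = len(X)
    for i in range(n):
        d = Y[i] - X[i]
        if d < 0 or d % 3 != 0:      # X[i] can only grow, and only in steps of 3
            return False
        m = d // 3                   # apply the operation m times at position i
        X[i] += 3 * m
        if i + 2 < n:
            X[i + 2] += 2 * m
        if i + 4 < n:
            X[i + 4] += m
    return True

print(can_transform([1, 2, 3, 4, 5, 6, 8], [1, 5, 6, 6, 7, 7, 9]))  # True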


Find greater numbers than self on the left side and smaller numbers than self on the right side

Consider an array a of n integers, indexed from 1 to n.
For every index i such that 1<i<n, define:
count_left(i) = number of indices j such that 1 <= j < i and a[j] > a[i];
count_right(i) = number of indices j such that i < j <= n and a[j] < a[i];
diff(i) = abs(count_left(i) - count_right(i)).
The problem is: given array a, find the maximum possible value of diff(i) for 1 < i < n.
I got a solution by brute force. Can anyone give a better solution?
Constraint: 3 < n <= 10^5
Example
Input Array: [3, 6, 9, 5, 4, 8, 2]
Output: 4
Explanation:
diff(2) = abs(0 - 3) = 3
diff(3) = abs(0 - 4) = 4
diff(4) = abs(2 - 2) = 0
diff(5) = abs(3 - 1) = 2
diff(6) = abs(1 - 1) = 0
maximum is 4.
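For reference, a direct O(n^2) check of the example above (the helper name max_diff is mine; it uses 0-based indexing internally):
def max_diff(a):
    n = len(a)
    best = 0
    for i in range(1, n - 1):                       # skip the first and last positions
        left_bigger = sum(a[j] > a[i] for j in range(i))
        right_smaller = sum(a[j] < a[i] for j in range(i + 1, n))
        best = max(best, abs(left_bigger - right_smaller))
    return best

print(max_diff([3, 6, 9, 5, 4, 8, 2]))  # 4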
O(n log n) approach:
Walk through the array from left to right and add every element to an augmented binary search tree (red-black, AVL, etc.) whose nodes store the subtree size, the element's initial index and a temporary rank field. Immediately after inserting an element we know its rank in the current tree state, so
lb = index - temprank
(where index is the element's position in insertion order) is the number of bigger elements to its left; remember it in the temprank field.
After filling the tree with all items, traverse the tree again, retrieving each element's final rank. Then
rs = finalrank - temprank
is the number of smaller elements to its right. Now just take the absolute difference of lb and rs:
diff = abs(lb - rs) = abs(index - temprank - (finalrank - temprank)) = abs(index - finalrank)
But... we can see that we don't need temprank at all.
Moreover, we don't need a binary tree!
Just sort the pairs (element, initial index) by element key and take the maximum absolute difference of new_index - old_index (except for old indices 1 and n):
a:    3  6  9  5  4  8  2
old:      2  3  4  5  6
new:      5  7  4  3  6
dif:      3  4  0  2  0
Python code for concept checking:
a = [3, 6, 9, 5, 4, 8, 2]
# pair every element with its original (0-based) index and sort by value
b = sorted([[e, i] for i, e in enumerate(a)])
print(b)
# max |new_index - old_index|, skipping the first and last original positions
print(max([abs(n - o[1]) if 0 < o[1] < len(a) - 1 else 0 for n, o in enumerate(b)]))

Kth element in transformed array

I came across this question in a recent interview:
Given an array A of length N, we are supposed to answer Q queries. Each query has the following form:
Given x and k, we need to build another array B of the same length such that B[i] = A[i] ^ x, where ^ is the XOR operator. Sort B in descending order and return B[k].
Input format:
First line contains an integer N
Second line contains N integers denoting array A
Third line contains Q, the number of queries
Next Q lines contain space-separated integers x and k
Output format:
Print the respective B[k] value, each on a new line, for the Q queries.
e.g.
for input :
5
1 2 3 4 5
2
2 3
0 1
output will be :
3
5
For the first query,
A = [1, 2, 3, 4, 5]
For x = 2 and k = 3, B = [1^2, 2^2, 3^2, 4^2, 5^2] = [3, 0, 1, 6, 7]. Sorted in descending order, B = [7, 6, 3, 1, 0], so B[3] = 3.
For the second query,
A and B are the same since x = 0, so B[1] = 5.
I have no idea how to solve such problems. Thanks in advance.
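For reference, a naive per-query implementation that follows the statement directly (it sorts on every query; k is 1-based as in the examples; the name naive_query is mine):
def naive_query(A, x, k):
    B = sorted((a ^ x for a in A), reverse=True)   # xor every element, sort descending
    return B[k - 1]                                # 1-based k as in the statement

A = [1, 2, 3, 4, 5]
print(naive_query(A, 2, 3))  # 3
print(naive_query(A, 0, 1))  # 5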
This is solvable in O(N + Q). For simplicity I assume you are dealing with positive or unsigned values only, but you can probably adjust this algorithm for negative numbers as well.
First you build a binary tree (a trie over the bits of the numbers). The left edge stands for a bit that is 0, the right edge for a bit that is 1. In each node you store how many numbers fall into that bucket. This can be done in O(N), because the number of bits is constant.
Because this is a little hard to explain, I'm going to show what the tree looks like for the 3-bit numbers [0, 1, 4, 5, 7], i.e. [000, 001, 100, 101, 111]:
*
/ \
2 3        2 numbers have first bit 0 and 3 numbers have first bit 1
/ \ / \
2 0 2 1        of the 2 numbers with first bit 0, 2 have second bit 0, ...
/ \ / \ / \
1 1 1 1 0 1        of the 2 numbers with first and second bits 0, 1 has third bit 0, ...
To answer a single query you go down the tree by using the bits of x. At each node you have 4 possibilities, looking at bit b of x and building answer a, which is initially 0:
b = 0 and k < the value stored in the left child of the current node (the 0-bit branch): current node becomes left child, a = 2 * a (shifting left by 1)
b = 0 and k >= the value stored in the left child: current node becomes right child, k = k - value of left child, a = 2 * a + 1
b = 1 and k < the value stored in the right child (the 1-bit branch, because of the xor operation everything is flipped): current node becomes right child, a = 2 * a
b = 1 and k >= the value stored in the right child: current node becomes left child, k = k - value of right child, a = 2 * a + 1
Answering a query is O(1), again because the number of bits is constant. Therefore the overall complexity is O(N + Q).
Example: [0, 1, 4, 5, 7] i.e. [000, 001, 100, 101, 111], k = 3, x = 3 i.e. 011
First bit is 0 and k >= 2, therefore we go right, k = k - 2 = 3 - 2 = 1 and a = 2 * a + 1 = 2 * 0 + 1 = 1.
Second bit is 1 and k >= 1, therefore we go left (inverted because the bit is 1), k = k - 1 = 0, a = 2 * a + 1 = 3
Third bit is 1 and k < 1, so the solution is a = 2 * a + 0 = 6
Control: [000, 001, 100, 101, 111] xor 011 = [011, 010, 111, 110, 100], i.e. [3, 2, 7, 6, 4], which in ascending order is [2, 3, 4, 6, 7]. So indeed the number at index 3 is 6, which is the solution (always talking about 0-based indexing here).
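For reference, here is a minimal Python sketch of this counting-tree idea (the helper names build_trie and kth_xored are mine; it assumes non-negative values that fit in BITS bits, a 0-based k, and the ascending order used in the worked example; for the problem's descending order with 1-based k you would pass N - k):
BITS = 30   # enough for values up to 2^30 - 1

def build_trie(A):
    # Each node is [count, 0-child index, 1-child index]; node 0 is the root.
    nodes = [[0, -1, -1]]
    for v in A:
        cur = 0
        nodes[cur][0] += 1
        for b in range(BITS - 1, -1, -1):
            bit = (v >> b) & 1
            if nodes[cur][1 + bit] == -1:
                nodes.append([0, -1, -1])
                nodes[cur][1 + bit] = len(nodes) - 1
            cur = nodes[cur][1 + bit]
            nodes[cur][0] += 1
    return nodes

def kth_xored(nodes, x, k):
    # k-th smallest (0-based) value of a ^ x over all stored values a.
    cur, a = 0, 0
    for b in range(BITS - 1, -1, -1):
        xb = (x >> b) & 1
        # The child whose stored bit equals xb yields result bit 0, i.e. the smaller half.
        lo, hi = nodes[cur][1 + xb], nodes[cur][1 + (1 - xb)]
        lo_count = nodes[lo][0] if lo != -1 else 0
        if k < lo_count:
            cur, a = lo, 2 * a
        else:
            k -= lo_count
            cur, a = hi, 2 * a + 1
    return a

nodes = build_trie([0, 1, 4, 5, 7])
print(kth_xored(nodes, 3, 3))   # 6, matching the worked example above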

Implementation: Algorithm for a special distribution Problem

We are given a number x, and a set of n coins with denominations v1, v2, …, vn.
The coins are to be divided between Alice and Bob, with the restriction that each person's coins must add up to at least x.
For example, if x = 1, n = 2, and v1 = v2 = 2, then there are two possible distributions: one where Alice gets coin #1 and Bob gets coin #2, and one with the reverse. (These distributions are considered distinct even though both coins have the same denomination.)
I'm interested in counting the possible distributions. I'm pretty sure this can be done in O(nx) time and O(n+x) space using dynamic programming; but I don't see how.
Count the ways for one person to get less than x, double it, and subtract that from double the total number of ways to divide the collection in two (the Stirling number of the second kind, {n, 2}).
For example, {2, 3, 3, 5}, x = 5:
i  coin: count
0  2: 1
1  3: 1    (adding it to 2 reaches 5, which is too much)
2  3: 2
3  5: N/A  (5 >= x)
So there are 3 ways for one person to get less than 5.
The total number of ways to partition a set of 4 items into 2 is {4, 2} = 7.
2 * 7 - 2 * 3 = 8
The Python code below uses MBo's routine. If you like this answer, please consider up-voting that answer.
# Stirling Algorithm
# Cod3d by EXTR3ME
# https://extr3metech.wordpress.com
def stirling(n, k):
    # Stirling number of the second kind, S(n, k)
    if n <= 0:
        return 1
    elif k <= 0 or n < k:
        return 0
    elif n == k:
        return 1
    else:
        return k * stirling(n - 1, k) + stirling(n - 1, k - 1)

def f(coins, x):
    # a[i] = number of coin subsets with sum i (for i < x)
    # Code by MBo
    # https://stackoverflow.com/a/53418438/2034787
    a = [1] + (x - 1) * [0]
    for c in coins:
        for i in range(x - 1, c - 1, -1):
            if a[i - c] > 0:
                a[i] = a[i] + a[i - c]
    # subtract twice the (sum(a) - 1) nonempty subsets below x from 2 * S(n, 2)
    return 2 * (stirling(len(coins), 2) - sum(a) + 1)

print(f([2, 3, 3, 5], 5))     # 8
print(f([1, 2, 3, 4, 4], 5))  # 16
If the sum of all coins is S, then the first person can get any amount of money in the range x..S-x.
Make an array A of length S-x+1 and fill it with the number of ways to compose each value i from the given coins (a kind of Coin Change problem).
To provide uniqueness (don't count C1+C2 and C2+C1 as two variants) and use each coin at most once, process the coins one by one and fill the array in reverse direction:
A[0] = 1
for C in Coins:
    for i = S-x downto C:
        if A[i - C] > 0:
            A[i] = A[i] + A[i - C]   // we can compose value i as (i - C) + C
then sum the A entries in the range x..S-x
Example for coins 2, 3, 3, 5 and x = 5:
S = 13, S - x = 8
Array state after using the coins in order:
idx:      0  1  2  3  4  5  6  7  8
coin 2:   1     1
coin 3:   1     1  1     1
coin 3':  1     1  2     2  1     1
coin 5:   1     1  2     3  1  1  3
Summing the entries for indices 5..8 gives 3 + 1 + 1 + 3 = 8, so there are 8 variants to distribute these coins. Quick check (3' denotes the second coin 3; the first person's coins are listed before the bar):
2 3 | 3' 5
2 3' | 3 5
2 3 3' | 5
2 5 | 3 3'
3 3' | 2 5
3 5 | 2 3'
3' 5 | 2 3
5 | 2 3 3'
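A runnable Python version of the routine above (the function name count_distributions is mine; it assumes the coin total is at least x):
def count_distributions(coins, x):
    S = sum(coins)
    A = [1] + [0] * (S - x)            # A[i] = number of coin subsets with sum i
    for C in coins:
        for i in range(S - x, C - 1, -1):
            if A[i - C] > 0:
                A[i] += A[i - C]       # compose value i as (i - C) + C
    return sum(A[x:])                  # first person's total can be anything in x..S-x

print(count_distributions([2, 3, 3, 5], 5))   # 8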
You can also solve it in O(n * x^2) time and memory by adding memoization to this dp (n is the number of coins):
solve(A, pos, sum1, sum2):
    if (pos == A.length) return sum1 == x && sum2 == x
    return solve(A, pos + 1, min(sum1 + A[pos], x), sum2) +
           solve(A, pos + 1, sum1, min(sum2 + A[pos], x))
print(solve(A, 0, 0, 0))
So depending on whether x^2 < sum or not, you could use this or the answer provided by @MBo (in terms of time complexity). If you care more about space, this approach is better only when n * x^2 < sum - x.
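A minimal memoized Python version of this dp, using functools.lru_cache (the wrapper name count_splits is mine):
from functools import lru_cache

def count_splits(coins, x):
    @lru_cache(maxsize=None)
    def solve(pos, sum1, sum2):
        if pos == len(coins):
            return 1 if (sum1 == x and sum2 == x) else 0
        # Give coin pos to person 1 or to person 2; sums are capped at x.
        return (solve(pos + 1, min(sum1 + coins[pos], x), sum2) +
                solve(pos + 1, sum1, min(sum2 + coins[pos], x)))
    return solve(0, 0, 0)

print(count_splits((2, 3, 3, 5), 5))   # 8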

Finding the maximum possible sum/product combination of integers

Given as input a list of N integers that always starts with 1, for example: 1, 4, 2, 3, 5, and some target integer T.
Processing the list in order, the algorithm decides for each number whether to add it to or multiply it by the current score, so as to achieve the maximum possible result that is still < T.
For example: [input] 1, 4, 2, 3, 5 T=40
1 + 4 = 5
5 * 2 = 10
10 * 3 = 30
30 + 5 = 35 which is < 40, so valid.
But
1 * 4 = 4
4 * 2 = 8
8 * 3 = 24
24 * 5 = 120 which is > 40, so invalid.
I'm having trouble conceptualizing this in an algorithm -- I'm just looking for advice on how to think about it or at most pseudo-code. How would I go about coding this?
My first instinct was to think about the +/* as 1/0, and then test permutations like 0000 (where length == N-1, I think), then 0001, then 0011, then 0111, then 1111, then 1000, etc. etc.
But I don't know how to put that into pseudo-code for a general N. Any help would be appreciated.
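For reference, the 0/1-mask idea described above can be sketched directly with itertools.product (the names best_score and mask are mine, not from the question):
from itertools import product

def best_score(nums, T):
    best = None
    # Each mask entry picks '+' (0) or '*' (1) for one of the N - 1 remaining numbers.
    for mask in product((0, 1), repeat=len(nums) - 1):
        score = nums[0]
        for op, v in zip(mask, nums[1:]):
            score = score + v if op == 0 else score * v
        if score < T and (best is None or score > best):
            best = score
    return best

print(best_score([1, 4, 2, 3, 5], 40))   # 35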
You can use recursion to try every combination. Python code below:
MINIMUM = -2147483648

def solve(nums, T, index, temp):
    # if negative values exist in nums, remove the two lines below
    if temp >= T:
        return MINIMUM
    if index == len(nums):
        return temp
    ans0 = solve(nums, T, index + 1, temp + nums[index])
    ans1 = solve(nums, T, index + 1, temp * nums[index])
    return max(ans0, ans1)

print(solve([1, 4, 2, 3, 5], 40, 1, 1))  # 35
But this method requires O(2^n) time.

Algorithmic puzzle for calculating the number of combinations of numbers summing to a fixed result

This is a puzzle I have been thinking about since last night. I have come up with a solution, but it's not efficient, so I want to see if there is a better idea.
The puzzle is this:
Given positive integers N and T, you need to have:
for i in [1, T], A[i] from { -1, 0, 1 }, such that SUM(A) == N
Additionally, every prefix sum of A shall lie in [0, N], and once the prefix sum PSUM[A, t] == N, it's necessary to have A[i] == 0 for i in [t + 1, T].
Here the prefix sum PSUM is defined as: PSUM[A, t] = SUM(A[i] for i in [1, t])
The puzzle asks how many such A's exist for given fixed N and T.
For example, when N = 2, T = 4, the following A's work:
1 1 0 0
1 -1 1 1
0 1 1 0
but the following don't:
-1 1 1 1 # prefix sum -1
1 1 -1 1 # non-0 following a prefix sum == N
1 1 1 -1 # prefix sum > N
The following Python code can verify such a rule, given N as expect and an instance of A as seq (some people may find it easier to read code than a literal description):
def verify(expect, seq):
    s = 0
    for j, i in enumerate(seq):
        s += i
        if s < 0:
            return False
        if s == expect:
            break
    else:
        # the loop ran to completion without ever reaching expect
        return s == expect
    # once the prefix sum hits expect, everything after it must be 0
    for k in range(j + 1, len(seq)):
        if seq[k] != 0:
            return False
    return True
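A quick check of verify against the examples listed above:
print(verify(2, [1, 1, 0, 0]))     # True
print(verify(2, [1, -1, 1, 1]))    # True
print(verify(2, [0, 1, 1, 0]))     # True
print(verify(2, [-1, 1, 1, 1]))    # False (prefix sum -1)
print(verify(2, [1, 1, -1, 1]))    # False (non-0 after the prefix sum reaches N)
print(verify(2, [1, 1, 1, -1]))    # False (prefix sum > N)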
I have coded up my solution, but it's too slow. Here is my approach:
I decompose the problem into two parts: a part without -1 in it (only {0, 1}) and a part with -1.
So if SOLVE(N, T) is the correct answer, I define a function SOLVE'(N, T, B), where a positive B allows me to extend the prefix sum to lie in the interval [-B, N] instead of [0, N],
so in fact SOLVE(N, T) == SOLVE'(N, T, 0).
I soon realized the solution is actually:
have the prefix of A be some valid {0, 1} combination with positive length l and with o 1s in it;
at position l + 1, start to add 1 or more -1s and use B to track their number; the maximum is B + o or the number of slots remaining in A, whichever is less;
recursively call SOLVE'(N, T, B).
In the previous N = 2, T = 4 example, one of the search cases goes like this:
let the prefix of A be [1], so we have A = [1, -, -, -];
start adding -1s; here I add only one: A = [1, -1, -, -];
recursively call SOLVE'; here I call SOLVE'(2, 2, 0) to solve the last two spots, which returns [1, 1] only; then one of the combinations yields [1, -1, 1, 1].
But this algorithm is too slow.
I am wondering how I can optimize it, or is there a different way to look at this problem that can boost the performance? (I just need the idea, not an implementation.)
EDIT:
Some sample values are:
T N SOLVE(N, T)
3 2 3
4 2 7
5 2 15
6 2 31
7 2 63
8 2 127
9 2 255
10 2 511
11 2 1023
12 2 2047
13 2 4095
3 3 1
4 3 4
5 3 12
6 3 32
7 3 81
8 3 200
9 3 488
10 3 1184
11 3 2865
12 3 6924
13 3 16724
4 4 1
5 4 5
6 4 18
A general exponential-time solution would be the following (in Python):
import itertools
choices = [-1, 0, 1]
print(len([l for l in itertools.product(*([choices] * t)) if verify(n, l)]))
An observation: assuming that n is at least 1, every solution to your stated problem ends in something of the form [1, 0, ..., 0]: i.e., a single 1 followed by zero or more 0s. The portion of the solution prior to that point is a walk that lies entirely in [0, n-1], starts at 0, ends at n-1, and takes fewer than t steps.
Therefore you can reduce your original problem to a slightly simpler one: for each length i < t, count the walks in [0, n-1] that start at 0, end at n-1 and take exactly i steps (where each step adds -1, 0 or +1, as before), then sum over i.
The following code solves the simpler problem. It uses the lru_cache decorator to cache intermediate results; this is in the standard library in Python 3, or there's a recipe you can download for Python 2.
from functools import lru_cache

@lru_cache()
def walks(k, n, t):
    """
    Return the number of length-t walks in [0, n]
    that start at 0 and end at k. Each step
    in the walk adds -1, 0 or 1 to the current total.
    Inputs should satisfy 0 <= k <= n and 0 <= t.
    """
    if t == 0:
        # If no steps are allowed, we can only get to 0,
        # and then only in one way.
        return k == 0
    else:
        # Count the walks whose final step is 0 ...
        total = walks(k, n, t-1)
        if 0 < k:
            # ... plus the walks whose final step is +1 ...
            total += walks(k-1, n, t-1)
        if k < n:
            # ... plus the walks whose final step is -1.
            total += walks(k+1, n, t-1)
        return total
Now we can use this function to solve your problem:

def solve(n, t):
    """
    Find the number of solutions to the original problem.
    """
    # All solutions stick at n once they get there.
    # Therefore it's enough to find all walks
    # that lie in [0, n-1] and take us to n-1 in
    # fewer than t steps.
    return sum(walks(n-1, n-1, i) for i in range(t))
Result and timings on my machine for solve(10, 100):
In [1]: solve(10, 100)
Out[1]: 250639233987229485923025924628548154758061157
In [2]: %timeit solve(10, 100)
1000 loops, best of 3: 964 µs per loop
