Yesterday I appeared in an interview. I was stuck on one of the questions, and I am asking the same here.
You are given an array of N points on the x-axis, and M coins.
Remember N >= M
You have to place the M coins on M of the points so as to maximize the minimum distance between any two coins.
Example: Array : 1 3 6 12
3 coins are given.
If you put coins on points 1, 3, 6, then the distances between adjacent coins are 2 and 3,
so the minimum distance is 2.
But we can improve further
If we put coins at points 1, 6, and 12,
then the distances are 5 and 6,
so the correct answer is 5.
Please help me, because I am totally stuck on this question.
Here's my O(N^2) approach. First, generate all possible pairwise distances:
// a struct sorts cleanly with std::sort (a raw int[3] element would not)
struct Dist { int d, a, b; };
Dist dist[1000000];
int ndist = 0;
for(int i = 0; i < n; i++) {
    for(int j = i + 1; j < n; j++) {
        dist[ndist].d = abs(points[i] - points[j]);
        dist[ndist].a = i; // save the first point
        dist[ndist].b = j; // save the second point
        ndist++;
    }
}
Now sort the distances in decreasing order:
sort(dist, dist + ndist, cmp);
Where cmp is:
bool cmp(const Dist &x, const Dist &y) {
    return x.d > y.d;
}
Sweep through the array, adding points as long as you don't have m points chosen:
int chosen = 0;
int ans = 0;
// assumes visited[] is an int array of size n, initially all zeros
for(int i = 0; i < ndist; i++) {
    int whatadd = (!visited[dist[i].a]) + (!visited[dist[i].b]); // how many new points we'll get if we take this one
    if(chosen + whatadd > m) {
        break;
    }
    visited[dist[i].a] = 1;
    visited[dist[i].b] = 1;
    chosen += whatadd;
    ans = dist[i].d; // ans is the last distance chosen, since they're sorted in decreasing order
}
if(chosen != m) {
    // you need at most one extra point, choose the optimal available one
    // left as an exercise :)
}
I hope that helped!
You could use a greedy algorithm: choose the first and last point of the ordered series (as this is the largest possible distance). Then choose the point closest to their average (any other choice would give a smaller distance on one side), then continue in the larger partition. Repeat as many times as you need.
You have to use dynamic programming, because you need an optimal answer.
Your problem is similar to the "change-making" (coin change) problem. As in that problem, you have a number of coins, and you want to find a minimum distance (instead of a minimum number of coins).
You can read the following link: Change Coin problem & Dynamic Programming
I am working on a code challenge problem -- "find lucky triples". A "lucky triple" is defined as a combination (lst[i], lst[j], lst[k]) in a list lst with i < j < k, where lst[i] divides lst[j] and lst[j] divides lst[k].
My task is to find the number of lucky triples in a given list. The brute-force way is to use three loops, but it takes too much time to solve the problem. I wrote one and the system responded "time exceeded". The problem looks simple and easy, but the array is unsorted, so general methods like binary search do not work. I have been stuck on the problem for a day and hope someone can give me a hint. I am seeking a way to solve the problem faster; at least the time complexity should be lower than O(N^3).
A simple dynamic programming-like algorithm will do this in quadratic time and linear space. You just have to maintain a counter c[i] for each item in the list that represents the number of previous integers that divide L[i].
Then, as you go through the list and test each integer L[k] with all previous item L[j], if L[j] divides L[k], you just add c[j] (which could be 0) to your global counter of triples, because that also implies that there exist exactly c[j] items L[i] such that L[i] divides L[j] and i < j.
c = array of n zeros
nbTriples = 0
for k = 0 to n-1
    for j = 0 to k-1
        if (L[k] % L[j] == 0)
            c[k]++
            nbTriples += c[j]
return nbTriples
There may be some better algorithm that uses fancy discrete maths to do it faster, but if O(n^2) is ok, this will do just fine.
In regard to your comment:
Why DP? We have something that can clearly be modeled as having a left-to-right order (a DP orange flag), and it feels like reusing previously computed values could be interesting, because the brute-force algorithm does the exact same computations a lot of times.
How to get from that to a solution? Run a simple example (hint: it is best to treat the input from left to right). At step i, compute what you can compute from this particular point (ignoring everything to the right of i), and try to pinpoint what you compute over and over again for different i's: this is what you want to cache. Here, when you see a potential triple at step k (L[k] % L[j] == 0), you have to consider what happens at L[j]: "does it have some divisors on its left too? Each of these would give us a new triple. Let's see... But wait! We already computed that on step j! Let's cache this value!" And this is when you jump out of your seat.
Full working solution in python:
# l is the input list
c = [0] * len(l)
count = 0
for i in range(0, len(l)):
    for j in range(0, i):
        if l[i] % l[j] == 0:
            c[i] = c[i] + 1
            count = count + c[j]
print c
print count
Read up on the Sieve of Eratosthenes, a common technique for finding prime numbers, which could be adapted to find your 'lucky triples'. Essentially, you would iterate your list in increasing value order, and for each value, multiply it by an increasing factor until the product is larger than the largest list element; each time one of these multiples equals another value in the list, that value is divisible by the base number. If the list is given to you sorted, then the i < j < k requirement is also satisfied.
e.g. Given the list [3, 4, 8, 15, 16, 20, 40]:
Start at 3, which has multiples [6, 9, 12, 15, 18 ... 39] within the range of the list. Of those multiples, only 15 is contained in the list, so record under 15 that it has a factor 3.
Proceed to 4, which has multiples [8, 12, 16, 20, 24, 28, 32, 36, 40]. Mark those as having a factor 4.
Continue through the list. When you reach an element that has an existing known factor, then if you find any multiples of that number in the list, then you have a triple. In this case, for 16, this has a multiple 32 which is in the list. So now you know that 32 is divisible by 16, which is divisible by 4. Whereas for 15, that has no multiples in the list, so there is no value that can form a triplet with 3 and 15.
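Below is a rough Python sketch of this sieve-style idea (my own, not from the answer). It assumes the list is sorted in increasing order and contains distinct positive integers, so that index order matches value order; the function and variable names are my own.
def count_lucky_triples(lst):
    # assumes lst is sorted ascending with distinct positive values
    index_of = {v: i for i, v in enumerate(lst)}
    largest = lst[-1]
    divisors_left = [0] * len(lst)    # how many earlier list values divide lst[j]
    multiples_right = [0] * len(lst)  # how many later list values lst[i] divides
    for i, v in enumerate(lst):
        m = 2 * v
        while m <= largest:           # sieve-style walk over the multiples of v
            j = index_of.get(m)
            if j is not None:
                divisors_left[j] += 1
                multiples_right[i] += 1
            m += v
    # each middle element contributes (divisors to its left) * (multiples to its right)
    return sum(d * m for d, m in zip(divisors_left, multiples_right))

print(count_lucky_triples([3, 4, 8, 15, 16, 20, 40]))  # -> 3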
A precomputation step to the problem can help reduce time complexity.
Precomputation Step:
For every element i, iterate over the rest of the array to find the elements j such that lst[j] % lst[i] == 0:
for(i = 0; i < n; i++)
{
    for(j = i + 1; j < n; j++)
    {
        if(a[j] % a[i] == 0)
        {
            // mark those j's. You decide how to store this data
        }
    }
}
This Precomputation Step will take O(n^2) time.
In the final step, use the details of the precomputation step to find the triplets; a sketch of this counting step follows.
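One way to flesh out that final step (a hedged Python sketch of my own; the function and variable names are not from the answer): if the precomputation stores, for each index i, the list of later indices j whose values are multiples of lst[i], then every pair (i, j) in that structure extends to one triple for each multiple recorded under j.
def count_triples(lst):
    n = len(lst)
    # precomputation step: multiples_of[i] = indices j > i with lst[j] % lst[i] == 0
    multiples_of = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if lst[i] != 0 and lst[j] % lst[i] == 0:
                multiples_of[i].append(j)
    # final step: each pair (i, j) contributes len(multiples_of[j]) triples
    total = 0
    for i in range(n):
        for j in multiples_of[i]:
            total += len(multiples_of[j])
    return total

print(count_triples([1, 2, 3, 4, 5, 6]))  # -> 3, as in the worked example below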
Form a graph: for each index, an array of the indices of its multiples that lie ahead of it. Then, for each index, calculate the collective sum of multiples of those indices, read from the graph. This has a complexity of O(n^2).
For example, for a list {1,2,3,4,5,6} there will be an array of the multiples. The graph will look like
{ 0:[1,2,3,4,5], 1:[3,5], 2: [5], 3:[],4:[], 5:[]}
So the total triplets will be {0 -> 1 -> 3}, {0 -> 1 -> 5} and {0 -> 2 -> 5}, i.e., 3.
package com.welldyne.mx.dao.core;

import java.util.LinkedList;
import java.util.List;

public class LuckyTriplets {

    public static void main(String[] args) {
        int[] integers = new int[2000];
        for (int i = 1; i < 2001; i++) {
            integers[i - 1] = i;
        }
        long start = System.currentTimeMillis();
        int n = findLuckyTriplets(integers);
        long end = System.currentTimeMillis();
        System.out.println((end - start) + " ms");
        System.out.println(n);
    }

    private static int findLuckyTriplets(int[] integers) {
        List<Integer>[] indexMultiples = new LinkedList[integers.length];
        for (int i = 0; i < integers.length; i++) {
            indexMultiples[i] = getMultiples(integers, i);
        }
        int luckyTriplets = 0;
        for (int i = 0; i < integers.length - 1; i++) {
            luckyTriplets += getLuckyTripletsFromMultiplesMap(indexMultiples, i);
        }
        return luckyTriplets;
    }

    private static int getLuckyTripletsFromMultiplesMap(List<Integer>[] indexMultiples, int n) {
        int sum = 0;
        for (int i = 0; i < indexMultiples[n].size(); i++) {
            sum += indexMultiples[(indexMultiples[n].get(i))].size();
        }
        return sum;
    }

    private static List<Integer> getMultiples(int[] integers, int n) {
        List<Integer> multiples = new LinkedList<>();
        for (int i = n + 1; i < integers.length; i++) {
            if (isMultiple(integers[n], integers[i])) {
                multiples.add(i);
            }
        }
        return multiples;
    }

    /*
     * if b is the multiple of a
     */
    private static boolean isMultiple(int a, int b) {
        return b % a == 0;
    }
}
I just wanted to share my solution, which passed. Basically, the problem can be condensed to a tree problem. You need to pay attention to the wording of the question: it only treats numbers as different on the basis of index, not value, so {1,1,1} will have only 1 triple, but {1,1,1,1} will have 4. The constraint is (li, lj, lk) such that li divides lj, lj divides lk, and i < j < k.
def solution(l):
    count = 0
    data = l
    max_element = max(data)
    tree_list = []
    for p, element in enumerate(data):
        if element == 0:
            tree_list.append([])
        else:
            temp = []
            for el in data[p+1:]:
                if el % element == 0:
                    temp.append(el)
            tree_list.append(temp)
    for p, element_list in enumerate(tree_list):
        data[p] = 0
        temp = data[:]
        for element in element_list:
            pos_element = temp.index(element)
            count += len(tree_list[pos_element])
            temp[pos_element] = 0
    return count
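For example, solution([1, 1, 1]) returns 1 and solution([1, 1, 1, 1]) returns 4, matching the counts mentioned above.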
I recently encountered this question in an interview. I couldn't really come up with an algorithm for this.
Given an array of unsorted integers, we have to find the minimum cost at which this array can be converted to an arithmetic progression, where a cost of 1 unit is incurred for each element changed in the array. Also, the values of the elements range over (-inf, inf).
I sort of realised that DP can be used here, but I couldn't solve the equation. There were some constraints on the values, but I don't remember them. I am just looking for high level pseudo code.
EDIT
Here's a correct solution; unfortunately, while simple to understand, it's not very efficient at O(n^3).
function costAP(arr) {
    if(arr.length < 3) { return 0; }
    var minCost = arr.length;
    for(var i = 0; i < arr.length - 1; i++) {
        for(var j = i + 1; j < arr.length; j++) {
            var delta = (arr[j] - arr[i]) / (j - i);
            var cost = 0;
            for(var k = 0; k < arr.length; k++) {
                if(k == i) { continue; }
                if((arr[k] + delta * (i - k)) != arr[i]) { cost++; }
            }
            if(cost < minCost) { minCost = cost; }
        }
    }
    return minCost;
}
1. Find the relative delta between every distinct pair of indices in the array.
2. Use the relative delta to test the cost of transforming the whole array to an AP using that delta.
3. Return the minimum cost.
Louis Ricci had the right basic idea of looking for the largest existing arithmetic progression, but assumed that it would have to appear in a single run, when in fact the elements of this progression can appear in any subset of the positions, e.g.:
1 42 3 69 5 1111 2222 8
requires just 4 changes:
42 69 1111 2222
1 3 5 8
To calculate this, notice that every AP has a rightmost element. We can take each element i of the input vector to be the rightmost AP position in turn. For each such i, consider all positions j to the left of i, determine the step size implied by each (i, j) combination and, when this is an integer (indicating a valid AP), add one to the number of elements that imply this step size and end at position i, since all such elements belong to the same AP. The overall maximum is then the longest AP:
struct solution {
    int len;
    int pos;
    int step;
};

solution longestArithProg(vector<int> const& v) {
    solution best = { -1, 0, 0 };
    for (int i = 1; i < v.size(); ++i) {
        unordered_map<int, int> bestForStep;
        for (int j = 0; j < i; ++j) {
            int step = (v[i] - v[j]) / (i - j);
            if (step * (i - j) == v[i] - v[j]) {
                // This j gives an integer step size: record that j lies on this AP
                int len = ++bestForStep[step];
                if (len > best.len) {
                    best.len = len;
                    best.pos = i;
                    best.step = step;
                }
            }
        }
    }
    ++best.len;    // We never counted the final element in the AP
    return best;
}
The above C++ code uses O(n^2) time and O(n) space, since it loops over every pair of positions i and j, performing a single hash read and write for each. To answer the original problem:
int howManyChangesNeeded(vector<int> const& v) {
return v.size() - longestArithProg(v).len;
}
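For the example above, longestArithProg finds the length-4 progression 1, 3, 5, 8 (one step per position), so howManyChangesNeeded({1, 42, 3, 69, 5, 1111, 2222, 8}) returns 8 - 4 = 4.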
This problem has a simple geometric interpretation, which shows that it can be solved in O(n^2) time and probably can't be solved any faster than that (reduction from 3SUM). Suppose our array is [1, 2, 10, 3, 5]. We can write that array as a sequence of points
(0,1), (1,2), (2,10), (3,3), (4,5)
in which the x-value is the index of the array item and the y-value is the value of the array item. The question now becomes one of finding a line which passes through the maximum possible number of points in that set. The cost of converting the array is the number of points not on the line, which is minimized when the number of points on the line is maximized.
A fairly definitive answer to that question is given in this SO posting: What is the most efficient algorithm to find a straight line that goes through most points?
The idea: for each point P in the set from left to right, find the line passing through that point and a maximum number of points to the right of P. (We don't need to look at points to the left of P because they would have been caught in an earlier iteration).
To find the maximum number of P-collinear points to the right of P, for each such point Q calculate the slope of the line segment PQ. Tally up the different slopes in a hash map. The slope which maps to the maximum number of hits is what you're looking for.
Technical issue: you probably don't want to use floating point arithmetic to calculate the slopes. On the other hand, if you use rational numbers, you potentially have to calculate the greatest common divisor in order to compare fractions by comparing numerator and denominator, which multiplies running time by a factor of log n. Instead, you should check equality of rational numbers a/b and c/d by testing whether ad == bc.
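Here is a rough Python sketch of this slope-tallying idea (my own illustration, not from the linked posting; for simplicity it stores each slope reduced by its gcd in a hash map rather than using the ad == bc comparison directly, and the function names are my own):
from collections import defaultdict
from math import gcd

def max_points_on_a_line(values):
    # points are (index, value); count the most points lying on any single line
    n = len(values)
    best = min(n, 1)
    for i in range(n):
        slope_count = defaultdict(int)
        for j in range(i + 1, n):
            dy, dx = values[j] - values[i], j - i   # dx > 0, so no vertical lines occur
            g = gcd(dy, dx)
            slope = (dy // g, dx // g)
            slope_count[slope] += 1
            best = max(best, slope_count[slope] + 1)  # +1 for the point at i itself
    return best

def min_cost_to_ap(values):
    # cost = number of points that are not on the best line
    return len(values) - max_points_on_a_line(values)

print(min_cost_to_ap([1, 2, 10, 3, 5]))  # -> 2 (change positions 2 and 3)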
The SO posting referenced above gives a reduction from 3SUM, i.e., this problem is 3SUM-hard which shows that if this problem could be solved substantially faster than O(n^2), then 3SUM could also be solved substantially faster than O(n^2). This is where the condition that the integers are in (-inf,inf) comes in. If it is known that the integers are from a bounded set, the reduction from 3SUM is not definitive.
An interesting further question is whether the idea in the Wikipedia for solving 3SUM in O(n + N log N) time when the integers are in the bounded set (-N,N) can be used to solve the minimum cost to convert an array to an AP problem in time faster than O(n^2).
Given the array a = [a_1, a_2, ..., a_n] of unsorted integers, let diffs = [a_2-a_1, a_3-a_2, ..., a_n-a_(n-1)].
Find the maximum occurring value in diffs and adjust any values in a as necessary so that all neighboring values differ by this amount.
Interestingly, even I had the same question in my campus recruitment test today. While doing the test itself, I realised that this logic of altering elements based on the most frequent difference between two subsequent elements in the array fails in some cases.
E.g. 4, 5, 8, 9. According to the logic of a2-a1, a3-a2 as proposed above, the answer should be 1, which is not the case.
As you suggested DP, I feel it can be along the lines of considering two values for each element in the array: the cost when it is modified and the cost when it is not, and returning the minimum of the two. Finally, terminate when you reach the end of the array.
I have a problem that is baffling me. I think there's an elegant solution, but I haven't been able to find it yet. The problem is as such:
I have a set of unique items. I am allowed to pick a maximum of n items from this set, to form a sub-set. The question then is, how many unique sub-sets are possible?
Say I have this set of 3 items:
A B C
And I am allowed to pick a maximum of 2 items from the set. The answers would be:
(none)
A
B
C
AB
AC
BC
That is, the answer is 7. There are 7 unique subsets possible.
How do I get to that solution programmatically?
If it matters, my sets have up to 8 items, and the maximum number of items in my answers is up to 3. They vary, though, which is why I wanted to see if there's an easier, more elegant way to get to this than brute-force counting.
If you want the number of ways to choose at most k items from n, then you can sum the binomial coefficients n choose i for i from 0 to k, e.g.,
static int ways(int n, int k) {
    int sum = 0, nci = 1, i;
    for (i = 0; i <= k; i++) {
        sum += nci;
        nci *= n - i;
        nci /= i + 1;
    }
    return sum;
}
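For the example in the question, ways(3, 2) computes 1 + 3 + 3 and returns 7, matching the enumeration above.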
I am not sure how much of a variation it actually is, but here is the problem:
You are about to set out on a long journey; before you do so, you need to pack your bag. You have a selection of N items which you could take along. Each item has a weight and a value representing how useful it will be. You will not be able to carry more than K kilograms. What is the maximum total value of the items you can take with you? (You can only have one copy of each item.)
I have created an algorithm that I think will solve the problem using DP, but I am not sure if it will work; it would be great if one of you would take a look at it. Note: it is more like a combination of pseudocode and an algorithm; I am not too sure how to write either.
Assuming k is the max weight.
Two arrays: one containing the weight of each item w[] and the other the value v[].
for(i = 0; i < numberOfItems; i++)
{
    value = 0
    k = MaxWeight
    for(j = i; j < numberOfItems; j++)
    {
        if(j == i)
        {
            if(k - w[i] >= 0)
            {
                k = k - w[i]
                value = value + v[i]
            }
        }
        else
        {
            if(k - w[j] >= 0)
            {
                k = k - w[j]
                value = value + v[j]
            }
        }
    }
}
No, your greedy algorithm won't work.
Check this example:
MaxWeight = 12
Items = 4 5 4 4 (each value is 1)
Your algorithm would choose items 1 and 2 (or items 2 and 3, or 3 and 4). Optimal solution is to take items 1, 3 and 4.
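For reference, here is a sketch (my own, not part of the original answer) of the standard 0/1 knapsack dynamic programming approach, which does find the optimal choice on that counterexample:
def knapsack(weights, values, capacity):
    # best[w] = best total value achievable with total weight at most w
    best = [0] * (capacity + 1)
    for wt, val in zip(weights, values):
        # iterate capacities downwards so each item is used at most once
        for w in range(capacity, wt - 1, -1):
            best[w] = max(best[w], best[w - wt] + val)
    return best[capacity]

print(knapsack([4, 5, 4, 4], [1, 1, 1, 1], 12))  # -> 3 (items 1, 3 and 4)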
Suppose I have a set of coins having denominations a1, a2, ... ak.
One of them is known to be equal to 1.
I want to make change for all integers 1 to n using the minimum number of coins.
Any ideas for an algorithm?
e.g. coin denominations 1, 3, 4
n = 11
optimal selection is 3, 0, 2 in the order of coin denominations.
n = 12
optimal selection is 2, 2, 1.
Note: not homework, just a modification of this problem
This is a classic dynamic programming problem (note first that the greedy algorithm does not always work here: with denominations 1, 3, 4 and a target of 6, greedy takes 4 + 1 + 1 while the optimum is 3 + 3).
Assume the coins are ordered so that a_1 > a_2 > ... > a_k = 1. We define a new problem. We say that the (i, j) problem is to find the minimum number of coins making change for j using coins a_i > a_(i + 1) > ... > a_k. The problem we wish to solve is (1, j) for any j with 1 <= j <= n. Say that C(i, j) is the answer to the (i, j) problem.
Now, consider an instance (i, j). We have to decide whether or not we are using one of the a_i coins. If we are not, we are just solving a (i + 1, j) problem and the answer is C(i + 1, j). If we are, we complete the solution by making change for j - a_i. To do this using as few coins as possible, we want to solve the (i, j - a_i) problem. We arrange things so that these two problems are already solved for us and then:
C(i, j) = C(i + 1, j)                            if a_i > j
C(i, j) = min(C(i + 1, j), 1 + C(i, j - a_i))    if a_i <= j
Now figure out what the initial cases are and how to translate this to the language of your choice and you should be good to go.
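Here is a hedged Python sketch of that table (my own illustration of the recurrence above, not from the answer; it uses 0-based indices, with C[i][j] meaning the minimum number of coins to make j using coins[i:]):
def min_coins(coins, n):
    # coins are sorted in decreasing order, with the smallest equal to 1 so every amount is reachable
    coins = sorted(coins, reverse=True)
    k = len(coins)
    INF = float("inf")
    C = [[INF] * (n + 1) for _ in range(k + 1)]
    for i in range(k + 1):
        C[i][0] = 0                    # amount 0 needs no coins
    for i in range(k - 1, -1, -1):     # fill rows for i = k-1 down to 0
        for j in range(1, n + 1):
            C[i][j] = C[i + 1][j]      # option 1: skip coin i
            if coins[i] <= j:
                C[i][j] = min(C[i][j], 1 + C[i][j - coins[i]])  # option 2: use coin i (again)
    return C[0][n]

print(min_coins([1, 3, 4], 6))  # -> 2 (3 + 3)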
If you want to try your hands at another interesting problem that requires dynamic programming, look at Project Euler Problem 67.
Here's a sample implementation of a dynamic programming algorithm in Python. It is simpler than the algorithm that Jason describes, because it only calculates 1 row of the 2D table he describes.
Please note that using this code to cheat on homework will make Zombie Dijkstra cry.
import sys

def get_best_coins(coins, target):
    costs = [0]
    coins_used = [None]
    for i in range(1, target + 1):
        if i % 1000 == 0:
            print '...',
        bestCost = sys.maxint
        bestCoin = -1
        for coin in coins:
            if coin <= i:
                cost = 1 + costs[i - coin]
                if cost < bestCost:
                    bestCost = cost
                    bestCoin = coin
        costs.append(bestCost)
        coins_used.append(bestCoin)
    ret = []
    while target > 0:
        ret.append(coins_used[target])
        target -= coins_used[target]
    return ret

coins = [1, 10, 25]
target = 100033
print get_best_coins(coins, target)
Solution in C#:
public static long findPermutations(int n, List<long> c)
{
    // The two-dimensional buffer will contain answers to this question:
    // "how many permutations are there for an amount of `i` cents and `j`
    // remaining coin types?" E.g. `buffer[10][2]` will tell us how many
    // permutations there are when giving back 10 cents using only the first
    // two coin types [ 1, 2 ].
    long[][] buffer = new long[n + 1][];
    for (var i = 0; i <= n; ++i)
        buffer[i] = new long[c.Count + 1];

    // For all the cases where we need to give back 0 cents, there's exactly
    // 1 permutation: the empty set. Note that buffer[0][0] won't ever be
    // needed.
    for (var j = 1; j <= c.Count; ++j)
        buffer[0][j] = 1;

    // We process each case: 1 cent, 2 cents, etc., up to `n` cents inclusive.
    for (int i = 1; i <= n; ++i)
    {
        // No more coins? No permutation is possible to attain `i` cents.
        buffer[i][0] = 0;

        // Now we consider the cases when we have J coin types available.
        for (int j = 1; j <= c.Count; ++j)
        {
            // First, we take into account all the known permutations possible
            // _without_ using the J-th coin (actually computed at the previous
            // loop step).
            var value = buffer[i][j - 1];

            // Then, we add all the permutations possible by consuming the J-th
            // coin itself, if we can.
            if (c[j - 1] <= i)
                value += buffer[i - c[j - 1]][j];

            // We now know the answer for this specific case.
            buffer[i][j] = value;
        }
    }

    // Return the bottom-right answer, the one we were looking for in the
    // first place.
    return buffer[n][c.Count];
}
Following is the bottom-up dynamic programming approach.
// Wrapped in a method signature for completeness; the original snippet assumed
// `coins` and `amount` were already in scope.
public int CoinChange(int[] coins, int amount)
{
    int[] dp = new int[amount + 1];
    Array.Fill(dp, amount + 1);
    dp[0] = 0;
    for (int i = 1; i <= amount; i++)
    {
        for (int j = 0; j < coins.Length; j++)
        {
            if (coins[j] <= i) // if the amount is greater than or equal to the current coin
            {
                // refer to the already calculated subproblem dp[i - coins[j]]
                dp[i] = Math.Min(dp[i], dp[i - coins[j]] + 1);
            }
        }
    }
    if (dp[amount] > amount)
        return -1;
    return dp[amount];
}