Suppose we have two lists of lengths n and m respectively:
val l1 = Seq(1,2,3,4,5,6,7,8,9,0)
val l2 = Seq(2,4,6,8,10,12)
Is there a way to calculate their intersection in less than O(n·m) time?
That is,
val result = Seq(2,4,6,8)
EDIT: we can assume our lists are sorted.
For sorted lists the following algorithm should work:
Keep two pointers, say i and j, one into l1 and the other into l2.
Now iterate over l1 and l2 like this:
while (i < l1.size && j < l2.size) {
    if (l1[i] < l2[j])
        i++
    else if (l1[i] == l2[j]) {
        output = output U {l1[i]}   // record the match before advancing
        i++; j++
    }
    else
        j++
}
This runs in O(n+m), i.e. O(max(n,m)), since every iteration advances at least one pointer.
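For concreteness, here is a minimal runnable Java sketch of the same two-pointer merge (the class and method names are mine):

import java.util.ArrayList;
import java.util.List;

class SortedIntersection {
    // Two-pointer intersection of two sorted arrays; O(n+m).
    static List<Integer> intersect(int[] l1, int[] l2) {
        List<Integer> output = new ArrayList<>();
        int i = 0, j = 0;
        while (i < l1.length && j < l2.length) {
            if (l1[i] < l2[j]) {
                i++;
            } else if (l1[i] > l2[j]) {
                j++;
            } else {
                output.add(l1[i]);  // record the match, then advance both pointers
                i++;
                j++;
            }
        }
        return output;
    }

    public static void main(String[] args) {
        int[] l1 = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};  // the question's l1, sorted
        int[] l2 = {2, 4, 6, 8, 10, 12};
        System.out.println(intersect(l1, l2));       // [2, 4, 6, 8]
    }
}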
Put the shorter list into a hash set: O(min(n,m)).
var set2 = new HashSet<int>() {2,4,6,8,10,12};
Then take the other list and check whether each element exists in the hash set. Each lookup is O(1) on average, and since we built the set from the shorter list, this pass costs O(max(n,m)). Whenever the lookup succeeds, add the element to your results.
The result is O(n+m) in time and O(min(n,m)) in memory.
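A minimal Java sketch of the same idea (names are mine; Java chosen for consistency with the rest of the thread, and note the lists need not be sorted here):

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class HashIntersection {
    // Hash-based intersection: O(n+m) expected time, O(min(n,m)) extra memory,
    // assuming the caller passes the shorter list first.
    static List<Integer> intersect(int[] shorter, int[] longer) {
        Set<Integer> set = new HashSet<>();
        for (int x : shorter) set.add(x);          // O(min(n,m))
        List<Integer> result = new ArrayList<>();
        for (int x : longer) {
            if (set.remove(x)) result.add(x);      // remove() also keeps the output duplicate-free
        }
        return result;
    }

    public static void main(String[] args) {
        int[] l2 = {2, 4, 6, 8, 10, 12};
        int[] l1 = {1, 2, 3, 4, 5, 6, 7, 8, 9, 0};
        System.out.println(intersect(l2, l1));      // [2, 4, 6, 8]
    }
}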
I was asked the question below in an interview today. I gave an O(n log n) solution, but was asked for an O(n) solution, which I could not come up with. Can you help?
An input array is given, like [1,2,4]. Every element of it is doubled and
appended to the array, so the array now looks like [1,2,4,2,4,8]. Then
this array is randomly shuffled; one possible arrangement is
[4,8,2,1,2,4]. Given this shuffled array, we want to recover the
original array [1,2,4] in O(n) time.
The original array can be returned in any order. How can I do it?
Here's an O(N) Java solution. It could be improved by first making sure that the array is of the proper form; for example, it shouldn't accept [0] as input:
import java.util.*;

class Solution {
    public static int[] findOriginalArray(int[] changed) {
        if (changed.length % 2 != 0)
            return new int[] {};
        // set Map size to optimal value to avoid rehashes
        Map<Integer, Integer> count = new HashMap<>(changed.length * 100 / 75);
        int[] original = new int[changed.length / 2];
        int pos = 0;
        // count frequency for each number
        for (int n : changed) {
            count.put(n, count.getOrDefault(n, 0) + 1);
        }
        // now decide which numbers go into the answer
        for (int n : changed) {
            // walk down the chain n, n/2, n/4, ... to find its smallest live member
            int smallest = n;
            for (int m = n; m > 0 && count.getOrDefault(m, 0) > 0; m = m / 2) {
                //System.out.println(m);
                smallest = m;
                if (m % 2 != 0) break;
            }
            // trickle up from smallest to largest while count > 0
            for (int m = smallest, mm = 2 * m; count.getOrDefault(mm, 0) > 0; m = mm, mm = 2 * mm) {
                int ct = count.getOrDefault(mm, 0);
                while (count.get(m) > 0 && ct > 0) {
                    //System.out.println("adding "+m);
                    original[pos++] = m;   // m is an original, mm is its double
                    count.put(mm, ct - 1);
                    count.put(m, count.get(m) - 1);
                    ct = count.getOrDefault(mm, 0);
                }
            }
        }
        // if any counts are left over, the input was not of the proper form
        if (count.values().stream().anyMatch(x -> x > 0)) {
            return new int[] {};
        }
        return original;
    }

    public static void main(String[] args) {
        int[] changed = {1, 2, 4, 2, 4, 8};
        System.out.println(Arrays.toString(changed));
        System.out.println(Arrays.toString(findOriginalArray(changed)));
    }
}
But I've tried to keep it simple.
The output is NOT guaranteed to be sorted. If you want it sorted, it's going to cost O(N log N) inevitably, unless you use a radix sort or something similar (which would make it O(N log E), where E is the max value of the numbers you're sorting and log E the number of bits needed).
Runtime
This may not look like O(N), but it is: each pass finds the lowest number in a chain ONCE, then trickles up that chain ONCE. Said another way, every iteration does O(X) work to process X elements, leaving O(N-X) elements still to handle. So even though there are for's inside for's, it is still O(N).
An example execution can be seen with [64,32,16,8,4,2].
If this were not O(N), then printing each value traversed while finding the smallest would show the values appearing over and over again (for example N*(N+1)/2 times).
But instead you see them only once:
finding smallest 64
finding smallest 32
finding smallest 16
finding smallest 8
finding smallest 4
finding smallest 2
adding 2
adding 8
adding 32
If you're familiar with the Heapify algorithm you'll recognize the approach here.
from typing import List

def findOriginalArray(self, changed: List[int]) -> List[int]:
    size = len(changed)
    ans = []
    left_elements = size // 2
    # IF SIZE IS ODD THEN RETURN []; NO SOLN. IS POSSIBLE
    if (size % 2 != 0):
        return ans
    # FREQUENCY DICTIONARY: given array [0,0,2,1] my map will be {0:2, 2:1, 1:1}
    d = {}
    for i in changed:
        if (i in d):
            d[i] += 1
        else:
            d[i] = 1
    # CHECK THE EDGE CASE OF 0
    if (0 in d):
        count = d[0]
        half = count // 2
        if ((count % 2 != 0) or (half > left_elements)):
            return ans
        left_elements -= half
        ans = [0 for i in range(half)]
    # CHECK REST OF THE CASES: values are at most 10^5, so originals are at most 5*10^4
    for i in range(1, 50001):
        if (i in d):
            if (d[i] > 0):
                count = d[i]
                if (count > left_elements):
                    ans = []
                    break
                left_elements -= d[i]
                for j in range(count):
                    ans.append(i)
                if (2 * i in d):
                    if (d[2 * i] < count):
                        ans = []
                        break
                    else:
                        d[2 * i] -= count
                else:
                    ans = []
                    break
    return ans
I have a simple idea which might not be the best, but I could not think of a case where it would not work. Given the array A with the doubled elements, randomly shuffled, keep a helper map. Process each element of the array and, each time you encounter a new value, add it to the map with the value 0. When an element i is processed, increment map[i] and decrement map[2*i]. Finally, iterate over the map and print the keys whose value is greater than zero.
A simple example, say that the vector is:
[1, 2, 3]
And the doubled/shuffled version is:
A = [3, 2, 1, 4, 2, 6]
When processing 3, first add the keys 3 and 6 to the map with value zero. Increment map[3] and decrement map[6]. This way, map[3] = 1 and map[6] = -1. Then for the next element map[2] = 1 and map[4] = -1 and so forth. The final state of the map in this example would be map[1] = 1, map[2] = 1, map[3] = 1, map[4] = -1, map[6] = 0, map[8] = -1, map[12] = -1.
Then you just process the keys of the map and, for each key with a value greater than zero, add it to the output. There are certainly more efficient solutions, but this one is O(n).
In C++, you can try this.
The time is O(N + K log K), where N is the length of the input and K is the number of unique elements in the input.
#include <algorithm>
#include <unordered_map>
#include <vector>
using namespace std;

class Solution {
public:
    vector<int> findOriginalArray(vector<int>& input) {
        if (input.size() % 2) return {};
        unordered_map<int, int> m;
        for (int n : input) m[n]++;                 // count each value
        vector<int> nums;
        for (auto [n, cnt] : m) nums.push_back(n);  // collect the unique values
        sort(begin(nums), end(nums));               // O(K log K)
        vector<int> out;
        for (int n : nums) {
            if (m[2 * n] < m[n]) return {};         // not enough doubles left for n
            for (int i = 0; i < m[n]; ++i, --m[2 * n]) out.push_back(n);
        }
        return out;
    }
};
The question isn't clear about the required space complexity, so this is my top-of-the-mind attempt, assuming O(n) time complexity is the goal.
If the length of the input array is not even, it's invalid!!
Create a map and add the elements of the input array to it.
Divide each element in the input array by 2 and check if that value exists in the map. If it exists, add it to the array (slice) orig.
There is a chance we have added duplicate values to this original array, so clean it!!
Here is a sample Go snippet:
https://go.dev/play/p/w4mm-rloHyi
I am sure we can optimize this code in a lot of ways for space complexity, but it's O(n) in time.
Given a large set of sets, all the same size (call it S={s1,...,sn}), I want to find all pairs (si,sj) that have an overlap of at least M.
So if M=2 and S consists of
s1 = (3,4,8,9)
s2 = (1,3,7,8)
s3 = (1,2,5,6)
s4 = (1,6,7,8)
I want to identify the pairs (s1,s2), (s2,s4), and (s3,s4).
The straightforward approach compares every pair and checks the size of the intersection, but this is prohibitively slow given the number of sets and the size of the sets I am using (something like O(n² log m), where m is the size of the sets?).
I've searched around and haven't found a similar question (though this answer is probably relevant). Any help would be greatly appreciated!
Go by value
Pseudocode:
typedef V => type of values;
list<Set> S;
Map<V, list<int>> val2sets;

for (int setIdx = 0; setIdx < S.length; ++setIdx) {
    foreach (V v in S[setIdx]) {
        val2sets[v].push(setIdx);
    }
}

int[][] setIntersectCount;
foreach (V as v, list<int> as l in val2sets) {
    for (int i = 0; i < l.length; ++i) {
        for (int j = i + 1; j < l.length; ++j) {
            setIntersectCount[l[i]][l[j]]++;
            if (setIntersectCount[l[i]][l[j]] == M) {  // M = required overlap (2 in the example)
                printf("set {0} and {1} are a pair\n", l[i], l[j]);
            }
        }
    }
}
As for complexity:
let S = # of sets,
let m = # of members in a set,
let V = # of unique values across the sets.
Generating val2sets is O(S·m).
As for reading all the values and counting each pair: it is at most O(V·S²), but in practice it will likely be much closer to O(S·m), since uniformity suggests not all sets have all elements in common.
P.S. I hope I didn't make any mistakes in my calculations.
Worth noting: I did not calculate the naive approach's complexity either :)
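A runnable Java sketch of the same inverted-index idea (class and method names, and the Integer element type, are my assumptions):

import java.util.*;

class OverlappingPairs {
    // For each value, collect the indices of the sets containing it, then bump
    // a counter for every pair of sets sharing that value. A pair is reported
    // the moment its counter reaches the threshold M. The pair matrix uses
    // O(n^2) counters, which is fine for a moderate number of sets.
    static List<int[]> pairsWithOverlap(List<Set<Integer>> sets, int M) {
        Map<Integer, List<Integer>> val2sets = new HashMap<>();
        for (int idx = 0; idx < sets.size(); idx++) {
            for (int v : sets.get(idx)) {
                val2sets.computeIfAbsent(v, k -> new ArrayList<>()).add(idx);
            }
        }
        int n = sets.size();
        int[][] overlap = new int[n][n];
        List<int[]> result = new ArrayList<>();
        for (List<Integer> l : val2sets.values()) {
            for (int i = 0; i < l.size(); i++) {
                for (int j = i + 1; j < l.size(); j++) {
                    if (++overlap[l.get(i)][l.get(j)] == M) {
                        result.add(new int[] {l.get(i), l.get(j)});
                    }
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Set<Integer>> S = List.of(
            Set.of(3, 4, 8, 9), Set.of(1, 3, 7, 8),
            Set.of(1, 2, 5, 6), Set.of(1, 6, 7, 8));
        for (int[] p : pairsWithOverlap(S, 2)) {
            System.out.println("s" + (p[0] + 1) + " and s" + (p[1] + 1)); // s1,s2; s2,s4; s3,s4
        }
    }
}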
This is a fairly simple question, but I can't remember all my coding and data structures and feel a little blank.
Let's say I have a list/array of things (e.g. structures or objects). There is a certain property (true or false) that needs to hold between all pairs of these objects. What would be the fastest method to check whether the property is violated for any pair of objects?
Unless you have additional information about the property (for example, that it is transitive), your only solution is to check the property for every pair from the list, with two nested loops:
for (int i = 0; i != N; i++)
    for (int j = 0; j != N; j++)
        if (i != j) // This assumes that the property might not be reflexive
            // This checks the property both ways, i.e. there is no
            // implication that the property is commutative.
            checkProperty(list[i], list[j]);
For commutative properties (i.e. when A ? B implies B ? A), you can do it in half the comparisons by starting the second loop at j = i+1.
If the property is transitive (i.e. when A ? B and B ? C imply A ? C, where ? denotes the property check), you can build a faster check.
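For instance, if the property is a full equivalence relation (reflexive, symmetric, and transitive), N-1 adjacent checks cover all pairs. A minimal Java sketch under that assumption, with a hypothetical "same parity" property standing in for checkProperty:

class PairwisePropertyCheck {
    // Hypothetical property used for illustration: "same parity".
    // It is reflexive, symmetric, and transitive, i.e. an equivalence relation.
    static boolean checkProperty(int a, int b) {
        return (a % 2) == (b % 2);
    }

    // For an equivalence relation, checking adjacent elements is enough:
    // list[0] ~ list[1] and list[1] ~ list[2] imply list[0] ~ list[2], etc.,
    // so N-1 checks replace the N*(N-1)/2 pairwise checks.
    static boolean holdsForAllPairs(int[] list) {
        for (int i = 1; i < list.length; i++) {
            if (!checkProperty(list[i - 1], list[i])) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(holdsForAllPairs(new int[] {2, 4, 8}));  // true
        System.out.println(holdsForAllPairs(new int[] {2, 4, 7}));  // false
    }
}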
You'll need a double loop to compare every item. Assuming you just need to check every 2-item combination (i.e. order doesn't matter), you can just loop through the remaining items in the inner loop.
for (int i = 0; i < N; i++)
    for (int j = i + 1; j < N; j++)
        checkProperty(list[i], list[j]);
I have two ordered lists of the same element type, each list having at most one element of each value (say ints and unique numbers), but otherwise with no restrictions (one may be a subset of the other, they may be completely disjoint, or they may share some elements but not others).
How do I efficiently determine whether A orders any two items differently than B does? For example, with A = 1, 2, 10 and B = 2, 10, 1 the property would not hold, as A lists 1 before 10 but B lists it after 10. However, A = 1, 2, 10 vs B = 2, 10, 5 would be perfectly valid, as A never mentions 5 at all. I cannot rely on any given sorting rule shared by both lists.
You can get O(n) as follows. First, find the intersection of the two sets using hashing. Second, test whether A and B are identical if you only consider elements from the intersection.
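A minimal Java sketch of that two-step idea, assuming int arrays with unique elements (names are mine):

import java.util.*;

class SameRelativeOrder {
    // 1) Hash one list to find the common elements.
    // 2) Filter each list down to the common elements and compare the filtered
    //    sequences; they are equal iff the shared elements appear in the same
    //    relative order in both lists.
    static boolean sameOrder(int[] A, int[] B) {
        Set<Integer> inA = new HashSet<>();
        for (int x : A) inA.add(x);
        Set<Integer> common = new HashSet<>();
        for (int x : B) if (inA.contains(x)) common.add(x);

        List<Integer> fa = new ArrayList<>(), fb = new ArrayList<>();
        for (int x : A) if (common.contains(x)) fa.add(x);
        for (int x : B) if (common.contains(x)) fb.add(x);
        return fa.equals(fb);
    }

    public static void main(String[] args) {
        System.out.println(sameOrder(new int[] {1, 2, 10}, new int[] {2, 10, 1})); // false
        System.out.println(sameOrder(new int[] {1, 2, 10}, new int[] {2, 10, 5})); // true
    }
}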
My approach would be to first make sorted copies of A and B that also record the positions of elements in the original lists:
for i in 1 .. length(A):
    Apos[i] = (A[i], i)
sortedApos = sort(Apos[] by first element of each pair)

for i in 1 .. length(B):
    Bpos[i] = (B[i], i)
sortedBpos = sort(Bpos[] by first element of each pair)
Now find those elements in common using a standard list merge that records the positions in both A and B of the shared elements:
i = 1
j = 1
shared = []
while i <= length(A) && j <= length(B)
    if sortedApos[i][1] < sortedBpos[j][1]
        ++i
    else if sortedApos[i][1] > sortedBpos[j][1]
        ++j
    else // They're equal
        append(shared, (sortedApos[i][2], sortedBpos[j][2]))
        ++i
        ++j
Finally, sort shared by its first element (position in A) and check that all its second elements (positions in B) are increasing. This will be the case iff the elements common to A and B appear in the same order:
sortedShared = sort(shared[] by first element of each pair)
for i = 2 .. length(sortedShared)
    if sortedShared[i][2] < sortedShared[i-1][2]
        return DIFFERENT
return SAME
Time complexity: 2·(O(n) + O(n log n)) + O(n) + O(n log n) + O(n) = O(n log n).
General approach: store all the values and their positions in B as keys and values in a HashMap. Then iterate over the values in A, looking each one up in B's HashMap to get its position in B (or null). If this position is ever before the largest position you've seen so far, then B orders something differently than A. Runs in O(n) time.
Rough, totally untested code:
boolean valuesInSameOrder(int[] A, int[] B)
{
    Map<Integer, Integer> bMap = new HashMap<Integer, Integer>();
    for (int i = 0; i < B.length; i++)
    {
        bMap.put(B[i], i);
    }

    int maxPosInB = 0;
    for (int i = 0; i < A.length; i++)
    {
        if (bMap.containsKey(A[i]))
        {
            int currPosInB = bMap.get(A[i]);
            if (currPosInB < maxPosInB)
            {
                // B has something in a different order than A
                return false;
            }
            else
            {
                maxPosInB = currPosInB;
            }
        }
    }
    // All of B's values are in the same order as A
    return true;
}
I can only think of this naive algorithm. Any better way? C/C++, Ruby, or Haskell is OK.
arry = [1, 5, ..., 4569895] # 1,000,000 elements, sorted, no duplicates
newArray = Hash.new
arry.each do |a|
  arry.each do |b|
    elem = a + b
    newArray[elem] = elem unless newArray.key?(elem)
  end
end
EDIT: sorry, the array holds arbitrary discrete values rather than [1..1000000].
It would be more efficient to separate the algorithm into two distinct steps. (Warning: pseudocode ahead)
First, create n lists, where the ith list is built by adding each of the elements from position i onward to the ith element. This can be done in parallel for each list. Note that the resulting lists will be sorted.
newArray = array(array.length);
for (i = 0; i < array.length; i++) {
    // list i holds array[i] + array[j] for j = i .. array.length-1
    newArray[i] = array(array.length - i);
    for (j = 0; j < array.length - i; j++) {
        newArray[i][j] = array[i] + array[j + i];
    }
}
Second, use the merge step of merge sort to combine the resulting lists, dropping duplicates as you go. You can do this in parallel too, e.g. merge newArray[0] with newArray[1], newArray[2] with newArray[3], and so on, and then repeat until only one list remains.
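Here is a runnable Java sketch of this approach; as a variation (my choice, not the answer's), it does the k-way merge with a priority queue instead of pairwise merge rounds, which is the same O(n² log n) overall:

import java.util.*;

class PairwiseSums {
    // All distinct pairwise sums of a sorted, duplicate-free array. Conceptually
    // merges the n sorted lists {a[i]+a[j] : j >= i} with a priority queue,
    // emitting each sum once; queue entries are (i, j) index pairs.
    static List<Long> distinctPairSums(int[] a) {
        PriorityQueue<int[]> pq = new PriorityQueue<>(
            Comparator.comparingLong((int[] p) -> (long) a[p[0]] + a[p[1]]));
        for (int i = 0; i < a.length; i++) pq.add(new int[] {i, i}); // head of each list
        List<Long> out = new ArrayList<>();
        while (!pq.isEmpty()) {
            int[] p = pq.poll();
            long sum = (long) a[p[0]] + a[p[1]];
            if (out.isEmpty() || out.get(out.size() - 1) != sum) out.add(sum); // duplicates are adjacent
            if (p[1] + 1 < a.length) pq.add(new int[] {p[0], p[1] + 1});       // advance within list i
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(distinctPairSums(new int[] {1, 2, 4})); // [2, 3, 4, 5, 6, 8]
    }
}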
If the condition is that any items in the range may appear, then the only way I can think of is to check whether each sum is already in the result list, since for any number x there can be up to x different additions that lead to x (or x/2 if you consider 1 + 2 and 2 + 1 to be the same addition).
There is one obvious optimization: make the second loop start at index i; that way you avoid producing both x+y and y+x.
Then, if you don't want to use a set, you can exploit the fact that the items are sorted: build N lists and merge them while removing the duplicates.
I'm afraid the best worst-case time complexity is O(n²). For input {2^0, 2^1, 2^2, ...} you won't get any duplicates when adding these numbers, so the output alone contains n(n+1)/2 distinct sums. Assuming hash insertions are O(1), you already have the best algorithm...