Nested loop complexity - algorithm

I have several lists of varying size; each entry in a list contains both a key and an object: list1.add('Key', obj).
The lists are all sorted.
My aim is to iterate through the main list and match one or more items in list 2, 3, 4 ... n to an item in the main list using the key.
Currently I do something along the lines of:
for i to list1 size
for j to list2 size
if list1[i] = list2[j]
do stuff
As I loop through, I'm using boolean values to exit early (an if current != previous check), and I'm deleting each matched object from the list I take it from.
It's working fine, but I now have another list that I need to match and possibly another n lists afterwards. The lists are of different sizes.
The two options that I can see are to either repeat the above segment of code several times with the inner list changed - I do not like this approach.
The other option is to extend the above and, once one inner loop is finished, move on to the next:
for i to list1 size
for j to list2 size
if list1[i] = list2[j]
do stuff
for k to list3 size
if list1[i] = list3[k]
do stuff
I'd like to think I'm correct in thinking that the second is more efficient; however, I'm unsure. Also, is there a better way?
Thanks for any advice / help.

If the lists are all sorted then you only need to iterate through each list once; on each iteration of the main list, iterate through a secondary list (starting at the previously saved index, initialized to 0) until you find an index whose value is greater than the current value of the main list, save this index, and proceed to the next secondary list.
Array<Integer> indices = new Array(n-1); // indices for every list except list1
for(int i = 0; i < indices.size; i++) {
indices[i] = 0;
}
for(int i = 0; i < list1.size; i++) {
Value curVal = list1[i];
while(indices[0] < list2.size && list2[indices[0]] <= curVal) {
if(list2[indices[0]] == curVal) {
// do stuff on list2[indices[0]]
}
indices[0]++;
}
while(indices[1] < list3.size && list3[indices[1]] <= curVal) {
if(list3[indices[1]] == curVal) {
// do stuff on list3[indices[1]]
}
indices[1]++;
}
// etc
}
You can avoid the copy-pasting by using something like a ListIterator that contains a list and its current index; then on each iteration of the main loop you'll iterate through a list of ListIterators in lieu of the copy-pasted code block.
public class ListIterator {
int index = 0;
List<Value> list;
Value curVal() {
return list[index];
}
boolean hasNext() {
return (index < list.size);
}
}
List<ListIterator> iterators;
for(int i = 0; i < list1.size; i++) {
Value curVal = list1[i];
for(int j = 0; j < iterators.size; j++) {
ListIterator iterator = iterators[j];
while(iterator.hasNext() && iterator.curVal() <= curVal) {
if(iterator.curVal() == curVal) {
// do something with iterator.curVal()
}
iterator.index++;
}
}
}
This is time complexity O(n), where n is the sum of the lengths of all of your lists.
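For concreteness, here is a minimal runnable Java sketch of the same idea using standard collections; the class and method names (SortedListCursor, matchAndAdvance, matchAll) are illustrative rather than from the original post, and it assumes every list is sorted ascending by a Comparable key.
import java.util.List;
import java.util.function.Consumer;

class SortedListCursor<T extends Comparable<T>> {
    private final List<T> list;
    private int index = 0;

    SortedListCursor(List<T> list) { this.list = list; }

    // Advance past every element <= key, invoking onMatch for the ones equal to key.
    void matchAndAdvance(T key, Consumer<T> onMatch) {
        while (index < list.size() && list.get(index).compareTo(key) <= 0) {
            if (list.get(index).compareTo(key) == 0) {
                onMatch.accept(list.get(index));
            }
            index++;
        }
    }
}

class Matcher {
    static <T extends Comparable<T>> void matchAll(List<T> mainList, List<SortedListCursor<T>> cursors) {
        for (T key : mainList) {                       // one pass over the main list
            for (SortedListCursor<T> cursor : cursors) {
                cursor.matchAndAdvance(key, match -> {
                    // do stuff with the matching element
                });
            }
        }
    }
}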
Edit: If it's difficult to compare keys via <=, then you can use a Set implementation instead. Add the List1 keys to a Set, then iterate through the remaining lists testing for set membership.
Set<String> set = new Set(List1);
Array<List> lists = new Array();
// add lists to Array<List>
for(int i = 0; i < lists.size; i++) {
List currentList = lists[i];
for(int j = 0; j < currentList.size; j++) {
if(set.contains(currentList[j])) {
// do something
}
}
}
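And a compact runnable Java counterpart of the set version; HashSet is an assumption here, any Set implementation with fast lookup works.
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class SetMatcher {
    static void matchWithSet(List<String> mainKeys, List<List<String>> otherLists) {
        Set<String> keys = new HashSet<>(mainKeys);    // keys from the main list
        for (List<String> currentList : otherLists) {
            for (String candidate : currentList) {
                if (keys.contains(candidate)) {
                    // do something with the matching key
                }
            }
        }
    }
}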

Related

I think I've found a good sorting algorithm. Seems to be faster than quickSort?

This works without comparing.
First, it finds the largest number in the array and saves it in a variable called "max". Then it creates a temporary array of length max + 1. After that, each "tempArray[i]" counts how often the number "i" occurs in the input array. In the end, it walks "tempArray" and writes the counted values back into the input array. See for yourself.
static int[] nSort(int[] array) {
int max = array[0];
for(int i = 1; i < array.length; i++) {
max = Math.max(max, array[i]);
}
Integer[] tempArray = new Integer[max+1];
for(int i = 0; i < array.length; i++) {
if(tempArray[array[i]] == null) {
tempArray[array[i]] = 0;
}
tempArray[array[i]]++;
}
for(int[] i = new int[2]; i[0] < max + 1; i[0]++) {
if(tempArray[i[0]] != null) {
while(tempArray[i[0]] > 0) {
array[i[1]] = i[0];
i[1]++;
tempArray[i[0]]--;
}
}
}
return array;
}
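A quick sanity check of nSort from inside the same class (the input values are just an illustration; note the method assumes non-negative ints, since values are used as indices into tempArray):
int[] data = {5, 3, 9, 3, 0, 7};
int[] sorted = nSort(data); // sorts in place and returns the same array
System.out.println(java.util.Arrays.toString(sorted)); // prints [0, 3, 3, 5, 7, 9]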
I've charted the measured runtime in the graph below, green being my algorithm and red being quicksort.
I've used this quicksort GitHub implementation and measured runtime in the same way as implemented there.
Runtime graph:

Multi-layer loop with uncertain layers

I have tried multiple variations of this, but none of them seem to work. Any ideas?
int[] array = new int[n];
for(int i = 0; i < array[0]; i++){
for(int j = 0; j < array[1]; j++){
....
for(int k = 0; k < array[array.length - 1]; k++){
do something with i,j, ... , k
}
}
}
Since I don't know the length of the array in advance, I can't write out a fixed number of loop layers, and I don't know how to handle that.
Thanks in advance.
Use recursion:
arr = [a, b, c, ... n]
function iterate_one_level(arr, indices) {
if (arr == []) {
do_something_with_indices(indices)
return;
}
for (let i=0; i<arr.length; i++) {
iterate_one_level(arr[1:], indices + [i])
}
}
Alternatively, your language may have something similar to itertools.product
The canonical way to do this is with recursion:
deep_loop(array[], index[]) {
    if array.length == 0 // Finally reached the innermost loop
        do something with index[:]
    else // Go down one loop level
        // loop on the first array element
        for n in range 0:array[0] {
            // recur on the rest of the array
            // append this index to the sequence
            deep_loop( array[1:], index[:] + [n] )
        }
}
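The same recursive idea as a runnable Java sketch; the names deepLoop and bounds are just illustrative:
// bounds[d] is the upper limit of loop level d; indices collects the current i, j, ..., k.
static void deepLoop(int[] bounds, int[] indices, int depth) {
    if (depth == bounds.length) {
        // innermost level reached: do something with indices[0..bounds.length-1]
        System.out.println(java.util.Arrays.toString(indices));
        return;
    }
    for (int i = 0; i < bounds[depth]; i++) {
        indices[depth] = i;
        deepLoop(bounds, indices, depth + 1);
    }
}

// Usage: equivalent to three nested loops of sizes 2, 3 and 2.
// deepLoop(new int[]{2, 3, 2}, new int[3], 0);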

Why does this while loop perform worse than the other, very similar while loop?

I am trying to write a variation of insertion sort. In my algorithm, the swapping of values doesn't happen when finding the correct place for the item in hand. Instead, it uses a lookup table (an array containing "links" to smaller values in the main array at corresponding positions) to find the correct position of the item. When we are done with all n elements in the main array, we haven't actually changed any of the elements in the main array itself, but an array named smaller will contain the links to the immediately smaller values at positions i, i+1, ..., n corresponding to every element i, i+1, ..., n in the main array. Finally, we iterate through the array smaller, starting from the index where the largest value in the main array existed, and populate another empty array in the backward direction to finally get the sorted sequence.
Somewhat hacky/verbose implementation of the algorithm just described:
public static int [] sort (int[] a) {
int length = a.length;
int sorted [] = new int [length];
int smaller [] = new int [length];
//debug helpers
long e = 0, t = 0;
int large = 0;
smaller[large] = -1;
here:
for (int i = 1; i < length; i++) {
if (a[i] > a[large]) {
smaller[i] = large;
large = i;
continue;
}
int prevLarge = large;
int temp = prevLarge;
long st = System.currentTimeMillis();
while (prevLarge > -1 && a[prevLarge] >= a[i]) {
e++;
if (smaller[prevLarge] == -1) {
smaller[i] = -1;
smaller[prevLarge] = i;
continue here;
}
temp = prevLarge;
prevLarge = smaller[prevLarge];
}
long et = System.currentTimeMillis();
t += (et - st);
smaller[i] = prevLarge;
smaller[temp] = i;
}
for (int i = length - 1; i >= 0; i--) {
sorted[i] = a[large];
large = smaller[large];
}
App.print("DevSort while loop execution: " + (e));
App.print("DevSort while loop time: " + (t));
return sorted;
}
The variables e and t contain the number of times the inner while loop is executed and total time taken to execute the while loop e times, respectively.
Here is a modified version of insertion sort:
public static int [] sort (int a[]) {
int n = a.length;
//debug helpers
long e = 0, t = 0;
for (int j = 1; j < n; j++) {
int key = a[j];
int i = j - 1;
long st = System.currentTimeMillis();
while ( (i > -1) && (a[i] >= key)) {
e++;
// simply crap
if (1 == 1) {
int x = 0;
int y = 1;
int z = 2;
}
a[i + 1] = a[i];
i--;
}
long et = System.currentTimeMillis();
t += (et - st);
a[i+1] = key;
}
App.print("InsertSort while loop execution: " + (e));
App.print("InsertSort while loop time: " + (t));
return a;
}
The if block inside the while loop is introduced just to match the number of statements inside the while loop of my "hacky" algorithm. Note that the two variables e and t are also introduced in the modified insertion sort.
The thing that's confusing is that even though the while loop of insertion sort runs exactly the same number of times as the while loop inside my "hacky" algorithm, t for insertion sort is significantly smaller than t for my algorithm.
For a particular run, if n = 10,000:
Total time taken by insertion sort's while loop: 20ms
Total time taken by my algorithm's while loop: 98ms
If n = 100,000:
Total time taken by insertion sort's while loop: 1100ms
Total time taken by my algorithm's while loop: 25251ms
In fact, because the condition 1 == 1 is always true, insertion sort's if block inside the while loop must execute more often than the one inside the while loop of my algorithm. Can someone explain what's going on?
Two arrays containing same elements in the same order are being sorted using each algorithm.

Subset sum with N arrays, dynamic solution required

Though you can have any number of arrays, let's suppose you have two arrays {1,2,3,4,5,6} and {1,2,3,4,5,6}.
You have to find whether they sum up to 4 with participation from both arrays, i.e.
1 from array1, 3 from array2
2 from array1, 2 from array2
3 from array1, 1 from array2
etc
In a nutshell: I want to implement the subset sum algorithm where there are two arrays and elements are chosen from both arrays to make up the target sum.
Here is the subset sum algorithm that I use for one array:
bool subset_sum(int a[],int n, int sum)
{
bool dp[n+1][sum+1];
int i,j;
for(i=0;i<=n;i++)
dp[i][0]=true;
for(j=1;j<=sum;j++)
dp[0][j]=false;
for(i=1;i<=n;i++)
{
for(j=1;j<=sum;j++)
{
if(dp[i-1][j]==true)
dp[i][j]=true;
else
{
if(a[i-1]>j)
dp[i][j]=false;
else
dp[i][j]=dp[i-1][j-a[i-1]];
}
}
}
return dp[n][sum];
}
We can implement this with a 3 dimensional dp. But for simplicity and readability I have written it using two methods.
NOTE : My solution works when we choose at least one element from each array. It doesn't work if there is a condition that we have to choose equal number of elements from each array.
// This is a helper method
// prevPossibleAr[] denotes what values could be made with participation from ALL
// arrays BEFORE the current array
// This method returns an array which denotes what values could be made
// with participation from ALL arrays UP TO the current array
boolean[] getPossibleAr( boolean prevPossibleAr[], int ar[] )
{
boolean dp[][] = new boolean[ ar.length + 1 ][ prevPossibleAr.length ];
// dp[i][j] denotes if we can make value j using AT LEAST
// ONE element from current ar[0...i-1]
for (int i = 1; i <= ar.length; i++)
{
for (int j = 0; j < dp[i].length; j++)
{
if ( dp[i-1][j] == true )
{
// we can make value j using AT LEAST one element from ar[0...i-2]
// it implies that we can also make value j using AT LEAST
// one element from ar[0...i-1]
dp[i][j] = true;
continue;
}
int prev = j - ar[i-1];
// now we look behind
if ( prev < 0 )
{
// it implies that ar[i-1] > j
continue;
}
if ( prevPossibleAr[prev] || dp[i-1][prev] )
{
// It is the main catch
// Be careful
// if ( prevPossibleAr[prev] == true )
// it means that we could make the value prev
// using the previous arrays (without using any element
// of the current array)
// so now we can add ar[i-1] with prev and eventually make j
// if ( dp[i-1][prev] == true )
// it means that we could make prev using one or more
// elements from the current array....
// now we can add ar[i-1] with this and eventually make j
dp[i][j] = true;
}
}
}
// What is dp[ar.length] ?
// It is an array of booleans
// It denotes whether we can make value j using ALL the arrays
// (using means taking AT LEAST ONE ELEMENT)
// before the current array and using at least ONE element
// from the current array ar[0...ar.lengh-1] (That is the full current array)
return dp[ar.length];
}
// This is the method which will give us the output
boolean subsetSum(int ar[][], int sum )
{
boolean prevPossible[] = new boolean[sum+1];
prevPossible[0] = true;
for ( int i = 0; i < ar.length; i++ )
{
boolean newPossible[] = getPossibleAr(prevPossible, ar[i]);
// calling that helper function
// newPossible denotes what values can be made with
// participation from ALL arrays UP TO i th array
// (0 based index here)
prevPossible = newPossible;
}
return prevPossible[sum];
}
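For example, a hypothetical driver using the two arrays from the question (assuming the two methods above are accessible from the calling code):
int[][] arrays = { {1, 2, 3, 4, 5, 6}, {1, 2, 3, 4, 5, 6} };
boolean canMake4 = subsetSum(arrays, 4); // true, e.g. 1 from the first array + 3 from the second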
Steps:
find(Array1, Array2, N)
    sort Array1
    sort Array2
    i = 0, j = Array2.length - 1
    while i < Array1.length and j >= 0
        if Array1[i] + Array2[j] == N
            return YES
        if Array1[i] + Array2[j] > N
            j--
        else
            i++
    return NO
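A runnable Java version of those steps (the method name find is kept from the pseudocode); note it checks one element from each array, as in the steps above:
static boolean find(int[] array1, int[] array2, int n) {
    java.util.Arrays.sort(array1);
    java.util.Arrays.sort(array2);
    int i = 0, j = array2.length - 1;
    while (i < array1.length && j >= 0) {
        int sum = array1[i] + array2[j];
        if (sum == n) return true; // one element from each array adds up to n
        if (sum > n) j--;          // need a smaller contribution from array2
        else i++;                  // need a larger contribution from array1
    }
    return false;
}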

Interview - Find magnitude pole in an array

Magnitude Pole: an element in an array whose left-hand-side elements are less than or equal to it and whose right-hand-side elements are greater than or equal to it.
Example input:
3,1,4,5,9,7,6,11
Desired output:
4,5,11
I was asked this question in an interview, and I had to return the index of the element, returning only the first element that meets the condition.
My logic
Take two multisets (so that we can handle duplicates as well), one for the right-hand side of the element and one for the left-hand side of the element (the pole).
Start with the 0th element and put all the remaining elements in the "right set".
Base condition: if this 0th element is less than or equal to every element in the "right set", return its index.
Else put it into the "left set" and start with the element at index 1.
Traverse the array, and each time pick the maximum value from the "left set" and the minimum value from the "right set" and compare.
At any instant of time, for any element, all the values to its left are in the "left set" and the values to its right are in the "right set".
Code
int magnitudePole (const vector<int> &A) {
multiset<int> left, right;
int left_max, right_min;
int size = A.size();
for (int i = 1; i < size; ++i)
right.insert(A[i]);
right_min = *(right.begin());
if(A[0] <= right_min)
return 0;
left.insert(A[0]);
for (int i = 1; i < size; ++i) {
right.erase(right.find(A[i]));
left_max = *(--left.end());
if (right.size() > 0)
right_min = *(right.begin());
if (A[i] > left_max && A[i] <= right_min)
return i;
else
left.insert(A[i]);
}
return -1;
}
My questions
I was told that my logic is incorrect. I am not able to understand why it is incorrect (though I have checked some cases and it returns the right index).
For my own curiosity, how can this be done without using any set/multiset, in O(n) time?
For an O(n) algorithm:
Find the largest element from n[0] to n[k] for all k in [0, length(n)), and save the answers in an array maxOnTheLeft. This costs O(n);
Find the smallest element from n[k] to n[length(n)-1] for all k in [0, length(n)), and save the answers in an array minOnTheRight. This costs O(n);
Loop through the whole thing and find any n[k] with maxOnTheLeft[k] <= n[k] <= minOnTheRight[k]. This costs O(n).
And your code is (at least) wrong here:
if (A[i] > left_max && A[i] <= right_min) // <-- should be >= and <=
Create two bool[N] called NorthPole and SouthPole (just to be humorous).
Step forward through A[], tracking the maximum element found so far, and set SouthPole[i] true if A[i] > Max(A[0..i-1]).
Step backward through A[] and set NorthPole[i] true if A[i] < Min(A[i+1..N-1]).
Step forward through NorthPole and SouthPole to find the first element with both set true.
O(N) in each step above, as each element is visited once, so O(N) overall.
Java implementation:
Collection<Integer> magnitudes(int[] A) {
int length = A.length;
// what's the maximum number from the beginning of the array till the current position
int[] maxes = new int[A.length];
// what's the minimum number from the current position till the end of the array
int[] mins = new int[A.length];
// build mins
int min = mins[length - 1] = A[length - 1];
for (int i = length - 2; i >= 0; i--) {
if (A[i] < min) {
min = A[i];
}
mins[i] = min;
}
// build maxes
int max = maxes[0] = A[0];
for (int i = 1; i < length; i++) {
if (A[i] > max) {
max = A[i];
}
maxes[i] = max;
}
Collection<Integer> result = new ArrayList<>();
// use them to find the magnitudes if any exists
for (int i = 0; i < length; i++) {
if (A[i] >= maxes[i] && A[i] <= mins[i]) {
// return here if first one only is needed
result.add(A[i]);
}
}
return result;
}
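For instance, calling it with the input from the question (a hypothetical check against the example above):
Collection<Integer> poles = magnitudes(new int[]{3, 1, 4, 5, 9, 7, 6, 11});
// poles now contains [4, 5, 11], matching the desired output above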
Your logic seems perfectly correct (didn't check the implementation, though) and can be implemented to give an O(n) time algorithm! Nice job thinking in terms of sets.
Your right set can be implemented as a stack which supports a min, and the left set can be implemented as a stack which supports a max and this gives an O(n) time algorithm.
Having a stack which supports max/min is a well-known interview question, and it can be done so that each operation (push/pop/min/max) is O(1).
To use this for your logic, the pseudocode will look something like this:
foreach elem in a[n-1 to 0]
    right_set.push(elem)
while (right_set.has_elements()) {
    candidate = right_set.pop();
    if (left_set.has_elements() && left_set.max() <= candidate <= right_set.min()) {
        break;
    } else if (!left_set.has_elements() && candidate <= right_set.min()) {
        break;
    }
    left_set.push(candidate);
}
return candidate
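As a reference for that building block, here is a minimal Java sketch of a stack that reports its minimum in O(1); the max-tracking stack is symmetric, and the class name MinStack is just illustrative:
import java.util.ArrayDeque;
import java.util.Deque;

class MinStack {
    private final Deque<Integer> values = new ArrayDeque<>();
    private final Deque<Integer> mins = new ArrayDeque<>(); // mins.peek() mirrors the current minimum

    void push(int x) {
        values.push(x);
        mins.push(mins.isEmpty() ? x : Math.min(x, mins.peek()));
    }

    int pop() {
        mins.pop();
        return values.pop();
    }

    int min() { return mins.peek(); }                  // current minimum in O(1)
    boolean hasElements() { return !values.isEmpty(); }
}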
I saw this problem on Codility, solved it with Perl:
sub solution {
my (@A) = @_;
my ($max, $min) = ($A[0], $A[-1]);
my %candidates;
for my $i (0..$#A) {
if ($A[$i] >= $max) {
$max = $A[$i];
$candidates{$i}++;
}
}
for my $i (reverse 0..$#A) {
if ($A[$i] <= $min) {
$min = $A[$i];
return $i if $candidates{$i};
}
}
return -1;
}
How about the following code? I think its efficiency is not good in the worst case, but its expected efficiency would be good.
int getFirstPole(int* a, int n)
{
int leftPole = a[0];
for(int i = 1; i < n; i++)
{
if(a[i] >= leftPole)
{
int j = i;
for(; j < n; j++)
{
if(a[j] < a[i])
{
i = j+1; //jump the elements between i and j
break;
}
else if (a[j] > a[i])
leftPole = a[j];
}
if(j == n) // if no one is less than a[i] then return i
return i;
}
}
return 0;
}
Create an array of ints called mags, and an int variable called maxMag.
For each element in the source array, check whether the element is greater than or equal to maxMag.
If it is: add the element to the mags array and set maxMag = element.
If it isn't: loop through the mags array and remove all elements greater than it.
Result: an array of magnitude poles.
Interesting question. I have my own solution in C#, given below; read the comments to understand my approach.
public int MagnitudePoleFinder(int[] A)
{
//Create a variable to store Maximum Valued Item i.e. maxOfUp
int maxOfUp = A[0];
//if the list has only one value, its single element is trivially a pole, so return index 0
if (A.Length <= 1) return 0;
//create a collection for all candidates for magnitude pole that will be found in the iteration
var magnitudeCandidates = new List<KeyValuePair<int, int>>();
//add the first element as first candidate
var a = A[0];
magnitudeCandidates.Add(new KeyValuePair<int, int>(0, a));
//lets iterate
for (int i = 1; i < A.Length; i++)
{
a = A[i];
//if this item is maximum or equal to all above items ( maxofUp will hold max value of all the above items)
if (a >= maxOfUp)
{
//add it to candidate list
magnitudeCandidates.Add(new KeyValuePair<int, int>(i, a));
maxOfUp = a;
}
else
{
//remove all the candidates having greater values than this item
magnitudeCandidates = magnitudeCandidates.Except(magnitudeCandidates.Where(c => c.Value > a)).ToList();
}
}
//if no candidate return -1
if (magnitudeCandidates.Count == 0) return -1;
else
//return index of first candidate
return magnitudeCandidates.First().Key;
}

Resources