From http://www.geeksforgeeks.org/merge-sort-for-linked-list/
The slow random-access performance of a linked list makes some other
algorithms (such as quicksort) perform poorly, and others (such as
heapsort) completely impossible.
However, I don't really see why quick sort would perform worse than merge sort while sorting a linked list.
In Quick Sort:
Choosing a pivot requires random access, which means iterating through the linked list (O(n) per recursion).
Partitioning can be done with a left-to-right sweep (which doesn't require random access).
In Merge Sort:
Splitting at the middle requires random access, which means iterating through the linked list using the fast/slow pointer technique (O(n) per recursion).
Merging can be done with a left-to-right sweep (which doesn't require random access).
So as far as I can see, both Quick Sort and Merge Sort require random access in each recursion, and I don't see why Quick Sort would perform worse than Merge Sort due to the non-random-access nature of linked lists.
Am I missing something here?
EDIT: I am looking at the partition function where the pivot is the last element and we sweep from the left sequentially. If partition works differently (i.e. the pivot is in the middle and you maintain two pointers at each end), it would still work fine if the linked list is doubly linked...
I'm updating this answer to provide a better comparison. In my original answer below, I include an example of bottom up merge sort, using a small array of pointers to lists. The merge function merges two lists into a destination list. As an alternative, the merge function could merge one list into the other via splice operations, which would mean only updating links about half the time for pseudo random data. For arrays, merge sort does more moves but fewer compares than quicksort, but if the linked list merge is merging one list into the other, the number of "moves" is cut in half.
For quicksort, the first node could be used as a pivot, and only nodes less than pivot would be moved, forming a list prior to the pivot (in reverse order), which would also mean only updating links about half of the time for pseudo random data.
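Hypothetically, that partition step might look like this in C (my sketch, not the code I benchmarked; it assumes the NODE type from the example code below):
/* partition around the first node: nodes smaller than the pivot are
   unlinked and pushed onto a separate "less" list (in reverse order),
   so links are only updated for the moved nodes */
static NODE *PartitionFirstPivot(NODE *pList, NODE **ppLess)
{
    NODE *pivot = pList;            /* first node is the pivot */
    NODE **pp = &pivot->next;
    NODE *less = NULL;
    while (*pp != NULL) {
        NODE *node = *pp;
        if (node->data < pivot->data) {
            *pp = node->next;       /* unlink node ... */
            node->next = less;      /* ... and push it onto "less" */
            less = node;
        } else {
            pp = &node->next;       /* node stays where it is */
        }
    }
    *ppLess = less;                 /* nodes < pivot, in reverse order */
    return pivot;                   /* pivot still heads the ">=" part */
}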
The issue with quicksort is that the partitioning isn't perfect, even with pseudo random data, while merge sort (top down or bottom up) has the equivalent of perfect partitioning. A common analysis for quicksort considers the probability of a pivot falling in the middle 75% of a list through various means of choosing a pivot, giving a 75% / 25% split (versus merge sort always getting a 50% / 50% split). I compared a quicksort with the first node as pivot versus merge sort on 4 million 64 bit pseudo random integers: quicksort took 45% longer, with 30% more splice operations (link updates or node "moves") and other overheads.
Original answer
For linked lists, there is an iterative bottom up version of merge sort that doesn't scan lists to split them, which avoids the issue of slow random access performance. A bottom up merge sort for linked lists uses a small (25 to 32 entry) array of pointers to nodes. Time complexity is O(n log(n)), and space complexity is O(1) (the array of 25 to 32 pointers to nodes).
At that web page
http://www.geeksforgeeks.org/merge-sort-for-linked-list
I've posted a few comments, including a link to a working example of bottom up merge sort for linked list, but never received a response from that group. Link to working example used for that web site:
http://code.geeksforgeeks.org/Mcr1Bf
As for quick sort without random access, the first node could be used as the pivot. Three lists would be created, one list for nodes < pivot, one list for nodes == pivot, one list for nodes > pivot. Recursion would be used on the two lists for nodes != pivot. This has worst case time complexity of O(n^2), and worst case stack space complexity of O(n). The stack space complexity can be reduced to O(log(n)), by only using recursion on the shorter list with nodes != pivot, then looping back to sort the longer list using the first node of the longer list as the new pivot. Keeping track of the last node in a list, such as using a tail pointer to a circular list, would allow for quick concatenation of the other two lists. Worst case time complexity remains at O(n^2).
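A minimal sketch of that scheme (my own illustration, assuming the NODE type from the example code below): QuickSortTail(ppHead, pList, rest) stores sorted(pList) followed by rest into *ppHead, recursing only on the shorter of the "<" and ">" partitions and looping on the longer one, so stack depth stays O(log(n)):
static void QuickSortTail(NODE **ppHead, NODE *pList, NODE *rest)
{
    while (pList != NULL && pList->next != NULL) {
        int pivot = pList->data;            /* first node is the pivot */
        int nLess = 0, nGreater = 0;
        NODE *less = NULL, *greater = NULL;
        NODE **ppLess = &less, **ppGreater = &greater;
        NODE *equal = pList;                /* pivot heads the "==" list */
        NODE **ppEqual = &pList->next;
        NODE *node = pList->next;
        while (node != NULL) {              /* split into three lists */
            NODE *next = node->next;
            if (node->data < pivot) {
                *ppLess = node; ppLess = &node->next; nLess++;
            } else if (node->data > pivot) {
                *ppGreater = node; ppGreater = &node->next; nGreater++;
            } else {
                *ppEqual = node; ppEqual = &node->next;
            }
            node = next;
        }
        *ppLess = NULL;
        *ppGreater = NULL;
        if (nLess <= nGreater) {            /* recurse on the shorter list */
            QuickSortTail(ppHead, less, equal);
            ppHead = ppEqual;               /* then loop on the longer one */
            pList = greater;
        } else {
            QuickSortTail(ppEqual, greater, rest);
            rest = equal;
            pList = less;
        }
    }
    if (pList == NULL)                      /* 0 or 1 nodes left to sort */
        *ppHead = rest;
    else {
        pList->next = rest;
        *ppHead = pList;
    }
}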
It should be pointed out that if you have the space, it's usually much faster to move the linked list to an array (or vector), sort the array, and create a new sorted list from the sorted array.
Example C code:
#include <stdio.h>
#include <stdlib.h>
typedef struct NODE_{
struct NODE_ * next;
int data;
}NODE;
/* merge two already sorted lists */
/* compare uses pSrc2 < pSrc1 to follow the STL rule */
/* of only using < and not <= */
NODE * MergeLists(NODE *pSrc1, NODE *pSrc2)
{
NODE *pDst = NULL; /* destination head ptr */
NODE **ppDst = &pDst; /* ptr to head or prev->next */
if(pSrc1 == NULL)
return pSrc2;
if(pSrc2 == NULL)
return pSrc1;
while(1){
if(pSrc2->data < pSrc1->data){ /* if src2 < src1 */
*ppDst = pSrc2;
pSrc2 = *(ppDst = &(pSrc2->next));
if(pSrc2 == NULL){
*ppDst = pSrc1;
break;
}
} else { /* src1 <= src2 */
*ppDst = pSrc1;
pSrc1 = *(ppDst = &(pSrc1->next));
if(pSrc1 == NULL){
*ppDst = pSrc2;
break;
}
}
}
return pDst;
}
/* sort a list using array of pointers to list */
/* aList[i] == NULL or ptr to list with 2^i nodes */
#define NUMLISTS 32 /* number of lists */
NODE * SortList(NODE *pList)
{
NODE * aList[NUMLISTS]; /* array of lists */
NODE * pNode;
NODE * pNext;
int i;
if(pList == NULL) /* check for empty list */
return NULL;
for(i = 0; i < NUMLISTS; i++) /* init array */
aList[i] = NULL;
pNode = pList; /* merge nodes into array */
while(pNode != NULL){
pNext = pNode->next;
pNode->next = NULL;
for(i = 0; (i < NUMLISTS) && (aList[i] != NULL); i++){
pNode = MergeLists(aList[i], pNode);
aList[i] = NULL;
}
if(i == NUMLISTS) /* don't go beyond end of array */
i--;
aList[i] = pNode;
pNode = pNext;
}
pNode = NULL; /* merge array into one list */
for(i = 0; i < NUMLISTS; i++)
pNode = MergeLists(aList[i], pNode);
return pNode;
}
/* allocate memory for a list */
/* create list of nodes with pseudo-random data */
NODE * CreateList(int count)
{
NODE *pList;
NODE *pNode;
int i;
int r;
/* allocate nodes */
pList = (NODE *)malloc(count * sizeof(NODE));
if(pList == NULL)
return NULL;
pNode = pList; /* init nodes */
for(i = 0; i < count; i++){
r = (((int)((rand()>>4) & 0xff))<< 0);
r += (((int)((rand()>>4) & 0xff))<< 8);
r += (((int)((rand()>>4) & 0xff))<<16);
r += (((int)((rand()>>4) & 0x7f))<<24);
pNode->data = r;
pNode->next = pNode+1;
pNode++;
}
(--pNode)->next = NULL;
return pList;
}
#define NUMNODES (1024) /* number of nodes */
int main(void)
{
void *pMem; /* ptr to allocated memory */
NODE *pList; /* ptr to list */
NODE *pNode;
int data;
/* allocate memory and create list */
if(NULL == (pList = CreateList(NUMNODES)))
return(0);
pMem = pList; /* save ptr to mem */
pList = SortList(pList); /* sort the list */
data = pList->data; /* check the sort */
while(pList = pList->next){
if(data > pList->data){
printf("failed\n");
break;
}
data = pList->data;
}
if(pList == NULL)
printf("passed\n");
free(pMem); /* free memory */
return(0);
}
You can split the list around a pivot element in linear time using constant extra memory (even though it's quite painful to implement for a singly-linked list), so quicksort would have the same time complexity as merge sort on average (the good thing about merge sort is that it's O(N log N) in the worst case). So they can be the same in terms of asymptotic behavior.
It can be hard to tell which one is faster (because the real run time is a property of an implementation, not the algorithm itself).
However, a partition that uses a random pivot is quite a mess for a singly linked list (it's possible, but the method I can think of has a larger constant than just getting two halves for the merge sort). Using the first or the last element as a pivot has an obvious issue: it works in O(N^2) for sorted (or nearly sorted) lists. Taking this into account, I'd say that merge sort would be the more reasonable choice in most cases.
As already pointed out, if singly linked lists are used, merge sort and quicksort have the same average running time: O(n log n).
I'm not 100% sure which partition algorithm you have in mind, but the one sweeping algorithm I can come up with would remove the current element from the list if it is larger than the pivot element and insert it at the end of the list. Making this change takes at least 3 operations:
the link of the parent element must be changed
the link of the (old) last element must be changed
the record of which element is last must be updated
However, this must be done in only about 50% of the cases, so on average 1.5 link changes per element during the partition function.
On the other hand, during the merge function, in about 50% of the cases two consecutive elements in the linked list come from the same original list -> there is nothing to do, because these elements are already linked. In the other cases, we have to change one link - to the head of the other list. On average, that is 0.5 changes per element for the merge function.
Clearly, one has to know the exact costs of the operations to know the final result, so this is only a hand-waving explanation.
Expanding on rcgldr's answer, I wrote a simplistic(1) implementation of Quick Sort on linked lists using the first element as pivot (which behaves pathologically badly on sorted lists) and ran a benchmark on lists with pseudo-random data.
I implemented Quick Sort using recursion but taking care of avoiding a stack overflow on pathological cases by recursing only on the smaller half.
I also implemented the proposed alternative with an auxiliary array of pointers to the nodes.
Here is the code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
typedef struct NODE {
struct NODE *next;
int data;
} NODE;
/* merge two already sorted lists */
/* compare uses pSrc2 < pSrc1 to follow the STL rule */
/* of only using < and not <= */
NODE *MergeLists(NODE *pSrc1, NODE *pSrc2) {
NODE *pDst = NULL; /* destination head ptr */
NODE **ppDst = &pDst; /* ptr to head or prev->next */
for (;;) {
if (pSrc2->data < pSrc1->data) { /* if src2 < src1 */
*ppDst = pSrc2;
pSrc2 = *(ppDst = &(pSrc2->next));
if (pSrc2 == NULL) {
*ppDst = pSrc1;
break;
}
} else { /* src1 <= src2 */
*ppDst = pSrc1;
pSrc1 = *(ppDst = &(pSrc1->next));
if (pSrc1 == NULL) {
*ppDst = pSrc2;
break;
}
}
}
return pDst;
}
/* sort a list using array of pointers to list */
NODE *MergeSort(NODE *pNode) {
#define NUMLISTS 32 /* number of lists */
NODE *aList[NUMLISTS]; /* array of lists */
/* aList[i] == NULL or ptr to list with 2^i nodes */
int i, n = 0;
while (pNode != NULL) {
NODE *pNext = pNode->next;
pNode->next = NULL;
for (i = 0; i < n && aList[i] != NULL; i++) {
pNode = MergeLists(aList[i], pNode);
aList[i] = NULL;
}
if (i == NUMLISTS) /* don't go beyond end of array */
i--;
else
if (i == n) /* extend array */
n++;
aList[i] = pNode;
pNode = pNext;
}
for (i = 0; i < n; i++) {
if (!pNode)
pNode = aList[i];
else if (aList[i])
pNode = MergeLists(aList[i], pNode);
}
return pNode;
}
void QuickSortRec(NODE **pStart, NODE *pList, NODE *stop) {
NODE *pivot, *left, *right;
NODE **ppivot, **pleft, **pright;
int data, nleft, nright;
while (pList != stop && pList->next != stop) {
data = pList->data; // use the first node as pivot
pivot = pList;
ppivot = &pList->next;
pleft = &left;
pright = &right;
nleft = nright = 0;
while ((pList = pList->next) != stop) {
if (data == pList->data) {
*ppivot = pList;
ppivot = &pList->next;
} else
if (data > pList->data) {
nleft++;
*pleft = pList;
pleft = &pList->next;
} else {
nright++;
*pright = pList;
pright = &pList->next;
}
}
*pleft = pivot;
*pright = stop;
*ppivot = right;
if (nleft >= nright) { // recurse on the smaller part
if (nright > 1)
QuickSortRec(ppivot, right, stop);
pList = left;
stop = pivot;
} else {
if (nleft > 1)
QuickSortRec(pStart, left, pivot);
pStart = ppivot;
pList = right;
}
}
*pStart = pList;
}
NODE *QuickSort(NODE *pList) {
QuickSortRec(&pList, pList, NULL);
return pList;
}
int NodeCmp(const void *a, const void *b) {
NODE *aa = *(NODE * const *)a;
NODE *bb = *(NODE * const *)b;
return (aa->data > bb->data) - (aa->data < bb->data);
}
NODE *QuickSortA(NODE *pList) {
NODE *pNode;
NODE **pArray;
int i, len;
/* compute the length of the list */
for (pNode = pList, len = 0; pNode; pNode = pNode->next)
len++;
if (len > 1) {
/* allocate an array of NODE pointers */
if ((pArray = malloc(len * sizeof(NODE *))) == NULL) {
QuickSortRec(&pList, pList, NULL);
return pList;
}
/* initialize the array from the list */
for (pNode = pList, i = 0; pNode; pNode = pNode->next)
pArray[i++] = pNode;
qsort(pArray, len, sizeof(*pArray), NodeCmp);
for (i = 0; i < len - 1; i++)
pArray[i]->next = pArray[i + 1];
pArray[i]->next = NULL;
pList = pArray[0];
free(pArray);
}
return pList;
}
int isSorted(NODE *pList) {
if (pList) {
int data = pList->data;
while ((pList = pList->next) != NULL) {
if (data > pList->data)
return 0;
data = pList->data;
}
}
return 1;
}
void test(int count) {
NODE *pMem1, *pMem2, *pMem3;
NODE *pList1, *pList2, *pList3;
int i;
time_t t1, t2, t3;
/* create linear lists of nodes with pseudo-random data */
srand(clock());
if (count == 0
|| (pMem1 = malloc(count * sizeof(NODE))) == NULL
|| (pMem2 = malloc(count * sizeof(NODE))) == NULL
|| (pMem3 = malloc(count * sizeof(NODE))) == NULL)
return;
for (i = 0; i < count; i++) {
int data = rand();
pMem1[i].data = data;
pMem1[i].next = &pMem1[i + 1];
pMem2[i].data = data;
pMem2[i].next = &pMem2[i + 1];
pMem3[i].data = data;
pMem3[i].next = &pMem3[i + 1];
}
pMem1[count - 1].next = NULL;
pMem2[count - 1].next = NULL;
pMem3[count - 1].next = NULL;
t1 = clock();
pList1 = MergeSort(pMem1);
t1 = clock() - t1;
t2 = clock();
pList2 = QuickSort(pMem2);
t2 = clock() - t2;
t3 = clock();
pList3 = QuickSortA(pMem3);
t3 = clock() - t3;
printf("%10d", count);
if (isSorted(pList1))
printf(" %10.3fms", t1 * 1000.0 / CLOCKS_PER_SEC);
else
printf(" failed");
if (isSorted(pList2))
printf(" %10.3fms", t2 * 1000.0 / CLOCKS_PER_SEC);
else
printf(" failed");
if (isSorted(pList3))
printf(" %10.3fms", t3 * 1000.0 / CLOCKS_PER_SEC);
else
printf(" failed");
printf("\n");
free(pMem1);
free(pMem2);
free(pMem3);
}
int main(int argc, char **argv) {
int i;
printf(" N MergeSort QuickSort QuickSortA\n");
if (argc > 1) {
for (i = 1; i < argc; i++)
test(strtol(argv[i], NULL, 0));
} else {
for (i = 10; i < 23; i++)
test(1 << i);
}
return 0;
}
Here is the benchmark on lists with geometrically increasing lengths, showing N log(N) times:
N MergeSort QuickSort QuickSortA
1024 0.052ms 0.057ms 0.105ms
2048 0.110ms 0.114ms 0.190ms
4096 0.283ms 0.313ms 0.468ms
8192 0.639ms 0.834ms 1.022ms
16384 1.233ms 1.491ms 1.930ms
32768 2.702ms 3.786ms 4.392ms
65536 8.267ms 10.442ms 13.993ms
131072 23.461ms 34.229ms 27.278ms
262144 51.593ms 71.619ms 51.663ms
524288 114.656ms 240.946ms 120.556ms
1048576 284.717ms 535.906ms 279.828ms
2097152 707.635ms 1465.617ms 636.149ms
4194304 1778.418ms 3508.703ms 1424.820ms
QuickSort() is approximately half as fast as MergeSort() on these datasets, but would behave much worse on partially ordered sets and other pathological cases, whereas MergeSort has a regular time complexity that does not depend on the dataset and performs a stable sort. QuickSortA() performs marginally better than MergeSort() for large datasets on my system, but performance will depend on the actual implementation of qsort, which does not necessarily use a Quick Sort algorithm.
MergeSort() does not allocate any extra memory and performs a stable sort, which makes it a clear winner to sort lists.
(1) well, not so simplistic after all, but the choice of pivot is too simple
I want to shuffle a list of unique items, but not do an entirely random shuffle. I need to be sure that no element in the shuffled list is at the same position as in the original list. Thus, if the original list is (A, B, C, D, E), this result would be OK: (C, D, B, E, A), but this one would not: (C, E, A, D, B) because "D" is still the fourth item. The list will have at most seven items. Extreme efficiency is not a consideration. I think this modification to Fisher/Yates does the trick, but I can't prove it mathematically:
function shuffle(data) {
for (var i = 0; i < data.length - 1; i++) {
var j = i + 1 + Math.floor(Math.random() * (data.length - i - 1));
var temp = data[j];
data[j] = data[i];
data[i] = temp;
}
}
You are looking for a derangement of your entries.
First of all, your algorithm works in the sense that it outputs a random derangement, i.e. a permutation with no fixed point. However it has an enormous flaw (which you might not mind, but is worth keeping in mind): some derangements cannot be obtained with your algorithm. In other words, it gives probability zero to some possible derangements, so the resulting distribution is definitely not uniformly random.
One possible solution, as suggested in the comments, would be to use a rejection algorithm:
pick a permutation uniformly at random
if it has no fixed points, return it
otherwise retry
Asymptotically, the probability of obtaining a derangement is close to 1/e ≈ 0.3679 (as seen in the Wikipedia article). This means that to obtain a derangement you will need to generate an average of e ≈ 2.718 permutations, which is quite costly.
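In C, the rejection method sketched above might look like this (a sketch only, for n >= 2; rand() % is slightly biased but fine for illustration):
#include <stdlib.h>

/* standard Fisher-Yates shuffle of a[0..n-1] */
static void shuffle(int *a, int n)
{
    for (int i = n - 1; i > 0; i--) {
        int j = rand() % (i + 1);
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }
}

/* fill a[0..n-1] with a random derangement by rejection:
   about e ~ 2.718 shuffles are needed on average */
static void random_derangement(int *a, int n)
{
    for (;;) {
        int fixed = 0;
        for (int i = 0; i < n; i++)
            a[i] = i;
        shuffle(a, n);
        for (int i = 0; i < n; i++)
            if (a[i] == i) { fixed = 1; break; }
        if (!fixed)
            return;
    }
}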
A better way would be to reject at each step of the algorithm. In pseudocode, something like this (assuming the original array contains i at position i, i.e. a[i]==i):
for (i = 1 to n-1) {
do {
j = rand(i, n) // random integer from i to n inclusive
} while a[j] == i // rejection part
swap a[i] a[j]
}
The main difference from your algorithm is that we allow j to be equal to i, but only if it does not produce a fixed point. It is slightly longer to execute (due to the rejection part), and demands that you be able to check if an entry is at its original place or not, but it has the advantage that it can produce every possible derangement (uniformly, for that matter).
I am guessing non-rejection algorithms should exist, but I would believe them to be less straightforward.
Edit:
My algorithm is actually bad: you still have a chance of ending with the last point unshuffled, and the distribution is not random at all; see the marginal distributions of a simulation (plot not reproduced here).
An algorithm that produces uniformly distributed derangements can be found here, with some context on the problem, thorough explanations and analysis.
Second Edit:
Actually, your algorithm is known as Sattolo's algorithm, which produces all cycles with equal probability. So any derangement which is not a cycle but a product of several disjoint cycles cannot be obtained with the algorithm. For example, with four elements, the permutation that exchanges 1 and 2, and 3 and 4, is a derangement but not a cycle.
If you don't mind obtaining only cycles, then Sattolo's algorithm is the way to go, it's actually much faster than any uniform derangement algorithm, since no rejection is needed.
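For reference, Sattolo's algorithm in C differs from Fisher-Yates only in drawing j strictly below i (a sketch; the caller initializes a[i] = i):
#include <stdlib.h>

/* permutes a[0..n-1] into a uniformly random single n-cycle;
   applied to the identity, the result is always a derangement */
static void sattolo(int *a, int n)
{
    for (int i = n - 1; i > 0; i--) {
        int j = rand() % i;         /* 0 <= j < i, so j can never equal i */
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }
}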
As @FelixCQ has mentioned, the shuffles you are looking for are called derangements. Constructing uniformly randomly distributed derangements is not a trivial problem, but some results are known in the literature. The most obvious way to construct derangements is by the rejection method: you generate uniformly randomly distributed permutations using an algorithm like Fisher-Yates and then reject permutations with fixed points. The average running time of that procedure is e*n + o(n), where e is Euler's number, 2.71828... That would probably work in your case.
The other major approach for generating derangements is to use a recursive algorithm. However, unlike Fisher-Yates, we have two branches to the algorithm: the last item in the list can be swapped with another item (i.e., part of a two-cycle), or can be part of a larger cycle. So at each step, the recursive algorithm has to branch in order to generate all possible derangements. Furthermore, the decision of whether to take one branch or the other has to be made with the correct probabilities.
Let D(n) be the number of derangements of n items. At each stage, the number of branches taking the last item to two-cycles is (n-1)D(n-2), and the number of branches taking the last item to larger cycles is (n-1)D(n-1). This gives us a recursive way of calculating the number of derangements, namely D(n)=(n-1)(D(n-2)+D(n-1)), and gives us the probability of branching to a two-cycle at any stage, namely (n-1)D(n-2)/D(n-1).
Now we can construct derangements by deciding to which type of cycle the last element belongs, swapping the last element to one of the n-1 other positions, and repeating. It can be complicated to keep track of all the branching, however, so in 2008 some researchers developed a streamlined algorithm using those ideas. You can see a walkthrough at http://www.cs.upc.edu/~conrado/research/talks/analco08.pdf . The running time of the algorithm is proportional to 2n + O(log^2 n), a 36% improvement in speed over the rejection method.
I have implemented their algorithm in Java. Using longs works for n up to 22 or so. Using BigIntegers extends the algorithm to n=170 or so. Using BigIntegers and BigDecimals extends the algorithm to n=40000 or so (the limit depends on memory usage in the rest of the program).
package io.github.edoolittle.combinatorics;
import java.math.BigInteger;
import java.math.BigDecimal;
import java.math.MathContext;
import java.util.Random;
import java.util.HashMap;
import java.util.TreeMap;
public final class Derangements {
// cache calculated values to speed up recursive algorithm
private static HashMap<Integer,BigInteger> numberOfDerangementsMap
= new HashMap<Integer,BigInteger>();
private static int greatestNCached = -1;
// load numberOfDerangementsMap with initial values D(0)=1 and D(1)=0
static {
numberOfDerangementsMap.put(0,BigInteger.valueOf(1));
numberOfDerangementsMap.put(1,BigInteger.valueOf(0));
greatestNCached = 1;
}
private static Random rand = new Random();
// private default constructor so class isn't accidentally instantiated
private Derangements() { }
public static BigInteger numberOfDerangements(int n)
throws IllegalArgumentException {
if (numberOfDerangementsMap.containsKey(n)) {
return numberOfDerangementsMap.get(n);
} else if (n>=2) {
// pre-load the cache to avoid stack overflow (occurs near n=5000)
for (int i=greatestNCached+1; i<n; i++) numberOfDerangements(i);
greatestNCached = n-1;
// recursion for derangements: D(n) = (n-1)*(D(n-1) + D(n-2))
BigInteger Dn_1 = numberOfDerangements(n-1);
BigInteger Dn_2 = numberOfDerangements(n-2);
BigInteger Dn = (Dn_1.add(Dn_2)).multiply(BigInteger.valueOf(n-1));
numberOfDerangementsMap.put(n,Dn);
greatestNCached = n;
return Dn;
} else {
throw new IllegalArgumentException("argument must be >= 0 but was " + n);
}
}
public static int[] randomDerangement(int n)
throws IllegalArgumentException {
if (n<2)
throw new IllegalArgumentException("argument must be >= 2 but was " + n);
int[] result = new int[n];
boolean[] mark = new boolean[n];
for (int i=0; i<n; i++) {
result[i] = i;
mark[i] = false;
}
int unmarked = n;
for (int i=n-1; i>=0; i--) {
if (unmarked<2) break; // can't move anything else
if (mark[i]) continue; // can't move item at i if marked
// use the rejection method to generate random unmarked index j < i;
// this could be replaced by more straightforward technique
int j;
while (mark[j=rand.nextInt(i)]);
// swap two elements of the array
int temp = result[i];
result[i] = result[j];
result[j] = temp;
// mark position j as end of cycle with probability (u-1)D(u-2)/D(u)
double probability
= (new BigDecimal(numberOfDerangements(unmarked-2))).
multiply(new BigDecimal(unmarked-1)).
divide(new BigDecimal(numberOfDerangements(unmarked)),
MathContext.DECIMAL64).doubleValue();
if (rand.nextDouble() < probability) {
mark[j] = true;
unmarked--;
}
// position i now becomes out of play so we could mark it
//mark[i] = true;
// but we don't need to because loop won't touch it from now on
// however we do have to decrement unmarked
unmarked--;
}
return result;
}
// unit tests
public static void main(String[] args) {
// test derangement numbers D(i)
for (int i=0; i<100; i++) {
System.out.println("D(" + i + ") = " + numberOfDerangements(i));
}
System.out.println();
// test quantity (u-1)D_(u-2)/D_u for overflow, inaccuracy
for (int u=2; u<100; u++) {
double d = numberOfDerangements(u-2).doubleValue() * (u-1) /
numberOfDerangements(u).doubleValue();
System.out.println((u-1) + " * D(" + (u-2) + ") / D(" + u + ") = " + d);
}
System.out.println();
// test derangements for correctness, uniform distribution
int size = 5;
long reps = 10000000;
TreeMap<String,Integer> countMap = new TreeMap<String,Integer>();
System.out.println("Derangement\tCount");
System.out.println("-----------\t-----");
for (long rep = 0; rep < reps; rep++) {
int[] d = randomDerangement(size);
String s = "";
String sep = "";
if (size > 10) sep = " ";
for (int i=0; i<d.length; i++) {
s += d[i] + sep;
}
if (countMap.containsKey(s)) {
countMap.put(s,countMap.get(s)+1);
} else {
countMap.put(s,1);
}
}
for (String key : countMap.keySet()) {
System.out.println(key + "\t\t" + countMap.get(key));
}
System.out.println();
// large random derangement
int size1 = 1000;
System.out.println("Random derangement of " + size1 + " elements:");
int[] d1 = randomDerangement(size1);
for (int i=0; i<d1.length; i++) {
System.out.print(d1[i] + " ");
}
System.out.println();
System.out.println();
System.out.println("We start to run into memory issues around u=40000:");
{
// increase this number from 40000 to around 50000 to trigger
// out of memory-type exceptions
int u = 40003;
BigDecimal d = (new BigDecimal(numberOfDerangements(u-2))).
multiply(new BigDecimal(u-1)).
divide(new BigDecimal(numberOfDerangements(u)),MathContext.DECIMAL64);
System.out.println((u-1) + " * D(" + (u-2) + ") / D(" + u + ") = " + d);
}
}
}
In C++:
#include <cstdlib>   // rand
#include <utility>   // std::swap
#include <vector>

template <class T> void shuffle(std::vector<T>& arr)
{
int size = arr.size();
for (auto i = 1; i < size; i++)
{
int n = rand() % (size - i) + i;
std::swap(arr[i-1], arr[n]);
}
}
Sometimes I come across the following interview question: how do you implement 3 stacks with one array? Of course, any static allocation is not a solution.
Space (not time) efficient. You could:
1) Define two stacks beginning at the array endpoints and growing in opposite directions.
2) Define the third stack as starting in the middle and growing in any direction you want.
3) Redefine the Push op, so that when the operation is going to overwrite another stack, you shift the whole middle stack in the opposite direction before pushing.
You need to store the stack top for the first two stacks, and the beginning and end of the third stack in some structure.
Edit
The shifting is done with an equal space partitioning policy, although other strategies could be chosen depending upon your problem heuristics.
Edit
Following @ruslik's suggestion, the middle stack could be implemented using an alternating sequence for subsequent pushes. The resulting stack structure will be something like:
| Elem 6 | Elem 4 | Elem 2 | Elem 0 | Elem 1 | Elem 3 | Elem 5 |
In this case, you'll need to store the number n of elements on the middle stack and use the function:
f[n_] := 1/4 ( (-1)^n (-1 + 2 n) + 1) + BS3
to know the next array element to use for this stack.
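In C, that closed form reduces to a small helper (a sketch; mid_offset is a made-up name, and n is the 1-based count of pushes, as above):
/* offset of the n-th pushed element (n >= 1) of the middle stack,
   relative to its base index BS3: 0, +1, -1, +2, -2, ... */
static int mid_offset(int n)
{
    return (n % 2 == 0) ? n / 2 : -(n / 2);
}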
Although probably this will lead to less shifting, the implementation is not homogeneous for the three stacks, and inhomogeneity (you know) leads to special cases, more bugs and difficulties to maintain code.
As long as you try to arrange all items from one stack together at one "end" of the array, you're lacking space for the third stack.
However, you could "intersperse" the stack elements. Elements of the first stack are at indices i * 3, elements of the second stack are at indices i * 3 + 1, elements of the third stack are at indices i * 3 + 2 (where i is an integer).
+----+----+----+----+----+----+----+----+----+----+----+----+----+..
| A1 : B1 : C1 | A2 : B2 : C2 | : B3 | C3 | : B4 : | :
+----+----+----+----+----+----+----+----+----+----+----+----+----+..
^ ^ ^
A´s top C´s top B´s top
Of course, this scheme is going to waste space, especially when the stacks have unequal sizes. You could create arbitrarily complex schemes similar to the one described above, but without knowing any more constraints for the posed question, I'll stop here.
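For illustration, a minimal C sketch of this strided layout (hypothetical names; stack s, with s = 0, 1 or 2, uses indices s, s+3, s+6, ...):
#define N 12                        /* array size; each stack gets N/3 slots */
static int arr[N];
static int count[3];                /* number of elements in each stack */

static int push(int s, int v)       /* 0 on success, -1 if this stack is full */
{
    int idx = s + 3 * count[s];
    if (idx >= N)
        return -1;
    arr[idx] = v;
    count[s]++;
    return 0;
}

static int pop(int s, int *v)       /* 0 on success, -1 if this stack is empty */
{
    if (count[s] == 0)
        return -1;
    count[s]--;
    *v = arr[s + 3 * count[s]];
    return 0;
}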
Update:
Due to the comments below, which do have a very good point, it should be added that interspersing is not necessary, and may even degrade performance when compared to a much simpler memory layout such as the following:
+----+----+----+----+----+----+----+----+----+----+----+----+----+..
| A1 : A2 : : : | B1 : B2 : B3 : B4 : | C1 : C2 : C3 :
+----+----+----+----+----+----+----+----+----+----+----+----+----+..
^ ^ ^
A´s top B´s top C´s top
i.e. giving each stack its own contiguous block of memory. If the real question is indeed how to make the best possible use of a fixed amount of memory, in order to not limit each stack more than necessary, then my answer isn't going to be very helpful.
In that case, I'd go with @belisarius' answer: one stack goes to the "bottom" end of the memory area, growing "upwards"; another stack goes to the "top" end of the memory area, growing "downwards"; and one stack is in the middle that grows in any direction but is able to move when it gets too close to one of the other stacks.
Maintain a single arena for all three stacks. Each element pushed onto the stack has a backwards pointer to its previous element. The bottom of each stack has a pointer to NULL/None.
The arena maintains a pointer to the next item in the free space. A push adds this element to the respective stack and marks it as no longer in the free space. A pop removes the element from the respective stack and adds it to the free list.
From this sketch, elements in stacks need a reverse pointer and space for the data. Elements in the free space need two pointers, so the free space is implemented as a doubly linked list.
The object containing the three stacks needs a pointer to the top of each stack plus a pointer to the head of the free list.
This data structure uses all the space and pushes and pops in constant time. There is overhead of a single pointer for all data elements in a stack and the free list elements use the maximum of (two pointers, one pointer + one element).
Later: python code goes something like this. Note use of integer indexes as pointers.
class StackContainer(object):
def __init__(self, stack_count=3, size=256):
self.stack_count = stack_count
self.stack_top = [None] * stack_count
self.size = size
# Create arena of doubly linked list
self.arena = [{'prev': x-1, 'next': x+1} for x in range(self.size)]
self.arena[0]['prev'] = None
self.arena[self.size-1]['next'] = None
self.arena_head = 0
def _allocate(self):
new_pos = self.arena_head
free = self.arena[new_pos]
next = free['next']
if next:
self.arena[next]['prev'] = None
self.arena_head = next
else:
self.arena_head = None
return new_pos
def _dump(self, stack_num):
assert 0 <= stack_num < self.stack_count
curr = self.stack_top[stack_num]
while curr is not None:
d = self.arena[curr]
print '\t', curr, d
curr = d['prev']
def _dump_all(self):
print '-' * 30
for i in range(self.stack_count):
print "Stack %d" % i
self._dump(i)
def _dump_arena(self):
print "Dump arena"
curr = self.arena_head
while curr is not None:
d = self.arena[curr]
print '\t', d
curr = d['next']
def push(self, stack_num, value):
assert 0 <= stack_num < self.stack_count
# Find space in arena for new value, update pointers
new_pos = self._allocate()
# Put value-to-push into a stack element
d = {'value': value, 'prev': self.stack_top[stack_num], 'pos': new_pos}
self.arena[new_pos] = d
self.stack_top[stack_num] = new_pos
def pop(self, stack_num):
assert 0 <= stack_num < self.stack_count
top = self.stack_top[stack_num]
d = self.arena[top]
assert d['pos'] == top
self.stack_top[stack_num] = d['prev']
arena_elem = {'prev': None, 'next': self.arena_head}
# Link the current head to the new head
head = self.arena[self.arena_head]
head['prev'] = top
# Set the curr_pos to be the new head
self.arena[top] = arena_elem
self.arena_head = top
return d['value']
if __name__ == '__main__':
sc = StackContainer(3, 10)
sc._dump_arena()
sc.push(0, 'First')
sc._dump_all()
sc.push(0, 'Second')
sc.push(0, 'Third')
sc._dump_all()
sc.push(1, 'Fourth')
sc._dump_all()
print sc.pop(0)
sc._dump_all()
print sc.pop(1)
sc._dump_all()
I have a solution for this question. The following program makes the best use of the array (in my case, an array of StackNode objects). Let me know if you have any questions about this. [It's pretty late out here, so I didn't bother to document the code - I know, I should :) ]
public class StackNode {
int value;
int prev;
StackNode(int value, int prev) {
this.value = value;
this.prev = prev;
}
}
public class StackMFromArray {
private StackNode[] stackNodes = null;
private static int CAPACITY = 10;
private int freeListTop = 0;
private int size = 0;
private int[] stackPointers = { -1, -1, -1 };
StackMFromArray() {
stackNodes = new StackNode[CAPACITY];
initFreeList();
}
// while a node is on the free list, its "prev" field holds the index of the next free node
private void initFreeList() {
for (int i = 0; i < CAPACITY; i++) {
stackNodes[i] = new StackNode(0, i + 1);
}
}
public void push(int stackNum, int value) throws Exception {
int freeIndex;
int currentStackTop = stackPointers[stackNum - 1];
freeIndex = getFreeNodeIndex();
StackNode n = stackNodes[freeIndex];
n.prev = currentStackTop;
n.value = value;
stackPointers[stackNum - 1] = freeIndex;
}
public StackNode pop(int stackNum) throws Exception {
int currentStackTop = stackPointers[stackNum - 1];
if (currentStackTop == -1) {
throw new Exception("UNDERFLOW");
}
StackNode temp = stackNodes[currentStackTop];
stackPointers[stackNum - 1] = temp.prev;
freeStackNode(currentStackTop);
return temp;
}
private int getFreeNodeIndex() throws Exception {
int temp = freeListTop;
if (size >= CAPACITY)
throw new Exception("OVERFLOW");
freeListTop = stackNodes[temp].prev;
size++;
return temp;
}
private void freeStackNode(int index) {
stackNodes[index].prev = freeListTop;
freeListTop = index;
size--;
}
public static void main(String args[]) {
// Test Driver
StackMFromArray mulStack = new StackMFromArray();
try {
mulStack.push(1, 11);
mulStack.push(1, 12);
mulStack.push(2, 21);
mulStack.push(3, 31);
mulStack.push(3, 32);
mulStack.push(2, 22);
mulStack.push(1, 13);
StackNode node = mulStack.pop(1);
node = mulStack.pop(1);
System.out.println(node.value);
mulStack.push(1, 13);
} catch (Exception e) {
e.printStackTrace();
}
}
}
For simplicity if not very efficient memory usage, you could[*] divide the array up into list nodes, add them all to a list of free nodes, and then implement your stacks as linked lists, taking nodes from the free list as required. There's nothing special about the number 3 in this approach, though.
[*] in a low-level language where memory can be used to store pointers, or if the stack elements are of a type such as int that can represent an index into the array.
There are many solutions to this problem already stated on this page. The fundamental questions, IMHO are:
How long does each push/pop operation take?
How much space is used? Specifically, what is the smallest number of elements that can be pushed to the three stacks to cause the data structure to run out of space?
As far as I can tell, each solution already posted on this page either can take up to linear time for a push/pop or can run out of space with a linear number of spaces still empty.
In this post, I will reference solutions that perform much better, and I will present the simplest one.
In order to describe the solution space more carefully, I will refer to two functions of a data structure in the following way:
A structure that takes O(f(n)) amortized time to perform a push/pop and does not run out of space unless the three stacks hold at least n - O(g(n)) items will be referred to as an (f,g) structure. Smaller f and g are better. Every structure already posted on this page has n for either the time or the space. I will demonstrate a (1,√n) structure.
This is all based on:
Michael L. Fredman and Deborah L. Goldsmith, "Three Stacks", in Journal of Algorithms, Volume 17, Issue 1, July 1994, Pages 45-70
An earlier version appeared in the 29th Annual Symposium on Foundations of Computer Science (FOCS) in 1988
Deborah Louise Goldsmith's PhD thesis from University of California, San Diego, Department of Electrical Engineering/Computer Science in 1987, "Efficient memory management for >= 3 stacks"
They show, though I will not present, a (log n/log S, S) structure for any S. This is equivalent to a (t, n^(1/t)) structure for any t. I will show a simplified version that is a (1,√n) structure.
Divide the array up into blocks of size Θ(√n). The blocks are numbered from 1 to Θ(√n), and the number of a block is called its "address". An address can be stored in an array slot instead of a real item. An item within a given block can be referred to with a number less than O(√n), and such a number is called an index. An index will also fit in an array slot.
The first block will be set aside for storing addresses and indexes, and no other block will store any addresses or indexes. The first block is called the directory. Every non-directory block will either be empty or hold elements from just one of the three stacks; that is, no block will have two elements from different stacks. Additionally, every stack will have at most one block that is partially filled -- all other blocks associated with a stack will be completely full or completely empty.
As long as there is an empty block, a push operation will be permitted to any stack. Pop operations are always permitted. When a push operation fails, the data structure is full. At that point, the number of slots not containing elements from one of the stacks is at most O(√n): two partially-filled blocks from the stacks not being pushed to, and one directory block.
Every block is ordered so that the elements closer to the front of the block (lower indexes) are closer to the bottom of the stack.
The directory holds:
Three addresses for the blocks at the top of the three stacks, or 0 if there are no blocks in a particular stack yet
Three indexes for the element at the top of the three stacks, or 0 if there are no items in a particular stack yet.
For each full or partially full block, the address of the block lower than it in the same stack, or 0 if it is the lowest block in the stack.
The address of a free block, called the leader block, or 0 if there are no free blocks
For each free block, the address of another free block, or 0 if there are no more free blocks
These last two constitute a stack, stored as a singly-linked list, of free blocks. That is, following the addresses of free blocks starting with the leader block will give a path through all the free blocks, ending in a 0.
To push an item onto a stack, find its top block and top element within that block using the directory. If there is room in that block, put the item there and return.
Otherwise, pop the stack of free blocks by changing the address of the leader block to the address of the next free block in the free block stack. Change the address and index for the stack to be the address of the just-popped free block and 1, respectively. Add the item to the just-popped block at index 1, and return.
All operations take O(1) time. Pop is symmetric.
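Below is a minimal, hypothetical C sketch of this scheme (my own illustration, not the paper's code). For clarity the directory lives in a few small separate arrays rather than being packed into block 0, which is bookkeeping only; blocks are addressed 1..NBLOCKS, with 0 meaning "none":
#define NBLOCKS 8
#define BSIZE 4                     /* block size; ~sqrt(n) in the real scheme */

static int arena[NBLOCKS][BSIZE];   /* the blocks themselves */
static int top_block[3];            /* address of each stack's top block, 0 = none */
static int top_index[3];            /* index of the top element within that block */
static int below[NBLOCKS + 1];      /* address of the next lower block in the same stack */
static int free_head;               /* head of the free-block list */
static int free_next[NBLOCKS + 1];  /* links of the free-block list */

static void init(void)
{
    for (int s = 0; s < 3; s++)
        top_block[s] = top_index[s] = 0;
    free_head = 1;
    for (int b = 1; b < NBLOCKS; b++)
        free_next[b] = b + 1;
    free_next[NBLOCKS] = 0;
}

static int push(int s, int v)       /* 0 on success, -1 when full */
{
    if (top_block[s] == 0 || top_index[s] == BSIZE - 1) {
        if (free_head == 0)
            return -1;              /* no free block left: structure is full */
        int b = free_head;          /* pop the free-block list */
        free_head = free_next[b];
        below[b] = top_block[s];    /* link the new top block to the old one */
        top_block[s] = b;
        top_index[s] = -1;
    }
    arena[top_block[s] - 1][++top_index[s]] = v;
    return 0;
}

static int pop(int s, int *v)       /* 0 on success, -1 when empty */
{
    if (top_block[s] == 0)
        return -1;
    *v = arena[top_block[s] - 1][top_index[s]--];
    if (top_index[s] < 0) {         /* block emptied: return it to the free list */
        int b = top_block[s];
        top_block[s] = below[b];
        top_index[s] = BSIZE - 1;   /* lower blocks are always completely full */
        free_next[b] = free_head;
        free_head = b;
    }
    return 0;
}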
A variant on an earlier answer: stack #1 grows from the left, and stack #2 grows from the right.
Stack #3 is in the center, but the elements grow in alternate order to the left and right. If N is the center index, the stack grows as: N, N-1, N+1, N-2, N+2, etc. A simple function converts the stack index to an array index.
I think you should divide the array into 3 pieces, with the head of the first stack at 0, the head of the second stack at n/3, and the head of the 3rd stack at n-1.
Push is then implemented as follows:
for the first and second stacks, increment the top index (i++); for the 3rd stack, decrement it (i--).
If the first stack has no space left to push, shift the 2nd stack k/3 positions forward, where k is the number of unfilled positions in the array.
If the second stack has no space left to push, shift the 2nd stack 2*k/3 positions backward, where k is the number of unfilled positions in the array.
If the third stack has no space left to push, shift the 2nd stack 2*k/3 positions backward, where k is the number of unfilled positions in the array.
We shift by k/3 and 2*k/3 when no space is left so that after the middle stack is shifted, each stack has an equal amount of space available for use.
Store the stacks in the array in such a way that the first stack goes into indexes 0, 0+3=3, 3+3=6, ...; the second one goes into indexes 1, 1+3=4, 4+3=7, ...; and the third one goes into indexes 2, 2+3=5, 5+3=8, ...
So if we mark the first stack's elements with a, the second's with b and the third's with c, we get:
a1 b1 c1 a2 b2 c2 a3 b3 c3...
There may be gaps, but we always know the three top indexes, which are stored in a 3-element topIndex array.
Partitioning the array into 3 fixed parts is not a good idea, as it will overflow if there are many elements in stack1 and very few elements in the other two.
My idea:
Keep three pointers ptr1, ptr2, ptr3 pointing to the top elements of the respective stacks.
Initially ptr1 = ptr2 = ptr3 = -1;
In the array, even-indexed slots store values, and each odd-indexed slot stores the index of the previous element of the same stack.
For example,
s1.push(1);
s2.push(4);
s3.push(3);
s1.push(2);
s3.push(7);
s1.push(10);
s1.push(5);
then our array looks like:
1, -1, 4, -1, 3, -1, 2, 0, 7, 4, 10, 6, 5, 10
and the values of pointers are:
ptr1 = 12, ptr2 = 2, ptr3 = 8
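A quick C sketch of this layout (hypothetical names; note that, as described, popped pairs are not reclaimed, so the array is consumed monotonically):
#define CAP 32
static int arr[CAP];
static int used = 0;                 /* next free (value, prev) pair */
static int ptr[3] = { -1, -1, -1 };  /* ptr1, ptr2, ptr3 from above */

static int push(int s, int v)        /* s = 0, 1 or 2 */
{
    if (used + 2 > CAP) return -1;   /* array full */
    arr[used] = v;                   /* even slot: the value */
    arr[used + 1] = ptr[s];          /* odd slot: previous top's index */
    ptr[s] = used;
    used += 2;
    return 0;
}

static int pop(int s, int *v)
{
    if (ptr[s] < 0) return -1;       /* stack empty */
    *v = arr[ptr[s]];
    ptr[s] = arr[ptr[s] + 1];        /* follow the stored back link */
    return 0;                        /* the pair itself is not reclaimed */
}
Replaying the example pushes above yields exactly the array contents and pointer values shown.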
Solution: Implementing two stacks is easy.
First stack grows from start to end while second one grows from end to start.
Overflow for any of them will not happen unless there really is no space left on the array.
For three stacks, the following is required:
An auxiliary array to maintain the parent for each node.
Variables to store the current top of each stack.
With these two in place, data from all the stacks can be interspersed in the original array and one can still do push/pop/size operations for all the stacks.
When inserting any element, insert it at the end of all the elements in the normal array.
Store current-top of that stack as parent for the new element (in the parents' array) and update current-top to the new position.
When deleting, insert NULL in the stacks array for the deleted element and reset stack-top for that stack to the parent.
When the array is full, it will have some holes corresponding to deleted elements.
At this point, either the array can be compacted to bring all free space together or a linear search can be done for free space when inserting new elements.
For further details, refer to this link: https://coderworld109.blogspot.in/2017/12/how-to-implement-3-stacks-with-one-array.html
Here is my solution for N stacks in a single array.
One constraint: the size of the array must not be less than the number of stacks.
I have used a custom exception class, StackException, in my solution; you can change the exception class to run the programme.
For multiple stacks in one array, I manage the top pointers in a separate array.
package com.practice.ds.stack;
import java.util.Scanner;
import java.util.logging.Logger;
/** Multiple stacks in a single array */
public class MultipleStack {
private static Logger logger = Logger.getLogger("MultipleStack");
private int[] array;
private int size = 10;
private int stackN = 1;
private int[] pointer;
public MultipleStack() {
this.array = new int[size];
this.pointer = new int[1];
}
public MultipleStack(int size, int stackN) throws StackException {
if (stackN > size)
throw new StackException("Input mismatch ! no of stacks can't be larger than size ");
this.size = size;
this.stackN = stackN;
init();
}
private void init() {
if (size <= 0) {
logger.info("Initialize size is " + size + " so assiginig defalt size ");
this.size = 10;
}
if (stackN < 1) {
logger.info("Initialize no of Stack is " + size + " so assiginig defalt");
this.stackN = 1;
}
this.array = new int[size];
this.pointer = new int[stackN];
initializePointer();
}
private void initializePointer() {
for (int i = 0; i < stackN; i++)
pointer[i] = (int)(i * Math.ceil((double) size / stackN) - 1);
}
public void push(int item, int sn) throws StackException {
if (full(sn))
throw new StackException(sn + " is overflowed !");
int stkPointer = pointer[sn - 1];
array[++stkPointer] = item;
pointer[sn - 1] = stkPointer;
}
public void pop(int sn) throws StackException {
if (empty(sn))
throw new StackException(sn + " is underflow !");
int peek = peek(sn);
System.out.println(peek);
pointer[sn - 1]--;
}
public int peek(int sn) throws StackException {
authenticate(sn);
return array[pointer[sn - 1]];
}
public boolean empty(int sn) throws StackException {
authenticate(sn);
return pointer[sn - 1] == (int)(((sn - 1) * Math.ceil((double) size / stackN)) - 1);
}
public boolean full(int sn) throws StackException {
authenticate(sn);
return sn == stackN ? pointer[sn - 1] == size - 1 : pointer[sn - 1] == (int)((sn) * Math.ceil((double) size / stackN)) - 1;
}
private void authenticate(int sn) throws StackException {
if (sn > stackN || sn < 1)
throw new StackException("No such stack found");
}
public static void main(String[] args) {
try (Scanner scanner = new Scanner(System.in)) {
System.out.println("Define size of the stack");
int size = scanner.nextInt();
System.out.println("total number of stacks");
int stackN = scanner.nextInt();
MultipleStack stack = new MultipleStack(size, stackN);
boolean exit = false;
do {
System.out.println("1. Push");
System.out.println("2. Pop");
System.out.println("3. Exit");
System.out.println("Choice");
int choice = scanner.nextInt();
switch (choice) {
case 1:
try {
System.out.println("Item : ");
int item = scanner.nextInt();
System.out.println("Stack Number : ");
int stk = scanner.nextInt();
stack.push(item, stk);
} catch (Exception e) {
e.printStackTrace();
}
break;
case 2:
try {
System.out.println("Stack Number : ");
int stk = scanner.nextInt();
stack.pop(stk);
} catch (Exception e) {
e.printStackTrace();
}
break;
case 3:
exit = true;
break;
default:
System.out.println("Invalid choice !");
break;
}
} while (!exit);
} catch (Exception e) {
e.printStackTrace();
}
}
}
We can generalize it to K stacks in one array. The basic idea is to:
Maintain a PriorityQueue as a min heap of currently free indexes in the allocation array.
Maintain an array of size K, that holds the top of the stack, for each of the stacks.
Create a Data class with 1) the value, 2) the index of the previous element in the allocation array, and 3) the index of the current element being pushed in the allocation array
Maintain an allocation array of type Data
Refer to the code below for a working sample implementation.
import java.util.*;
public class Main
{
// A Java class to represent k stacks in a single array of size n
public static final class KStack {
/**
* PriorityQueue as min heap to keep track of the next free index in the
* backing array.
*/
private final PriorityQueue<Integer> minHeap = new PriorityQueue<>((a,b) -> (a - b));
/**
* Keeps track of the top of the stack of each of the K stacks
*/
private final Data index[];
/**
* Backing array to hold the data of K stacks.
*/
private final Data array[];
public KStack(int noOfStacks, int sizeOfBackingArray) {
index = new Data[noOfStacks];
array = new Data[sizeOfBackingArray];
for(int i =0; i< sizeOfBackingArray; i++) {
minHeap.add(i);
}
}
public void push(int val, int stackNo) {
if(minHeap.isEmpty()) {
return;
}
int nextFreeIdx = minHeap.poll();
Data tos = index[stackNo];
if(tos == null) {
tos = new Data(val, -1 /* previous element's idx */, nextFreeIdx
/* this element's idx in the underlying array */);
} else {
tos = new Data(val, tos.myIndex, nextFreeIdx);
}
index[stackNo] = tos;
array[nextFreeIdx] = tos;
}
public int pop(int stackNo) {
if(minHeap.size() == array.length) {
return -1; // Maybe throw Exception?
}
Data tos = index[stackNo];
if(tos == null) {
return -1; // Maybe throw Exception?
}
minHeap.add(tos.myIndex);
array[tos.myIndex] = null;
int value = tos.value;
if(tos.prevIndex == -1) {
tos = null;
} else {
tos = array[tos.prevIndex];
}
index[stackNo] = tos;
return value;
}
}
public static final class Data {
int value;
int prevIndex;
int myIndex;
public Data(int value, int prevIndex, int myIndex) {
this.value = value;
this.prevIndex = prevIndex;
this.myIndex = myIndex;
}
@Override
public String toString() {
return "Value: " + this.value + ", prev: " + this.prevIndex + ", myIndex: " + myIndex;
}
}
// Driver program
public static void main(String[] args)
{
int noOfStacks = 3, sizeOfBackingArray = 10;
KStack ks = new KStack(noOfStacks, sizeOfBackingArray);
// Add elements to stack number 1
ks.push(11, 0);
ks.push(9, 0);
ks.push(7, 0);
// Add elements to stack number 3
ks.push(51, 2);
ks.push(54, 2);
// Add elements to stack number 2
ks.push(71, 1);
ks.push(94, 1);
ks.push(93, 1);
System.out.println("Popped from stack 3: " + ks.pop(2));
System.out.println("Popped from stack 3: " + ks.pop(2));
System.out.println("Popped from stack 3: " + ks.pop(2));
System.out.println("Popped from stack 2: " + ks.pop(1));
System.out.println("Popped from stack 1: " + ks.pop(0));
}
}
Dr. belisarius's answer explains the basic algorithm, but doesn't go into the details, and as we know, the devil is always in the details. I coded up a solution in Python 3, with some explanation and a unit test. All operations run in constant time, as they should for a stack.
# One obvious solution is given array size n, divide up n into 3 parts, and allocate floor(n / 3) cells
# to two stacks at either end of the array, and remaining cells to the one in the middle. This strategy is not
# space efficient because even though there may be space in the array, one of the stacks may overflow.
#
# A better approach is to have two stacks at either end of the array, the left one growing on the right, and the
# right one growing on the left. The middle one starts at index floor(n / 2), and grows at both ends. When the
# middle stack size is even, it grows on the right, and when it's odd, it grows on the left. This way, the middle
# stack grows evenly and minimizes the chances of overflowing one of the stacks at either end.
#
# The rest is pointer arithmetic, adjusting tops of the stacks on push and pop operations.
from typing import List

class ThreeStacks:
def __init__(self, n: int):
self._arr: List[int] = [0] * n
self._tops: List[int] = [-1, n, n // 2]
self._sizes: List[int] = [0] * 3
self._n = n
def _is_stack_3_even_size(self):
return self._sizes[2] % 2 == 0
def _is_stack_3_odd_size(self):
return not self._is_stack_3_even_size()
def is_empty(self, stack_number: int) -> bool:
return self._sizes[stack_number] == 0
def is_full(self, stack_number: int) -> bool:
if stack_number == 0 and self._is_stack_3_odd_size():
return self._tops[stack_number] == self._tops[2] - self._sizes[2]
elif stack_number == 1 and self._is_stack_3_even_size():
return self._tops[stack_number] == self._tops[2] + self._sizes[2]
return (self._is_stack_3_odd_size() and self._tops[0] == self._tops[2] - self._sizes[2]) or \
(self._is_stack_3_even_size() and self._tops[1] == self._tops[2] + self._sizes[2])
def pop(self, stack_number: int) -> int:
if self.is_empty(stack_number):
raise RuntimeError(f"Stack : {stack_number} is empty")
x: int = self._arr[self._tops[stack_number]]
if stack_number == 0:
self._tops[stack_number] -= 1
elif stack_number == 1:
self._tops[stack_number] += 1
else:
if self._is_stack_3_even_size():
self._tops[stack_number] += (self._sizes[stack_number] - 1)
else:
self._tops[stack_number] -= (self._sizes[stack_number] - 1)
self._sizes[stack_number] -= 1
return x
def push(self, item: int, stack_number: int) -> None:
if self.is_full(stack_number):
raise RuntimeError(f"Stack: {stack_number} is full")
if stack_number == 0:
self._tops[stack_number] += 1
elif stack_number == 1:
self._tops[stack_number] -= 1
else:
if self._is_stack_3_even_size():
self._tops[stack_number] += self._sizes[stack_number]
else:
self._tops[stack_number] -= self._sizes[stack_number]
self._arr[self._tops[stack_number]] = item
self._sizes[stack_number] += 1
def __repr__(self):
return str(self._arr)
Test:
import pytest

def test_stack(self):
stack = ThreeStacks(10)
for i in range(3):
with pytest.raises(RuntimeError):
stack.pop(i)
for i in range(1, 4):
stack.push(i, 0)
for i in range(4, 7):
stack.push(i, 1)
for i in range(7, 11):
stack.push(i, 2)
for i in range(3):
with pytest.raises(RuntimeError):
stack.push(1, i)
assert [stack.pop(i) for i in range(3)] == [3, 6, 10]
assert [stack.pop(i) for i in range(3)] == [2, 5, 9]
assert [stack.pop(i) for i in range(3)] == [1, 4, 8]
for i in range(2):
assert stack.is_empty(i)
assert not stack.is_empty(2)
assert stack.pop(2) == 7
assert stack.is_empty(2)
This is a very common interview question: "Implement 3 stacks using a single Array or List".
Here is my solution-
Approach 1 - Fixed division: divide the array into 3 equal parts and push each stack's elements into its own fixed-size region.
For stack 1, use [0, n/3]
For stack 2, use [n/3, 2n/3]
For stack 3, use [2n/3, n]
The problem with this approach is that a stack may outgrow its region even though free space remains elsewhere in the array, i.e. a stack overflow condition. So we must take care of special cases and edge cases like this. Now consider the 2nd approach.
Approach 2 - Flexible division: in the first approach we face the condition where a stack may outgrow its fixed region, i.e. the stack overflow condition. We can overcome this problem with a flexible division of the array: while adding elements to a stack, when one stack exceeds its initial capacity, shift the elements of the next stack to make room. This is the approach taken below.
Here's my solution for it in C# -
/* Program: Implement 3 stacks using a single array
*
* Date: 12/26/2015
*/
using System;
namespace CrackingTheCodingInterview
{
internal class Item
{
public object data;
public int prev;
}
/// <summary>
/// Class implementing 3 stacks using single array
/// </summary>
public class Stacks
{
/// <summary>
/// Pushing an element 'data' onto a stack 'i'
/// </summary>
public void Push(int i, object d)
{
i--;
if (available != null)
{
int ava = (int)available.DeleteHead();
elems[ava].data = d;
elems[ava].prev = top[i];
top[i] = ava;
}
else
{
Console.WriteLine("Array full. No more space to enter!");
return;
}
}
/// <summary>
/// Popping an element from stack 'i'
/// </summary>
public object Pop(int i)
{
i--;
if (top[i] != -1)
{
object popVal = elems[top[i]].data;
int prevTop = elems[top[i]].prev;
elems[top[i]].data = null;
elems[top[i]].prev = -1;
available.Insert(top[i]);
top[i] = prevTop;
return popVal;
}
else
{
Console.WriteLine("Stack: {0} empty!", i);
return null;
}
}
/// <summary>
/// Peeking top element of a stack
/// </summary>
public object Peek(int i)
{
i--;
if (top[i] != -1)
{
return elems[top[i]].data;
}
else
{
Console.WriteLine("Stack: {0} empty!", i);
return null;
}
}
/// <summary>
/// Constructor initializing array of Nodes of size 'n' and the ability to store 'k' stacks
/// </summary>
public Stacks(int n, int k)
{
elems = new Item[n];
top = new int[k];
for (int i = 0; i < k; i++)
{
top[i] = -1;
}
for (int i = 0; i < n; i++)
{
elems[i] = new Item();
elems[i].data = null;
elems[i].prev = -1;
}
available = new SinglyLinkedList();
for (int i = n - 1; i >= 0; i--)
{
available.Insert(i);
}
}
private Item[] elems;
private int[] top;
private SinglyLinkedList available;
}
internal class StacksArrayTest
{
static void Main()
{
Stacks s = new Stacks(10, 3);
s.Push(1, 'a');
s.Push(1, 'b');
s.Push(1, 'c');
Console.WriteLine("After pushing in stack 1");
Console.WriteLine("Top 1: {0}", s.Peek(1));
s.Push(2, 'd');
s.Push(2, 'e');
s.Push(2, 'f');
s.Push(2, 'g');
Console.WriteLine("After pushing in stack 2");
Console.WriteLine("Top 1: {0}", s.Peek(1));
Console.WriteLine("Top 2: {0}", s.Peek(2));
s.Pop(1);
s.Pop(2);
Console.WriteLine("After popping from stack 1 and 2");
Console.WriteLine("Top 1: {0}", s.Peek(1));
Console.WriteLine("Top 2: {0}", s.Peek(2));
s.Push(3, 'h');
s.Push(3, 'i');
s.Push(3, 'j');
s.Push(3, 'k');
s.Push(3, 'l');
Console.WriteLine("After pushing in stack 3");
Console.WriteLine("Top 3: {0}", s.Peek(3));
Console.ReadLine();
}
}
}
Output:
After pushing in stack 1
Top 1: c
After pushing in stack 2
Top 1: c
Top 2: g
After popping from stack 1 and 2
Top 1: b
Top 2: f
After pushing in stack 3
Top 3: l
I referred to this post when coding it: http://codercareer.blogspot.com/2013/02/no-39-stacks-sharing-array.html
package job.interview;
import java.util.Arrays;
public class NStack1ArrayGen<T> {
T storage[];
int numOfStacks;
Integer top[];
public NStack1ArrayGen(int numOfStks, T myStorage[]){
storage = myStorage;
numOfStacks = numOfStks;
top = new Integer[numOfStks];
for(int i=0;i<numOfStks;i++){top[i]=-1;}
}
public void push(int stk_indx, T value){ // note: pushes beyond capacity are silently dropped
int r_indx = stk_indx -1;
if(top[r_indx]+numOfStacks < storage.length){
top[r_indx] = top[r_indx] < 0 ? stk_indx-1 : top[r_indx]+numOfStacks;
storage[top[r_indx]] = value;
}
}
public T pop(int stk_indx){
T ret = top[stk_indx-1]<0 ? null : storage[top[stk_indx-1]];
if(top[stk_indx-1] >= 0){ // don't move the top further down when the stack is already empty
top[stk_indx-1] -= numOfStacks;
}
return ret;
}
public void printInfo(){
print("The array", Arrays.toString(storage));
print("The top indices", Arrays.toString(top));
for(int j=1;j<=numOfStacks;j++){
printStack(j);
}
}
public void print(String name, String value){
System.out.println(name + " ==> " + value);
}
public void printStack(int indx){ // note: printing pops, and therefore empties, the stack
String str = "";
while(top[indx-1]>=0){
str+=(str.length()>0 ? "," : "") + pop(indx);
}
print("Stack#"+indx,str);
}
public static void main (String args[])throws Exception{
int count=4, tsize=40;
int size[]={105,108,310,105};
NStack1ArrayGen<String> mystack = new NStack1ArrayGen<String>(count,new String[tsize]);
for(int i=1;i<=count;i++){
for(int j=1;j<=size[i-1];j++){
mystack.push(i, "stk"+i+"_value"+j);
}
}
mystack.printInfo(); // without this call the program would print nothing
}
}
This prints:
The array ==> [stk1_value1, stk2_value1, stk3_value1, stk4_value1, stk1_value2, stk2_value2, stk3_value2, stk4_value2, stk1_value3, stk2_value3, stk3_value3, stk4_value3, stk1_value4, stk2_value4, stk3_value4, stk4_value4, stk1_value5, stk2_value5, stk3_value5, stk4_value5, stk1_value6, stk2_value6, stk3_value6, stk4_value6, stk1_value7, stk2_value7, stk3_value7, stk4_value7, stk1_value8, stk2_value8, stk3_value8, stk4_value8, stk1_value9, stk2_value9, stk3_value9, stk4_value9, stk1_value10, stk2_value10, stk3_value10, stk4_value10]
The top indices ==> [36, 37, 38, 39]
Stack#1 ==> stk1_value10,stk1_value9,stk1_value8,stk1_value7,stk1_value6,stk1_value5,stk1_value4,stk1_value3,stk1_value2,stk1_value1
Stack#2 ==> stk2_value10,stk2_value9,stk2_value8,stk2_value7,stk2_value6,stk2_value5,stk2_value4,stk2_value3,stk2_value2,stk2_value1
Stack#3 ==> stk3_value10,stk3_value9,stk3_value8,stk3_value7,stk3_value6,stk3_value5,stk3_value4,stk3_value3,stk3_value2,stk3_value1
Stack#4 ==> stk4_value10,stk4_value9,stk4_value8,stk4_value7,stk4_value6,stk4_value5,stk4_value4,stk4_value3,stk4_value2,stk4_value1
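To make the round-robin layout explicit (this indexing rule is implied by the Java code above rather than stated in it), here is a small Python helper; the name slot is purely illustrative:
def slot(s, j, k):
    # Array index of the j-th value pushed onto stack s when k stacks
    # share one array round-robin (s and j are 1-based, as in the Java code).
    return (s - 1) + (j - 1) * k
With k = 4 stacks, slot(3, 2, 4) == 6, and index 6 of the printed array above is indeed stk3_value2.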
#include <iostream>
using std::cout;
using std::endl;
enum stackId{LEFT, MID, RIGHT };
class threeStacks {
int* arr;
int leftSize;
int rightSize;
int midSize;
int mid;
int maxSize;
public:
threeStacks(int n):leftSize(0), rightSize(0), midSize(0), mid(n/2), maxSize(n)
{
arr = new int[n];
}
~threeStacks(){ delete [] arr; } // free the backing array
void push(stackId sid, int val){
switch(sid){
case LEFT:
pushLeft(val);
break;
case MID:
pushMid(val);
break;
case RIGHT:
pushRight(val);
}
}
int pop(stackId sid){
switch(sid){
case LEFT:
return popLeft();
case MID:
return popMid();
case RIGHT:
return popRight();
}
return -1; // unreachable for a valid stackId; avoids falling off a non-void function
}
int top(stackId sid){
switch(sid){
case LEFT:
return topLeft();
case MID:
return topMid();
case RIGHT:
return topRight();
}
return -1; // unreachable for a valid stackId; avoids falling off a non-void function
}
void pushMid(int val){
if(midSize+leftSize+rightSize+1 > maxSize){
cout << "Overflow!!"<<endl;
return;
}
if(midSize % 2 == 0){
if(mid - ((midSize+1)/2) == leftSize-1){
//left side OverFlow
if(!shiftMid(RIGHT)){
cout << "Overflow!!"<<endl;
return;
}
}
midSize++;
arr[mid - (midSize/2)] = val;
}
else{
if(mid + ((midSize+1)/2) == (maxSize - rightSize)){
//right side OverFlow
if(!shiftMid(LEFT)){
cout << "Overflow!!"<<endl;
return;
}
}
midSize++;
arr[mid + (midSize/2)] = val;
}
}
int popMid(){
if(midSize == 0){
cout << "Mid Stack Underflow!!"<<endl;
return -1;
}
int val;
// the top alternates sides: with an even count it sits right of mid,
// with an odd count at or left of mid (the original had these two swapped)
if(midSize % 2 == 0)
val = arr[mid + (midSize/2)];
else
val = arr[mid - (midSize/2)];
midSize--;
return val;
}
int topMid(){
if(midSize == 0){
cout << "Mid Stack Underflow!!"<<endl;
return -1;
}
int val;
if(midSize % 2 == 0) // same side rule as popMid (the original had the branches swapped)
val = arr[mid + (midSize/2)];
else
val = arr[mid - (midSize/2)];
return val;
}
bool shiftMid(stackId dir){
// leftmost and rightmost occupied slots of the mid stack; for even sizes the
// stack extends one slot further to the right of 'mid'. The original used
// mid - midSize/2 as the leftmost slot, which is off by one for even sizes.
int leftEnd = mid - (midSize - 1) / 2; // equals mid when midSize is 0 or 1
int rightEnd = mid + midSize / 2;
int freeSpace;
switch (dir){
case LEFT:
freeSpace = leftEnd - leftSize;
if(freeSpace < 1)
return false;
if(freeSpace > 1)
freeSpace /= 2;
for(int i=0; i< midSize; i++){
arr[leftEnd - freeSpace + i] = arr[leftEnd + i];
}
mid = mid-freeSpace;
break;
case RIGHT:
freeSpace = maxSize - rightSize - rightEnd - 1;
if(freeSpace < 1)
return false;
if(freeSpace > 1)
freeSpace /= 2;
for(int i=0; i< midSize; i++){
arr[rightEnd + freeSpace - i] = arr[rightEnd - i];
}
mid = mid+freeSpace;
break;
default:
return false;
}
return true; // the original fell off the end here, which is undefined behaviour
}
void pushLeft(int val){
if(midSize+leftSize+rightSize+1 > maxSize){
cout << "Overflow!!"<<endl;
return;
}
if(leftSize == (mid - midSize/2)){
//left side OverFlow
if(!shiftMid(RIGHT)){
cout << "Overflow!!"<<endl;
return;
}
}
arr[leftSize] = val;
leftSize++;
}
int popLeft(){
if(leftSize == 0){
cout << "Left Stack Underflow!!"<<endl;
return -1;
}
leftSize--;
return arr[leftSize];
}
int topLeft(){
if(leftSize == 0){
cout << "Left Stack Underflow!!"<<endl;
return -1;
}
return arr[leftSize - 1];
}
void pushRight(int val){
if(midSize+leftSize+rightSize+1 > maxSize){
cout << "Overflow!!"<<endl;
return;
}
if(maxSize - rightSize - 1 == (mid + midSize/2)){
//right side OverFlow
if(!shiftMid(LEFT)){
cout << "Overflow!!"<<endl;
return;
}
}
rightSize++;
arr[maxSize - rightSize] = val;
}
int popRight(){
if(rightSize == 0){
cout << "Right Stack Underflow!!"<<endl;
return -1;
}
int val = arr[maxSize - rightSize];
rightSize--;
return val;
}
int topRight(){
if(rightSize == 0){
cout << "Right Stack Underflow!!"<<endl;
return -1;
}
return arr[maxSize - rightSize];
}
};
Python
class Stack:
    def __init__(self):
        # pos_1/pos_2/pos_3 mark where each stack's region begins;
        # the three None entries are permanent separators.
        self.pos_1 = 0
        self.pos_2 = 1
        self.pos_3 = 2
        self.stack = [None, None, None]

    def pop_1(self):
        if self.pos_2 - 1 > 0:  # stack 1 is non-empty
            to_ret = self.stack.pop(self.pos_1)
            self.pos_2 -= 1
            self.pos_3 -= 1
            return to_ret

    def push_1(self, value):
        self.stack.insert(self.pos_1, value)
        self.pos_2 += 1
        self.pos_3 += 1

    def pop_2(self):
        if self.pos_3 - 1 > self.pos_2:  # stack 2 is non-empty (the original test was always true)
            to_ret = self.stack.pop(self.pos_2)
            self.pos_3 -= 1
            return to_ret

    def push_2(self, value):
        self.stack.insert(self.pos_2, value)
        self.pos_3 += 1

    def pop_3(self):
        if len(self.stack) - 1 > self.pos_3:  # stack 3 is non-empty (the original test never passed)
            return self.stack.pop(self.pos_3)

    def push_3(self, value):
        self.stack.insert(self.pos_3, value)


if __name__ == "__main__":
    stack = Stack()
    stack.push_2(22)
    stack.push_1(1)
    stack.push_1(2)
    print(stack.pop_1())
    print(stack.pop_1())
    print(stack.pop_2())
prints: 2 1 22
What is the most efficient way to remove duplicate items from an array under the constraint that auxiliary memory usage must be kept to a minimum, preferably small enough not to require any heap allocations? Sorting seems like the obvious choice, but this is clearly not asymptotically efficient. Is there a better algorithm that can be done in place, or close to in place? If sorting is the best choice, what kind of sort would be best for something like this?
I'll answer my own question since, after posting, I came up with a really clever algorithm to do this. It uses hashing, building something like a hash set in place. It's guaranteed to be O(1) in auxiliary space (the recursion is a tail call), and is typically O(N) in time complexity. The algorithm is as follows:
Take the first element of the array; this will be the sentinel.
Reorder the rest of the array, as much as possible, so that each element is in the position corresponding to its hash. As this step completes, duplicates will be discovered. Set them equal to the sentinel.
Move all elements whose index equals their hash to the beginning of the array.
Move all elements that are equal to the sentinel, except the first element of the array, to the end of the array.
What's left between the properly hashed elements and the duplicate elements is the elements that couldn't be placed in the index corresponding to their hash because of a collision. Recurse to deal with these elements.
This can be shown to be O(N) provided there is no pathological scenario in the hashing:
Even if there are no duplicates, approximately 2/3 of the elements are eliminated at each recursion (with a random hash, the fraction of slots claimed by at least one element is about 1 - 1/e, or roughly 63%). Each level of recursion is O(n), where n is the number of elements left. The only problem is that, in practice, it's slower than a quicksort when there are few duplicates, i.e. lots of collisions. However, when there are huge amounts of duplicates, it's amazingly fast.
Edit: In current implementations of D, hash_t is 32 bits. Everything about this algorithm assumes that there will be very few, if any, hash collisions in full 32-bit space. Collisions may, however, occur frequently in the modulus space. However, this assumption will in all likelihood be true for any reasonably sized data set. If the key is less than or equal to 32 bits, it can be its own hash, meaning that a collision in full 32-bit space is impossible. If it is larger, you simply can't fit enough of them into 32-bit memory address space for it to be a problem. I assume hash_t will be increased to 64 bits in 64-bit implementations of D, where datasets can be larger. Furthermore, if this ever did prove to be a problem, one could change the hash function at each level of recursion.
Here's an implementation in the D programming language:
import std.algorithm : swap; // assumed: D2's std.algorithm provides the swap() used below
void uniqueInPlace(T)(ref T[] dataIn) {
uniqueInPlaceImpl(dataIn, 0);
}
void uniqueInPlaceImpl(T)(ref T[] dataIn, size_t start) {
if(dataIn.length - start < 2)
return;
invariant T sentinel = dataIn[start];
T[] data = dataIn[start + 1..$];
static hash_t getHash(T elem) {
static if(is(T == uint) || is(T == int)) {
return cast(hash_t) elem;
} else static if(__traits(compiles, elem.toHash)) {
return elem.toHash;
} else {
static auto ti = typeid(typeof(elem));
return ti.getHash(&elem);
}
}
for(size_t index = 0; index < data.length;) {
if(data[index] == sentinel) {
index++;
continue;
}
auto hash = getHash(data[index]) % data.length;
if(index == hash) {
index++;
continue;
}
if(data[index] == data[hash]) {
data[index] = sentinel;
index++;
continue;
}
if(data[hash] == sentinel) {
swap(data[hash], data[index]);
index++;
continue;
}
auto hashHash = getHash(data[hash]) % data.length;
if(hashHash != hash) {
swap(data[index], data[hash]);
if(hash < index)
index++;
} else {
index++;
}
}
size_t swapPos = 0;
foreach(i; 0..data.length) {
if(data[i] != sentinel && i == getHash(data[i]) % data.length) {
swap(data[i], data[swapPos++]);
}
}
size_t sentinelPos = data.length;
for(size_t i = swapPos; i < sentinelPos;) {
if(data[i] == sentinel) {
swap(data[i], data[--sentinelPos]);
} else {
i++;
}
}
dataIn = dataIn[0..sentinelPos + start + 1];
uniqueInPlaceImpl(dataIn, start + swapPos + 1);
}
Keeping auxiliary memory usage to a minimum, your best bet would be to do an efficient sort to get the items in order, then do a single pass of the array with a FROM and a TO index.
You advance the FROM index every time through the loop. You only copy the element from FROM to TO (and increment TO) when the key is different from the last.
With quicksort, that'll average to O(n log n) for the sort and O(n) for the final pass.
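A minimal sketch of that sort-then-compact pass in Python (the function name is illustrative); sorting groups equal keys together, so a single linear pass with FROM/TO indices squeezes the duplicates out:
def dedupe_sorted_in_place(a):
    a.sort()                      # O(n log n) comparison sort
    to = 1
    for frm in range(1, len(a)):  # FROM visits every element once
        if a[frm] != a[to - 1]:   # copy only when the key changes
            a[to] = a[frm]
            to += 1
    del a[to:]                    # drop the squeezed-out tail
    return a
For example, dedupe_sorted_in_place([5, 2, 3, 5, 1, 2]) returns [1, 2, 3, 5].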
If you sort the array, you will still need another pass to remove duplicates, so the complexity is O(N*N) in the worst case (assuming Quicksort), or O(N*sqrt(N)) using Shellsort.
You can achieve O(N*N) by simply scanning the array for each element, removing duplicates as you go.
Here is an example in Lua:
function removedups (t)
local result = {}
local count = 0
local found
for i,v in ipairs(t) do
found = false
if count > 0 then
for j = 1,count do
if v == result[j] then found = true; break end
end
end
if not found then
count = count + 1
result[count] = v
end
end
return result, count
end
I don't see any way to do this without something like a bubble sort. When you find a dupe, you need to reduce the length of the array. Quicksort is not designed for the size of the array to change.
This algorithm is always O(n^2), but it also uses almost no extra memory -- stack or heap.
// returns the new size
int bubblesqueeze(int* a, int size) {
for (int j = 0; j < size - 1; ++j) {
for (int i = j + 1; i < size; ++i) {
// when a dupe is found, move the end value to index j
// and shrink the size of the array
while (i < size && a[i] == a[j]) {
a[i] = a[--size];
}
if (i < size && a[i] < a[j]) {
int tmp = a[j];
a[j] = a[i];
a[i] = tmp;
}
}
}
return size;
}
If you have two different variables for traversing a dataset instead of just one, then you can limit the output by dismissing all duplicates that are already in the dataset.
Obviously this example in C is not an efficient sorting algorithm, but it is just an example of one way to look at the problem.
You could also blindly sort the data first and then relocate the data to remove duplicates, but I'm not sure that would be faster.
#include <stdio.h>
#include <stdlib.h>
#define ARRAY_LENGTH 15
int stop = 1;
int scan_sort[ARRAY_LENGTH] = {5,2,3,5,1,2,5,4,3,5,4,8,6,4,1};
void step_relocate(int tmp, int s, int *dataset) /* shift dataset[tmp..s-1] one slot right; int indices, not char */
{
for(;tmp<s;s--)
dataset[s] = dataset[s-1];
}
int exists(int var,int *dataset)
{
int tmp=0;
for(;tmp < stop; tmp++)
{
if( dataset[tmp] == var)
return 1;/* value exists */
if( dataset[tmp] > var)
tmp=stop;/* Value not in array*/
}
return 0;/* Value not in array*/
}
int main(void)
{
int tmp1=0;
int tmp2=0;
int index = 1;
while(index < ARRAY_LENGTH)
{
if(exists(scan_sort[index],scan_sort))
;/* Dismiss all values currently in the final dataset */
else if(scan_sort[stop-1] < scan_sort[index])
{
scan_sort[stop] = scan_sort[index];/* Insert the value as the highest one */
stop++;/* One more value added to the final dataset */
}
else
{
for(tmp1=0;tmp1<stop;tmp1++)/* find where the data shall be inserted */
{
if(scan_sort[index] < scan_sort[tmp1])
break;/* tmp1 is now the insertion point (the original's "index = index;" was a no-op) */
}
tmp2 = scan_sort[index]; /* Store in case this value is the next after stop*/
step_relocate(tmp1,stop,scan_sort);/* Relocated data already in the dataset*/
scan_sort[tmp1] = tmp2;/* insert the new value */
stop++;/* One more value added to the final dataset */
}
index++;
}
printf("Result: ");
for(tmp1 = 0; tmp1 < stop; tmp1++)
printf( "%d ",scan_sort[tmp1]);
printf("\n");
system( "pause" );
}
I liked the problem, so I wrote a simple C test program for it, as you can see above. Leave a comment if I should elaborate or if you see any faults.