How to implement a stack using a priority queue?
Guys, this is a Microsoft interview question for Software Engineer/Developer. I just can't make out the meaning of the question, so I googled and found this:
Stacks and queues may be modeled as particular kinds of priority queues. In a stack, the priority of each inserted element is monotonically increasing; thus, the last element inserted is always the first retrieved.
So what does this question want us to do? As I understand it (correct me if I am wrong), stacks are implicitly priority queues, with priority increasing monotonically as elements are added.
Can anybody make out the meaning of this question? And what are we supposed to do when this type of question is asked in an interview?
Pseudocode:
// stack of Key
class Stack {
class Element { int prio; Key elem; };
MaxPriorityQueue<Element> q;
int top_priority = 0;
void push(Key x) { q.push(Element(top_priority++, x)); }
Key pop() { top_priority--; return q.pop().elem; }
};
LIFO behavior follows from the fact that every new element is pushed with a priority higher than all the current elements, so it will be popped before any of them.
There are two ways to respond to this interview question. One is to explain in detail the structure above. The second is to briefly mention it, mumble something about O(lg n) and say you'd never implement a stack this way.
If you don't know what a priority queue is, ask. If you don't know what a stack is, ask. If you don't understand the question, ask. By now you should hopefully be able to work out that an adaptor like the following is required.
Stack :
private:
q : MaxPriorityQueue
counter : 0
public:
push(x) : q.add(x, counter++)
pop() : q.remove()
Here is a Java implementation for this question.
public class StackPriorityQueue {
PriorityQueue<StackElement> queue = new PriorityQueue<>(10, new Comparator<StackElement>() {
@Override
public int compare(StackElement o1, StackElement o2) {
return Integer.compare(o2.key, o1.key); // newest (largest key) first
}
});
int order = 1;
public void push(int val){
StackElement element = new StackElement(order++,val);
queue.add(element);
}
public Integer pop(){
if(queue.isEmpty()){
System.out.println("Stack Underflow");
return null;
}
return queue.poll().value;
}
public static void main(String... args){
StackPriorityQueue q = new StackPriorityQueue();
q.push(5);
q.push(10);
q.push(1);
q.push(3);
q.push(50);
q.push(500);
q.push(60);
q.push(30);
q.push(40);
q.push(23);
q.push(34);
System.out.println(q.pop());
System.out.println(q.pop());
System.out.println(q.pop());
System.out.println(q.pop());
System.out.println(q.pop());
System.out.println(q.pop());
System.out.println(q.pop());
System.out.println(q.pop());
System.out.println(q.pop());
System.out.println(q.pop());
System.out.println(q.pop());
System.out.println(q.pop());
}
}
class StackElement {
int key;
int value;
public StackElement(int key, int value) {
this.key = key;
this.value = value;
}
}
Such questions require you to think a bit deeply (though not so deeply with this one).
The explanation for this answer is: instead of inserting each element with its value as the key, wrap it in an object and assign the insertion order as an attribute. Make this order the key.
Sample C Code:
struct MyNode
{
DataPacket dataPacket;
int order;
};
Java Implementation with Time Complexity and Space Complexity:
Time complexity: Java's PriorityQueue is implemented as a binary heap, and a heap takes O(log n) time to insert an element.
Space complexity: O(k) for k stored elements; each element is wrapped together with its insertion order, which doubles the constant factor but not the asymptotic bound.
public class StackUsingHeap {
public static void main(String[] args) {
Stack stack = new Stack();
stack.push(10);
stack.push(15);
stack.push(20);
System.out.println(stack.pop());
System.out.println(stack.pop());
System.out.println(stack.pop());
}
}
class Stack {
PriorityQueue<Node> pq = new PriorityQueue<>(new Node());
int position = -1; // instance field: making this static would share the counter across stacks
public void push(int data) {
pq.add(new Node(data, ++position));
}
public int pop() {
--position; // optional: priorities only need to keep increasing, but this keeps them bounded
return pq.remove().data;
}
}
class Node implements Comparator<Node> {
int data;
int position;
public Node() {
}
public Node(int data, int position) {
this.data = data;
this.position = position;
}
@Override
public int compare(Node n1, Node n2) {
if (n1.position < n2.position)
return 1;
else if (n1.position > n2.position)
return -1;
return 0;
}
}
Here is another Java implementation, together with a JUnit test.
import org.junit.Test;
import java.util.PriorityQueue;
import static org.junit.Assert.assertEquals;
public class StackHeap {
@Test
public void test() {
Stack s = new Stack();
s.push(1);
s.push(2);
s.push(3);
assertEquals(3, s.pop());
assertEquals(2, s.pop());
s.push(4);
s.push(5);
assertEquals(5, s.pop());
assertEquals(4, s.pop());
assertEquals(1, s.pop());
}
class Stack {
PriorityQueue<Node> pq = new PriorityQueue<>((Node x, Node y) -> Integer.compare(y.position, x.position));
int position = -1;
public void push(int data) {
pq.add(new Node(data, ++position));
}
public int pop() {
if (position == -1) {
return Integer.MIN_VALUE;
}
position--;
return pq.poll().data;
}
}
class Node {
int data;
int position;
public Node (int data, int position) {
this.data = data;
this.position = position;
}
}
}
You can implement a stack using a priority queue (say PQ) backed by a min-heap. You need one extra integer variable (say t), used as the priority while inserting/deleting elements from PQ.
You have to initialize t to some value (say t = 100) at the start.
push(int element){
PQ.insert(t,element);
t--; //decrease the priority value (a lower value is popped first)
}
pop(){
return PQ.deleteMin();
}
peek(){
return PQ.min();
}
Note: you can also use the system time as the priority when pushing elements.
push(int element){
PQ.insert(-getTime(),element); //negative of system time: the most recent push has the smallest value and is popped first
}
Beware that two pushes within the same clock tick would get equal priorities, so the counter approach above is safer.
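For concreteness, here is a minimal Java sketch of the decreasing-counter idea above (the class and field names are mine):
import java.util.PriorityQueue;

// A stack on top of a min-heap: every push gets a strictly smaller priority,
// so the heap always surfaces the most recently pushed element first.
public class MinHeapStack {
    private static class Entry {
        final int priority;
        final int value;
        Entry(int priority, int value) { this.priority = priority; this.value = value; }
    }

    private final PriorityQueue<Entry> pq =
            new PriorityQueue<>((a, b) -> Integer.compare(a.priority, b.priority));
    private int t = 100; // starting priority; decreases on every push

    public void push(int value) { pq.add(new Entry(t--, value)); }
    public int pop() { return pq.poll().value; }   // NullPointerException if empty
    public int peek() { return pq.peek().value; }  // NullPointerException if empty
}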
Related
I am stuck on sorting and really want code that can sort my dynamic stack. I am posting my code below; if anyone can help, please provide sorting code that works with it.
class Node
{
public int info; //incoming data
public Node link; //reference of next value
public Node(int i)
{
info = i;
link = null;
}
}
class DynamicStack
{
private Node start;
public DynamicStack()
{
start = null;
}
public void Push(int data)
{
Node temp = new Node(data);
temp.link = start;
start = temp;
}
public void CreateStack()
{
int i, n, data;
Console.Write("Enter the number of nodes : ");
n = Convert.ToInt32(Console.ReadLine());
if (n == 0)
return;
for (i = 1; i <= n; i++)
{
Console.Write("Enter the element to be inserted : ");
data = Convert.ToInt32(Console.ReadLine());
Push(data);
}
}
public void Pop()
{
if (start == null)
return;
start = start.link;
}
}
Using these methods I can push and pop values, but I am stuck on sorting the inserted values; I just need code that works with mine and sorts them, as sketched below.
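Staying with this thread's theme, one simple way to sort a stack is to drain it into a priority queue and push the elements back. Here is a minimal Java sketch of the idea (the question's code is C#, but it ports directly; the class and method names are mine):
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Deque;
import java.util.PriorityQueue;

public class StackSorter {
    // Sorts the stack so the smallest element ends up on top:
    // drain everything into a max-heap, then push back largest-first.
    public static void sort(Deque<Integer> stack) {
        PriorityQueue<Integer> heap = new PriorityQueue<>(Collections.reverseOrder());
        while (!stack.isEmpty()) heap.add(stack.pop());
        while (!heap.isEmpty()) stack.push(heap.poll());
    }

    public static void main(String[] args) {
        Deque<Integer> s = new ArrayDeque<>();
        s.push(3); s.push(1); s.push(2);
        sort(s);
        System.out.println(s.pop()); // prints 1: the smallest element is on top
    }
}
This is O(n log n) time and O(n) extra space; an in-place variant would use a second stack and insertion-sort the values across the two.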
A 2-way merge is widely studied as part of the Mergesort algorithm.
But I am interested in finding out the best way to perform an N-way merge.
Let's say I have N files, each containing 1 million sorted integers.
I have to merge them into one file containing those 100 million sorted integers.
Please keep in mind that the use case for this problem is actually external sorting, which is disk-based. Therefore, in real scenarios there would be a memory limitation as well, so a naive approach of merging 2 files at a time (99 times) won't work. Let's say we have only a small sliding window of memory available for each array.
I am not sure if there is already a standardized solution to this N-way merge (googling didn't tell me much),
but if you know of a good N-way merge algorithm, please post the algorithm or a link.
Time complexity: if we greatly increase the number of files (N) to be merged, how would that affect the time complexity of your algorithm?
Thanks for your answers.
I haven't been asked this anywhere, but I felt this could be an interesting interview question. Therefore tagged.
How about the following idea:
1. Create a priority queue.
2. Iterate through each file f:
   enqueue the pair (nextNumberIn(f), f), using the first value as the priority key.
3. While the queue is not empty:
   dequeue the head (m, f) of the queue
   output m
   if f is not depleted, enqueue (nextNumberIn(f), f)
Since adding an element to a priority queue can be done in logarithmic time, step 2 is O(N × log N). Since (almost) every iteration of the while loop adds an element, the whole while loop is O(M × log N), where M is the total number of numbers to sort.
Assuming all files have a non-empty sequence of numbers, we have M > N, and thus the whole algorithm is O(M × log N).
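For concreteness, here is a minimal Java sketch of this algorithm, merging in-memory sorted iterators instead of files (the class and method names are mine; for true external sorting, each iterator would wrap a buffered file reader):
import java.util.ArrayList;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

public class KWayMerge {
    // One heap entry per source: the current value plus the iterator it came from.
    private static class Head {
        final int value;
        final Iterator<Integer> source;
        Head(int value, Iterator<Integer> source) { this.value = value; this.source = source; }
    }

    public static List<Integer> merge(List<Iterator<Integer>> sources) {
        PriorityQueue<Head> pq = new PriorityQueue<>(Comparator.comparingInt((Head h) -> h.value));
        for (Iterator<Integer> it : sources)          // step 2: seed the heap with each source's first value
            if (it.hasNext()) pq.add(new Head(it.next(), it));
        List<Integer> out = new ArrayList<>();
        while (!pq.isEmpty()) {                       // step 3: repeatedly emit the global minimum
            Head h = pq.poll();
            out.add(h.value);
            if (h.source.hasNext())                   // if that source is not depleted, refill from it
                pq.add(new Head(h.source.next(), h.source));
        }
        return out;
    }
}
For example, merging List.of(1, 3, 5).iterator() and List.of(2, 4).iterator() yields [1, 2, 3, 4, 5].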
Search for "Polyphase merge", check out classics - Donald Knuth & E.H.Friend.
Also, you may want to take a look at the proposed Smart Block Merging by Seyedafsari & Hasanzadeh, that, similarly to earlier suggestions, uses priority queues.
Another interesting reasonsing is In Place Merging Algorithm by Kim & Kutzner.
I also recommend this paper by Vitter: External memory algorithms and data structures: dealing with massive data.
One simple idea is to keep a priority queue of the ranges to merge, stored in such a way that the range with the smallest first element is removed first from the queue. You can then do an N-way merge as follows:
Insert all of the ranges into the priority queue, excluding empty ranges.
While the priority queue is not empty:
Dequeue the range with the smallest first element from the queue.
Append that first element to the output sequence.
If the rest of the range is nonempty, insert it back into the priority queue.
The correctness of this algorithm is essentially a generalization of the proof that a 2-way merge works correctly - if you always add the smallest element from any range, and all the ranges are sorted, you end up with the sequence as a whole sorted.
The runtime complexity of this algorithm can be found as follows. Let M be the total number of elements across all the sequences. If we use a binary heap, then we do at most O(M) insertions and O(M) deletions from the priority queue, since for each element written to the output sequence there is a dequeue to pull out the smallest sequence, followed by an enqueue to put the rest of that sequence back into the queue. Each of these steps takes O(lg N) operations, because insertion into or deletion from a binary heap with N elements in it takes O(lg N) time. This gives a net runtime of O(M lg N), which grows only logarithmically in the number of input sequences.
There may be a way to get this even faster, but this seems like a pretty good solution. The memory usage is O(N) because we need O(N) overhead for the binary heap. If we implement the binary heap by storing pointers to the sequences rather than the sequences themselves, this shouldn't be too much of a problem unless you have a truly ridiculous number of sequences to merge. In that case, just merge them in groups that do fit into memory, then merge all the results.
Hope this helps!
A simple approach to merging k sorted arrays (each of length n) requires O(n k^2) time, not O(nk) time. Merging the first 2 arrays takes 2n time; merging the third into the output takes 3n time, since we are now merging arrays of length 2n and n; merging that output with the fourth takes 4n time, and so on, until the last merge (adding the kth array to the already-sorted result) takes kn time. The total is 2n + 3n + 4n + ... + kn, which is O(n k^2).
It looks like we could do it in O(kn) time, but we cannot, because the array we merge into keeps growing.
We can achieve a better bound using divide and conquer, though: merge the arrays in pairs, then merge the results in pairs, and so on. Each of the O(log k) levels costs O(nk) total work, giving O(nk log k) overall.
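A minimal Java sketch of that divide-and-conquer merge, assuming a standard two-way merge helper (all names are mine):
public class DivideAndConquerMerge {
    // Merges arrays[lo..hi] by recursively merging the two halves: O(nk log k) total.
    public static int[] mergeK(int[][] arrays, int lo, int hi) {
        if (lo == hi) return arrays[lo];
        int mid = (lo + hi) / 2;
        return mergeTwo(mergeK(arrays, lo, mid), mergeK(arrays, mid + 1, hi));
    }

    // Standard linear-time two-way merge of two sorted arrays.
    private static int[] mergeTwo(int[] a, int[] b) {
        int[] out = new int[a.length + b.length];
        int i = 0, j = 0, k = 0;
        while (i < a.length && j < b.length)
            out[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
        while (i < a.length) out[k++] = a[i++];
        while (j < b.length) out[k++] = b[j++];
        return out;
    }
}
Calling DivideAndConquerMerge.mergeK(arrays, 0, arrays.length - 1) merges all k arrays; every element takes part in O(log k) two-way merges instead of O(k).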
See http://en.wikipedia.org/wiki/External_sorting. Here is my take on the heap based k-way merge, using a buffered read from the sources to emulate I/O reduction:
public class KWayMerger<T>
{
private readonly IList<T[]> _sources;
private readonly int _bufferSize;
private readonly MinHeap<MergeValue<T>> _mergeHeap;
private readonly int[] _indices;
public KWayMerger(IList<T[]> sources, int bufferSize, Comparer<T> comparer = null)
{
if (sources == null) throw new ArgumentNullException("sources");
_sources = sources;
_bufferSize = bufferSize;
_mergeHeap = new MinHeap<MergeValue<T>>(
new MergeComparer<T>(comparer ?? Comparer<T>.Default));
_indices = new int[sources.Count];
}
public T[] Merge()
{
for (int i = 0; i <= _sources.Count - 1; i++)
AddToMergeHeap(i);
var merged = new T[_sources.Sum(s => s.Length)];
int mergeIndex = 0;
while (_mergeHeap.Count > 0)
{
var min = _mergeHeap.ExtractDominating();
merged[mergeIndex++] = min.Value;
if (min.Source != -1) //this was the last buffered item from its source, so refill that buffer
AddToMergeHeap(min.Source);
}
return merged;
}
private void AddToMergeHeap(int sourceIndex)
{
var source = _sources[sourceIndex];
var start = _indices[sourceIndex];
var end = Math.Min(start + _bufferSize - 1, source.Length - 1);
if (start > source.Length - 1)
return; //we're done with this source
for (int i = start; i <= end - 1; i++)
_mergeHeap.Add(new MergeValue<T>(-1, source[i]));
//only the last item should trigger the next buffered read
_mergeHeap.Add(new MergeValue<T>(sourceIndex, source[end]));
_indices[sourceIndex] += _bufferSize; //we may have added fewer items,
//but in that case we've reached the end of the source, so it doesn't matter
}
}
internal class MergeValue<T>
{
public int Source { get; private set; }
public T Value { get; private set; }
public MergeValue(int source, T value)
{
Value = value;
Source = source;
}
}
internal class MergeComparer<T> : IComparer<MergeValue<T>>
{
public Comparer<T> Comparer { get; private set; }
public MergeComparer(Comparer<T> comparer)
{
if (comparer == null) throw new ArgumentNullException("comparer");
Comparer = comparer;
}
public int Compare(MergeValue<T> x, MergeValue<T> y)
{
Debug.Assert(x != null && y != null);
return Comparer.Compare(x.Value, y.Value);
}
}
Here is one possible implementation of MinHeap<T>. Some tests:
[TestMethod]
public void TestKWaySort()
{
var rand = new Random();
for (int i = 0; i < 10; i++)
AssertKwayMerge(rand);
}
private static void AssertKwayMerge(Random rand)
{
var sources = new[]
{
GenerateRandomCollection(rand, 10, 30, 0, 30).OrderBy(i => i).ToArray(),
GenerateRandomCollection(rand, 10, 30, 0, 30).OrderBy(i => i).ToArray(),
GenerateRandomCollection(rand, 10, 30, 0, 30).OrderBy(i => i).ToArray(),
GenerateRandomCollection(rand, 10, 30, 0, 30).OrderBy(i => i).ToArray(),
};
Assert.IsTrue(new KWayMerger<int>(sources, 20).Merge().SequenceEqual(sources.SelectMany(s => s).OrderBy(i => i)));
}
public static IEnumerable<int> GenerateRandomCollection(Random rand, int minLength, int maxLength, int min = 0, int max = int.MaxValue)
{
return Enumerable.Repeat(0, rand.Next(minLength, maxLength)).Select(i => rand.Next(min, max));
}
I wrote this STL-style piece of code that does an N-way merge and thought I'd post it here to help prevent others from reinventing the wheel. :)
Warning: it's only mildly tested. Test before use. :)
You can use it like this:
#include <vector>
int main()
{
std::vector<std::vector<int> > v;
std::vector<std::vector<int>::iterator> vout;
std::vector<int> v1;
std::vector<int> v2;
v1.push_back(1);
v1.push_back(2);
v1.push_back(3);
v2.push_back(0);
v2.push_back(1);
v2.push_back(2);
v.push_back(v1);
v.push_back(v2);
multiway_merge(v.begin(), v.end(), std::back_inserter(vout), false);
}
It also allows using pairs of iterators instead of the containers themselves.
If you use Boost.Range, you can remove some of the boilerplate code.
The code:
#include <algorithm>
#include <functional> // std::less
#include <iterator>
#include <queue> // std::priority_queue
#include <utility> // std::pair
#include <vector>
template<class OutIt>
struct multiway_merge_value_insert_iterator : public std::iterator<
std::output_iterator_tag, OutIt, ptrdiff_t
>
{
OutIt it;
multiway_merge_value_insert_iterator(OutIt const it = OutIt())
: it(it) { }
multiway_merge_value_insert_iterator &operator++(int)
{ return *this; }
multiway_merge_value_insert_iterator &operator++()
{ return *this; }
multiway_merge_value_insert_iterator &operator *()
{ return *this; }
template<class It>
multiway_merge_value_insert_iterator &operator =(It const i)
{
*this->it = *i;
++this->it;
return *this;
}
};
template<class OutIt>
multiway_merge_value_insert_iterator<OutIt>
multiway_merge_value_inserter(OutIt const it)
{ return multiway_merge_value_insert_iterator<OutIt>(it); }
template<class Less>
struct multiway_merge_value_less : private Less
{
multiway_merge_value_less(Less const &less) : Less(less) { }
template<class It1, class It2>
bool operator()(
std::pair<It1, It1> const &b /* inverted */,
std::pair<It2, It2> const &a) const
{
return b.first != b.second && (
a.first == a.second ||
this->Less::operator()(*a.first, *b.first));
}
};
struct multiway_merge_default_less
{
template<class T>
bool operator()(T const &a, T const &b) const
{ return std::less<T>()(a, b); }
};
template<class R>
struct multiway_merge_range_iterator
{ typedef typename R::iterator type; };
template<class R>
struct multiway_merge_range_iterator<R const>
{ typedef typename R::const_iterator type; };
template<class It>
struct multiway_merge_range_iterator<std::pair<It, It> >
{ typedef It type; };
template<class R>
typename R::iterator multiway_merge_range_begin(R &r)
{ return r.begin(); }
template<class R>
typename R::iterator multiway_merge_range_end(R &r)
{ return r.end(); }
template<class R>
typename R::const_iterator multiway_merge_range_begin(R const &r)
{ return r.begin(); }
template<class R>
typename R::const_iterator multiway_merge_range_end(R const &r)
{ return r.end(); }
template<class It>
It multiway_merge_range_begin(std::pair<It, It> const &r)
{ return r.first; }
template<class It>
It multiway_merge_range_end(std::pair<It, It> const &r)
{ return r.second; }
template<class It, class OutIt, class Less, class PQ>
OutIt multiway_merge(
It begin, It const end, OutIt out, Less const &less,
PQ &pq, bool const distinct = false)
{
while (begin != end)
{
pq.push(typename PQ::value_type(
multiway_merge_range_begin(*begin),
multiway_merge_range_end(*begin)));
++begin;
}
while (!pq.empty())
{
typename PQ::value_type top = pq.top();
pq.pop();
if (top.first != top.second)
{
while (!pq.empty() && pq.top().first == pq.top().second)
{ pq.pop(); }
if (!distinct ||
pq.empty() ||
less(*pq.top().first, *top.first) ||
less(*top.first, *pq.top().first))
{
*out = top.first;
++out;
}
++top.first;
pq.push(top);
}
}
return out;
}
template<class It, class OutIt, class Less>
OutIt multiway_merge(
It const begin, It const end, OutIt out, Less const &less,
bool const distinct = false)
{
typedef typename multiway_merge_range_iterator<
typename std::iterator_traits<It>::value_type
>::type SubIt;
if (std::distance(begin, end) < 16)
{
typedef std::vector<std::pair<SubIt, SubIt> > Remaining;
Remaining remaining;
remaining.reserve(
static_cast<size_t>(std::distance(begin, end)));
for (It i = begin; i != end; ++i)
{
if (multiway_merge_range_begin(*i) !=
multiway_merge_range_end(*i))
{
remaining.push_back(std::make_pair(
multiway_merge_range_begin(*i),
multiway_merge_range_end(*i)));
}
}
while (!remaining.empty())
{
typename Remaining::iterator smallest =
remaining.begin();
for (typename Remaining::iterator
i = remaining.begin();
i != remaining.end();
)
{
if (less(*i->first, *smallest->first))
{
smallest = i;
++i;
}
else if (distinct && i != smallest &&
!less(
*smallest->first,
*i->first))
{
i = remaining.erase(i);
}
else { ++i; }
}
*out = smallest->first;
++out;
++smallest->first;
if (smallest->first == smallest->second)
{ smallest = remaining.erase(smallest); }
}
return out;
}
else
{
std::priority_queue<
std::pair<SubIt, SubIt>,
std::vector<std::pair<SubIt, SubIt> >,
multiway_merge_value_less<Less>
> q((multiway_merge_value_less<Less>(less)));
return multiway_merge(begin, end, out, less, q, distinct);
}
}
template<class It, class OutIt>
OutIt multiway_merge(
It const begin, It const end, OutIt const out,
bool const distinct = false)
{
return multiway_merge(
begin, end, out,
multiway_merge_default_less(), distinct);
}
Here is my implementation using a min-heap...
package merging;
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
public class N_Way_Merge {
int No_of_files=0;
String[] listString;
int[] listIndex;
PrintWriter pw;
private String fileDir = "D:\\XMLParsing_Files\\Extracted_Data";
private File[] fileList;
private BufferedReader[] readers;
public static void main(String[] args) throws IOException {
N_Way_Merge nwm=new N_Way_Merge();
long start= System.currentTimeMillis();
try {
nwm.createFileList();
nwm.createReaders();
nwm.createMinHeap();
}
finally {
nwm.pw.flush();
nwm.pw.close();
for (BufferedReader reader : nwm.readers) {
reader.close();
}
}
long end = System.currentTimeMillis();
System.out.println("Files merged into a single file.\nTime taken: "+((end-start)/1000)+"secs");
}
public void createFileList() throws IOException {
//creates a list of sorted files present in a particular directory
File folder = new File(fileDir);
fileList = folder.listFiles();
No_of_files=fileList.length;
assign();
System.out.println("No. of files - "+ No_of_files);
}
public void assign() throws IOException
{
listString = new String[No_of_files];
listIndex = new int[No_of_files];
pw = new PrintWriter(new BufferedWriter(new FileWriter("D:\\XMLParsing_Files\\Final.txt", true)));
}
public void createReaders() throws IOException {
//creates array of BufferedReaders to read the files
readers = new BufferedReader[No_of_files];
for(int i=0;i<No_of_files;++i)
{
readers[i]=new BufferedReader(new FileReader(fileList[i]));
}
}
public void createMinHeap() throws IOException {
for(int i=0;i<No_of_files;i++)
{
listString[i]=readers[i].readLine();
listIndex[i]=i;
}
WriteToFile(listString,listIndex);
}
public void WriteToFile(String[] listString,int[] listIndex) throws IOException{
//sentinel that sorts after any real line; it marks an exhausted reader
final String SENTINEL = "\uffff";
for(int i=0;i<No_of_files;i++)
if(listString[i]==null) listString[i]=SENTINEL; //guard against empty files
BuildHeap_forFirstTime(listString, listIndex);
while(!listString[0].equals(SENTINEL))
{
pw.println(listString[0]);
String next=readers[listIndex[0]].readLine();
listString[0]=(next==null) ? SENTINEL : next; //readLine() returns null at EOF
MinHeapify(listString,listIndex,0);
}
}
public void BuildHeap_forFirstTime(String[] listString,int[] listIndex){
for(int i=(No_of_files/2)-1;i>=0;--i)
MinHeapify(listString,listIndex,i);
}
public void MinHeapify(String[] listString,int[] listIndex,int index){
int left=index*2 + 1;
int right=left + 1;
int smallest=index;
int HeapSize=No_of_files;
if(left <= HeapSize-1 && listString[left]!=null && (listString[left].compareTo(listString[index])) < 0)
smallest = left;
if(right <= HeapSize-1 && listString[right]!=null && (listString[right].compareTo(listString[smallest])) < 0)
smallest=right;
if(smallest!=index)
{
String temp=listString[index];
listString[index]=listString[smallest];
listString[smallest]=temp;
int tempIndex=listIndex[index];
listIndex[index]=listIndex[smallest];
listIndex[smallest]=tempIndex;
MinHeapify(listString,listIndex,smallest);
}
}
}
Java implementation of the min-heap algorithm for merging k sorted arrays:
import java.util.PriorityQueue;

public class MergeKSorted {
/**
* helper object to store min value of each array in a priority queue,
* the kth array and the index into kth array
*
*/
static class PQNode implements Comparable<PQNode>{
int value;
int kth = 0;
int indexKth = 0;
public PQNode(int value, int kth, int indexKth) {
this.value = value;
this.kth = kth;
this.indexKth = indexKth;
}
@Override
public int compareTo(PQNode o) {
return Integer.compare(value, o.value); //Comparable contract: comparing to null should throw, not return 0
}
@Override
public String toString() {
return value+" "+kth+" "+indexKth;
}
}
public static void mergeKSorted(int[][] sortedArrays) {
int k = sortedArrays.length;
int resultCtr = 0;
int totalSize = 0;
PriorityQueue<PQNode> pq = new PriorityQueue<>();
for(int i=0; i<k; i++) {
int[] kthArray = sortedArrays[i];
totalSize+=kthArray.length;
if(kthArray.length > 0) {
PQNode temp = new PQNode(kthArray[0], i, 0);
pq.add(temp);
}
}
int[] result = new int[totalSize];
while(!pq.isEmpty()) {
PQNode temp = pq.poll();
int[] kthArray = sortedArrays[temp.kth];
result[resultCtr] = temp.value;
resultCtr++;
temp.indexKth++;
if(temp.indexKth < kthArray.length) {
temp = new PQNode(kthArray[temp.indexKth], temp.kth, temp.indexKth);
pq.add(temp);
}
}
print(result);
}
public static void print(int[] a) {
StringBuilder sb = new StringBuilder();
for(int v : a) {
sb.append(v).append(" ");
}
System.out.println(sb);
}
public static void main(String[] args) {
int[][] sortedA = {
{3,4,6,9},
{4,6,8,9,12},
{3,4,9},
{1,4,9}
};
mergeKSorted(sortedA);
}
}
I have implemented rod cutting using the memoization technique in Java, and here is the code I have come up with so far:
public class RodCutMemo {
    public static int[] memo;

    public static void main(String[] args) {
        int[] prices = {0,2,3,5,8,6,4,9,10,12,15,16,17,18,20,22,31,50};
        int n = 5;
        memo = new int[n + 1];
        for (int i = 1; i <= n; i++) {
            memo[i] = -9999;
        }
        System.out.println(maxProfitRodCutMemo(prices, n));
    }

    public static int maxProfitRodCutMemo(int[] prices, int n) {
        if (memo[n] >= 0) {
            return memo[n];
        }
        //else if(n==0)
        //{
        //    return 0;
        //}
        int q = -9999;
        for (int i = 1; i <= n; i++) {
            q = Math.max(q, prices[i] + maxProfitRodCutMemo(prices, n - i));
        }
        memo[n] = q; // store the result; without this line nothing is memoized
        return q;
    }
}
I have two questions here...
Q1) I have commented out one of the base conditions, if (n == 0). Is that required in the code? Am I missing some corner case without it?
Strictly speaking, no. When i == n in the for loop of maxProfitRodCutMemo, the function calls maxProfitRodCutMemo(prices, 0); since Java initializes int arrays to 0, memo[0] is 0, so the check memo[n] >= 0 returns 0 for n == 0. That works only by accident of the default value, though, so an explicit if (n == 0) return 0; base case is clearer and safer.
Also make sure the function stores memo[n] = q before returning (as in the code above); without that assignment nothing is actually memoized and the recursion stays exponential.
This is my first time posting a question, so do pardon me if anything I do is wrong.
My question is: how can I make this code faster? I am currently using 2 stacks to implement it, so that it returns the minimum value within the index range the user asks for.
For example, given (2,3,4,5,1), if the user selects (1,4), they are looking at (2,3,4,5), so the output is 2.
Thanks.
import java.util.*;
interface StackADT {
// check whether stack is empty
public boolean empty();
// retrieve topmost item on stack
public int peek() throws EmptyStackException;
// remove and return topmost item on stack
public int pop() throws EmptyStackException;
// insert item onto stack
public void push(int item);
}
class StackArr implements StackADT {
private int[] arr;
private int top;
private int maxSize;
private final int INITSIZE = 1000;
public StackArr() {
arr = new int[INITSIZE]; // backing array of ints
top = -1; // empty stack - thus, top does not point at a valid array element
maxSize = INITSIZE;
}
public boolean empty() {
return (top < 0);
}
public int peek() throws EmptyStackException {
if (!empty()) return arr[top];
else throw new EmptyStackException();
}
public int pop() throws EmptyStackException {
int obj = peek();
top--;
return obj;
}
public void push(int obj) {
if (top >= maxSize - 1) enlargeArr();
top++;
arr[top] = obj;
}
// minimal assumed implementation of enlargeArr(): double the backing array
private void enlargeArr() {
maxSize *= 2;
arr = java.util.Arrays.copyOf(arr, maxSize);
}
}
class RMQ{
//declare stack object
Stack<Integer> stack1;
public RMQ(){
stack1 = new Stack<Integer>();
}
public void insertInt(int num){
stack1.push(num);
}
public int findIndex(int c, int d){
Stack<Integer> tempStack = new Stack<Integer>();
Stack<Integer> popStack = new Stack<Integer>();
tempStack = (Stack)stack1.clone();
while (d != tempStack.size())
{
tempStack.pop();
}
int minValue = tempStack.pop();
popStack.push(minValue);
while (c <= tempStack.size())
{
int tempValue = tempStack.pop();
if(tempValue >= minValue)
{
continue;
}
else
{
popStack.push(tempValue);
minValue = tempValue;
}
}
return popStack.pop();
}
}
public class Pseudo{
public static void main(String[] args){
//declare variables
int inputNum;
int numOfOperations;
//create object
RMQ rmq = new RMQ();
Scanner sc = new Scanner(System.in);
//read input
inputNum = sc.nextInt();
//add integers into stack
for(int i=0; i < inputNum; i++){
rmq.insertInt(sc.nextInt());
}
// read input for number of queries
numOfOperations = sc.nextInt();
// Output queries
for(int k=0; k < numOfOperations; k++){
int output = rmq.findIndex(sc.nextInt(), sc.nextInt());
System.out.println(output);
}
}
}
Why are you using a stack? Simply use an array:
int[] myArray = new int[inputNum];
// fill the array...
// get the minimum between "from" and "to"
int minimum = Integer.MAX_VALUE;
for(int i = from ; i <= to ; ++i) {
minimum = Math.min(minimum, myArray[i]);
}
And that's it!
The way I understand your question is that you want to do some preprocessing on a fixed array that then makes your find min operation of a range of elements very fast.
This answer describes an approach that does O(n log n) preprocessing work, followed by O(1) work for each query.
Preprocessing: O(n log n)
The idea is to prepare a 2D array SMALL[a,k], where SMALL[a,k] is the minimum of the 2^k elements starting at a.
You can compute this array recursively, starting at k == 0 and building each higher level by combining pairs of values from the previous one:
SMALL[a,k] = min(SMALL[a,k-1], SMALL[a+2^(k-1),k-1])
Lookup: O(1) per query
You are then able to instantly find the min for any range by combining two precomputed answers.
Suppose you want to find the min for elements 100 to 133. You already know the min of the 32 elements from 100 to 131 (in SMALL[100,5]) and the min of the 32 elements from 102 to 133 (in SMALL[102,5]), so the answer is the smaller of these two.
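A minimal Java sketch of this sparse-table idea (array and method names are mine; assumes a non-empty input):
public class SparseTableRMQ {
    private final int[][] small; // small[k][a] = min of the 2^k elements starting at index a
    private final int[] log;     // log[n] = floor(log2(n))

    public SparseTableRMQ(int[] data) {
        int n = data.length;
        log = new int[n + 1];
        for (int i = 2; i <= n; i++) log[i] = log[i / 2] + 1;
        int levels = log[n] + 1;
        small = new int[levels][];
        small[0] = data.clone();                     // k == 0: single elements
        for (int k = 1; k < levels; k++) {
            small[k] = new int[n];
            for (int a = 0; a + (1 << k) <= n; a++)  // combine two blocks of length 2^(k-1)
                small[k][a] = Math.min(small[k - 1][a], small[k - 1][a + (1 << (k - 1))]);
        }
    }

    // Minimum of data[from..to], inclusive, in O(1) per query.
    public int min(int from, int to) {
        int k = log[to - from + 1];
        return Math.min(small[k][from], small[k][to - (1 << k) + 1]);
    }
}
For the example above, new SparseTableRMQ(data).min(100, 133) takes the smaller of small[5][100] and small[5][102].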
This is the Range Minimum Query problem.
There are several algorithms and data structures that solve it efficiently.