How are a priority queue and a heap similar? [duplicate] - algorithm

It seems that a priority queue is just a heap with normal queue operations like insert, delete, top, etc. Is this the correct way to interpret a priority queue? I know you can build priority queues in different ways, but if I were to build one from a heap, is it necessary to create a priority queue class that wraps the heap-building code and the queue operations, or is the class not really necessary?
What I mean is: if I have a function to build a heap and functions for operations like insert and delete, do I need to put all these functions in a class, or can I just use them by calling them from main?
I guess my question is whether having a collection of functions is equivalent to storing them in some class and using them through the class, or whether I can just use the functions themselves.
What I have below is all the methods for a priority queue implementation. Is this sufficient to call it an implementation, or do I need to put it in a designated priority queue class?
#ifndef MAX_PRIORITYQ_H
#define MAX_PRIORITYQ_H

#include <iostream>
#include <deque>
#include <limits>       // std::numeric_limits
#include <stdexcept>    // std::out_of_range

#include "print.h"
#include "random.h"
// exchange(A, i, j) swaps A[i] and A[j]; it is assumed to come from one of the local headers above.
// Index helpers for a 0-based binary heap: the root is at index 0,
// so the children of node i are at 2*i + 1 and 2*i + 2.
int parent(int i)
{
    return (i - 1) / 2;
}

int left(int i)
{
    return 2*i + 1;
}

int right(int i)
{
    return 2*i + 2;
}
// Restores the max-heap property for the subtree rooted at index i.
// heapsize is the index of the last element that belongs to the heap.
void max_heapify(std::deque<int> &A, int i, int heapsize)
{
    int largest;
    int l = left(i);
    int r = right(i);
    if(l <= heapsize && A[l] > A[i])
        largest = l;
    else
        largest = i;
    if(r <= heapsize && A[r] > A[largest])
        largest = r;
    if(largest != i) {
        exchange(A, i, largest);
        max_heapify(A, largest, heapsize);
        //int j = max_heapify(A, largest, heapsize);
        //return j;
    }
    //return i;
}
void build_max_heap(std::deque<int> &A)
{
    int heapsize = A.size() - 1;
    for(int i = (A.size() - 1) / 2; i >= 0; i--)
        max_heapify(A, i, heapsize);
}
int heap_maximum(std::deque<int> &A)
{
    return A[0];
}
// Removes and returns the maximum element.
// heapsize is the index of the last element currently in the heap.
int heap_extract_max(std::deque<int> &A, int heapsize)
{
    if(heapsize < 0)
        throw std::out_of_range("heap underflow");
    int max = A.front();
    //std::cout << "heapsize : " << heapsize << std::endl;
    A[0] = A[heapsize];   // move the last element to the root
    A.pop_back();
    max_heapify(A, 0, --heapsize);
    //int i = max_heapify(A, 0, heapsize);
    //A.erase(A.begin() + i);
    return max;
}
void heap_increase_key(std::deque<int> &A, int i, int key)
{
    if(key < A[i])
        std::cerr << "New key is smaller than current key" << std::endl;
    else {
        A[i] = key;
        // Sift the increased key up towards the root (index 0).
        while(i > 0 && A[parent(i)] < A[i]) {
            exchange(A, i, parent(i));
            i = parent(i);
        }
    }
}
void max_heap_insert(std::deque<int> &A, int key)
{
    // Grow the heap by one sentinel element, then raise it to the new key.
    A.push_back(std::numeric_limits<int>::min());
    heap_increase_key(A, A.size() - 1, key);
}

#endif // MAX_PRIORITYQ_H

A priority queue is an abstract datatype. It is a shorthand way of describing a particular interface and behavior, and says nothing about the underlying implementation.
A heap is a data structure. It is a name for a particular way of storing data that makes certain operations very efficient.
It just so happens that a heap is a very good data structure to implement a priority queue, because the operations which are made efficient by the heap data structure are the operations that the priority queue interface needs.

Having a class with exactly the interface you need (just insert and pop-max?) has its advantages.
You can exchange the implementation (list instead of heap, for example) later.
Someone reading the code that uses the queue doesn't need to understand the more difficult interface of the heap data structure.
I guess my question is whether having a collection of functions is
equivalent to storing them in some class and using them through a
class or just using the functions themselves.
It's mostly equivalent if you just think in terms of "how does my program behave". But it's not equivalent in terms of "how easy is my program for a human reader to understand".
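For illustration, here is a minimal sketch of such a wrapper class (the name MaxPriorityQueue and the exact interface are my own choices; it simply delegates to the free functions from the question above):
class MaxPriorityQueue {
public:
    void insert(int key) { max_heap_insert(data, key); }
    int  top() const     { return data.front(); }   // largest element
    int  pop_max()       { return heap_extract_max(data, static_cast<int>(data.size()) - 1); }
    bool empty() const   { return data.empty(); }

private:
    std::deque<int> data;   // kept in max-heap order by the free functions above
};
Callers then see only insert/top/pop_max, and the heap layout can later be swapped for a different implementation without touching them.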

The term priority queue refers to the general data structure useful for ordering its elements by priority. There are multiple ways to achieve that, e.g., various ordered tree structures (a splay tree works reasonably well, for instance) as well as various heaps, e.g., d-heaps or Fibonacci heaps. Conceptually, a heap is a tree structure where the weight of every node is not less than the weight of any node in the subtree rooted at that node.

The C++ Standard Template Library provides the make_heap, push_heap and pop_heap algorithms for heaps (usually implemented as binary heaps), which operate on arbitrary random access iterators. It treats the iterators as a reference to an array, and uses the array-to-heap conversion. It also provides the container adaptor priority_queue, which wraps these facilities in a container-like class. However, there is no standard support for the decrease/increase-key operation.
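For illustration, a small sketch of how the priority_queue adaptor is typically used (my own example, not part of the quoted text):
#include <queue>
#include <vector>
#include <functional>
#include <iostream>

int main()
{
    // The container adaptor: a max-priority-queue over a std::vector by default.
    std::priority_queue<int> pq;
    pq.push(3);
    pq.push(7);
    pq.push(1);
    std::cout << pq.top() << '\n';   // 7 - the largest element
    pq.pop();

    // A min-priority-queue via the comparison parameter.
    std::priority_queue<int, std::vector<int>, std::greater<int>> min_pq;
    min_pq.push(3);
    min_pq.push(7);
    std::cout << min_pq.top() << '\n';   // 3 - the smallest element
}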
priority_queue refers to an abstract data type defined entirely by the operations that may be performed on it. In the C++ STL, priority_queue is thus one of the sequence adaptors - adaptors of the basic containers (vector, list and deque are basic because they cannot be built from each other without loss of efficiency) - defined in the <queue> header (<bits/stl_queue.h> in my case, actually). As can be seen from its definition (and as Bjarne Stroustrup says):
A container adapter provides a restricted interface to a container. In particular, adapters do not provide iterators; they are intended to be used only through their specialized interfaces.
In my implementation, priority_queue is described as
/**
 * @brief A standard container automatically sorting its contents.
 *
 * @ingroup sequences
 *
 * This is not a true container, but an @e adaptor. It holds
 * another container, and provides a wrapper interface to that
 * container. The wrapper is what enforces priority-based sorting
 * and %queue behavior. Very few of the standard container/sequence
 * interface requirements are met (e.g., iterators).
 *
 * The second template parameter defines the type of the underlying
 * sequence/container. It defaults to std::vector, but it can be
 * any type that supports @c front(), @c push_back, @c pop_back,
 * and random-access iterators, such as std::deque or an
 * appropriate user-defined type.
 *
 * The third template parameter supplies the means of making
 * priority comparisons. It defaults to @c less<value_type> but
 * can be anything defining a strict weak ordering.
 *
 * Members not found in "normal" containers are @c container_type,
 * which is a typedef for the second Sequence parameter, and @c
 * push, @c pop, and @c top, which are standard %queue operations.
 * @note No equality/comparison operators are provided for
 * %priority_queue.
 * @note Sorting of the elements takes place as they are added to,
 * and removed from, the %priority_queue using the
 * %priority_queue's member functions. If you access the elements
 * by other means, and change their data such that the sorting
 * order would be different, the %priority_queue will not re-sort
 * the elements for you. (How could it know to do so?)
 */
template:
template<typename _Tp, typename _Sequence = vector<_Tp>,
         typename _Compare = less<typename _Sequence::value_type> >
    class priority_queue
    {
In contrast to this, heap describes how its elements are fetched and stored in memory. It is a (tree-based) data structure - others are, e.g., array, hash table, struct, union, set... - that in addition satisfies the heap property: all nodes are either [greater than or equal to] or [less than or equal to] each of their children, according to a comparison predicate defined for the heap.
So in my heap header I find no heap container, but rather a set of algorithms
/**
 * @defgroup heap_algorithms Heap Algorithms
 * @ingroup sorting_algorithms
 */
like:
__is_heap_until
__is_heap
__push_heap
__adjust_heap
__pop_heap
make_heap
sort_heap
all of them (excluding __is_heap, commented as "This function is an extension, not part of the C++ standard") described as
 * @ingroup heap_algorithms
 *
 * This operation... (what it does)
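A short sketch of my own showing the public heap algorithms from this group in action:
#include <algorithm>
#include <vector>
#include <cassert>

int main()
{
    std::vector<int> v{3, 1, 4, 1, 5, 9, 2, 6};

    std::make_heap(v.begin(), v.end());        // array-to-heap conversion
    assert(std::is_heap(v.begin(), v.end()));  // C++11: verify the heap property

    v.push_back(8);
    std::push_heap(v.begin(), v.end());        // restore the heap after appending

    std::pop_heap(v.begin(), v.end());         // move the maximum to the back
    int max = v.back();
    v.pop_back();
    (void)max;

    std::sort_heap(v.begin(), v.end());        // heap -> sorted range (heapsort)
}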

Not really. The "priority" in the name stems from a priority value for the entries in the queue, defining their ... of course: priority. There are many ways to implement such a PQ, however.

A priority queue is an abstract data structure that can be implemented in many ways - unsorted array, sorted array, heap. It is like an interface: it gives you the signatures that any implementation (such as a heap) must provide:
class PriorityQueue {
    top() → element
    peek() → element
    insert(element, priority)
    remove(element)
    update(element, newPriority)
    size() → int
}
A heap is a concrete implementation of the priority queue using an array (it can conceptually be represented as a particular kind of binary tree) to hold elements and specific algorithms to enforce invariants. Invariants are internal properties that always hold true throughout the life of the data structure.
Here is a rough performance comparison of common priority queue implementations (n elements):

implementation   | insert   | find-max | delete-max
unsorted array   | O(1)     | O(n)     | O(n)
sorted array     | O(n)     | O(1)     | O(1)
binary heap      | O(log n) | O(1)     | O(log n)

Related

Can anybody please describe this portion of code of the Dijkstra algorithm

I can't understand this portion of the Dijkstra algorithm. I want to understand it line by line.
The code:
bool operator < (const DATA &p) const { return p.dist > dist; }
I have basic knowledge of C/C++.
bool operator < (const DATA &p) const {
    return p.dist > dist;
}
This is an overload of the less-than (<) operator.
You are passing DATA &p prefixed by const, which means p is passed by reference and cannot be modified or altered inside the function.
The const before the function body means the method performs no write/modify operation on the object it is called on.
p.dist > dist determines how two DATA objects are compared after they are pushed into a priority_queue: a < b holds exactly when a.dist < b.dist. Since std::priority_queue is by default a max-heap, the DATA that compares greatest ends up on top. For Dijkstra, where the smallest tentative distance must be popped first, the usual trick is to reverse this comparison (return p.dist < dist;), which makes the default max-heap behave like a min-heap on dist - that is what sounds contradictory at first but is true.
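For comparison, a minimal self-contained sketch of that min-heap trick (my own example; note the comparison is written so that a smaller dist compares as "greater"):
#include <queue>
#include <iostream>

struct DATA {
    int node;
    int dist;
    // Reversed comparison: a DATA with a smaller dist compares as "greater",
    // so the default max-heap priority_queue pops the smallest dist first.
    bool operator < (const DATA &p) const { return p.dist < dist; }
};

int main()
{
    std::priority_queue<DATA> q;
    q.push({1, 30});
    q.push({2, 10});
    q.push({3, 20});
    std::cout << q.top().dist << '\n';   // prints 10, the smallest distance
}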

Selection Sort in CUDA

So, I'm trying to implement selection sort in CUDA, but so far I haven't been very successful.
__device__ void selection_sort( int *data, int left, int right ){
    for( int i = left ; i <= right ; ++i ){
        int min_val = data[i];
        int min_idx = i;

        // Find the smallest value in the range [left, right].
        for( int j = i+1 ; j <= right ; ++j ){
            int val_j = data[j];
            if( val_j < min_val ){
                min_idx = j;
                min_val = val_j;
            }
        }

        // Swap the values.
        if( i != min_idx ){
            data[min_idx] = data[i];
            data[i] = min_val;
        }
    }
}
My main attempt here is to find the minimum and parallelize the solution. Now, I realize the code looks very C++-ish, but I'm nowhere near as skilled in CUDA.
Is there a way to parallelize the solution? Are there any more additions to be made?
Selection sort algorithm for N numbers can be roughly described as:
for i from N-1 down to 0
    find the maximum element among data[0] ~ data[i]
    swap that maximum element with data[i] within the data array
The first part (finding the maximum element) falls into a widely known and well documented class of problems called reduction. However, to perform the second part (swapping), you must track the index of the maximum element while comparing the values, and it is not so natural to do that while performing a reduction. This is one of the reasons why selection sort does not port well to parallel architectures.
Also, you can see that the problem size diminishes by one for each loop iteration, and this is another aspect of the selection sort algorithm that does not map well to parallel architectures. In the case of CUDA, 32 threads form a warp, which executes at the same time. Although you can tell an arbitrary number of threads to run within a warp, it is generally not recommended to do so because it is a loss of computing power.
I've tried to build a CUDA version of selection sort myself, but I stopped doing it because it seems there are better algorithms well suited for CUDA. But I'll just show you what I've done so far to illustrate why selection sort is not good for CUDA.
Firstly, start from a small and simple problem: sorting 32 elements. Since 32 threads form a warp, you can use shuffle instructions to find the maximum value. (Full code)
// Finds the maximum element within a warp and gives the maximum element to
// thread with lane id 0. Note that other elements do not get lost but their
// positions are shuffled.
__inline__ __device__ int warpMax(int data, unsigned int threadId)
{
    for (int mask = 16; mask > 0; mask /= 2) {
        int dual_data = __shfl_xor(data, mask, 32);
        if (threadId & mask)
            data = min(data, dual_data);
        else
            data = max(data, dual_data);
    }
    return data;
}
__global__ void selection32(int* d_data, int* d_data_sorted)
{
    unsigned int threadId = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int laneId = threadIdx.x % 32;

    int n = N;
    while(n-- > 0) {
        // get the maximum element among d_data and put it in d_data_sorted[n]
        int data = d_data[threadId];
        data = warpMax(data, threadId);
        d_data[threadId] = data;

        // now maximum element is in d_data[0]
        if (laneId == 0) {
            d_data_sorted[n] = d_data[0];
            d_data[0] = INT_MIN;   // this element is ignored from now on
        }
    }
}
int main()
{
    // ... build data and transfer to d_data ...
    selection32<<<1, 32>>>(d_data, d_data_sorted);
    // ... get the sorted array stored at d_data_sorted ...
}
(Some may argue that this is not exactly a selection sort since 1) the array elements of the unsorted area keep shuffling, and 2) it is not an in-place sort. Please note that I'm just trying to show that selection sort is not a good fit for CUDA. Also, note that warpMax has highly divergent branches, making it less optimal for CUDA.)
The case with only 1 warp of elements may look parallel-ish, but things get worse when the problem size increases to multiple warps. Let's see the case of 1024 elements. (I've chosen the number 1024 because it is the maximum number of threads in a block.) Now there are 32 warps, and after calling warpMax for each warp, we must compare the maximum elements of each warp to get the maximum element among the 1024 elements. This problem of comparing 32 warp-maximum values cannot be done with warpMax, because we need to track which warp the maximum value came from in order to swap it with the last element in the data array. One way I can think of for doing this is using one single thread to compare the warp-maximum values. This is not a good implementation for CUDA because the other 1023 threads in the block become idle.
Furthermore, if the problem size grows larger than a block can cover, we need to compare the maximum values of each block, implying that we will have to launch separate kernels, since we need to synchronize between blocks. Needless to say, we also have to keep track of which block the maximum value came from. All of this says that implementing selection sort for CUDA is not a good idea.
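For reference, the kind of library-based alternative alluded to above could look like this (a sketch of mine using Thrust, which ships with the CUDA toolkit; d_data is assumed to be a device pointer to N ints):
#include <thrust/sort.h>
#include <thrust/execution_policy.h>

// Sorts the N ints that d_data points to, entirely on the device.
// This is what I would reach for instead of a hand-written selection sort.
void sort_on_device(int* d_data, int N)
{
    thrust::sort(thrust::device, d_data, d_data + N);
}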

Warp shuffling for reduction of arrays with any length

I am working on a CUDA kernel which performs a vector dot product (A x B). I assumed that the length of each vector is a multiple of 32 (32, 64, ...) and defined the block size to be equal to the length of the array. Each thread in the block multiplies one element of A by the corresponding element of B (thread i ==> psum = A[i] x B[i]). After the multiplication, I used the following functions, which use the warp shuffling technique to perform reduction and calculate the sum of all the multiplications.
__inline__ __device__
float warpReduceSum(float val) {
    int warpSize = 32;
    for (int offset = warpSize/2; offset > 0; offset /= 2)
        val += __shfl_down(val, offset);
    return val;
}

__inline__ __device__
float blockReduceSum(float val) {
    static __shared__ float shared[32]; // Shared mem for 32 partial sums (float, to match val)
    int lane = threadIdx.x % warpSize;
    int wid = threadIdx.x / warpSize;

    val = warpReduceSum(val);     // Each warp performs partial reduction

    if (lane==0)
        shared[wid]=val;          // Write reduced value to shared memory

    __syncthreads();              // Wait for all partial reductions

    //read from shared memory only if that warp existed
    val = (threadIdx.x < blockDim.x / warpSize) ? shared[lane] : 0;

    if (wid==0)
        val = warpReduceSum(val); // Final reduce within first warp

    return val;
}
I simply call blockReduceSum(psum), where psum is the product of the two elements computed by a thread.
This approach doesn't work when the length of the array is not a multiple of 32, so my question is: can we change this code so that it also works for any length? Or is it impossible, because if the length of the array is not a multiple of 32, some warps would have elements belonging to more than one array?
First of all, depending on the GPU you are using, performing dot product with just 1 block will probably not be very efficient (as long as you are not batching several dot products in 1 kernel, each done by a single block).
To answer your question: you can reuse the code you have written by calling your kernel with the number of threads being the closest multiple of 32 that is higher than N (the length of the array), and introducing an if statement before the call to blockReduceSum that would look like this:
__global__ void kernel(float * A, float * B, int N) {
    float psum = 0;
    if(threadIdx.x < N) //threadIdx.x because you are using a single block; you will need to change it to a more general id once you move to multiple blocks
        psum = A[threadIdx.x] * B[threadIdx.x];
    blockReduceSum(psum);
    //The rest of the computation
}
That way, threads that do not have an array element associated with them, but that need to be there due to the use of __shfl, will contribute 0 to the sum.
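A launch along those lines might look like this (a sketch under the same single-block assumption; d_A and d_B are my names for the device arrays):
// Round N up to the next multiple of 32 so that whole warps are launched;
// the extra threads fail the threadIdx.x < N test and contribute 0.
int threads = ((N + 31) / 32) * 32;   // assumes N <= 1024, the per-block thread limit
kernel<<<1, threads>>>(d_A, d_B, N);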

Are std::get<> and std::tuple<> slower than raw pointers?

I have a C++11 application where I commonly iterate over several different structures of arrays for various algorithms. Raw CPU performance is important for this app.
The array elements are fundamental types (int, double, ...) or simple structs. The arrays are typically tens of thousands of elements long. I often need to iterate over several arrays at once in a given loop, so typically I need one pointer for each array of whatever type. Sometimes I need to increment five individual pointers, which is verbose.
Based on these answers about tuples,
Why is std::pair faster than std::tuple
C++11 tuple performance
I hoped there was no overhead to using tuples to pack the pointers together into a single object.
I thought it might be nice to implement a cursor-like object to assist in iterating, since missing the increment on a particular pointer would be an annoying bug.
auto pts = std::make_tuple(p1, p2, p3...);
allows you to bundle a bunch of variables together in a typesafe way. Then you can implement a variadic template function to increment each pointer in the tuple in a type-safe way.
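Such an increment helper can be written, for example, like this (a C++11 sketch of my own; increment_all is a hypothetical name, not an existing library function):
#include <tuple>
#include <type_traits>
#include <cstddef>

// Recursively visit each tuple element and apply ++ to it.
template<std::size_t I = 0, typename... Ts>
typename std::enable_if<(I == sizeof...(Ts))>::type
increment_all(std::tuple<Ts...>&) {}                 // base case: past the last element

template<std::size_t I = 0, typename... Ts>
typename std::enable_if<(I < sizeof...(Ts))>::type
increment_all(std::tuple<Ts...>& t)
{
    ++std::get<I>(t);
    increment_all<I + 1>(t);
}
With the cursor tuple above, advancing every pointer is then a single call: increment_all(pts);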
However...
When I measure performance, the tuple version was slower than using raw pointers. But when I look at the generated assembly, I see additional mov instructions in the tuple loop increment. Maybe due to the fact that std::get<> returns a reference? I had hoped that would be compiled away...
Am I missing something, or are raw pointers just going to beat tuples when used like this? Here is a simple test harness; I threw away the fancy cursor code and just use a std::tuple<> for this test.
On my machine, the tuple loop is consistently twice as slow as the raw pointer version for various data sizes.
My system config is Visual C++ 2013 x64 on Windows 8 with a release build. I did try turning on various optimizations in Visual Studio, such as
Inline Function Expansion : Any Suitable (/Ob2)
but it did not seem to change the time result for my case.
I did need to do two extra things to avoid aggressive optimization by VS
1) I forced the test data array to be allocated on the heap, not the stack. That made a big difference when I timed things, possibly due to memory cache effects.
2) I forced a side effect by writing to a static variable at the end so the compiler would not just skip my loop.
struct forceHeap
{
    __declspec(noinline) int* newData(int M)
    {
        int* data = new int[M];
        return data;
    }
};

void timeSumCursor()
{
    static int gIntStore;
    int maxCount = 20;
    int M = 10000000;

    // compiler might place array on stack which changes the timing
    // int* data = new int[N];
    forceHeap fh;
    int* data = fh.newData(M);

    int *front = data;
    int *end = data + M;
    int j = 0;
    for (int* p = front; p < end; ++p)
    {
        *p = (++j) % 1000;
    }

    {
        BEGIN_TIMING_BLOCK("raw pointer loop", maxCount);
        int* p = front;
        int sum = 0;
        int* cursor = front;
        while (++cursor != end)
        {
            sum += *cursor;
        }
        gIntStore = sum;   // force a side effect
        END_TIMING_BLOCK();
    }
    printf("%d\n", gIntStore);

    {
        // just use a simple tuple to show the issue
        // rather than a full blown cursor object
        BEGIN_TIMING_BLOCK("tuple loop", maxCount);
        int sum = 0;
        auto cursor = std::make_tuple(front);
        while (++std::get<0>(cursor) != end)
        {
            sum += *std::get<0>(cursor);
        }
        gIntStore = sum;   // force a side effect
        END_TIMING_BLOCK();
    }
    printf("%d\n", gIntStore);

    delete[] data;
}

Pointer arithmetic in C++ uses sizeof(type) increments instead of byte increments?

I am confused by the behavior of pointer arithmetic in C++. I have an array and I want to go N elements forward from the current one. Since in C++ a pointer is a memory address in BYTES, it seemed logical to me that the code would be newaddr = curaddr + N * sizeof(mytype). It produced errors though; later I found that with newaddr = curaddr + N everything works correctly. Why so? Should it really be address + N instead of address + N * sizeof?
Part of my code where I noticed it (2D array with all memory allocated as one chunk):
// creating pointers to the beginning of each line
if((Content = (int **)malloc(_Height * sizeof(int *))) != NULL)
{
    // allocating a single memory chunk for the whole array
    if((Content[0] = (int *)malloc(_Width * _Height * sizeof(int))) != NULL)
    {
        // setting up line pointers' values
        int * LineAddress = Content[0];
        int Step = _Width * sizeof(int); // <-- this gives errors, just "_Width" is ok
        for(int i=0; i<_Height; ++i)
        {
            Content[i] = LineAddress;   // faster than
            LineAddress += Step;        // Content[i] = Content[0] + i * Step;
        }
        // everything went ok, setting Width and Height values now
        Width = _Width;
        Height = _Height;
        // success
        return 1;
    }
    else
    {
        // insufficient memory available
        // need to delete line pointers
        free(Content);
        return 0;
    }
}
else
{
    // insufficient memory available
    return 0;
}
Your error in reasoning is right here: "Since in C++ a pointer is a memory address in BYTES, [...]".
A C/C++ pointer is not a memory address in bytes. Sure, it is represented by a memory address, but you have to differentiate between a pointer type and its representation. The operator "+" is defined for a type, not for its representation. Therefore, when it is applied to the type int *, it respects the semantics of this type: + 1 on an int * advances the pointer by as many bytes as the underlying int representation occupies.
You can of course cast your pointer to an integer type, e.g. (intptr_t)myPointer. Now you have a numeric type (instead of a pointer type), where + 1 will behave as you would expect from a numeric type. Note that after this cast the representation stays the same, but the type changes.
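To see this concretely, a small sketch of my own:
#include <cstdio>

int main()
{
    int a[4] = {10, 20, 30, 40};
    int *p = a;

    printf("%p\n", (void*)p);        // address of a[0]
    printf("%p\n", (void*)(p + 1));  // exactly sizeof(int) bytes further: the address of a[1]
    printf("%d\n", *(p + 1));        // 20: p + 1 is the next int, not the next byte
}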
A "pointer" points to a location.
When you "increment", you want to go to the next, adjacent location.
Q: "Next" and "adjacent" depend on the size of the object you're pointing to, don't they?
Q: When you don't use "sizeof()", everything works, correct? Why? What do you think the compiler is doing for you, "behind your back"?
Q: What do you think should happen if you add your own "sizeof()"?
ON TOP OF the "everything works" scenario?
Pointers point to locations of a given type, so incrementing the pointer p by N makes it point to the Nth element of that type after p.
Now, if you were working with raw byte addresses instead of typed pointers, then it would be appropriate to add N * sizeof(type).
