Understanding the time complexity of Dijkstra's algorithm: priority queue vs. set implementation

Consider the following two implementations of Dijkstra's algorithm:
Using set:
// V - total number of vertices
// S - source node
void dijkstra(int V, vector<vector<int>> edge[], int S)
{
    vector<int> dist(V, 1e9);
    dist[S] = 0;
    set<pair<int, int>> s;
    for (int i = 0; i < V; i++)
        s.insert({dist[i], i});          // log(V) per insert
    while (!s.empty())                   // exactly V iterations
    {
        auto top = *(s.begin());         // constant time
        int dis = top.first;
        int node = top.second;
        s.erase(s.begin());              // amortised constant time when erasing by iterator
        for (auto it : edge[node])       // over all iterations of the outer while loop this sums to E, the total number of edges in the graph
        {
            int nb = it[0];
            int edge_weight = it[1];
            if (dist[nb] > dis + edge_weight)
            {
                s.erase({dist[nb], nb}); // log(V) time
                dist[nb] = dis + edge_weight;
                s.insert({dist[nb], nb}); // log(V) time
            }
        }
    }
}
Using priority queue:
// V - total number of vertices
// S - source node
vector<int> dijkstra(int V, vector<vector<int>> edge[], int S)
{
    vector<int> dist(V, 1e9);
    dist[S] = 0;
    priority_queue<pair<int, int>, vector<pair<int, int>>, greater<pair<int, int>>> pq;
    pq.push({dist[S], S});
    while (!pq.empty())                  // can run more than V times; call the number of pops heap_size
    {
        int node = pq.top().second;      // O(1) time
        pq.pop();                        // log(heap_size) time
        for (int i = 0; i < edge[node].size(); i++) // roughly E/V iterations on average, where E is the total number of edges in the graph
        {
            int nb = edge[node][i][0];
            int edge_weight = edge[node][i][1];
            if (dist[nb] > dist[node] + edge_weight) {
                dist[nb] = dist[node] + edge_weight;
                pq.push({dist[nb], nb}); // log(heap_size) time
            }
        }
    }
    return dist;
}
Finding the time complexity of the set-based approach is easy: the set holds exactly V elements (one per vertex) and the inner for loop runs once per edge over the whole run, so its time complexity is O(V*log(V) + V + E*log(V)), which is equivalent to O(E*log(V)) (reason: see "What's the time complexity of Dijkstra's Algorithm").
But I have trouble understanding the time complexity of the priority_queue approach. Here the same node can be present in the priority queue multiple times with different distances. How do I calculate an upper bound on the number of entries added to the heap?
I also want to decide which implementation to use based on the nature of the graph (sparse vs. dense). Are these two implementations equivalent for any type of graph?

Your priority_queue version isn't quite right.
Your while loop should start like this:
auto nextPair = pq.top();
pq.pop();
if (dist[nextPair.second] != nextPair.first) {
    continue;
}
int node = nextPair.second;
This ensures that each vertex is processed only once, when its current record is popped from the priority queue.
The complexity analysis then becomes easy: each edge is processed at most once, so there are at most |E| inserts into the priority queue.
Total complexity is then O(E log E), and since E < V², that's the same as O(E log V).
The major disadvantage of the priority queue method is that it can consume O(E) space. This is usually OK, since it's on par with the space consumed by the graph itself. Since priority_queue is a lot faster than set, the priority_queue version is how Dijkstra is commonly implemented in practice.
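To make the pattern suggested above concrete, here is a minimal sketch of the priority_queue version with the stale-entry check folded in. It is an illustration rather than the poster's exact code, and it assumes the same adjacency format as in the question (each entry of edge[node] is a {neighbour, weight} vector):

#include <queue>
#include <vector>
using namespace std;

// V - total number of vertices, S - source node
// edge[u] holds {neighbour, weight} entries (same format as in the question)
vector<int> dijkstra(int V, vector<vector<int>> edge[], int S)
{
    vector<int> dist(V, 1e9);
    dist[S] = 0;
    priority_queue<pair<int, int>, vector<pair<int, int>>, greater<pair<int, int>>> pq;
    pq.push({0, S});
    while (!pq.empty())
    {
        auto [d, node] = pq.top();
        pq.pop();
        if (d != dist[node]) continue;   // stale entry: a shorter path was already found
        for (auto &it : edge[node])      // each edge is relaxed at most once overall
        {
            int nb = it[0], w = it[1];
            if (dist[nb] > d + w)
            {
                dist[nb] = d + w;
                pq.push({dist[nb], nb}); // at most one push per successful relaxation, so O(E) pushes
            }
        }
    }
    return dist;
}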

Related

Why does Prim's algorithm need a distance array?

I have some questions about Prim's algorithm.
Prim's algorithm can find an MST. The usual implementation initializes every node's distance as INF, but I don't understand why this initialization is needed.
Here is my implementation:
#include<iostream>
#include<tuple>
#include<algorithm>
#include<vector>
#include<queue>
using namespace std;
typedef tuple<int,int,int> ti;
int main(void)
{
    ios::sync_with_stdio(0);
    cin.tie(0);
    bool vis[1005] = {};
    vector<pair<int,int>> vertex[1005];
    int V, E;
    int u, v, w;
    int sum = 0;
    int cnt = 0;
    priority_queue<ti, vector<ti>, greater<ti>> pq;
    cin >> V >> E;
    for (int i = 0; i < E; i++)
    {
        cin >> u >> v >> w;
        vertex[u].push_back({v, w});
        vertex[v].push_back({u, w});
    }
    for (auto i : vertex[1]) {
        pq.push({i.second, 1, i.first});
    }
    vis[1] = true;
    while (!pq.empty())
    {
        tie(w, u, v) = pq.top(); pq.pop();
        if (vis[v]) continue;
        vis[v] = true;
        sum += w;
        cnt++;
        for (auto i : vertex[v]) {
            if (!vis[i.first])
                pq.push({i.second, v, i.first});
        }
        if (cnt == V - 1) break;
    }
    // VlogV
    cout << sum;
    return 0;
}
In this code, you can find the sum of the MST in what I believe is O(VlogV). You can also record each vertex's parent (vis[v] = true, pre[v] = u), so you know the structure of the MST.
If we don't need a distance array, we can implement Prim's algorithm in O(VlogV), and in almost all cases (not just the MST case) it would then be faster than Kruskal.
I know I'm wrong somewhere, so I want to know where.
So is there any reason why we use a distance array?
Your conclusion that this algorithm works in O(Vlog(V)) seems to be wrong. Here is why:
while (!pq.empty()) // executed once per queue entry; at most |E| entries overall, and only |V| iterations pass the vis check
{
    tie(w, u, v) = pq.top();
    pq.pop(); // log(|V|) for each pop operation
    if (vis[v]) continue;
    vis[v] = true;
    sum += w;
    cnt++;
    for (auto i : vertex[v]) { // across all iterations this inspects the edges of every vertex: |E| steps in total
        if (!vis[i.first])
            pq.push({i.second, v, i.first}); // log(|V|) for each push operation
    }
    if (cnt == V - 1) break;
}
First of all, notice that the while loop does useful work |V| times, since only |V| vertices pass the vis check and get added to the tree.
However, also notice that you have to traverse all the neighbours in the line
for(auto i : vertex[v])
which takes |E| operations in total over the whole run.
Notice also that each push and pop operation takes approximately log(|V|) operations.
So what do we have?
We have |V| useful iterations with a log(|V|) pop in each, which makes V * log(V) operations.
On the other hand, we have |E| neighbour inspections in total, with a log(|V|) push for each, which makes E * log(V) operations.
In conclusion, we have V*log(V) + E*log(V) operations in total. In most cases V < E (assuming a connected graph), so the time complexity can be stated as O(E*log(V)).
So the time complexity of Prim's algorithm doesn't depend on keeping a distance array; you still have to perform the iterations mentioned above.
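For completeness, here is a rough sketch of what a distance (key) array buys you in Prim's algorithm: a vertex is pushed only when its cheapest known connecting edge improves, so dominated entries never enter the heap. This is my own illustration, not the asker's code; the adjacency-list format of {neighbour, weight} pairs and the function name are assumptions:

#include <queue>
#include <vector>
#include <limits>
using namespace std;

// Prim with a key[] array: push a vertex only when its best known connecting
// edge improves, filtering redundant entries before they reach the heap.
// adj[u] holds {neighbour, weight} pairs; returns the MST weight (graph assumed connected).
long long primWithKeyArray(int V, const vector<vector<pair<int,int>>> &adj)
{
    const int INF = numeric_limits<int>::max();
    vector<int> key(V, INF);
    vector<bool> inMST(V, false);
    priority_queue<pair<int,int>, vector<pair<int,int>>, greater<pair<int,int>>> pq;
    key[0] = 0;
    pq.push({0, 0});                     // {key, vertex}
    long long total = 0;
    while (!pq.empty())
    {
        auto [k, u] = pq.top();
        pq.pop();
        if (inMST[u]) continue;          // stale entry
        inMST[u] = true;
        total += k;
        for (auto [v, w] : adj[u])
            if (!inMST[v] && w < key[v]) // push only improving edges
            {
                key[v] = w;
                pq.push({w, v});
            }
    }
    return total;
}

The asymptotic bound is still O(E*log(V)), but the key array keeps non-improving edges out of the heap entirely, which is the practical reason many implementations carry it.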

Dijkstra's algorithm with negative weights

Here's an implementation of Dijkstra's algorithm I found in an online course. Can anyone cite an example with negative edges where this might not work?
#include <queue>
#include <vector>
using namespace std;

// supporting definitions assumed by the snippet
const int INF = 1e9;
struct edge { int v, weight; };

vector<edge> adj[100];
vector<int> dist(100, INF);

void dijkstra(int start) {
    dist[start] = 0;
    priority_queue<pair<int, int>,
                   vector<pair<int, int>>,
                   greater<pair<int, int>>> pq;
    pq.push(make_pair(dist[start], start));
    while (!pq.empty()) {
        int u = pq.top().second,
            d = pq.top().first;
        pq.pop();
        if (d > dist[u]) continue;
        for (int i = 0; i < adj[u].size(); i++) {
            int v = adj[u][i].v,
                w = adj[u][i].weight;
            if (w + dist[u] < dist[v]) {
                dist[v] = w + dist[u];
                pq.push(make_pair(dist[v], v));
            }
        }
    }
}
Normally Dijkstra's algorithm includes a visited set, which means that we can avoid revisiting nodes and can terminate as soon as the destination has been visited.
This implementation does not use such a set. This means that it may revisit nodes and adjust the distance if it reduces.
The good thing is that this will always return the correct answer if it terminates. However, the algorithm can take a long time to terminate if negative weights are included (in particular, if there is a negative weight loop then this algorithm will never terminate).
The simplest example is a cycle with negative total weight: Dijkstra's algorithm never leaves this cycle.
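As a concrete illustration (a toy graph of my own, not from the course), here is a hypothetical driver for the snippet above; the three edges form a cycle of total weight -1, so the distances keep decreasing and the queue never drains:

// Hypothetical driver: nodes 0 -> 1 -> 2 -> 0 form a negative-weight cycle.
int main() {
    adj[0].push_back({1, 1});
    adj[1].push_back({2, 1});
    adj[2].push_back({0, -3}); // cycle weight 1 + 1 - 3 = -1 < 0
    dijkstra(0);               // never terminates: every lap around the cycle lowers the distances again
    return 0;
}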

Reconstructing the list of items from a space optimized 0/1 knapsack implementation

A space optimization for the 0/1 knapsack dynamic programming algorithm is to use a 1-d array (say, A) of size equal to the knapsack capacity, and simply overwrite A[w] (if required) at each iteration i, where A[w] denotes the total value if the first i items are considered and knapsack capacity is w.
If this optimization is used, is there a way to reconstruct the list of items picked, perhaps by saving some extra information at each iteration of the DP algorithm? For example, in the Bellman-Ford algorithm a similar space optimization can be implemented, and the shortest path can still be reconstructed as long as we keep a list of the predecessor pointers, i.e. the last hop (or first, depending on whether a source- or destination-oriented algorithm is being written).
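To make the space optimization concrete, here is a minimal sketch of the 1-d value-only DP described above (my own illustration, not the asker's code); iterating the capacity downwards is what lets a single array stand in for the 2-d table:

#include <vector>
#include <algorithm>
using namespace std;

// Space-optimized 0/1 knapsack: A[w] holds the best value achievable with
// capacity w using the items processed so far. Scanning w downwards ensures
// each item is used at most once. This returns only the optimal value, which
// is exactly the reconstruction problem the question is about.
int knapsackValue(const vector<int> &v, const vector<int> &w, int cap) {
    vector<int> A(cap + 1, 0);
    for (size_t i = 0; i < v.size(); ++i)
        for (int c = cap; c >= w[i]; --c)
            A[c] = max(A[c], A[c - w[i]] + v[i]);
    return A[cap];
}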
For reference, here is my C++ function for the 0/1 knapsack problem using dynamic programming where I construct a 2-d vector ans such that ans[i][j] denotes the total value considering the first i items and knapsack capacity j. I reconstruct the items picked by reverse traversing this vector ans:
void knapsack(vector<int> v, vector<int> w, int cap) {
    // v[i-1] = value of item i
    // w[i-1] = weight of item i, cap = knapsack capacity
    // ans[i][j] = total value when considering the first i items and capacity j
    vector<vector<int>> ans(v.size() + 1, vector<int>(cap + 1));
    // value with 0 items is 0
    ans[0] = vector<int>(cap + 1, 0);
    // value with 0 capacity is 0
    for (uint i = 1; i < v.size() + 1; i++) {
        ans[i][0] = 0;
    }
    // dp (check x < w[i-1] first so we never index with a negative capacity)
    for (uint i = 1; i < v.size() + 1; i++) {
        for (int x = 1; x < cap + 1; x++) {
            if (x < w[i-1] || ans[i-1][x] >= ans[i-1][x-w[i-1]] + v[i-1])
                ans[i][x] = ans[i-1][x];
            else {
                ans[i][x] = ans[i-1][x-w[i-1]] + v[i-1];
            }
        }
    }
    cout << "Total value: " << ans[v.size()][cap] << endl;
    // reconstruction
    cout << "Items to carry: \n";
    for (uint i = v.size(); i > 0; i--) {
        for (int x = cap; x > 0; x--) {
            if (ans[i][x] == ans[i-1][x]) // item i not in knapsack
                break;
            else if (ans[i][x] == ans[i-1][x-w[i-1]] + v[i-1]) { // item i in knapsack
                cap -= w[i-1];
                cout << i << "(" << v[i-1] << "), ";
                break;
            }
        }
    }
    cout << endl;
}
The following is a C++ implementation of yildizkabaran's answer. It adapts Hirschberg's clever divide & conquer idea to compute the answer to a knapsack instance with n items and capacity c in O(nc) time and just O(c) space:
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
// Returns a vector of (cost, elem) pairs.
vector<pair<int, int>> optimal_cost(vector<int> const& v, vector<int> const& w, int cap) {
    vector<pair<int, int>> dp(cap + 1, { 0, -1 });
    for (int i = 0; i < size(v); ++i) {
        for (int j = cap; j >= 0; --j) {
            if (w[i] <= j && dp[j].first < dp[j - w[i]].first + v[i]) {
                dp[j] = { dp[j - w[i]].first + v[i], i };
            }
        }
    }
    return dp;
}

// Returns a vector of item labels corresponding to an optimal solution, in increasing order.
vector<int> knapsack_hirschberg(vector<int> const& v, vector<int> const& w, int cap, int offset = 0) {
    if (empty(v)) {
        return {};
    }
    int mid = size(v) / 2;
    auto subSol1 = optimal_cost(vector<int>(begin(v), begin(v) + mid), vector<int>(begin(w), begin(w) + mid), cap);
    auto subSol2 = optimal_cost(vector<int>(begin(v) + mid, end(v)), vector<int>(begin(w) + mid, end(w)), cap);
    pair<int, int> best = { -1, -1 };
    for (int i = 0; i <= cap; ++i) {
        best = max(best, { subSol1[i].first + subSol2[cap - i].first, i });
    }
    vector<int> solution;
    if (subSol1[best.second].second != -1) {
        int iChosen = subSol1[best.second].second;
        solution = knapsack_hirschberg(vector<int>(begin(v), begin(v) + iChosen), vector<int>(begin(w), begin(w) + iChosen), best.second - w[iChosen], offset);
        solution.push_back(subSol1[best.second].second + offset);
    }
    if (subSol2[cap - best.second].second != -1) {
        int iChosen = mid + subSol2[cap - best.second].second;
        auto subSolution = knapsack_hirschberg(vector<int>(begin(v) + mid, begin(v) + iChosen), vector<int>(begin(w) + mid, begin(w) + iChosen), cap - best.second - w[iChosen], offset + mid);
        copy(begin(subSolution), end(subSolution), back_inserter(solution));
        solution.push_back(iChosen + offset);
    }
    return solution;
}
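A small hypothetical driver for the routine above (the item values are my own example, not from the answer; for this classic instance the optimum value is 220, obtained from items 1 and 2 in 0-based order):

#include <iostream>
#include <vector>
using namespace std;

// Assumes knapsack_hirschberg() from the code above is in scope.
int main() {
    vector<int> value  = {60, 100, 120};
    vector<int> weight = {10, 20, 30};
    int capacity = 50;
    // For this instance the optimal total value is 220 (items 1 and 2, 0-based).
    for (int item : knapsack_hirschberg(value, weight, capacity))
        cout << "take item " << item << '\n';
    return 0;
}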
Even though this is an old question, I recently ran into the same problem, so I figured I would write my solution here. What you need is Hirschberg's algorithm. Although that algorithm was written for reconstructing edit distances, the same principle applies here. The idea is that when searching for n items in capacity c, the knapsack state after the (n/2)th item that corresponds to the final maximum value is determined in the first scan. Let's call this state weight_m and value_m. This can be done by keeping track of an additional 1d array of size c, so the memory is still O(c). Then the problem is divided into two parts: items 0 to n/2 with a capacity of weight_m, and items n/2 to n with a capacity of c-weight_m. The reduced problems together are of size nc/2. Continuing this approach we can determine the knapsack state (occupied weight and current value) after each item, after which we can simply check which items were included. This algorithm completes in O(2nc) while using O(c) memory, so in terms of big-O nothing changes even though the algorithm is at least twice as slow. I hope this helps anyone facing a similar problem.
To my understanding, with the proposed solution it is effectively impossible to obtain the set of involved items for a certain objective value. The set of items can be obtained by either generating the discarded rows again or maintaining a suitable auxiliary data structure. This could be done by associating each entry in A with the list of items from which it was generated; however, this would require more memory than the initially proposed solution. Approaches to backtracking for knapsack problems are also briefly discussed in this journal paper.

The Maximum Volume of Trapped Rain Water in 3D

A classic algorithm question in 2D version is typically described as
Given n non-negative integers representing an elevation map where the width of each bar is 1, compute how much water it is able to trap after raining.
For example, given the input
[0,1,0,2,1,0,1,3,2,1,2,1]
the return value would be
6
The algorithm that I used to solve the above 2D problem is
int trapWaterVolume2D(vector<int> A) {
    int n = A.size();
    vector<int> leftmost(n, 0), rightmost(n, 0);
    // left exclusive scan, O(n): the highest bar to the left of each point
    int leftMaxSoFar = 0;
    for (int i = 0; i < n; i++) {
        leftmost[i] = leftMaxSoFar;
        if (A[i] > leftMaxSoFar) leftMaxSoFar = A[i];
    }
    // right exclusive scan, O(n): the highest bar to the right of each point
    int rightMaxSoFar = 0;
    for (int i = n - 1; i >= 0; i--) {
        rightmost[i] = rightMaxSoFar;
        if (A[i] > rightMaxSoFar) rightMaxSoFar = A[i];
    }
    // summation, O(n)
    int vol = 0;
    for (int i = 0; i < n; i++) {
        vol += max(0, min(leftmost[i], rightmost[i]) - A[i]);
    }
    return vol;
}
My question is how to make the above algorithm extensible to the 3D version of the problem, i.e. to compute the maximum volume of water trapped in real-world 3D terrain, by implementing
int trapWaterVolume3D(vector<vector<int> > A);
We know the elevation at each (x, y) point and the goal is to compute the maximum volume of water that can be trapped in the shape. Any thoughts and references are welcome.
For each point on the terrain consider all paths from that point to the border of the terrain. The level of water would be the minimum of the maximum heights of the points of those paths. To find it we need to perform a slightly modified Dijkstra's algorithm, filling the water level matrix starting from the border.
For every point on the border set the water level to the point height
For every point not on the border set the water level to infinity
Put every point on the border into the set of active points
While the set of active points is not empty:
    Select the active point P with minimum level
    Remove P from the set of active points
    For every point Q adjacent to P:
        Level(Q) = max(Height(Q), min(Level(Q), Level(P)))
        If Level(Q) was changed:
            Add Q to the set of active points
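A minimal C++ sketch of this flooding procedure (my own rendering of the pseudocode above, assuming a rectangular height grid and a min-heap keyed by water level):

#include <queue>
#include <tuple>
#include <vector>
#include <algorithm>
using namespace std;

// Returns the trapped volume for a rectangular elevation grid.
// Border cells can never hold water; interior cells are flooded inwards, lowest level first.
long long trapWater3D(const vector<vector<int>> &h) {
    int m = h.size(), n = m ? h[0].size() : 0;
    if (m < 3 || n < 3) return 0;
    vector<vector<bool>> seen(m, vector<bool>(n, false));
    // min-heap of {water level, row, col}
    priority_queue<tuple<int,int,int>, vector<tuple<int,int,int>>, greater<>> pq;
    for (int i = 0; i < m; ++i)
        for (int j = 0; j < n; ++j)
            if (i == 0 || j == 0 || i == m - 1 || j == n - 1) {
                seen[i][j] = true;
                pq.push({h[i][j], i, j});
            }
    long long vol = 0;
    const int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
    while (!pq.empty()) {
        auto [lvl, r, c] = pq.top();
        pq.pop();
        for (int d = 0; d < 4; ++d) {
            int nr = r + dr[d], nc = c + dc[d];
            if (nr < 0 || nc < 0 || nr >= m || nc >= n || seen[nr][nc]) continue;
            seen[nr][nc] = true;
            vol += max(0, lvl - h[nr][nc]);         // water sits on top of a lower neighbour
            pq.push({max(lvl, h[nr][nc]), nr, nc}); // its level is at least the current one
        }
    }
    return vol;
}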
user3290797's "slightly modified Dijkstra algorithm" is closer to Prim's algorithm than Dijkstra's. In minimum spanning tree terms, we prepare a graph with one vertex per tile, one vertex for the outside, and edges with weights equal to the maximum height of their two adjoining tiles (the outside has height "minus infinity").
Given a path in this graph to the outside vertex, the maximum weight of an edge in the path is the height that the water has to reach in order to escape along that path. The relevant property of a minimum spanning tree is that, for every pair of vertices, the maximum weight of an edge in the path in the spanning tree is the minimum possible among all paths between those vertices. The minimum spanning tree thus describes the most economical escape paths for water, and the water heights can be extracted in linear time with one traversal.
As a bonus, since the graph is planar, there's a linear-time algorithm for computing the minimum spanning tree, consisting of alternating Boruvka passes and simplifications. This improves on the O(n log n) running time of Prim.
This problem can be solved using the Priority-Flood algorithm. It's been discovered and published a number of times over the past few decades (and again by other people answering this question), though the specific variant you're looking for is not, to my knowledge, in the literature.
You can find a review paper of the algorithm and its variants here. Since that paper was published an even faster variant has been discovered (link), as well as methods to perform this calculation on datasets of trillions of cells (link). A method for selectively breaching low/narrow divides is discussed here. Contact me if you'd like copies of any of these papers.
I have a repository here with many of the above variants; additional implementations can be found here.
A simple script to calculate volume using the RichDEM library is as follows:
#include "richdem/common/version.hpp"
#include "richdem/common/router.hpp"
#include "richdem/depressions/Lindsay2016.hpp"
#include "richdem/common/Array2D.hpp"
/**
  @brief  Calculates the volume of depressions in a DEM
  @author Richard Barnes (rbarnes@umn.edu)

    Priority-Flood starts on the edges of the DEM and then works its way inwards
    using a priority queue to determine the lowest cell which has a path to the
    edge. The neighbours of this cell are added to the priority queue if they
    are higher. If they are lower, then they are members of a depression and the
    elevation of the flooding minus the elevation of the DEM times the cell area
    is the flooded volume of the cell. The cell is flooded, total volume
    tracked, and the neighbors are then added to a "depressions" queue which is
    used to flood depressions. Cells which are higher than a depression being
    filled are added to the priority queue. In this way, depressions are filled
    without incurring the expense of the priority queue.

  @param[in,out]  &elevations   A grid of cell elevations

  @pre
    1. **elevations** contains the elevations of every cell or a value _NoData_
       for cells not part of the DEM. Note that the _NoData_ value is assumed to
       be a negative number less than any actual data value.

  @return
    Returns the total volume of the flooded depressions.

  @correctness
    The correctness of this command is determined by inspection. (TODO)
*/
template <class elev_t>
double improved_priority_flood_volume(const Array2D<elev_t> &elevations){
GridCellZ_pq<elev_t> open;
std::queue<GridCellZ<elev_t> > pit;
uint64_t processed_cells = 0;
uint64_t pitc = 0;
ProgressBar progress;
std::cerr<<"\nPriority-Flood (Improved) Volume"<<std::endl;
std::cerr<<"\nC Barnes, R., Lehman, C., Mulla, D., 2014. Priority-flood: An optimal depression-filling and watershed-labeling algorithm for digital elevation models. Computers & Geosciences 62, 117–127. doi:10.1016/j.cageo.2013.04.024"<<std::endl;
std::cerr<<"p Setting up boolean flood array matrix..."<<std::endl;
//Used to keep track of which cells have already been considered
Array2D<int8_t> closed(elevations.width(),elevations.height(),false);
std::cerr<<"The priority queue will require approximately "
<<(elevations.width()*2+elevations.height()*2)*((long)sizeof(GridCellZ<elev_t>))/1024/1024
<<"MB of RAM."
<<std::endl;
std::cerr<<"p Adding cells to the priority queue..."<<std::endl;
//Add all cells on the edge of the DEM to the priority queue
for(int x=0;x<elevations.width();x++){
open.emplace(x,0,elevations(x,0) );
open.emplace(x,elevations.height()-1,elevations(x,elevations.height()-1) );
closed(x,0)=true;
closed(x,elevations.height()-1)=true;
}
for(int y=1;y<elevations.height()-1;y++){
open.emplace(0,y,elevations(0,y) );
open.emplace(elevations.width()-1,y,elevations(elevations.width()-1,y) );
closed(0,y)=true;
closed(elevations.width()-1,y)=true;
}
double volume = 0;
std::cerr<<"p Performing the improved Priority-Flood..."<<std::endl;
progress.start( elevations.size() );
while(open.size()>0 || pit.size()>0){
GridCellZ<elev_t> c;
if(pit.size()>0){
c=pit.front();
pit.pop();
} else {
c=open.top();
open.pop();
}
processed_cells++;
for(int n=1;n<=8;n++){
int nx=c.x+dx[n];
int ny=c.y+dy[n];
if(!elevations.inGrid(nx,ny)) continue;
if(closed(nx,ny))
continue;
closed(nx,ny)=true;
if(elevations(nx,ny)<=c.z){
if(elevations(nx,ny)<c.z){
++pitc;
volume += (c.z-elevations(nx,ny))*std::abs(elevations.getCellArea());
}
pit.emplace(nx,ny,c.z);
} else
open.emplace(nx,ny,elevations(nx,ny));
}
progress.update(processed_cells);
}
std::cerr<<"t Succeeded in "<<std::fixed<<std::setprecision(1)<<progress.stop()<<" s"<<std::endl;
std::cerr<<"m Cells processed = "<<processed_cells<<std::endl;
std::cerr<<"m Cells in pits = " <<pitc <<std::endl;
return volume;
}
template<class T>
int PerformAlgorithm(std::string analysis, Array2D<T> elevations){
elevations.loadData();
std::cout<<"Volume: "<<improved_priority_flood_volume(elevations)<<std::endl;
return 0;
}
int main(int argc, char **argv){
std::string analysis = PrintRichdemHeader(argc,argv);
if(argc!=2){
std::cerr<<argv[0]<<" <Input>"<<std::endl;
return -1;
}
return PerformAlgorithm(argv[1],analysis);
}
It should be straightforward to adapt this to whatever 2D array format you are using.
In pseudocode, the following is equivalent to the foregoing:
Let PQ be a priority queue which always pops the cell of lowest elevation
Let Closed be a boolean array initially set to False
Let Volume = 0
Add all the border cells to PQ.
For each border cell, set the cell's entry in Closed to True.
While PQ is not empty:
    Select the top cell from PQ, call it C.
    Pop the top cell from PQ.
    For each neighbor N of C:
        If Closed(N):
            Continue
        If Elevation(N) < Elevation(C):
            Volume += (Elevation(C) - Elevation(N)) * Area
            Add N to PQ, but with Elevation(C)
        Else:
            Add N to PQ with Elevation(N)
        Set Closed(N) = True
This problem is very close to the construction of the morphological watershed of a grayscale image.
One approach is as follows (flooding process):
Sort all pixels by increasing elevation.
Work incrementally, by increasing elevations, assigning labels to the pixels per catchment basin.
For a new elevation level, you need to label a new set of pixels:
Some have no labeled neighbor; they form a local minimum configuration and begin a new catchment basin.
Some have only neighbors with the same label; they can be labeled similarly (they extend a catchment basin).
Some have neighbors with different labels; they do not belong to a specific catchment basin and they define the watershed lines.
You will need to enhance the standard watershed algorithm to be able to compute the volume of water. You can do that by determining the maximum water level in each basin and deduce the ground height on every pixel. The water level in a basin is given by the elevation of the lowest watershed pixel around it.
You can act every time you discover a watershed pixel: if a neighboring basin has not been assigned a level yet, that basin can stand the current level without leaking.
In order to solve the trapping water problem in 3D, i.e. to calculate the maximum volume of trapped rain water, you can do something like this:
#include<bits/stdc++.h>
using namespace std;
#define MAX 10
int new2d[MAX][MAX];
int dp[MAX][MAX],visited[MAX][MAX];
int dx[] = {1,0,-1,0};
int dy[] = {0,-1,0,1};
int boundedBy(int i,int j,int k,int in11,int in22)
{
if(i<0 || j<0 || i>=in11 || j>=in22)
return 0;
if(new2d[i][j]>k)
return new2d[i][j];
if(visited[i][j]) return INT_MAX;
visited[i][j] = 1;
int r = INT_MAX;
for(int dir = 0 ; dir<4 ; dir++)
{
int nx = i + dx[dir];
int ny = j + dy[dir];
r = min(r,boundedBy(nx,ny,k,in11,in22));
}
return r;
}
void mark(int i,int j,int k,int in1,int in2)
{
if(i<0 || j<0 || i>=in1 || j>=in2)
return;
if(new2d[i][j]>=k)
return;
if(visited[i][j]) return ;
visited[i][j] = 1;
for(int dir = 0;dir<4;dir++)
{
int nx = i + dx[dir];
int ny = j + dy[dir];
mark(nx,ny,k,in1,in2);
}
dp[i][j] = max(dp[i][j],k);
}
struct node
{
int i,j,key;
node(int x,int y,int k)
{
i = x;
j = y;
key = k;
}
};
bool compare(node a,node b)
{
return a.key>b.key;
}
vector<node> store;
int getData(int input1, int input2, int input3[])
{
int row=input1;
int col=input2;
int temp=0;
int count=0;
for(int i=0;i<row;i++)
{
for(int j=0;j<col;j++)
{
if(count==(col*row))
break;
new2d[i][j]=input3[count];
count++;
}
}
store.clear();
for(int i = 0;i<input1;i++)
{
for(int j = 0;j<input2;j++)
{
store.push_back(node(i,j,new2d[i][j]));
}
}
memset(dp,0,sizeof(dp));
sort(store.begin(),store.end(),compare);
for(int i = 0;i<store.size();i++)
{
memset(visited,0,sizeof(visited));
int aux = boundedBy(store[i].i,store[i].j,store[i].key,input1,input2);
if(aux>store[i].key)
{
memset(visited,0,sizeof(visited));
mark(store[i].i,store[i].j,aux,input1,input2);
}
}
long long result =0 ;
for(int i = 0;i<input1;i++)
{
for(int j = 0;j<input2;j++)
{
result = result + max(0,dp[i][j]-new2d[i][j]);
}
}
return result;
}
int main()
{
cin.sync_with_stdio(false);
cout.sync_with_stdio(false);
int n,m;
cin>>n>>m;
int inp3[n*m];
store.clear();
for(int j = 0;j<n*m;j++)
{
cin>>inp3[j];
}
int k = getData(n,m,inp3);
cout<<k;
return 0;
}
class Solution(object):
    def trapRainWater(self, heightMap):
        """
        :type heightMap: List[List[int]]
        :rtype: int
        """
        m = len(heightMap)
        if m == 0:
            return 0
        n = len(heightMap[0])
        if n == 0:
            return 0
        visited = [[False for i in range(n)] for j in range(m)]
        from Queue import PriorityQueue
        q = PriorityQueue()
        for i in range(m):
            visited[i][0] = True
            q.put([heightMap[i][0], i, 0])
            visited[i][n-1] = True
            q.put([heightMap[i][n-1], i, n-1])
        for j in range(1, n-1):
            visited[0][j] = True
            q.put([heightMap[0][j], 0, j])
            visited[m-1][j] = True
            q.put([heightMap[m-1][j], m-1, j])
        S = 0
        while not q.empty():
            cell = q.get()
            for (i, j) in [(1,0), (-1,0), (0,1), (0,-1)]:
                x = cell[1] + i
                y = cell[2] + j
                if x in range(m) and y in range(n) and not visited[x][y]:
                    S += max(0, cell[0] - heightMap[x][y])  # how much water at the cell
                    q.put([max(heightMap[x][y], cell[0]), x, y])
                    visited[x][y] = True
        return S
Here is simple code for the same problem in its 2D form:
#include<iostream>
#include<algorithm>
using namespace std;
int main()
{
    int n, count = 0, a[100];
    cin >> n;
    for (int i = 0; i < n; i++)
    {
        cin >> a[i];
    }
    for (int i = 1; i < n - 1; i++)
    {
        /// compute the left-most largest and right-most largest elements of the array
        int leftmax = 0;
        int rightmax = 0;
        /// left-most largest (j >= 0 so the first bar is also considered)
        for (int j = i - 1; j >= 0; j--)
        {
            if (a[j] > leftmax)
            {
                leftmax = a[j];
            }
        }
        /// right-most largest
        for (int k = i + 1; k <= n - 1; k++)
        {
            if (a[k] > rightmax)
            {
                rightmax = a[k];
            }
        }
        /// compute the height of the water contained
        int x = (min(rightmax, leftmax) - a[i]);
        if (x > 0)
        {
            count = count + x;
        }
    }
    cout << count;
    return 0;
}

Shuffle list, ensuring that no item remains in same position

I want to shuffle a list of unique items, but not do an entirely random shuffle. I need to be sure that no element in the shuffled list is at the same position as in the original list. Thus, if the original list is (A, B, C, D, E), this result would be OK: (C, D, B, E, A), but this one would not: (C, E, A, D, B) because "D" is still the fourth item. The list will have at most seven items. Extreme efficiency is not a consideration. I think this modification to Fisher/Yates does the trick, but I can't prove it mathematically:
function shuffle(data) {
    for (var i = 0; i < data.length - 1; i++) {
        var j = i + 1 + Math.floor(Math.random() * (data.length - i - 1));
        var temp = data[j];
        data[j] = data[i];
        data[i] = temp;
    }
}
You are looking for a derangement of your entries.
First of all, your algorithm works in the sense that it outputs a random derangement, i.e. a permutation with no fixed point. However, it has an enormous flaw (which you might not mind, but is worth keeping in mind): some derangements cannot be obtained with your algorithm. In other words, it gives probability zero to some possible derangements, so the resulting distribution is definitely not uniformly random.
One possible solution, as suggested in the comments, would be to use a rejection algorithm:
pick a permutation uniformly at random
if it has no fixed points, return it
otherwise retry
Asymptotically, the probability of obtaining a derangement is close to 1/e ≈ 0.3679 (as seen in the Wikipedia article), which means that to obtain a derangement you will need to generate an average of e ≈ 2.718 permutations, which is quite costly.
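A hedged sketch of this whole-permutation rejection method (my own illustration; the function name and the use of std::shuffle are assumptions, not part of the answer):

#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

// Returns a uniformly random derangement of {0, 1, ..., n-1} by rejection:
// shuffle until no index stays in place. Expected number of attempts is about e ≈ 2.718.
// Precondition: n >= 2 (a single element has no derangement, so n == 1 would loop forever).
std::vector<int> randomDerangement(int n, std::mt19937 &rng) {
    std::vector<int> p(n);
    std::iota(p.begin(), p.end(), 0);
    bool hasFixedPoint = true;
    while (hasFixedPoint) {
        std::shuffle(p.begin(), p.end(), rng);
        hasFixedPoint = false;
        for (int i = 0; i < n; ++i)
            if (p[i] == i) { hasFixedPoint = true; break; }
    }
    return p;
}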
A better way to do that would be to reject at each step of the algorithm. In pseudocode, something like this (assuming the original array contains i at position i, i.e. a[i] == i):
for (i = 1 to n-1) {
    do {
        j = rand(i, n)  // random integer from i to n inclusive
    } while a[j] == i   // rejection part: retry if this would leave value i at position i
    swap a[i] a[j]
}
The main difference from your algorithm is that we allow j to be equal to i, but only if it does not produce a fixed point. It is slightly longer to execute (due to the rejection part) and demands that you be able to check whether an entry is at its original place, but it has the advantage that it can produce every possible derangement (uniformly, for that matter).
I am guessing non-rejection algorithms should exist, but I would believe them to be less straightforward.
Edit:
My algorithm is actually bad: you still have a chance of ending with the last point unshuffled, and the distribution is not random at all, as the marginal distributions of a simulation show (plot not reproduced here).
An algorithm that produces uniformly distributed derangements can be found here, with some context on the problem, thorough explanations and analysis.
Second Edit:
Actually your algorithm is known as Sattolo's algorithm, and is known to produce all cycles with equal probability. So any derangement which is not a cycle but a product of several disjoint cycles cannot be obtained with the algorithm. For example, with four elements, the permutation that exchanges 1 and 2, and 3 and 4 is a derangement but not a cycle.
If you don't mind obtaining only cycles, then Sattolo's algorithm is the way to go, it's actually much faster than any uniform derangement algorithm, since no rejection is needed.
As @FelixCQ has mentioned, the shuffles you are looking for are called derangements. Constructing uniformly randomly distributed derangements is not a trivial problem, but some results are known in the literature. The most obvious way to construct derangements is by the rejection method: you generate uniformly randomly distributed permutations using an algorithm like Fisher-Yates and then reject permutations with fixed points. The average running time of that procedure is e*n + o(n), where e is Euler's number 2.71828... That would probably work in your case.
The other major approach for generating derangements is to use a recursive algorithm. However, unlike Fisher-Yates, we have two branches to the algorithm: the last item in the list can be swapped with another item (i.e., part of a two-cycle), or can be part of a larger cycle. So at each step, the recursive algorithm has to branch in order to generate all possible derangements. Furthermore, the decision of whether to take one branch or the other has to be made with the correct probabilities.
Let D(n) be the number of derangements of n items. At each stage, the number of branches taking the last item to two-cycles is (n-1)D(n-2), and the number of branches taking the last item to larger cycles is (n-1)D(n-1). This gives us a recursive way of calculating the number of derangements, namely D(n)=(n-1)(D(n-2)+D(n-1)), and gives us the probability of branching to a two-cycle at any stage, namely (n-1)D(n-2)/D(n-1).
Now we can construct derangements by deciding to which type of cycle the last element belongs, swapping the last element to one of the n-1 other positions, and repeating. It can be complicated to keep track of all the branching, however, so in 2008 some researchers developed a streamlined algorithm using those ideas. You can see a walkthrough at http://www.cs.upc.edu/~conrado/research/talks/analco08.pdf . The running time of the algorithm is proportional to 2n + O(log^2 n), a 36% improvement in speed over the rejection method.
I have implemented their algorithm in Java. Using longs works for n up to 22 or so. Using BigIntegers extends the algorithm to n=170 or so. Using BigIntegers and BigDecimals extends the algorithm to n=40000 or so (the limit depends on memory usage in the rest of the program).
package io.github.edoolittle.combinatorics;
import java.math.BigInteger;
import java.math.BigDecimal;
import java.math.MathContext;
import java.util.Random;
import java.util.HashMap;
import java.util.TreeMap;
public final class Derangements {
// cache calculated values to speed up recursive algorithm
private static HashMap<Integer,BigInteger> numberOfDerangementsMap
= new HashMap<Integer,BigInteger>();
private static int greatestNCached = -1;
// load numberOfDerangementsMap with initial values D(0)=1 and D(1)=0
static {
numberOfDerangementsMap.put(0,BigInteger.valueOf(1));
numberOfDerangementsMap.put(1,BigInteger.valueOf(0));
greatestNCached = 1;
}
private static Random rand = new Random();
// private default constructor so class isn't accidentally instantiated
private Derangements() { }
public static BigInteger numberOfDerangements(int n)
throws IllegalArgumentException {
if (numberOfDerangementsMap.containsKey(n)) {
return numberOfDerangementsMap.get(n);
} else if (n>=2) {
// pre-load the cache to avoid stack overflow (occurs near n=5000)
for (int i=greatestNCached+1; i<n; i++) numberOfDerangements(i);
greatestNCached = n-1;
// recursion for derangements: D(n) = (n-1)*(D(n-1) + D(n-2))
BigInteger Dn_1 = numberOfDerangements(n-1);
BigInteger Dn_2 = numberOfDerangements(n-2);
BigInteger Dn = (Dn_1.add(Dn_2)).multiply(BigInteger.valueOf(n-1));
numberOfDerangementsMap.put(n,Dn);
greatestNCached = n;
return Dn;
} else {
throw new IllegalArgumentException("argument must be >= 0 but was " + n);
}
}
public static int[] randomDerangement(int n)
throws IllegalArgumentException {
if (n<2)
throw new IllegalArgumentException("argument must be >= 2 but was " + n);
int[] result = new int[n];
boolean[] mark = new boolean[n];
for (int i=0; i<n; i++) {
result[i] = i;
mark[i] = false;
}
int unmarked = n;
for (int i=n-1; i>=0; i--) {
if (unmarked<2) break; // can't move anything else
if (mark[i]) continue; // can't move item at i if marked
// use the rejection method to generate a random unmarked index j < i;
// this could be replaced by more straightforward technique
int j;
while (mark[j=rand.nextInt(i)]);
// swap two elements of the array
int temp = result[i];
result[i] = result[j];
result[j] = temp;
// mark position j as end of cycle with probability (u-1)D(u-2)/D(u)
double probability
= (new BigDecimal(numberOfDerangements(unmarked-2))).
multiply(new BigDecimal(unmarked-1)).
divide(new BigDecimal(numberOfDerangements(unmarked)),
MathContext.DECIMAL64).doubleValue();
if (rand.nextDouble() < probability) {
mark[j] = true;
unmarked--;
}
// position i now becomes out of play so we could mark it
//mark[i] = true;
// but we don't need to because loop won't touch it from now on
// however we do have to decrement unmarked
unmarked--;
}
return result;
}
// unit tests
public static void main(String[] args) {
// test derangement numbers D(i)
for (int i=0; i<100; i++) {
System.out.println("D(" + i + ") = " + numberOfDerangements(i));
}
System.out.println();
// test quantity (u-1)D_(u-2)/D_u for overflow, inaccuracy
for (int u=2; u<100; u++) {
double d = numberOfDerangements(u-2).doubleValue() * (u-1) /
numberOfDerangements(u).doubleValue();
System.out.println((u-1) + " * D(" + (u-2) + ") / D(" + u + ") = " + d);
}
System.out.println();
// test derangements for correctness, uniform distribution
int size = 5;
long reps = 10000000;
TreeMap<String,Integer> countMap = new TreeMap<String,Integer>();
System.out.println("Derangement\tCount");
System.out.println("-----------\t-----");
for (long rep = 0; rep < reps; rep++) {
int[] d = randomDerangement(size);
String s = "";
String sep = "";
if (size > 10) sep = " ";
for (int i=0; i<d.length; i++) {
s += d[i] + sep;
}
if (countMap.containsKey(s)) {
countMap.put(s,countMap.get(s)+1);
} else {
countMap.put(s,1);
}
}
for (String key : countMap.keySet()) {
System.out.println(key + "\t\t" + countMap.get(key));
}
System.out.println();
// large random derangement
int size1 = 1000;
System.out.println("Random derangement of " + size1 + " elements:");
int[] d1 = randomDerangement(size1);
for (int i=0; i<d1.length; i++) {
System.out.print(d1[i] + " ");
}
System.out.println();
System.out.println();
System.out.println("We start to run into memory issues around u=40000:");
{
// increase this number from 40000 to around 50000 to trigger
// out of memory-type exceptions
int u = 40003;
BigDecimal d = (new BigDecimal(numberOfDerangements(u-2))).
multiply(new BigDecimal(u-1)).
divide(new BigDecimal(numberOfDerangements(u)),MathContext.DECIMAL64);
System.out.println((u-1) + " * D(" + (u-2) + ") / D(" + u + ") = " + d);
}
}
}
In C++:
#include <cstdlib>
#include <vector>
#include <utility>

// Sattolo-style shuffle: each position is swapped with a strictly later one,
// so for two or more elements the result is a single cycle and no element
// stays at its original index.
template <class T> void shuffle(std::vector<T> &arr)
{
    int size = arr.size();
    for (auto i = 1; i < size; i++)
    {
        int n = rand() % (size - i) + i;
        std::swap(arr[i - 1], arr[n]);
    }
}
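A hypothetical usage example, assuming the template above is in the same translation unit; for two or more elements the single-cycle property guarantees no element keeps its original index:

#include <ctime>
#include <iostream>

int main() {
    srand(static_cast<unsigned>(time(nullptr)));
    std::vector<char> items = {'A', 'B', 'C', 'D', 'E'};
    shuffle(items);                  // every element moves away from its original position
    for (char c : items) std::cout << c << ' ';
    std::cout << '\n';
    return 0;
}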
