Time complexity analysis of UVa 539 - The Settlers of Catan - algorithm

Problem link: UVa 539 - The Settlers of Catan
(The UVa website occasionally goes down. Alternatively, you can read the problem statement PDF here: UVa External 539 - The Settlers of Catan)
This problem gives a small general graph and asks for the longest road. The longest road is defined as the longest path in the network that doesn't use any edge twice; nodes may be visited more than once, though.
Input Constraints:
1. Number of nodes: n (2 <= n <= 25)
2. Number of edges m (1 <= m <= 25)
3. Edges are undirected.
4. Each node has degree three or less.
5. The network is not necessarily connected.
Input is given in the format:
15 16
0 2
1 2
2 3
3 4
3 5
4 6
5 7
6 8
7 8
7 9
8 10
9 11
10 12
11 12
10 13
12 14
The first line gives the number of nodes n and the number of edges m for this test case. The next m lines describe the m edges. Each edge is given by the numbers of the two nodes it connects. Nodes are numbered from 0 to n - 1.
The above test can be visualized by the following picture:
Now, I know that finding the longest path in a general graph is NP-hard. But as the number of nodes and edges in this problem is small and there's a degree bound on each node, a brute-force solution (recursive backtracking) will be able to find the longest path within the given time limit (3.0 seconds).
My strategy to solve the problem was the following:
1. Run DFS (Depth First Search) from each node as the graph can be disconnected
2. When a node visits its neighbor, and that neighbor visits its neighbor and so on, mark the edges as used so that no edge can be used twice in the process
3. When the DFS routine backtracks toward the node where it began, mark the edges as unused again in the unwinding process
4. In each step, update the longest path length
My implementation in C++:
#include <algorithm>
#include <iostream>
#include <vector>

// this function adds an edge to the adjacency list
// we use this function to build the graph
void addEdgeToGraph(std::vector<std::vector<int>> &graph, int a, int b){
    graph[a].emplace_back(b);
    graph[b].emplace_back(a); // undirected graph
}

// returns true if the edge between a and b has already been used
bool isEdgeUsed(int a, int b, const std::vector<std::vector<char>> &edges){
    return edges[a][b] == '1' || edges[b][a] == '1'; // undirected graph, (a,b) and (b,a) are both valid edges
}

// this function incrementally marks edges when the "dfs" routine is called recursively
void markEdgeAsUsed(int a, int b, std::vector<std::vector<char>> &edges){
    edges[a][b] = '1';
    edges[b][a] = '1'; // order doesn't matter, the edge can be taken in either order [(a,b) or (b,a)]
}

// this function unmarks an edge when a node has processed all its neighbors
// this lets us reuse the edge in the future to find newer (and perhaps longer) paths
void unmarkEdge(int a, int b, std::vector<std::vector<char>> &edges){
    edges[a][b] = '0';
    edges[b][a] = '0';
}

int dfs(const std::vector<std::vector<int>> &graph, std::vector<std::vector<char>> &edges, int current_node, int current_length = 0){
    int pathLength = -1;
    for(int i = 0 ; i < graph[current_node].size() ; ++i){
        int neighbor = graph[current_node][i];
        if(!isEdgeUsed(current_node, neighbor, edges)){
            markEdgeAsUsed(current_node, neighbor, edges);
            int ret = dfs(graph, edges, neighbor, current_length + 1);
            pathLength = std::max(pathLength, ret);
            unmarkEdge(current_node, neighbor, edges);
        }
    }
    return std::max(pathLength, current_length);
}

int dfsFull(const std::vector<std::vector<int>> &graph){
    int longest_path = -1;
    for(int node = 0 ; node < graph.size() ; ++node){
        std::vector<std::vector<char>> edges(graph.size(), std::vector<char>(graph.size(), '0'));
        int pathLength = dfs(graph, edges, node);
        longest_path = std::max(longest_path, pathLength);
    }
    return longest_path;
}

int main(int argc, char const *argv[])
{
    int n, m;
    while(std::cin >> n >> m){
        if(!n && !m) break;
        std::vector<std::vector<int>> graph(n);
        for(int i = 0 ; i < m ; ++i){
            int a, b;
            std::cin >> a >> b;
            addEdgeToGraph(graph, a, b);
        }
        std::cout << dfsFull(graph) << '\n';
    }
    return 0;
}
I was wondering what the worst case for this problem is (I suspect it is n = 25 and m = 25), and, in the worst case, how many times in total the edges will be traversed. For example, for the following test case with 3 nodes and 2 edges:
3 2
0 1
1 2
The dfs routine will be called 3 times, and each time 2 edges will be visited. So in total the edges will be visited 2 x 3 = 6 times. Is there any way to find the upper bound on the total number of edge traversals in the worst case?
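One way to get a feel for this empirically (not a proof of the bound) is to instrument the DFS with a global traversal counter and run it on candidate worst-case graphs. This is only a sketch of that idea; the names dfsCounting and edgeTraversals are mine, not part of the solution above:

#include <algorithm>
#include <iostream>
#include <vector>

// Hypothetical instrumentation: the same backtracking DFS as above, with a
// global counter incremented every time an edge is taken.
static long long edgeTraversals = 0;

int dfsCounting(const std::vector<std::vector<int>> &graph,
                std::vector<std::vector<char>> &used,
                int node, int length = 0){
    int best = length;
    for(int neighbor : graph[node]){
        if(used[node][neighbor] == '0'){
            used[node][neighbor] = used[neighbor][node] = '1';
            ++edgeTraversals; // one more edge traversal
            best = std::max(best, dfsCounting(graph, used, neighbor, length + 1));
            used[node][neighbor] = used[neighbor][node] = '0';
        }
    }
    return best;
}

int main(){
    // the 3-node path from the question: edges (0,1) and (1,2)
    std::vector<std::vector<int>> graph{{1}, {0, 2}, {1}};
    for(int start = 0; start < (int)graph.size(); ++start){
        std::vector<std::vector<char>> used(graph.size(), std::vector<char>(graph.size(), '0'));
        dfsCounting(graph, used, start);
    }
    std::cout << edgeTraversals << '\n'; // prints 6 for this example
    return 0;
}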

Related

Print adjacency list

I've just started with graphs and was printing an adjacency list using a vector of pairs and an unordered_map. When I test my code against random custom inputs it matches the expected results, but when I submit it to the online judge it gives me a segmentation fault.
Problem:
Given the number of edges 'E' and vertices 'V' of a bidirectional graph, your task is to build the graph as an adjacency list and print the adjacency list for each vertex.
Input:
The first line of input is T, denoting the number of test cases. Then the first line of each of the T test cases contains two positive integers V and E, where 'V' is the number of vertices and 'E' is the number of edges in the graph. The next 'E' lines each contain two positive numbers showing that there is an edge between those two vertices.
Output:
For each vertex, print its connected nodes in the order you push them into the list.
#include<iostream>
#include<unordered_map>
#include<vector>
#include<algorithm>
using namespace std;

int main()
{
    int T;
    cin>>T;
    while(T--){
        int nv,ne;
        cin>>nv>>ne;
        vector<pair<int,int>> vect;
        for(int i=0;i<ne;i++)
        {
            int k,l;
            cin>>k>>l;
            vect.push_back( make_pair(k,l) );
        }
        unordered_map<int,vector<int>> umap;
        for(int i=0;i<ne;i++)
        {
            umap[vect[i].first].push_back(vect[i].second);
            umap[vect[i].second].push_back(vect[i].first);
        }
        for(int i=0;i<nv;i++)
        {
            sort(umap[i].begin(),umap[i].end());
        }
        int j=0;
        for(int i=0;i<nv;i++)
        {
            cout<<i<<"->"<<" ";
            for( j=0;j<umap[i].size()-1;j++)
            {
                cout<<umap[i][j]<<"->"<<" ";
            }
            cout<<umap[i][j];
            cout<<"\n";
        }
    }
    return 0;
}
Example:
Input:
1
5 7
0 1
0 4
1 2
1 3
1 4
2 3
3 4
Output:
0-> 1-> 4
1-> 0-> 2-> 3-> 4
2-> 1-> 3
3-> 1-> 2-> 4
4-> 0-> 1-> 3
"segmentation fault" is caused when a vertex has no edges. I do not see any constraints that all vertices have at least one edge in the problem description. For example, let's consider this input.
1
3 1
0 1
Here vertex 2 does not have any edges. Let's take a look at what happens in the printing loop.
for(int i=0;i<nv;i++)
{
    cout<<i<<"->"<<" ";
    for( j=0;j<umap[i].size()-1;j++)
    {
        cout<<umap[i][j]<<"->"<<" ";
    }
    cout<<umap[i][j];
    cout<<"\n";
}
umap[i].size()-1 is dangerous: vector<T>::size() returns an unsigned integer, so if the size is 0 then 0-1 underflows.
Even if the first issue were solved (with something like (int)umap[i].size()-1), the following line cout<<umap[i][j]; would try to print umap[i][0], which is invalid when the size is 0.
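A minimal sketch of the first issue in isolation (the exact value printed depends on the platform's size_t width):

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v;                  // empty: v.size() == 0
    std::cout << v.size() - 1 << "\n";   // size() is unsigned, so 0 - 1 wraps around
                                         // to a huge value instead of -1
    return 0;
}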
So I would change that code like:
for(int i=0;i<nv;i++)
{
    cout<<i;
    for( j=0;j<umap[i].size();j++)
    {
        cout<<"->"<<" "<<umap[i][j];
    }
    cout<<"\n";
}

BST - interval deletion/multiple nodes deletion

Suppose I have a binary search tree in which I'm supposed to insert N unique-numbered keys in the order given to me on standard input; then I am to delete all nodes with keys in the interval I = [min,max], along with all edges adjacent to these nodes. This leaves a number of smaller trees that I am to merge together in a particular way. A more precise description of the problem:
Given a BST, which contains distinct keys, and interval I, the interval deletion works in two phases. During the first phase it removes all nodes whose key is in I and all edges adjacent to the removed nodes. Let the resulting graph contain k connected components T1,...,Tk. Each of the components is a BST where the root is the node with the smallest depth among all nodes of this component in the original BST. We assume that the sequence of trees Ti is sorted so that for each i < j all keys in Ti are smaller than keys in Tj. During the second phase, trees Ti are merged together to form one BST. We denote this operation by Merge(T1,...,Tk). Its output is defined recurrently as follows:
EDIT: I am also supposed to delete any edge that connects nodes that are separated by the given interval, meaning in example 2 the edge connecting nodes 10 and 20 is deleted because the interval [13,15] is 'in between' them, thus separating them.
For an empty sequence of trees, Merge() gives an empty BST.
For a one-element sequence containing a tree T, Merge(T) = T.
For a sequence of trees T1,...,Tk where k > 1, let A1 < A2 < ... < An be the sequence of keys stored in the union of all trees T1,...,Tk, sorted in ascending order. Moreover, let m = ⌊(1+n)/2⌋ and let Ts be the tree which contains Am. Then, Merge(T1,...,Tk) gives a tree T created by merging three trees Ts, TL = Merge(T1,...,Ts-1) and TR = Merge(Ts+1,...,Tk). These trees are merged by establishing the following two links: TL is appended as the left subtree of the node storing the minimal key of Ts, and TR is appended as the right subtree of the node storing the maximal key of Ts.
After I do this, my task is to find the depth D of the resulting merged tree and the number of nodes at depth D-1. My program should finish within a few seconds even for a tree with hundreds of thousands of nodes (the 4th example).
My problem is that I haven't got a clue on how to do this or even where to start. I managed to construct the desired tree before deletion, but that's about it.
I'd be grateful for an implementation of a program to solve this, or any advice at all, preferably in some C-ish programming language.
examples:
input (the first number is the number of keys to be inserted into the empty tree, the second line contains the unique keys to be inserted in the order given, and the third line contains two numbers giving the interval to be deleted):
13
10 5 8 6 9 7 20 15 22 13 17 16 18
8 16
correct output of the program: 3 3, the first number being the depth D, the second the number of nodes at depth D-1
input:
13
10 5 8 6 9 7 20 15 22 13 17 16 18
13 15
correct output: 4 3
pictures of the two examples
example 3: https://justpaste.it/1du6l
correct output: 13 6
example 4: link
correct output: 58 9
This is a big answer, so I'll talk at a high level. Please examine the source for details, or ask in a comment for clarification.
Global Variables :
vector<Node*> roots : To store roots of all new trees.
map<Node*,int> smap : for each new tree, stores its size
vector<int> prefix : prefix sum of roots vector, for easy binary search in merge
Functions:
inorder : finds the size of a BST (all calls combined are O(N))
delInterval : The main idea is: if the root isn't within the interval, both of its children might be roots of new trees. The last two ifs check for the special edge case from your edit. Do this for every node, post-order. (O(N))
merge : Merges all new roots positioned from index start to end in roots. First we find the total number of members of the new tree (using the prefix sum of root sizes, i.e. prefix). mid denotes m in your question. ind is the index of the root that contains the mid-th node; we retrieve it into the root variable. Now recursively build the left/right subtrees and attach them at the left-most/right-most nodes. O(N) complexity.
traverse : in the level map, compute the number of nodes at every depth of the tree. (O(N·logN); an unordered_map would make it O(N))
Now the code (Don't panic!!!):
#include <bits/stdc++.h>
using namespace std;
int N = 12;
struct Node
{
Node* parent=NULL,*left=NULL,*right = NULL;
int value;
Node(int x,Node* par=NULL) {value = x;parent = par;}
};
void insert(Node* root,int x){
if(x<root->value){
if(root->left) insert(root->left,x);
else root->left = new Node(x,root);
}
else{
if(root->right) insert(root->right,x);
else root->right = new Node(x,root);
}
}
int inorder(Node* root){
if(root==NULL) return 0;
int l = inorder(root->left);
return l+1+inorder(root->right);
}
vector<Node*> roots;
map<Node*,int> smap;
vector<int> prefix;
Node* delInterval(Node* root,int x,int y){
if(root==NULL) return NULL;
root->left = delInterval(root->left,x,y);
root->right = delInterval(root->right,x,y);
if(root->value<=y && root->value>=x){
if(root->left) roots.push_back(root->left);
if(root->right) roots.push_back(root->right);
return NULL;
}
if(root->value<x && root->right && root->right->value>y) {
roots.push_back(root->right);
root->right = NULL;
}
if(root->value>y && root->left && root->left->value<x) {
roots.push_back(root->left);
root->left = NULL;
}
return root;
}
Node* merge(int start,int end){
if(start>end) return NULL;
if(start==end) return roots[start];
int total = prefix[end] - (start>0?prefix[start-1]:0);//make sure u get this line
int mid = (total+1)/2 + (start>0?prefix[start-1]:0); //or this won't make sense
int ind = lower_bound(prefix.begin(),prefix.end(),mid) - prefix.begin();
Node* root = roots[ind];
Node* TL = merge(start,ind-1);
Node* TR = merge(ind+1,end);
Node* temp = root;
while(temp->left) temp = temp->left;
temp->left = TL;
temp = root;
while(temp->right) temp = temp->right;
temp->right = TR;
return root;
}
void traverse(Node* root,int depth,map<int, int>& level){
if(!root) return;
level[depth]++;
traverse(root->left,depth+1,level);
traverse(root->right,depth+1,level);
}
int main(){
srand(time(NULL));
cin>>N;
int* arr = new int[N],start,end;
for(int i=0;i<N;i++) cin>>arr[i];
cin>>start>>end;
Node* tree = new Node(arr[0]); //Building initial tree
for(int i=1;i<N;i++) {insert(tree,arr[i]);}
Node* x = delInterval(tree,start,end); //deleting the interval
if(x) roots.push_back(x);
//sort the disconnected roots, and find their size
sort(roots.begin(),roots.end(),[](Node* r,Node* v){return r->value<v->value;});
for(auto& r:roots) {smap[r] = inorder(r);}
prefix.resize(roots.size()); //prefix sum root sizes, to cheaply find 'root' in merge
prefix[0] = smap[roots[0]];
for(int i=1;i<roots.size();i++) prefix[i]= smap[roots[i]]+prefix[i-1];
Node* root = merge(0,roots.size()-1); //merge all trees
map<int, int> level; //key=depth, value = no of nodes in depth
traverse(root,0,level); //find number of nodes in each depth
int depth = level.rbegin()->first; //access last element's key i.e total depth
int at_depth_1 = level[depth-1]; //no of nodes before
cout<<depth<<" "<<at_depth_1<<endl; //hoorray
return 0;
}

How to get minimum number of moves to solve `game of fifteen`?

I was reading about this and thought to form an algorithm to find the minimum number of moves to solve this.
Constraints I made: an N x N matrix with one empty slot, say 0, is given containing the numbers 0 to N*N-1.
Now we have to recreate this matrix and form the matrix that has the numbers in increasing order from left to right, beginning from the top row, with the last element, i.e. the (N x N)-th element, being 0.
For example,
Input :
8 4 0
7 2 5
1 3 6
Output:
1 2 3
4 5 6
7 8 0
Now the problem is how to do this in the minimum number of steps possible.
As in the game (link provided), you can move left, right, up or down, shifting the 0 (empty slot) to the corresponding position to reach the final matrix.
The output to be printed for this algorithm is the number of steps, say M, and then the tile (number) moved in each direction: 1 for swapping with the upper adjacent element, 2 for the lower adjacent element, 3 for the left adjacent element and 4 for the right adjacent element.
Like, for
2 <--- order of N X N matrix
3 1
0 2
The answer should be: 3 4 1 2, where 3 is M and 4 1 2 are the tile movement steps.
So I have to minimise the complexity of this algorithm and find the minimum number of moves. Please suggest the most efficient approach to this problem.
Edit:
What I coded in C++ (please look at the algorithm rather than pointing out other issues in the code):
#include <bits/stdc++.h>
using namespace std;
int inDex=0,shift[100000],N,initial[500][500],final[500][500];
struct Node
{
Node* parent;
int mat[500][500];
int x, y;
int cost;
int level;
};
Node* newNode(int mat[500][500], int x, int y, int newX,
int newY, int level, Node* parent)
{
Node* node = new Node;
node->parent = parent;
memcpy(node->mat, mat, sizeof node->mat);
swap(node->mat[x][y], node->mat[newX][newY]);
node->cost = INT_MAX;
node->level = level;
node->x = newX;
node->y = newY;
return node;
}
int row[] = { 1, 0, -1, 0 };
int col[] = { 0, -1, 0, 1 };
int calculateCost(int initial[500][500], int final[500][500])
{
int count = 0;
for (int i = 0; i < N; i++)
for (int j = 0; j < N; j++)
if (initial[i][j] && initial[i][j] != final[i][j])
count++;
return count;
}
int isSafe(int x, int y)
{
return (x >= 0 && x < N && y >= 0 && y < N);
}
struct comp
{
bool operator()(const Node* lhs, const Node* rhs) const
{
return (lhs->cost + lhs->level) > (rhs->cost + rhs->level);
}
};
void solve(int initial[500][500], int x, int y,
int final[500][500])
{
priority_queue<Node*, std::vector<Node*>, comp> pq;
Node* root = newNode(initial, x, y, x, y, 0, NULL);
Node* prev = newNode(initial,x,y,x,y,0,NULL);
root->cost = calculateCost(initial, final);
pq.push(root);
while (!pq.empty())
{
Node* min = pq.top();
if(min->x > prev->x)
{
shift[inDex] = 4;
inDex++;
}
else if(min->x < prev->x)
{
shift[inDex] = 3;
inDex++;
}
else if(min->y > prev->y)
{
shift[inDex] = 2;
inDex++;
}
else if(min->y < prev->y)
{
shift[inDex] = 1;
inDex++;
}
prev = pq.top();
pq.pop();
if (min->cost == 0)
{
cout << min->level << endl;
return;
}
for (int i = 0; i < 4; i++)
{
if (isSafe(min->x + row[i], min->y + col[i]))
{
Node* child = newNode(min->mat, min->x,
min->y, min->x + row[i],
min->y + col[i],
min->level + 1, min);
child->cost = calculateCost(child->mat, final);
pq.push(child);
}
}
}
}
int main()
{
cin >> N;
int i,j,k=1;
for(i=0;i<N;i++)
{
for(j=0;j<N;j++)
{
cin >> initial[j][i];
}
}
for(i=0;i<N;i++)
{
for(j=0;j<N;j++)
{
final[j][i] = k;
k++;
}
}
final[N-1][N-1] = 0;
int x = 0, y = 1,a[100][100];
solve(initial, x, y, final);
for(i=0;i<inDex;i++)
{
cout << shift[i] << endl;
}
return 0;
}
In the above code I am checking, for each child node, which one has the minimum cost (how many numbers are misplaced relative to the final matrix).
I want to make this algorithm more efficient and reduce its time complexity. Any suggestions would be appreciated.
While this sounds a lot like a homework problem, I'll lend a bit of help.
For significantly small problems, like your 2x2 or 3x3, you can just brute force it. Basically, you do every possible combination with every possible move, track how many turns each took, and then print out the smallest.
To improve on this, maintain a list of states you have already reached, and any time you make a possible move, if that state has already been seen, stop pursuing that branch since it can't possibly be the smallest.
Example, say I'm in this state (flattening your matrix to a string for ease of display):
5736291084
6753291084
5736291084
Notice that we're back to a state we've seen before. That means it can't possibly be the shortest sequence of moves, because the shortest would be achieved without returning to a previous state.
You'll want to create a tree doing this, so you'd have something like:
134
529
870
/ \
/ \
/ \
/ \
134 134
529 520
807 879
/ | \ / | \
/ | X / X \
134 134 134 134 134 130
509 529 529 502 529 524
827 087 870 879 870 879
And so on. Notice I marked some with X because they were duplicates, and thus we wouldn't want to pursue them any further since we know they can't be the smallest.
You'd just keep repeating this until you've tried all possible solutions (i.e., all non-stopped leaves reach a solution), then you just see which was the shortest. You could also do it in parallel so you stop once any one has found a solution, saving you time.
This brute force approach won't be effective against large matrices. To solve those, you're looking at some serious software engineering. One approach you could take with it would be to break it into smaller matrices and solve that way, but that may not be the best path.
This is a tricky problem to solve at larger values, and is up there with some of the trickier NP problems out there.
Start from the solution, determine ranks of permutation
The reverse of the above is how you can pre-generate a list of all possible values.
Start with the solution. That has a rank of permutation of 0 (as in, zero moves):
012
345
678
Then, make all possible moves from there. All of those moves have rank of permutation of 1, as in, one move to solve.
012
0 345
678
/ \
/ \
/ \
102 312
1 345 045
678 678
Repeat that as above. Every state on a given level has the same rank of permutation. Generate all possible moves (in this case, until all of your branches are killed off as duplicates).
You can then store all of them into an object. Flattening the matrix would make this easy (using JavaScript syntax just for example):
{
'012345678': 0,
'102345678': 1,
'312045678': 1,
'142305678': 2,
// and so on
}
Then, to solve your question "minimum number of moves", just find the entry that is the same as your starting point. The rank of permutation is the answer.
This would be a good solution if you are in a scenario where you can pre-generate the entire solution. It would take time to generate, but lookups would be lightning fast (this is similar to "rainbow tables" for cracking hashes).
If you must solve on the fly (without pre-generation), then the first approach, searching move by move from your starting state until you reach the solution, would be better.
While the maximum complexity is O(n!), there are only O(n^2) possible solutions. Chopping off duplicates from the tree as you go, your complexity will be somewhere in between those two, probably in the neighborhood of O(n^3) ~ O(2^n)
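For the 3x3 case the pre-generation idea can be sketched directly. This is only a rough illustration, assuming the flattened-string representation used above ("012345678" is the solved board and '0' is the blank); buildTable and the other names are mine:

#include <iostream>
#include <map>
#include <queue>
#include <string>
#include <utility>

// BFS from the solved state; dist[s] is the "rank of permutation" of state s,
// i.e. the minimum number of moves needed to solve it.
std::map<std::string, int> buildTable(){
    const std::string solved = "012345678";
    std::map<std::string, int> dist{{solved, 0}};
    std::queue<std::string> q;
    q.push(solved);
    const int dr[] = {-1, 1, 0, 0}, dc[] = {0, 0, -1, 1};
    while(!q.empty()){
        std::string s = q.front(); q.pop();
        int z = (int)s.find('0');            // position of the blank
        int r = z / 3, c = z % 3;
        for(int k = 0; k < 4; ++k){
            int nr = r + dr[k], nc = c + dc[k];
            if(nr < 0 || nr >= 3 || nc < 0 || nc >= 3) continue;
            std::string t = s;
            std::swap(t[z], t[nr * 3 + nc]); // slide one tile into the blank
            if(!dist.count(t)){              // prune duplicates, as described above
                dist[t] = dist[s] + 1;
                q.push(t);
            }
        }
    }
    return dist;                             // 9!/2 = 181440 reachable states
}

int main(){
    std::map<std::string, int> table = buildTable();
    // look up the scrambled board from the question: 8 4 0 / 7 2 5 / 1 3 6
    std::string start = "840725136";
    if(table.count(start))
        std::cout << table[start] << '\n';   // its minimum number of moves
    else
        std::cout << "unsolvable from 012345678\n";
    return 0;
}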
You can use BFS.
Each state is one vertex, and there is an edge between two vertices if they can transfer to each other.
For example
8 4 0
7 2 5
1 3 6
and
8 0 4
7 2 5
1 3 6
are connected.
Usually, you want some number to represent your current state. For a small grid, you can just use the sequence of digits. For example,
8 4 0
7 2 5
1 3 6
is just 840725136.
If the grid is large, you may consider using the rank of the permutation of the numbers as your representation of the state. For example,
0 1 2
3 4 5
6 7 8
should be 0, as it is the first permutation.
And
0 1 2
3 4 5
6 7 8
(which is represented by 0)
and
1 0 2
3 4 5
6 7 8
(which is represented by some other number X)
being connected is the same as 0 and X being connected in the graph.
The complexity of the algo should be O(n!) as there are at most n! vertices/permutations.
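If you do want a single integer per state, here is a sketch of one standard way to compute the rank of a permutation (the Lehmer code / factorial number system); the function name permutationRank is mine, and the board is assumed to be flattened into a vector first:

#include <cstdint>
#include <iostream>
#include <vector>

// Lexicographic rank of a permutation of 0..n-1 via the factorial number
// system: rank 0 is the sorted arrangement 0 1 2 / 3 4 5 / 6 7 8 above,
// and a 3x3 board always fits in 0 .. 9!-1 = 362879.
std::uint64_t permutationRank(const std::vector<int> &perm){
    std::uint64_t rank = 0;
    int n = (int)perm.size();
    for(int i = 0; i < n; ++i){
        int smaller = 0;                     // later entries smaller than perm[i]
        for(int j = i + 1; j < n; ++j)
            if(perm[j] < perm[i]) ++smaller;
        rank = rank * (n - i) + smaller;     // accumulate factorial-base digits
    }
    return rank;
}

int main(){
    std::cout << permutationRank({0, 1, 2, 3, 4, 5, 6, 7, 8}) << '\n'; // 0
    std::cout << permutationRank({1, 0, 2, 3, 4, 5, 6, 7, 8}) << '\n'; // 40320 (the "X" above, under this ranking)
    return 0;
}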

DFS algorithm to detect cycles in graph

#include <iostream>
#include <vector>
#include <stack>
using namespace std;
class Graph{
public:
vector<int> adjList[10001];
void addEdge(int u,int v){
adjList[u].push_back(v);
adjList[v].push_back(u);
}
};
bool dfs(Graph graph, int n){
vector<int> neighbors;
int curr,parent;
bool visited[10001] = {0};
stack<int> s;
//Depth First Search
s.push(1);
parent = 0;
while(!s.empty()){
curr = s.top();
neighbors = graph.adjList[curr];
s.pop();
//If current is unvisited
if(visited[curr] == false){
for(int j=0; j<neighbors.size(); j++){
//If node connected to itself, then cycle exists
if(neighbors[j] == curr){
return false;
}
else if(visited[neighbors[j]] == false){
s.push(neighbors[j]);
}
//If the neighbor is already visited, and it is not a parent, then cycle is detected
else if(visited[neighbors[j]] == true && neighbors[j] != parent){
return false;
}
}
//Mark as visited
visited[curr] = true;
parent = curr;
}
}
//Checking if graph is fully connected
for(int i=1; i<=n; i++){
if(visited[i] == false){
return false;
}
}
//Only if there are no cycles, and it's fully connected, it's a tree
return true;
}
int main() {
int m,n,u,v;
cin>>n>>m;
Graph graph = Graph();
//Build the graph
for(int edge=0; edge<m; edge++){
cin>>u>>v;
graph.addEdge(u,v);
}
if(dfs(graph,n)){
cout<<"YES"<<endl;
}
else{
cout<<"NO"<<endl;
}
return 0;
}
I am trying to determine if a given graph is a tree.
I perform DFS and look for cycles, if a cycle is detected, then the given graph is not a tree.
Then I check if all nodes have been visited, if any node is not visited, then given graph is not a tree
The first line of input is:
n m
Then m lines follow, which represent the edges connecting two nodes
n is number of nodes
m is number of edges
example input:
3 2
1 2
2 3
This is a SPOJ question http://www.spoj.com/problems/PT07Y/ and I am getting Wrong Answer. But the DFS seems to be correct according to me.
So I checked your code against some simple test cases in comments, and it seems that for
7 6
3 1
3 2
2 4
2 5
1 6
1 7
you should get YES as answer, while your program gives NO.
This is what the adjacency lists look like in this case:
1: 3 6 7
2: 3 4 5
3: 1 2
4: 2
5: 2
6: 1
7: 1
So when you visit 1 you push 3,6,7 on the stack. Your parent is set as 1. This is all going good.
You pop 7 from the stack, you don't push anything onto the stack, and the cycle check passes, so at the end of that iteration you set visited[7] to true and set your parent to 7 (!!!!!).
Here you can see this is not going well: once you pop 6 from the stack you have 7 saved as the parent, and it should be 1. This makes the cycle check fail on neighbors[0] != parent.
I'd suggest keeping each node's parent alongside it (for example by pushing (node, parent) pairs onto the stack, as in the sketch below), or detecting cycles with union-find.
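A sketch of that first suggestion, pushing (node, parent) pairs onto the stack so every node remembers which edge it was discovered through; isTree and the rest of the naming are mine, and the adjacency list is built exactly as in the question:

#include <iostream>
#include <stack>
#include <utility>
#include <vector>

// Iterative DFS with per-node parents. A visited neighbour that is not the
// node's own discovery parent closes a cycle.
bool isTree(const std::vector<std::vector<int>> &adjList, int n){
    std::vector<bool> visited(n + 1, false);
    std::stack<std::pair<int,int>> s;        // (node, the node it was reached from)
    s.push(std::make_pair(1, 0));
    while(!s.empty()){
        int curr = s.top().first;
        int parent = s.top().second;
        s.pop();
        if(visited[curr]) continue;          // stale stack entry
        visited[curr] = true;
        for(int j = 0; j < (int)adjList[curr].size(); ++j){
            int next = adjList[curr][j];
            if(!visited[next]) s.push(std::make_pair(next, curr));
            else if(next != parent) return false;   // back edge -> cycle
        }
    }
    for(int i = 1; i <= n; ++i)              // every node must be reachable
        if(!visited[i]) return false;
    return true;
}

int main(){
    // the 7-node example from this answer; expected output: YES
    int n = 7;
    std::vector<std::vector<int>> adjList(n + 1);
    int edges[6][2] = {{3,1},{3,2},{2,4},{2,5},{1,6},{1,7}};
    for(int e = 0; e < 6; ++e){
        adjList[edges[e][0]].push_back(edges[e][1]);
        adjList[edges[e][1]].push_back(edges[e][0]);
    }
    std::cout << (isTree(adjList, n) ? "YES" : "NO") << '\n';
    return 0;
}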

As I read, Dijkstra's algorithm fails for negative edges, but I implemented the same concept and my code is working? Is there some bug?

The code seems to work for negative edges too, and I have used a priority queue.
Please check it and let me know what's wrong with it, and why it works even for negative edges.
Constraint: edge weights are less than 10000.
Am I doing something wrong here?
#include<iostream>
#include<queue>
using namespace std;
struct reach
{
int n;
int weight;
};
struct cmp
{
bool operator()(reach a, reach b)
{
return a.weight>b.weight;
}
};
class sp
{
int *dist;
int n;
int **graph;
int src;
public:
sp(int y)
{
n=y;
src=1;
dist=new int[n+1];
graph=new int*[n+1];
for(int i=0;i<=n;i++)
{
graph[i]=new int[n+1];
}
for(int i=2;i<=n;i++)
{
dist[i]=10000;
}
// cout<<INT_MAX<<endl;
dist[src]=0;
for(int i=1;i<=n;i++)
{
for(int j=1;j<=n;j++)
{
graph[i][j]=10000;
}
}
graph[1][1]=0;
}
void read()
{
int a;
cout<<"enter number of edges"<<endl;
cin>>a;
cout<<"now enter the two vertices which has an edge followed by the weight of the edge"<<endl;
while(a--)
{//cout<<"location: "<<i<<" : "<<j<<endl;
int as, ad,val;
cin>>as>>ad>>val;
graph[as][ad]=val;
}
}
void finder()
{cout<<"enetered"<<endl;
priority_queue<reach, vector<reach>, cmp> q;
for(int i=1;i<=n;i++)
{
if(dist[src]+graph[src][i]<dist[i])
{
reach temp;
temp.n=i;
cout<<i<<endl;
temp.weight=graph[src][i];
q.push(temp);
dist[i]=graph[src][i]+dist[src];
}
}
while(q.empty()!=1)
{
reach now =q.top();
//cout<<" we have here: "<<now.n<<endl;
q.pop();
for(int i=1;i<=n;i++)
{
if((dist[now.n] + graph[now.n][i])<dist[i])
{
dist[i]=dist[now.n]+graph[now.n][i];
cout<<"it is: "<<i<<" : "<<dist[i]<<endl;
reach temp;
temp.n=i;
//cout<<temp.n<<endl;
temp.weight=graph[now.n][i];
q.push(temp);
}
}
}
}
void print()
{
for(int i=1;i<=n;i++)
{
cout<<"we have: "<<dist[i]<<" at "<<i;
cout<<endl;
}
cout<<endl;
}
};
int main()
{cout<<"enter no. of vertices"<<endl;
int n;
cin>>n;
sp sp(n);
sp.read();
sp.finder();
sp.print();
}
Consider this example:
Undirected Graph(6v,8e)
Edges (v1-v2 (weight)):
1-2 (2)
2-3 (1)
1-3 (-2)
1-6 (-2)
3-6 (-3)
5-6 (1)
4-5 (2)
2-5 (1)
Now, let the source be 1 and the destination be 4. There is a cycle from the source back to the source (1-3-6-1) which weighs -7. So, let's list a few paths from source to destination with their weights:
1-6-5-4 (1)
1-3-6-5-4 (-2)
1-3-6-1-3-6-5-4 (-9)
1-3-6-1-3-6-1-3-6-5-4 (-16)
etc.
So, which path is the shortest? Since there is no well-defined answer (the weight keeps decreasing), the algorithm does not work. Now, you can argue that if a node is already visited you will not update it; in that case you will not get the correct answer. Maybe there are some cases where, in spite of having negative edges, the algorithm gives correct results, but this is not how Dijkstra is guaranteed to work.
A really simple way to understand Dijkstra is that it performs a BFS-like traversal of the graph from the source, and at every step it updates the distances of the visited nodes. So, if there is a node n which has cost c and, a few levels deeper in the search, its cost becomes k (< c), then you have to update all the nodes reached from n with their shorter paths (because the path to n is now shorter). Since the graph has negative edges, if it has a negative cycle, n will keep being updated and the process will never end.
The simplest graph for which Dijkstra's algorithm fails with negative weights has adjacency matrix
0 1 2
1 0 -3
2 -3 0
and looks for a route from vertex 0 to vertex 1. The first vertex to come off the priority queue is vertex 1, at distance 1, so that's the route returned. But there was a route of total weight -1 via vertex 2, which is still in the priority queue with tentative distance 2.
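A small sketch of that counterexample in code, assuming a textbook Dijkstra that finalizes a vertex the first time it leaves the priority queue (variable names are mine):

#include <functional>
#include <iostream>
#include <queue>
#include <utility>
#include <vector>

int main(){
    // the adjacency matrix above: 0-1 weight 1, 0-2 weight 2, 1-2 weight -3
    std::vector<std::vector<int>> w = {{0, 1, 2}, {1, 0, -3}, {2, -3, 0}};
    const int n = 3, INF = 1000000;

    std::vector<int> dist(n, INF);
    std::vector<bool> done(n, false);
    dist[0] = 0;
    std::priority_queue<std::pair<int,int>,
                        std::vector<std::pair<int,int>>,
                        std::greater<std::pair<int,int>>> pq; // (distance, vertex)
    pq.push(std::make_pair(0, 0));
    while(!pq.empty()){
        int u = pq.top().second;
        pq.pop();
        if(done[u]) continue;
        done[u] = true;                      // finalized: never updated again
        for(int v = 0; v < n; ++v){
            if(v != u && !done[v] && dist[u] + w[u][v] < dist[v]){
                dist[v] = dist[u] + w[u][v];
                pq.push(std::make_pair(dist[v], v));
            }
        }
    }
    std::cout << "Dijkstra: dist(0,1) = " << dist[1] << '\n';     // prints 1
    std::cout << "But path 0 -> 2 -> 1 weighs " << w[0][2] + w[2][1] << '\n'; // 2 + (-3) = -1
    return 0;
}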
