UVA (820): Internet Bandwidth getting wrong answer? - algorithm

I am trying to solve this problem on UVA. The question asks for the max flow in a graph. I used the Edmonds-Karp algorithm, but I keep getting Wrong Answer. Can anyone tell me what's wrong with my code?
My code:
#include <bits/stdc++.h>
using namespace std;

#define MX 1000000007
#define LL long long
#define ri(x) scanf("%d",&x)
#define rl(x) scanf("%lld",&x)
#define len(x) x.length()
#define FOR(i,a,n) for(int i=a;i<n;i++)
#define FORE(i,a,n) for(int i=a;i<=n;i++)

template<class T1> inline T1 maxi(T1 a,T1 b){return a>b?a:b;}
template<class T2> inline T2 mini(T2 a,T2 b){return a<b?a:b;}

int parent[101],G[101][101],rG[101][101];

bool bfs(int s,int t,int n)
{
    bool vis[n+2];
    memset(parent,0,sizeof parent);
    memset(vis,0,sizeof vis);
    queue<int> Q;
    Q.push(s);
    vis[s]=true;
    while(!Q.empty())
    {
        int fnt=Q.front();
        Q.pop();
        for(int v=1;v<=n;v++)
        {
            if(!vis[v] and G[fnt][v]>0)
            {
                vis[v]=true;
                parent[v]=fnt;
                Q.push(v);
            }
        }
    }
    return vis[t];
}

int main()
{
    int n,tst=1;
    ri(n);
    while(n)
    {
        int s,t,c,flow=0;
        ri(s),ri(t),ri(c);
        FORE(i,1,c)
        {
            int x,y,z;
            ri(x),ri(y),ri(z);
            G[x][y]+=z;
            G[y][x]+=z;
        }
        while(bfs(s,t,n))
        {
            int path=9999999;
            for(int v=t;v!=s;v=parent[v])
            {
                int u=parent[v];
                path=mini(path,G[u][v]);
            }
            for(int v=t;v!=s;v=parent[v])
            {
                int u=parent[v];
                G[u][v]-=path;
                G[v][u]+=path;
            }
            flow+=path;
        }
        printf("Network %d\nThe bandwidth is %d.\n\n", tst++, flow);
        ri(n);
    }
}

You push flow the other way around:
G[u][v] -= path;
G[v][u] += path;
This should be:
G[u][v] += path;
G[v][u] -= path;
Also, I'm not sure about this part:
if(!vis[v] and G[fnt][v]>0)
[...]
path=mini(path,G[u][v]);
Because you are also allowed to take paths on which the flow is negative. You should not change G, which seems to be your capacity graph. Instead, keep a matrix F that stores how much flow you send. Then your two conditions should be changed to:
if (!vis[v] && G[fnt][v] != F[fnt][v])
[...]
path = mini(path, G[u][v] - F[u][v])
And push flow on F, not G.
You seem to have thought about this since you declared a matrix rG, but you're never using it.
There might be other issues too. It's hard to tell without knowing what problems you're seeing.
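For reference, here is a compilable sketch of that capacity/flow split applied to the question's Edmonds-Karp loop. It is not a full UVA 820 solution (input handling is left out); the array bound 101 and the 1-based vertices follow the question's code, and std::min replaces the mini helper:
#include <algorithm>
#include <climits>
#include <cstring>
#include <queue>
using namespace std;

const int MAXN = 101;
int G[MAXN][MAXN];   // capacities: filled from input, never modified again
int F[MAXN][MAXN];   // flow currently pushed on each directed edge
int parent[MAXN];
bool vis[MAXN];

// BFS on the residual graph: edge (u,v) is usable while G[u][v] - F[u][v] > 0
bool bfs(int s, int t, int n) {
    memset(parent, 0, sizeof parent);
    memset(vis, 0, sizeof vis);
    queue<int> Q;
    Q.push(s);
    vis[s] = true;
    while (!Q.empty()) {
        int u = Q.front();
        Q.pop();
        for (int v = 1; v <= n; v++) {
            if (!vis[v] && G[u][v] - F[u][v] > 0) {
                vis[v] = true;
                parent[v] = u;
                Q.push(v);
            }
        }
    }
    return vis[t];
}

int max_flow(int s, int t, int n) {
    memset(F, 0, sizeof F); // remember to clear G as well when reading a new test case
    int flow = 0;
    while (bfs(s, t, n)) {
        int path = INT_MAX;
        // bottleneck: smallest residual capacity along the augmenting path
        for (int v = t; v != s; v = parent[v])
            path = min(path, G[parent[v]][v] - F[parent[v]][v]);
        for (int v = t; v != s; v = parent[v]) {
            F[parent[v]][v] += path; // push flow forward on the path
            F[v][parent[v]] -= path; // and record that it can be undone
        }
        flow += path;
    }
    return flow;
}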

Related

How to fix "segmentation fault (core dumped)" dependent on size

I created a class config that contains 12 bool values, organized in a std::array. The class has an icing function that returns a double value.
Trying to order a vector of 2^12 (4096) configs through std::sort (from #include <algorithm>) using a predicate I have written, I get a segmentation fault.
Shrinking the vector to 205 elements (not one more) eliminates the error, but I don't know why.
If I make the vector 4096 long and try to sort only a small part, it works until that part is 175+ elements long.
Shrinking the vector to, for example, around 1000 limits the partial sorting to around 20 before it gives the segmentation error.
#include <array>
#include <vector>
#include <algorithm>
#include <iostream>
using namespace std;

class config {
public:
    config (){ // constructor, default
        array<bool,12> t;
        for (bool& b: t){
            b=false;
        }
        val=t;
        g=1;
    }
    config (const config& fro): val(fro.val){}; // copy constructor
    array<bool,12> get_val(){ return val; } // returns the array
    void set_tf(int n, bool tf){ val[n]=tf; } // sets a certain boolean in the array to false/true
    void set_g(double d){ g=d; } // this sets the constant for calculation to a number
    void print(){
        cout<<"values: ";
        for (auto b: val){ cout<<b<<" "; }
        cout<<endl;
    }
    config & incr(int n=1){ // this increases the vector by 1 following the rules for binary numbers, but has the digits reversed
        for(int j=0; j<n; j++){
            int i=0;
            bool out=false;
            while(val[i]==true){
                val[i]=false;
                i++;
            }
            val[i]=true;
        }
        return *this;
    }
    double energy(){
        int ct=0;
        int cf=0;
        for(auto b:val){ if(b==true){ ct++; } else { cf++; } }
        return (abs(ct-cf));
    }
    double icing(){ // here is the "value" for ordering purposes
        int n=0;
        for(int i=0; i<11; i++){
            if(val[i]!=val[i+1]){ n++; }
        }
        double temp=-g*n+this->energy();
        return temp;
    }
private:
    array<bool,12> val;
    double g;
};

bool pred (config c1, config c2){ return c1.icing()>c2.icing(); } // this sets the ordering predicate

template <typename T> // this orders the vector
void csort (vector <T>& in){
    sort(in.begin(), in.end(), pred);
}

int main(){
    vector<config> v;
    for (int i=0; i<4096; i++){ // cycle that creates a vector of successive binaries
        for(auto& c:v){
            c.incr();
        }
        config t;
        v.push_back(t);
    }
    sort(v.begin(), v.begin()+174, pred); // this gives seg. fault when 175+
    csort(v); // this gives segmentation fault when the vec is 206 long or longer
}
I expected the code to order the vector, but it crashes with a segmentation fault.
Your program has undefined behaviour in the sort, because your predicate takes config by value: copies are made, and the copy constructor is called, which copies only the array val, not g.
bool pred (config c1, config c2){ return c1.icing()>c2.icing(); }
// takes by value, so the copy ctor is called
config (const config& fro): val(fro.val){}; // only val is copied; g HAS A GARBAGE VALUE
// icing in pred uses g!! - strict weak ordering is violated because g has a GARBAGE VALUE
Fix 1:
Pass config by const config& (note that icing() and energy() must then be marked const so they can be called through a const reference):
bool pred (const config& c1, const config& c2){ return c1.icing()>c2.icing(); }
Or fix 2:
Initialize g in the copy constructor:
config (const config& fro): val(fro.val), g(fro.g){};
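For completeness, a reduced, self-contained sketch combining both fixes; the class is heavily simplified, and icing() here is a stand-in, not the original formula:
#include <algorithm>
#include <array>
#include <iostream>
#include <vector>

class config {
public:
    config() : val{}, g(1) {}
    config(const config& fro) : val(fro.val), g(fro.g) {} // fix 2: copy g as well
    double icing() const { return g * val[0]; }           // const, so fix 1 compiles
private:
    std::array<bool,12> val;
    double g;
};

// fix 1: take the arguments by const reference -- no copies during sort
bool pred(const config& c1, const config& c2) { return c1.icing() > c2.icing(); }

int main() {
    std::vector<config> v(4096);
    std::sort(v.begin(), v.end(), pred); // well-defined: g is always initialized
    std::cout << "sorted " << v.size() << " configs\n";
}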

Why am I getting an access violation error in C++?

I am getting the 0xc0000005 error (access violation). Where am I wrong in this code? I couldn't debug this error; please help me.
The question is this:
Formally, given a wall of infinite height, initially unpainted. There occur N operations; in the i-th operation, the wall is painted up to height Hi with color Ci. Suppose in the j-th operation (j > i) the wall is painted up to height Hj with color Cj such that Hj >= Hi; then the color Ci on the wall is hidden. At the end of the N operations, you have to find the number of distinct colors (>= 1) visible on the wall.
#include <iostream>
#include <bits/stdc++.h>
#include <algorithm>
using namespace std;

int main()
{
    int t;
    cin>>t;
    for(int tt=0; tt<t; tt++)
    {
        int h,c;
        int temp = 0;
        cin>>h>>c;
        int A[h], B[c];
        vector<int> fc;
        for(int i=0; i<h; i++)
        {
            cin>>A[i];
        }
        for(int j=0; j<h; j++)
        {
            cin>>B[j];
        }
        if(is_sorted(A,A+h))
        {
            return 1;
        }
        if(count(A,A+h,B[0]) == h)
        {
            return 1;
        }
        for(int i=0; i<h; i++)
        {
            if(A[i]>=temp)
            {
                temp = A[i];
            }
            else
            {
                if(temp == fc[fc.size()-1])
                {
                    fc[fc.size()-1] = B[i];
                }
                else
                {
                    fc.push_back(B[i]);
                }
            }
        }
    }
}
There are several issues.
When reading values into B, your loop condition is j<h. How many elements does B have?
You later look at fc[fc.size()-1]. This is undefined behavior if fc is empty, and it is the likely source of your problem.
Other issues:
Don't use #include <bits/stdc++.h>.
Avoid using namespace std;.
Variable-length array declarations like int A[h], where h is a runtime variable, are not standard C++; some compilers support them as an extension.
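A minimal standalone sketch of the guard for the fc access (fc.back() refers to the same element as fc[fc.size()-1]):
#include <iostream>
#include <vector>

int main() {
    std::vector<int> fc; // starts empty, as in the question
    int temp = 42;
    // fc[fc.size()-1] on an empty vector is undefined behavior:
    // fc.size() is 0, so the index wraps around. Guard the access:
    if (!fc.empty() && temp == fc.back()) {
        fc.back() = 7;
    } else {
        fc.push_back(temp);
    }
    std::cout << fc.back() << '\n'; // prints 42
}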

Coin Change Algorithm with Dynamic Programming

I am facing difficulty with dynamic programming. I was trying the classic coin change problem - COIN CHANGE Problem UVa.
I am trying to use a top-down approach with memoization, but I am getting TLE. Here is my code:
#include <bits/stdc++.h>
using namespace std;
#define ll long long
typedef vector<int> vi;
typedef vector<vi> vii;

const int maxn = 10000007;
int Set[maxn];

int Coin(int n, int m, vii &dp)
{
    if(n==0)
        return 1;
    else if(n<0 || m<0)
        return 0;
    else if(dp[n][m]!=-1)
        return dp[n][m];
    else
    {
        dp[n][m] = Coin(n-Set[m], m, dp) + Coin(n, m-1, dp);
        return dp[n][m];
    }
}

int main()
{
    int n, m=5;
    Set[0]=50, Set[1]=25, Set[2]=10, Set[3]=5, Set[4]=1;
    while(scanf("%d",&n)!=EOF)
    {
        vector<vector<int> > dp(n+1, vector<int>(m,-1));
        dp[0][0]=0;
        cout << Coin(n,m-1,dp) << endl;
    }
}
I want to know: am I doing the memoization wrong, or will top-down not work in this case, so that the bottom-up approach is the only option?
You do not have to call the Coin function for every test case (each value of n). Since m (the number of coin types) stays the same in all cases, call it only once for the maximum value, which is 7489 here, and then answer each test case as dp[n][4]. Please see the code below for better understanding.
n = 7489;
vector<vector<int> > dp(n+1, vector<int>(m,-1));
dp[0][0] = 0;
Coin(n, m-1, dp);
while(scanf("%d",&n) != EOF)
{
    cout << dp[n][4] << endl;
}
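Put together with the question's Coin function and Set array, main would look roughly like the sketch below. 7489 is the maximum amount per the answer above; every smaller n is a subproblem of Coin(7489, 4), so it gets memoized as a side effect of the one precomputing call:
int main()
{
    int n, m = 5;
    Set[0]=50, Set[1]=25, Set[2]=10, Set[3]=5, Set[4]=1;

    // Fill the table once for the largest possible amount.
    const int maxAmount = 7489;
    vector<vector<int> > dp(maxAmount+1, vector<int>(m,-1));
    Coin(maxAmount, m-1, dp);

    // Each query is now effectively a table lookup; calling Coin again
    // (rather than reading dp[n][4] directly) also handles n == 0, which
    // returns 1 without ever being written into dp.
    while(scanf("%d",&n) != EOF)
        printf("%d\n", Coin(n, m-1, dp));
}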

I am working on vectors in C++; however, this simple code is throwing an error somewhere

#include <iostream>
#include <stdio.h>
#include <vector>
#include <algorithm>
using namespace std;

int firstMissingPositive(vector<int> A) {
    sort(A.begin(), A.end());
    int i, start=-1, j;
    for(i=0; i<A.size(); i++) // to find the least positive number
    {
        if(A.at(i)>=0)
        {
            start=i;
            break;
        }
    }
    if(start==-1)
        return 1; // when the vector has no positive number
    else
    {
        for(j=start; j<=A.at(A.size()); j++) // to find the least positive missing number
        {
            if ( find(A.begin(), A.end(), i)!=A.end() )
                continue;
            else
                return i;
        }
    }
}

int main()
{
    vector<int> b;
    int myarray[] = { 501, 504, 503 };
    b.insert(b.begin(), myarray, myarray+3);
    firstMissingPositive(b);
}
The error shown is: terminate called after throwing an instance of
'std::out_of_range' what(): vector::_M_range_check
I have been dealing with this for a long time but cannot find the error.
Here, you are stuck at this statement:
for(j=start; j<=A.at(A.size()); j++) //to find the least positive missing number
It should be:
for(j=start; j<=A.at(A.size()-1); j++)
because you are trying to access the element at index A.size(), while valid indices run from 0 to A.size()-1; at() range-checks the index and throws the out_of_range error.
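A minimal standalone demonstration of that range check, independent of the question's code:
#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> A = {501, 503, 504};
    // Valid indices run from 0 to A.size()-1; at() checks the bound.
    std::cout << A.at(A.size() - 1) << '\n'; // OK: last element, prints 504
    try {
        A.at(A.size());                      // one past the end
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
}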

Boost.Variant Vs Virtual Interface Performance

I'm trying to measure the performance difference between using Boost.Variant and using virtual interfaces. For example, suppose I want to increment different types of numbers uniformly. Using Boost.Variant, I would use a boost::variant over int and float and a static visitor which increments each of them. Using class interfaces, I would use a pure virtual class number, with number_int and number_float classes that derive from it and implement an increment method.
From my testing, using interfaces is far faster than using Boost.Variant.
I ran the code at the bottom and received these results:
Virtual: 00:00:00.001028
Variant: 00:00:00.012081
What do you suppose explains this difference? I thought Boost.Variant would be a lot faster.
Note: Usually Boost.Variant uses heap allocation to guarantee that the variant is always non-empty. But I read in the Boost.Variant documentation that if boost::has_nothrow_copy is true for the bounded types, then it doesn't use heap allocation, which should make things significantly faster. For int and float, boost::has_nothrow_copy is true.
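As a quick way to verify that, the trait can be inspected directly (a minimal sketch):
#include <boost/type_traits/has_nothrow_copy.hpp>
#include <iostream>

int main() {
    std::cout << std::boolalpha
              << boost::has_nothrow_copy<int>::value << '\n'    // true
              << boost::has_nothrow_copy<float>::value << '\n'; // true
}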
Here is my code for measuring the two approaches against each other.
#include <iostream>
#include <boost/variant/variant.hpp>
#include <boost/variant/static_visitor.hpp>
#include <boost/variant/apply_visitor.hpp>
#include <boost/date_time/posix_time/ptime.hpp>
#include <boost/date_time/posix_time/posix_time_types.hpp>
#include <boost/date_time/posix_time/posix_time_io.hpp>
#include <boost/format.hpp>

const int iterations_count = 100000;

// a visitor that increments a variant by N
template <int N>
struct add : boost::static_visitor<> {
    template <typename T>
    void operator() (T& t) const {
        t += N;
    }
};

// a number interface
struct number {
    virtual void increment() = 0;
};

// number interface implementation for all types
template <typename T>
struct number_ : number {
    number_(T t = 0) : t(t) {}
    virtual void increment() {
        t += 1;
    }
    T t;
};

void use_virtual() {
    number_<int> num_int;
    number* num = &num_int;
    for (int i = 0; i < iterations_count; i++) {
        num->increment();
    }
}

void use_variant() {
    typedef boost::variant<int, float, double> number;
    number num = 0;
    for (int i = 0; i < iterations_count; i++) {
        boost::apply_visitor(add<1>(), num);
    }
}

int main() {
    using namespace boost::posix_time;
    ptime start, end;
    time_duration d1, d2;
    // virtual
    start = microsec_clock::universal_time();
    use_virtual();
    end = microsec_clock::universal_time();
    // store result
    d1 = end - start;
    // variant
    start = microsec_clock::universal_time();
    use_variant();
    end = microsec_clock::universal_time();
    // store result
    d2 = end - start;
    // output
    std::cout <<
        boost::format(
            "Virtual: %1%\n"
            "Variant: %2%\n"
        ) % d1 % d2;
}
For those interested: after I was a bit frustrated, I passed the -O2 option to the compiler, and boost::variant was way faster than a virtual call. Thanks.
It is obvious that -O2 reduces the variant time, because the whole loop is optimized away. Change the implementation to return the accumulated result to the caller, so that the optimizer cannot remove the loop, and you'll see the real difference:
Output:
Virtual: 00:00:00.000120 = 10000000
Variant: 00:00:00.013483 = 10000000
#include <iostream>
#include <boost/variant/variant.hpp>
#include <boost/variant/static_visitor.hpp>
#include <boost/variant/apply_visitor.hpp>
#include <boost/date_time/posix_time/ptime.hpp>
#include <boost/date_time/posix_time/posix_time_types.hpp>
#include <boost/date_time/posix_time/posix_time_io.hpp>
#include <boost/format.hpp>

const int iterations_count = 100000000;

// a visitor that increments a variant by N
template <int N>
struct add : boost::static_visitor<> {
    template <typename T>
    void operator() (T& t) const {
        t += N;
    }
};

// a visitor that extracts the stored value from a variant
template <typename T, typename V>
T get(const V& v) {
    struct getter : boost::static_visitor<T> {
        T operator() (T t) const { return t; }
    };
    return boost::apply_visitor(getter(), v);
}

// a number interface
struct number {
    virtual void increment() = 0;
};

// number interface implementation for all types
template <typename T>
struct number_ : number {
    number_(T t = 0) : t(t) {}
    virtual void increment() { t += 1; }
    T t;
};

int use_virtual() {
    number_<int> num_int;
    number* num = &num_int;
    for (int i = 0; i < iterations_count; i++) {
        num->increment();
    }
    return num_int.t;
}

int use_variant() {
    typedef boost::variant<int, float, double> number;
    number num = 0;
    for (int i = 0; i < iterations_count; i++) {
        boost::apply_visitor(add<1>(), num);
    }
    return get<int>(num);
}

int main() {
    using namespace boost::posix_time;
    ptime start, end;
    time_duration d1, d2;
    // virtual
    start = microsec_clock::universal_time();
    int i1 = use_virtual();
    end = microsec_clock::universal_time();
    // store result
    d1 = end - start;
    // variant
    start = microsec_clock::universal_time();
    int i2 = use_variant();
    end = microsec_clock::universal_time();
    // store result
    d2 = end - start;
    // output
    std::cout <<
        boost::format(
            "Virtual: %1% = %2%\n"
            "Variant: %3% = %4%\n"
        ) % d1 % i1 % d2 % i2;
}
