We need to construct a bipartite graph with N vertices in each of the two parts and with a total of M edges.
The vertices on the left are numbered from 1 to N.
The vertices on the right are also numbered from 1 to N.
The degree of every vertex must be greater than or equal to X and less than or equal to Y, i.e. for all v, X ≤ deg(v) ≤ Y.
Given four integers N, M, X, Y we need to construct some bipartite graph satisfying this property, or report that no such graph exists.
Example :
If N=2 , M=3 , X=1 and Y=2
then the 3 edges in the bipartite graph will be: (1,1), (2,2) and (1,2)
If N=2 , M=3 , X=1 and Y=1 then no bipartite graph exists.
How can this problem be solved if
1 ≤ N ≤ 100
1 ≤ X ≤ Y ≤ N
0 ≤ M ≤ N * N
Original question link
Obviously, the variables need to satisfy:
X * N <= M <= Y * N
Otherwise, there will be no solution.
Finding the edges could be done in waves. Start by connecting each node i from the first set to the corresponding node i from the second set. In the next wave, connect i with (i + 1) mod N. Then i with (i + 2) mod N and so on. This increases the degree of each vertex by exactly one in each wave. Stop whenever you have constructed M edges; this may also happen in the middle of a wave.
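For illustration, here is a minimal sketch of that wave construction (function and variable names are my own), assuming the feasibility check X*N <= M <= Y*N mentioned above:

#include <bits/stdc++.h>
using namespace std;

// Returns the M edges (left, right), both 1-based, or an empty optional if impossible.
optional<vector<pair<int,int>>> buildBipartite(int N, long long M, int X, int Y) {
    if (M < 1LL * X * N || M > 1LL * Y * N) return nullopt;
    vector<pair<int,int>> edges;
    edges.reserve(M);
    for (int wave = 0; wave < N && (long long)edges.size() < M; wave++) {
        for (int i = 0; i < N && (long long)edges.size() < M; i++) {
            // wave k connects left i to right (i + k) mod N,
            // raising every degree by exactly one per full wave
            edges.push_back({i + 1, (i + wave) % N + 1});
        }
    }
    return edges;
}

int main() {
    // Example from the question: N=2, M=3, X=1, Y=2
    auto res = buildBipartite(2, 3, 1, 2);
    if (!res) { cout << -1 << "\n"; return 0; }
    for (auto [l, r] : *res) cout << l << " " << r << "\n";
}

Each full wave raises every degree by one, and a partial last wave leaves every degree at either floor(M/N) or ceil(M/N), both of which lie in [X, Y] whenever X*N <= M <= Y*N.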
ACM ICPC 2016 India Preliminary Round Problem.
Link
The contest has now ended. I couldn't submit my answer (I was about to submit the code just 10 seconds before the end when my Internet stopped working).
d is equivalent to X in the OP's version of the problem.
D is equivalent to Y in the OP's version of the problem.
t is the number of test cases.
I made the code as per the original question in the link.
The logic is similar to Nico Schertler's. My complexity is a little higher because instead of just connecting node i+x to node i in the x-th iteration, I use a set to find the first element in the range [1..N] that is not yet connected and connect to it.
This is my code:
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t, n, m, d, D;
    cin >> t;
    while (t--) {
        cin >> n >> m >> d >> D;
        if (n * D < m || n * d > m)           // feasibility check: d*n <= m <= D*n
            printf("-1\n");
        else {
            vector<set<int>> v(n);            // v[i] = right vertices adjacent to left vertex i
            int edges = 0, count = 0;
            // phase 1: give every left vertex degree d
            while (count != d) {
                for (int i = 0; i < n; i++) {
                    for (int j = 0; j < n; j++) {
                        if (v[i].find(j) == v[i].end()) {
                            v[i].insert(j);
                            ++edges;
                            break;
                        }
                        if (edges == m)
                            break;
                    }
                    if (edges == m)
                        break;
                }
                ++count;
            }
            // phase 2: distribute the remaining edges without exceeding degree D
            while (edges < m) {
                for (int i = 0; i < n; i++) {
                    if (v[i].size() == D)
                        continue;
                    for (int j = 0; j < n; j++) {
                        if (v[i].find(j) == v[i].end()) {
                            v[i].insert(j);
                            ++edges;
                            break;
                        }
                        if (edges == m)
                            break;
                    }
                    if (edges == m)
                        break;
                }
            }
            for (int i = 0; i < n; i++) {
                set<int>::iterator it = v[i].begin();
                for (; it != v[i].end(); ++it) {
                    printf("%d %d\n", i + 1, (*it) + 1);
                }
            }
        }
    }
    return 0;
}
I don't know whether this code is correct or not.
Related
We are given n players; each player has 3 values assigned: A, B and C.
A player i cannot win if there exists another player j with all 3 values greater: A[j] > A[i], B[j] > B[i] and C[j] > C[i]. We are asked to find the number of players who cannot win.
I tried this problem using brute force, a scan over the players array for each player, but it is giving TLE.
For each player i, I traverse the complete array to check whether there exists any other player j for which the above condition holds.
Code :
int count_players_cannot_win(const vector<vector<int>> &values) {
    int c = 0;
    int n = values.size();
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            if (j == i) continue;   // skip comparing a player with themselves
            if (values[i][0] < values[j][0] && values[i][1] < values[j][1] && values[i][2] < values[j][2]) {
                c += 1;
                break;
            }
        }
    }
    return c;
}
And this approach is O(n^2), as for every player we traverse the complete array; thus it gives TLE.
Sample testcase :
Sample Input
3(number of players)
A B C
1 4 2
4 3 2
2 5 3
Sample Output :
1
Explanation :
Only player1 cannot win as there exists player3 whose all 3 values(A, B and C) are greater than that of player1.
Constraints :
n(number of players) <= 10^5
What would be an optimal way to solve this problem?
Solution:
#include <bits/stdc++.h>
using namespace std;

int n;
const int N = 4e5 + 1;
int tree[N];

int get_max(int i, int l, int r, int L) {   // max over the suffix range [L, n-1]
    if (r < L || n <= l)
        return numeric_limits<int>::min();
    else if (L <= l)
        return tree[i];
    int m = (l + r) / 2;
    return max(get_max(2 * i + 1, l, m, L), get_max(2 * i + 2, m + 1, r, L));
}

void update(int i, int l, int r, int on, int v) {   // point update: tree[on] = max(tree[on], v)
    if (r < on || on < l)
        return;
    else if (l == r) {
        tree[i] = max(tree[i], v);
        return;
    }
    int m = (l + r) / 2;
    update(2 * i + 1, l, m, on, v);
    update(2 * i + 2, m + 1, r, on, v);
    tree[i] = max(tree[2 * i + 1], tree[2 * i + 2]);
}

bool comp(const vector<int> &a, const vector<int> &b) {
    return a[0] != b[0] ? a[0] > b[0] : a[1] < b[1];
}

int solve(vector<vector<int>> &v) {
    n = v.size();
    vector<int> b(n, 0);   // compress the B values from the range [0, 10^9] down to [0, 10^5]
    for (int i = 0; i < n; i++) {
        b[i] = v[i][1];
    }
    for (int i = 0; i < n; i++) {
        cin >> v[i][2];    // read the C values
    }
    // sort on the 0th column in reverse order
    sort(v.begin(), v.end(), comp);
    sort(b.begin(), b.end());
    int ans = 0;
    for (int i = 0; i < n;) {
        int j = i;
        // query all tuples sharing the same A value before updating the tree with them
        while (j < n && v[j][0] == v[i][0]) {
            int B = v[j][1];
            int pos = lower_bound(b.begin(), b.end(), B) - b.begin();   // position of B in b[]
            int mx = get_max(0, 0, n - 1, pos + 1);
            if (mx > v[j][2])
                ans += 1;
            j++;
        }
        while (i < j) {
            int B = v[i][1];
            int C = v[i][2];
            int pos = lower_bound(b.begin(), b.end(), B) - b.begin();   // position of B in b[]
            update(0, 0, n - 1, pos, C);
            i++;
        }
    }
    return ans;
}
This solution uses a segment tree, and thus solves the problem in O(n*log(n)) time and O(n) space.
The approach is explained in the accepted answer by Primusa.
First let's assume that our input comes in the form of a list of tuples T = [(A[0], B[0], C[0]), (A[1], B[1], C[1]) ... (A[N - 1], B[N - 1], C[N - 1])].
The first observation we can make is that we can sort T on its first element (in reverse order). Then for each tuple (a, b, c), to determine if it cannot win, we ask if we've already seen a tuple (d, e, f) such that e > b && f > c. We don't need to check the first element because we are guaranteed that d > a* since T is sorted in reverse.
Okay, so now how do we check this second criteria?
We can reframe it like so: out of all tuples (d, e, f) that we've already seen with e > b, what is the maximum value of f? If the max value is greater than c, then we know that this tuple cannot win.
To handle this part we can use a segment tree with max updates and max range queries. When we encounter a tuple (d, e, f), we can set tree[e] = max(tree[e], f). tree[i] will represent the third element with i being the second element.
To answer a query like "what is the maximum value of f such that e > b", we do max(tree[b+1...]), to get the largest third element over a range of possible second elements.
Since we are only doing suffix queries, you can get away with using a modified fenwick tree, but it is easier to explain with a segment tree.
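For reference, here is a minimal sketch of such a modified Fenwick tree for suffix-maximum queries (the struct and its names are mine, not part of the answer): indices are reversed so that a prefix walk in the BIT covers a suffix of the original positions, and only max-updates are supported, which is all this problem needs.

#include <bits/stdc++.h>
using namespace std;

struct SuffixMaxBIT {
    int n;
    vector<int> bit;
    SuffixMaxBIT(int n_) : n(n_), bit(n_ + 1, INT_MIN) {}
    // record value v at compressed position pos (0-based)
    void update(int pos, int v) {
        for (int i = n - pos; i <= n; i += i & -i)   // reversed index: pos -> n - pos
            bit[i] = max(bit[i], v);
    }
    // maximum value recorded at any position >= pos (INT_MIN if none)
    int query(int pos) {
        int res = INT_MIN;
        for (int i = n - pos; i > 0; i -= i & -i)
            res = max(res, bit[i]);
        return res;
    }
};

int main() {
    SuffixMaxBIT t(5);
    t.update(2, 7);                                    // value 7 at position 2
    cout << t.query(1) << " " << t.query(3) << "\n";   // prints "7 -2147483648"
}

In the solution above, update(pos, C) would then play the role of the segment-tree update and query(pos + 1) the role of the get_max call.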
This will give us an O(NlogN) solution, for sorting T and doing O(logN) work with our segment tree for every tuple.
*Note: this should actually be d >= a. However it is easier to explain the algorithm when we pretend everything is unique. The only modification you need to make to accommodate duplicate values of the first element is to process your queries and updates in buckets of tuples of the same value. This means that we will perform our check for all tuples with the same first element, and only then do we update tree[e] = max(tree[e], f) for all of those tuples we performed the check on. This ensures that no tuple with the same first value has updated the tree already when another tuple is querying the tree.
I recently came across this problem:
You are given the heights of n histograms, each of width 1. You have to choose two histograms such that, if it starts raining and all other histograms (except the two you have selected) are removed, the water collected between the two histograms is maximised.
Input:
9
3 2 5 9 7 8 1 4 6
Output:
25
Between third and last histogram.
This is a variant of the trapping rain water problem.
I tried two solutions, but both have a worst-case complexity of O(N^2). How can we optimise further?
Sol1: Brute force for every pair.
int maxWaterCollected(vector<int> hist, int n) {
    int ans = 0;
    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
            ans = max(ans, min(hist[i], hist[j]) * (j - i - 1));
        }
    }
    return ans;
}
Sol2: Keep a sequence of histograms in increasing order of height. For every histogram, find its best partner in this sequence. However, if all histograms are in increasing order, this solution also becomes O(N^2).
int maxWaterCollected(vector<int> hist, int n) {
    vector<pair<int, int>> increasingSeq(1, make_pair(hist[0], 0));   // initialised with the 1st element
    int ans = 0;
    for (int i = 1; i < n; i++) {
        // compute the best result against the current increasing sequence
        for (int j = 0; j < increasingSeq.size(); j++) {
            ans = max(ans, min(hist[i], increasingSeq[j].first) * (i - increasingSeq[j].second - 1));
        }
        // add this histogram to the sequence
        if (hist[i] > increasingSeq.back().first) {
            increasingSeq.push_back(make_pair(hist[i], i));
        }
    }
    return ans;
}
Use two iterators, one starting from begin() and one from end() - 1.
Until the two iterators meet:
Compare the current result with the max, and keep the max.
Move the iterator pointing at the smaller value (begin -> end or end -> begin).
Complexity: O(n).
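For concreteness, here is a minimal C++ sketch of this two-pointer idea (the function name is mine, and it uses the same width convention, j - i - 1, as the question's code):

#include <bits/stdc++.h>
using namespace std;

int maxWaterTwoPointers(const vector<int> &hist) {
    int l = 0, r = (int)hist.size() - 1, best = 0;
    while (l < r) {
        best = max(best, min(hist[l], hist[r]) * (r - l - 1));   // water held by the pair (l, r)
        if (hist[l] <= hist[r]) ++l;                             // advance the smaller side
        else --r;
    }
    return best;
}

int main() {
    vector<int> hist = {3, 2, 5, 9, 7, 8, 1, 4, 6};   // sample from the question
    cout << maxWaterTwoPointers(hist) << "\n";        // prints 25
}

Each pair is scored exactly as in the brute force, but moving the smaller side never discards the optimum (see the proof in the next answer).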
Jarod42 has the right idea, but it's unclear from his terse post why his algorithm, described below in Python, is correct:
def candidates(hist):
    l = 0
    r = len(hist) - 1
    while l < r:
        yield (r - l - 1) * min(hist[l], hist[r])
        if hist[l] <= hist[r]:
            l += 1
        else:
            r -= 1

def maxwater(hist):
    return max(candidates(hist))
The proof of correctness is by induction: the optimal solution either (1) belongs to the candidates yielded so far or (2) chooses histograms inside [l, r]. The base case is simple, because all histograms are inside [0, len(hist) - 1].
Inductively, suppose that we're about to advance either l or r. These cases are symmetric, so let's assume that we're about to advance l. We know that hist[l] <= hist[r], so the value is (r - l - 1) * hist[l]. Given any other right endpoint r1 < r, the value is (r1 - l - 1) * min(hist[l], hist[r1]), which is less because r - l - 1 > r1 - l - 1 and hist[l] >= min(hist[l], hist[r1]). We can rule out all of these solutions as suboptimal, so it's safe to advance l.
I have a 2D array where each element is 0 or non-zero, and I have to harvest all the non-zero elements. To do so I will hire workers:
`0` - dead coconut tree
`Non Zero` - living tree
A worker starts harvesting at a living tree and continues harvesting along a straight line of trees in one of the four cardinal directions (i.e., north, south, east, or west). A worker stops harvesting coconuts when one of the following conditions is satisfied:
The worker comes across a dead coconut tree.
The worker hits the edge of the plantation (i.e., there are no more harvestable trees in that direction).
For example, the array might look like this:
So a minimum of 4 workers is required.
Question:
I was shocked when I came to know this is a maximum matching problem; I had no idea how that is so.
The following is the code which marks the horizontal and vertical strips with an index:
int m = 1;        // threshold: a[i][j] >= m means a living tree
int hcnt = 0;     // number of horizontal strips
for (int i = 0; i < r; i++) {
    int j = 0;
    while (j < c && a[i][j] < m) j++;              // skip dead trees
    while (j < c) {
        while (j < c && a[i][j] >= m) {            // label a maximal horizontal run of living trees
            h[i][j] = hcnt;
            j++;
        }
        hcnt++;
        while (j < c && a[i][j] < m) j++;          // skip dead trees before the next run
    }
}
int vcnt = 0;     // number of vertical strips
for (int j = 0; j < c; j++) {
    int i = 0;
    while (i < r && a[i][j] < m) i++;
    while (i < r) {
        while (i < r && a[i][j] >= m) {            // label a maximal vertical run of living trees
            v[i][j] = vcnt;
            i++;
        }
        vcnt++;
        while (i < r && a[i][j] < m) i++;
    }
}
So my question is: how are the edges added, why is this a maximum matching problem, and can someone explain the intuition behind why maximum matching works on this problem?
Original Question
The following code adds the edges. I had no idea how we are adding the edges and why it is working:
int s = 0, t = hcnt + vcnt + 1;                      // source and sink of the flow network
for (int i = 0; i < hcnt; i++) addEdge(s, i + 1, 1); // source -> horizontal strips
for (int i = 0; i < r; i++) {
    for (int j = 0; j < c; j++) if (a[i][j] >= m) {
        addEdge(1 + h[i][j], 1 + hcnt + v[i][j], 1); // a living cell links its two strips
    }
}
for (int i = 0; i < vcnt; i++) addEdge(1 + hcnt + i, t, 1);   // vertical strips -> sink
The idea is as follows: we need to take some vertical and horizontal lines in such a way that all trees are covered and the number of picked lines is minimized.
Each cell with a tree becomes an edge in the graph. Each vertical line becomes a vertex in the left part of the graph and each horizontal line becomes a vertex in the right part. Now the problem is equivalent to finding a minimum vertex cover in this graph. The size of the maximum matching is equal to the size of the minimum vertex cover in any bipartite graph (this is König's theorem).
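To make the reduction concrete, here is a minimal sketch (my own, using Kuhn's augmenting-path algorithm rather than the max-flow code quoted above) of computing the minimum number of workers as a maximum matching on the strip graph; adj, matchR, tryKuhn and minWorkers are names I introduced, and adj is assumed to be built from the h/v labels produced earlier.

#include <bits/stdc++.h>
using namespace std;

vector<vector<int>> adj;   // adj[hStrip] = vertical strips sharing a living cell with it
vector<int> matchR;        // matchR[vStrip] = horizontal strip matched to it, or -1
vector<char> used;

bool tryKuhn(int u) {                       // try to find an augmenting path from horizontal strip u
    for (int w : adj[u]) {
        if (used[w]) continue;
        used[w] = 1;
        if (matchR[w] == -1 || tryKuhn(matchR[w])) {
            matchR[w] = u;
            return true;
        }
    }
    return false;
}

int minWorkers(int hcnt, int vcnt) {
    // build adj beforehand: for every living cell (i, j), adj[h[i][j]].push_back(v[i][j]);
    matchR.assign(vcnt, -1);
    int matching = 0;
    for (int u = 0; u < hcnt; u++) {
        used.assign(vcnt, 0);
        if (tryKuhn(u)) matching++;
    }
    return matching;   // König: max matching = min vertex cover = min number of workers
}

int main() {
    // toy instance: 2 horizontal strips, 2 vertical strips;
    // living cells give the edges (h0, v0), (h0, v1), (h1, v1)
    adj = {{0, 1}, {1}};
    cout << minWorkers(2, 2) << "\n";   // prints 2
}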
I have an undirected graph G = (V, E) represented by an adjacency matrix. For each edge (x, y) I must compute its weakness. The weakness d is computed as follows:
d(x, y) = 1 - |Nx ∩ Ny| / |Nx ∪ Ny|
where Nx is the set of direct neighbours of x (by direct neighbours I mean the nodes at a distance of 1 from x).
I've written this algorithm, but I'm not sure how to evaluate its complexity.
float **graph_weakness(struct graph *g)
{
    int i;
    int j;
    int n = g->n;
    struct edge *edge;
    int rel_union;
    int rel_intersect;
    int idx;
    float **m = graph_to_matrix(g);
    /* complexity: O(|V|/2 * |E|) */
    for (i = 0; i < n; i++) {
        edge = g->nodes[i]->edges;
        while (edge != NULL) {
            idx = edge->idx;
            if (m[i][idx] == MATRIX_SET) {
                rel_union = 0;
                rel_intersect = 0;
                for (j = 0; j < n; j++) {
                    if (m[i][j] != 0.0 || m[idx][j] != 0.0) {
                        rel_union++;
                    }
                    if (m[i][j] != 0.0 && m[idx][j] != 0.0) {
                        rel_intersect++;
                    }
                }
                m[i][idx] = 1 - (float) rel_intersect / rel_union;
                m[idx][i] = m[i][idx];
            }
            edge = edge->next;
        }
    }
    return m;
}
The algorithm iterates over the edges and, for each edge, computes the intersection and union of the neighbour sets using a loop over 1..|V|.
The matrix is symmetric, so the computation is done on only half the edges.
The complexity should therefore be O(|E|/2 * |V|) = O(|E| * |V|), am I right?
The line
float **m = graph_to_matrix(g);
is probably Θ(|V| |E|)
(it depends on your matrix library).
(Perhaps somewhat contrary to the statement in your question), the algorithm starts by looping over all nodes
for (i = 0; i < n; i++) {
For each node, it iterates over all neighbors
while (edge != NULL) {
and for each neighbor, it iterates over all nodes again
for (j = 0; j < n; j++) {
So, assuming your graph has an adjacency-list representation, this first + second loop are run O(|E| + |V|) times altogether, and each iteration iterates over |V| items.
This algorithm is O((|E| + |V|) |V|), therefore.
This is about the N queens problem. I know the N queens problem and its solution, and I wrote code in C++ using a backtracking approach:
#include <iostream>
using namespace std;

int col[100];   // col[k] = row of the queen placed in column k
int n;
int m;

bool check(int i, int k) {   // can a queen be placed at row i, column k?
    for (int j = 1; j < k; j++) {
        if (col[j] == i || i - k == col[j] - j || i + k == col[j] + j) return false;
    }
    return true;
}

void queens(int k) {
    for (int i = 1; i <= n; i++) {
        if (check(i, k)) {
            col[k] = i;
            if (k == n) {   // all n columns filled: print the solution
                for (int j = 1; j <= n; j++) cout << col[j] << " ";
                cout << endl;
            }
            else queens(k + 1);
        }
    }
}

int main() {
    n = 4;
    queens(1);
}
But here is my question: if we have m queens instead of n, where m < n, how can I solve this with a backtracking approach? I think a few changes to my code would solve the problem, but I'm not sure of it.
I googled it but found nothing, so is there any backtracking solution for this problem?
The answer is easier than you think! In the regular N-queens problem, we iterate k over n columns, so we can add our first queen anywhere in the first column (rows 1 to 8 on an 8×8 board) and so on for the other queens.
But if the queen count (m) is smaller than the board size (n), we have to be able to place that queen (our first queen) not only in the first column but anywhere on the board (cells 1 to 64 on an 8×8 board), and do the same for the other queens.
So the only thing you have to do is iterate k over n*n cells instead of n.
You could make the function queens take 2 parameters:
Column index you have currently reached (num_column)
Number of queens placed on the table (num_queens)
Now you must pick a column in [num_column, n] in which to place the queen.
Let (i, j) be the i'th row and j'th column (1 <= i <= n and num_column <= j <= n).
for j from num_column to n:
    for i from 1 to n:
        if check(i, j):
            col[j] = i
            queens(j + 1, num_queens + 1)
            // resetting the queen on the j'th column
            col[j] = 0
You can stop the function when you reach num_queens == m.
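For what it's worth, here is a minimal sketch (my own adaptation of the asker's code along the lines of this answer, not a verified solution): each column either gets a queen in some safe row or is left empty, and a placement is printed once m queens have been placed, with 0 marking an empty column.

#include <iostream>
using namespace std;

int col[100];   // col[j] = row of the queen in column j, or 0 if the column is empty
int n, m;

bool check(int i, int k) {                       // can a queen go at row i, column k?
    for (int j = 1; j < k; j++) {
        if (col[j] == 0) continue;               // skipped column, nothing to clash with
        if (col[j] == i || i - k == col[j] - j || i + k == col[j] + j) return false;
    }
    return true;
}

void queens(int k, int placed) {
    if (placed == m) {                           // found a placement of m queens
        for (int j = 1; j <= n; j++) cout << col[j] << " ";
        cout << "\n";
        return;
    }
    if (k > n) return;                           // ran out of columns
    for (int i = 1; i <= n; i++) {               // try every row in column k
        if (check(i, k)) {
            col[k] = i;
            queens(k + 1, placed + 1);
            col[k] = 0;                          // backtrack
        }
    }
    queens(k + 1, placed);                       // also try leaving column k empty
}

int main() {
    n = 4; m = 2;
    queens(1, 0);
}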