Why am I getting random output, and more output than expected? - c++14

Question
After solving programming problems for years, Chef has become lazy and decided to get a better physique by doing some weight lifting exercises.
On any regular day, Chef does N exercises at times A1, A2,…, AN (in minutes, all distinct), and each exercise provides a tension of B1, B2,…, BN units. In the period between two consecutive exercises, his muscles relax R units of tension per minute.
More formally, Chef's tension is described by a number x. Before any workouts, x=0. When he does a workout at time Ai, the tension x instantly increases by Bi. Between workouts, x decreases by R units per minute, but never drops below 0.
Considering the time of exercise and hence tension to be negligible, find the maximum tension he will be feeling in his muscles during the entire period of his workout.
Input:
First line will contain T, the number of test cases. Then the test cases follow.
Each test case contains 3 lines of input.
The first line will contain 2 space-separated integers N, R, the number of timestamps at which Chef performs his exercise, and units of tension relaxed per minute.
The second line contains N space-separated integers A1, A2,…, AN.
The third line contains N space-separated integers B1, B2,…, BN.
Output:
For each test case, output in a single line the maximum amount of tension Chef will have in his muscles.
Constraints
1≤T≤10
1≤N≤5⋅10^4
1≤R,B[i]≤10^5
1≤A[i]≤10^9
A[i−1] < A[i], for all 2≤i≤N
Sample Input:
3
1 2
10
10
2 2
10 11
10 10
3 1
1 2 3
1 2 3
Sample Output:
10
18
4
My output
22090
-44170
1
987400726
-1974801434
1
1
10960
-10956
My implementation
#include <bits/stdc++.h>
using namespace std;

int before(int p, int q) {
    int x, c;
    int a[p];
    int b[p];
    for (int j = 0; j < p; j++) {
        cin >> a[j];
    }
    for (int h = 0; h < p; h++) {
        cin >> b[h];
    }
    for (int k = 0; k < p; k++) {
        x += b[k];
        c = a[k + 1] - a[k];
        c *= q;
        x -= c;
    }
    return x;
}

int main() {
    int t;
    cin >> t;
    for (int i = 0; i < t; i++) {
        int n, r;
        cin >> n >> r;
        int d = before(n, r);
        cout << d << endl;
    }
    return 0;
}
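For reference, here is a minimal sketch (in Python, not from the thread; `max_tension` is a name I made up) of what the computation needs to do. The code above reads `x` and `c` uninitialized, indexes `a[k+1]` past the array on the last iteration, never clamps the tension at 0, and never tracks the running maximum; with A[i] up to 10^9 and R up to 10^5, the per-gap relaxation can also overflow `int`, which is consistent with the random output shown.

```python
def max_tension(times, tensions, r):
    # times are the (sorted, distinct) exercise minutes; tension relaxes
    # r units per minute between exercises, floored at 0
    x = 0      # current tension
    best = 0   # maximum tension seen so far
    prev = None
    for a, b in zip(times, tensions):
        if prev is not None:
            x = max(0, x - (a - prev) * r)  # relax since the last exercise
        x += b                              # instant increase at exercise time
        best = max(best, x)
        prev = a
    return best

print(max_tension([10, 11], [10, 10], 2))    # → 18 (second sample)
print(max_tension([1, 2, 3], [1, 2, 3], 1))  # → 4  (third sample)
```

A C++ version of the same logic needs `long long` for `x` and for the gap-times-R product.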

Related

spoj Mixtures: Need help regarding logic

The question asks to minimise the smoke produced.
My approach:
Since at any instant only adjacent mixtures can be mixed, I tried using DP: if I know the answer for n-1 mixtures, I can get the answer for n mixtures.
How?:
The nth mixture will either be
Case 1: mixed with the (n-1)th mixture, and their result mixed with the resultant mixture of the first n-2 mixtures, OR
Case 2: mixed with the resultant mixture of the first n-1 mixtures.
Let dp[i] denote the minimum smoke for the first i mixtures and res[i] denote the resultant mixture of the first i mixtures (both holding optimised values, of course), and let A[i] denote the color of the ith mixture.
So,
for Case 1: dp[i] = dp[i-2] + A[i-1]*A[i] + res[i-2]*((A[i-1]+A[i])%100);
and res[i] = (res[i-2]+A[i]+A[i-1])%100;
for Case 2: dp[i] = dp[i-1] + res[i-1]*A[i];
and res[i] = (res[i-1]+A[i])%100;
Base cases:
if only 1 mixture is given, smoke = 0 and the resultant mixture is the mixture itself;
and if only 2 mixtures are given, smoke = A[0]*A[1] and result = (A[0]+A[1])%100.
My code passed only 1 out of 4 cases (not even the sample test case).
Where is my logic wrong?
Problem Statement
Harry Potter has n mixtures in front of him, arranged in a row. Each mixture has one of 100 different colors (colors have numbers from 0 to 99).
He wants to mix all these mixtures together. At each step, he is going to take two mixtures that stand next to each other and mix them together, and put the resulting mixture in their place.
When mixing two mixtures of colors a and b, the resulting mixture will have the color (a+b) mod 100.
Also, there will be some smoke in the process. The amount of smoke generated when mixing two mixtures of colors a and b is a*b.
Find out what is the minimum amount of smoke that Harry can get when mixing all the mixtures together.
Input
There will be a number of test cases in the input.
The first line of each test case will contain n, the number of mixtures, 1 <= n <= 100.
The second line will contain n integers between 0 and 99 - the initial colors of the mixtures.
Output
For each test case, output the minimum amount of smoke.
Example
Input:
2
18 19
3
40 60 20
Output:
342
2400
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n; // no. of mixtures
    int A[n];
    for (int i = 0; i < n; i++)
        cin >> A[i]; // filling their values
    if (n == 1)      // base case
    {
        cout << 0 << endl;
        return 0;
    }
    long long dp[n], res[n];
    dp[0] = 0;
    res[0] = A[0];
    dp[1] = A[1] * A[0]; // for 2 mixtures
    res[1] = (A[1] + A[0]) % 100;
    for (int i = 2; i < n; i++) {
        long long ans1, ans2, res1, res2;
        ans1 = dp[i - 1] + res[i - 1] * A[i];
        res1 = (res[i - 1] + A[i]) % 100;
        ans2 = dp[i - 2] + A[i - 1] * A[i] + res[i - 2] * ((A[i - 1] + A[i]) % 100);
        res2 = (res[i - 2] + (A[i - 1] + A[i]) % 100) % 100;
        dp[i] = min(ans1, ans2);
        if (dp[i] == ans1)
            res[i] = res1;
        else
            res[i] = res2;
    }
    cout << dp[n - 1];
    return 0;
}
Your code outputs 6500 for input 20 10 30 30 40, but the correct result is 3500:
20 10 30 30 40
  mix (20,10): smoke 200, colour 30; mix (30,30): smoke 900, colour 60
30 60 40
  mix (60,40): smoke 2400, colour 0
30 0
  mix (30,0): smoke 0, colour 30
smoke = 200 + 900 + 2400 + 0 = 3500
One trick is to realise (or learn) that the final colour for any collapsed interval is the same, no matter the order of mixing on it. The Python code below, utilising divide-and-conquer, was accepted.
We can use two ideas (1) any specific interval we choose to collapse all the way to one element by mixing will result in the same colour no matter the order of the mixes because of the associative property of addition, and (2) given an optimal mix order for any interval (in particular the full list), because mixing is between adjacent colours, there must be some optimal single place for the last mix of that interval, in other words a single optimal place that divides the full interval in two such that each side is fully collapsed before the final mix.
Given those two ideas, we basically build a kind of "brute force" recurrence -- try each one of those possible splits, knowing that the colour for each part is not a dimension we need more than one possibility for, and perform the same recurrence on each of the two parts. Hopefully, the base cases in the code are pretty clear.
import sys

# Returns (smoke, colour)
def f(lst, i, j, memo):
    # Empty interval
    if i > j:
        return (float('inf'), 0)
    # Single element
    if i == j:
        return (0, lst[i])
    if (i, j) in memo:
        return memo[(i, j)]
    best = (float('inf'), -1)
    for k in range(i, j):
        smoke_l, colour_l = f(lst, i, k, memo)
        smoke_r, colour_r = f(lst, k + 1, j, memo)
        smoke = smoke_l + smoke_r + colour_l * colour_r
        colour = (colour_l + colour_r) % 100
        best = min(best, (smoke, colour))
    memo[(i, j)] = best
    return best

# I/O
while True:
    line = sys.stdin.readline()
    if not line:
        break
    if not line.strip():
        continue
    n = int(line)
    lst = list(map(int, sys.stdin.readline().split()))
    print(f(lst, 0, n - 1, {})[0])
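The same recurrence can also be written bottom-up over interval lengths, which avoids any recursion limits; this is my sketch (the name `min_smoke` is mine), using a prefix sum for the interval colours:

```python
def min_smoke(colors):
    n = len(colors)
    pre = [0] * (n + 1)               # prefix sums of the colours
    for i, c in enumerate(colors):
        pre[i + 1] = pre[i] + c
    def colour(i, j):                 # final colour of interval [i, j]
        return (pre[j + 1] - pre[i]) % 100
    dp = [[0] * n for _ in range(n)]  # dp[i][j]: min smoke collapsing [i, j]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            dp[i][j] = min(dp[i][k] + dp[k + 1][j] + colour(i, k) * colour(k + 1, j)
                           for k in range(i, j))
    return dp[0][n - 1]

print(min_smoke([18, 19]))              # → 342
print(min_smoke([40, 60, 20]))          # → 2400
print(min_smoke([20, 10, 30, 30, 40]))  # → 3500
```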

Finding a number of maximally different binary vectors from a set

Consider the set, S, of all binary vectors of length n where each contains exactly m ones; so there are n-m zeros in each vector.
My goal is to construct a number, k, of vectors from S such that these vectors are as different as possible from each other.
As a simple example, take n=4, m=2 and k=2, then a possible solution is: [1,1,0,0] and [0,0,1,1].
It seems that this is an open problem in the coding theory literature (?).
Is there any way (i.e. an algorithm) to find a suboptimal yet good solution?
Is Hamming distance the right performance measure to use in this case?
Some thoughts:
In this paper, the authors propose a couple of algorithms to find the subset of vectors such that the pairwise Hamming distance is >= a certain value, d.
I have implemented the Random approach as follows: take a set SS, initialized with any vector from S. Then consider the remaining vectors in S. For each of these vectors, check whether it has at least distance d to every vector in SS; if so, add it to SS.
By taking the maximal possible d, if the size of SS is >= k, then I consider SS an optimal solution, and I choose any subset of k vectors from SS.
Using this approach, I think the resulting SS will depend on the choice of the initial vector in SS; i.e. there are multiple solutions(?).
But how to proceed if the size of SS is < k ?
From the proposed algorithms in the paper, I have only understood the Random one. I am interested in the binary lexicographic search (section 2.3), but I don't know how to implement it.
Maybe you'll find this paper useful (I wrote it). It contains algorithms that efficiently create permutations of bitstrings.
For example, the inc() algorithm:
long inc(long h_in, long m0, long m1) {
    long h_out = h_in | (~m1);  // pre-mask
    h_out++;                    // increment
    h_out = (h_out & m1) | m0;  // post-mask
    return h_out;
}
It takes an input h_in and returns the next value that is at least 1 larger than h_in and 'matches' the boundaries m0 and m1. 'Matching' means: the result has a 1 wherever m0 has a 1, and the result has a 0 wherever m1 has a 0. Note that h_in MUST BE a valid value with regard to m0 and m1! Also note that m0 has to be bitwise smaller than m1, which means that m0 cannot have a 1 in a position where m1 has a 0.
This could be used to generate permutations with a minimum edit distance to a given input string:
Let's assume you have 0110, you first NEGATE it to 1001 (edit distance = k).
Set m0=1001 and m1=1001. Using this would result only in '1001' itself.
Now, to get all values with edit distance k-1, simply flip one of the bits of m0 or m1; then inc() will return an ordered series of all bitstrings that have a difference of k or k-1.
Not very interesting yet, I know, but you can modify up to k bits, and inc() will always return all permutations with the maximum allowed edit difference with regard to m0 and m1.
Now, to get all permutations, you have to re-run the algorithm with all possible combinations of m0 and m1.
Example: To get all possible permutations of 0110 with edit distance 2, you would have to run inc() with the following combinations of m0=0110 and m1=0110 (to get permutations, a bit position has to be 'expanded', meaning that m0 is set to 0 and m1 is set to 1 at that position):
Bit 0 and 1 expanded: m0=0010 and m1=1110
Bit 0 and 2 expanded: m0=0100 and m1=1110
Bit 0 and 3 expanded: m0=0110 and m1=1111
Bit 1 and 2 expanded: m0=0000 and m1=0110
Bit 1 and 3 expanded: m0=0010 and m1=0111
Bit 2 and 3 expanded: m0=0100 and m1=0111
As starting value for h_0 I suggest to use simply m0. Iteration can be aborted once inc() returns m1.
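To illustrate, here is a small Python transcription of inc() (masked to a fixed width, since Python integers are unbounded; `enumerate_between` is a helper name I made up). Starting from h=m0 and stopping at m1, it lists exactly the values that match the boundaries:

```python
BITS = 8
MASK = (1 << BITS) - 1  # emulate fixed-width integers

def inc(h_in, m0, m1):
    h_out = h_in | (~m1 & MASK)  # pre-mask
    h_out = (h_out + 1) & MASK   # increment
    return (h_out & m1) | m0     # post-mask

def enumerate_between(m0, m1):
    # all values with a 1 wherever m0 has a 1 and a 0 wherever m1 has a 0
    out = [m0]
    h = m0
    while h != m1:
        h = inc(h, m0, m1)
        out.append(h)
    return out

# Bits 1 and 2 of 0110 expanded: m0=0000, m1=0110
print([format(v, '04b') for v in enumerate_between(0b0000, 0b0110)])
# → ['0000', '0010', '0100', '0110']
```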
Summary
The above algorithm generates, in O(x), all x binary vectors that differ in at least y bits (configurable) from a given vector v.
Using your definition of n = number of bits in a vector v, setting y=n generates exactly 1 vector, which is the exact opposite of the input vector v. For y=n-1, it will generate n+1 vectors: n vectors which differ in all but one bit, and 1 vector that differs in all bits. And so on for other values of y.
EDIT: Added summary and replaced erroneous 'XOR' with 'NEGATE' in the text above.
I don't know if maximizing the sum of the Hamming distances is the best criterion to obtain a set of "maximally different" binary vectors, but I strongly suspect it is. Furthermore I strongly suspect that the algorithm that I'm going to present yields exactly a set of k vectors that maximizes the sum of Hamming distances for vectors of n bits of with m ones and n - m zeroes. Unfortunately I don't have the time to prove it (and, of course, I might be wrong – in which case you would be left with a “suboptimal yet good” solution, as per your request.)
Warning: In the following I'm assuming that, as a further condition, the result set may not contain the same vector twice.
The algorithm I propose is the following:
Starting from a result set with just one vector, repeatedly add one of
those remaining vectors that have the maximum sum of Hamming distances
from all the vectors that are already in the result set. Stop when the
result set contains k vectors or all available vectors have been
added.
Please note that the sum of Hamming distances of the result set does not depend on the choice of the first or any subsequent vector.
I found a “brute force” approach to be viable, given the constraints you mentioned in a comment:
n<25, 1<m<10, 10<k<100 (or 10<k<50)
The “brute force” consists in precalculating all vectors in “lexicographical” order in an array, and also keeping up-to-date an array of the same size that contains, for each vector with the same index, the total Hamming distance of that vector to all the vectors that are in the result set. At each iteration the total Hamming distances are updated, and the first (in “lexicographical” order) of all vectors that have the maximum total Hamming distance from the current result set is chosen. The chosen vector is added to the result set, and the arrays are shifted in order to fill in its place, effectively decreasing their size.
Here is my solution in Java. It's meant to be easily translatable to any procedural language, if needed. The part that calculates the combinations of m items out of n can be replaced by a library call, if one is available. The following Java methods have a corresponding C/C++ macro that uses fast specialized processor instructions on modern CPUs:
Long.numberOfTrailingZeros→__builtin_ctzl, Long.bitCount→__builtin_popcountl.
package waltertross.bits;

public class BitsMain {

    private static final String USAGE =
            "USAGE: java -jar <thisJar> n m k (1<n<64, 0<m<n, 0<k)";

    public static void main (String[] args) {
        if (args.length != 3) {
            throw new IllegalArgumentException(USAGE);
        }
        int n = parseIntArg(args[0]); // number of bits
        int m = parseIntArg(args[1]); // number of ones
        int k = parseIntArg(args[2]); // max size of result set
        if (n < 2 || n > 63 || m < 1 || m >= n || k < 1) {
            throw new IllegalArgumentException(USAGE);
        }
        // calculate the total number of available bit vectors
        int c = combinations(n, m);
        // truncate k to the above number
        if (k > c) {
            k = c;
        }
        long[] result   = new long[k];     // the result set (actually an array)
        long[] vectors  = new long[c - 1]; // all remaining candidate vectors
        long[] hammingD = new long[c - 1]; // their total Hamming distance to the result set
        long firstVector = (1L << m) - 1;          // m ones in the least significant bits
        long lastVector  = firstVector << (n - m); // m ones in the most significant bits
        result[0] = firstVector; // initialize the result set
        // generate the remaining candidate vectors in "lexicographical" order
        int size = 0;
        for (long v = firstVector; v != lastVector; ) {
            // See http://graphics.stanford.edu/~seander/bithacks.html#NextBitPermutation
            long t = v | (v - 1); // t gets v's least significant 0 bits set to 1
            // Next set to 1 the most significant bit to change,
            // set to 0 the least significant ones, and add the necessary 1 bits.
            v = (t + 1) | (((~t & -~t) - 1) >>> (Long.numberOfTrailingZeros(v) + 1));
            vectors[size++] = v;
        }
        assert(size == c - 1);
        // chosenVector is always the last vector added to the result set
        long chosenVector = firstVector;
        // do until the result set is filled with k vectors
        for (int r = 1; r < k; r++) {
            // find the index of the new chosen vector starting from the first
            int chosen = 0;
            // add the distance to the old chosenVector to the total distance of the first
            hammingD[0] += Long.bitCount(vectors[0] ^ chosenVector);
            // initialize the maximum total Hamming distance to that of the first
            long maxHammingD = hammingD[0];
            // for all the remaining vectors
            for (int i = 1; i < size; i++) {
                // add the distance to the old chosenVector to their total distance
                hammingD[i] += Long.bitCount(vectors[i] ^ chosenVector);
                // whenever the calculated distance is greater than the max,
                // update the max and the index of the new chosen vector
                if (maxHammingD < hammingD[i]) {
                    maxHammingD = hammingD[i];
                    chosen = i;
                }
            }
            // set the new chosenVector to the one with the maximum total distance
            chosenVector = vectors[chosen];
            // add the chosenVector to the result set
            result[r] = chosenVector;
            // fill in the hole left by the chosenVector by moving all vectors
            // that follow it down by 1 (keeping vectors and total distances in sync)
            System.arraycopy(vectors,  chosen + 1, vectors,  chosen, size - chosen - 1);
            System.arraycopy(hammingD, chosen + 1, hammingD, chosen, size - chosen - 1);
            size--;
        }
        // dump the result set
        for (int r = 0; r < k; r++) {
            dumpBits(result[r], n);
        }
    }

    private static int parseIntArg(String arg) {
        try {
            return Integer.parseInt(arg);
        } catch (NumberFormatException ex) {
            throw new IllegalArgumentException(USAGE);
        }
    }

    private static int combinations(int n, int m) {
        // calculate n over m = n! / (m! (n - m)!)
        // without using arbitrary precision numbers
        if (n <= 0 || m <= 0 || m > n) {
            throw new IllegalArgumentException();
        }
        // possibly avoid unnecessary calculations by swapping m and n - m
        if (m * 2 < n) {
            m = n - m;
        }
        if (n == m) {
            return 1;
        }
        // primeFactors[p] contains the power of the prime number p
        // in the prime factorization of the result
        int[] primeFactors = new int[n + 1];
        // collect prime factors of each term of n! / m! with a power of 1
        for (int term = n; term > m; term--) {
            collectPrimeFactors(term, primeFactors, 1);
        }
        // collect prime factors of each term of (n - m)! with a power of -1
        for (int term = n - m; term > 1; term--) {
            collectPrimeFactors(term, primeFactors, -1);
        }
        // multiply the collected prime factors, checking for overflow
        int combinations = 1;
        for (int f = 2; f <= n; f += (f == 2) ? 1 : 2) {
            // multiply as many times as requested by the stored power
            for (int i = primeFactors[f]; i > 0; i--) {
                int before = combinations;
                combinations *= f;
                // check for overflow
                if (combinations / f != before) {
                    String msg = "combinations("+n+", "+m+") > "+Integer.MAX_VALUE;
                    throw new IllegalArgumentException(msg);
                }
            }
        }
        return combinations;
    }

    private static void collectPrimeFactors(int n, int[] primeFactors, int power) {
        // for each candidate prime that fits in the remaining n
        // (note that non-primes will have been preceded by their component primes)
        for (int i = 2; i <= n; i += (i == 2) ? 1 : 2) {
            while (n % i == 0) {
                primeFactors[i] += power;
                n /= i;
            }
        }
    }

    private static void dumpBits(long bits, int nBits) {
        String binary = Long.toBinaryString(bits);
        System.out.println(String.format("%"+nBits+"s", binary).replace(' ', '0'));
    }
}
The algorithm's data for n=5, m=2, k=4 (left column: result set; right: candidate vectors with their total Hamming distances):

result   vectors / hammingD
00011    00101 00110 01001 01010 01100 10001 10010 10100 11000
         0→2   0→2   0→2   0→2   0→4   0→2   0→2   0→4   0→4
                                 ^ chosen

00011    00101 00110 01001 01010 10001 10010 10100 11000
01100    2→4   2→4   2→4   2→4   2→6   2→6   4→6   4→6
                                 ^ chosen

00011    00101 00110 01001 01010 10010 10100 11000
01100    4→6   4→8   4→6   4→8   6→8   6→8   6→8
10001          ^ chosen

00011    00101 01001 01010 10010 10100 11000
01100    6     6     8     8     8     8
10001
00110
Sample output (n=24, m=9, k=20):
[wtross ~/Dropbox/bits]$ time java -jar bits-1.0-SNAPSHOT.jar 24 9 20
000000000000000111111111
000000111111111000000000
111111000000000000000111
000000000000111111111000
000111111111000000000000
111000000000000000111111
000000000111111111000000
111111111000000000000000
000000000000001011111111
000000111111110100000000
111111000000000000001011
000000000000111111110100
001011111111000000000000
110100000000000000111111
000000001011111111000000
111111110100000000000000
000000000000001101111111
000000111111110010000000
111111000000000000001101
000000000000111111110010
real 0m0.269s
user 0m0.244s
sys 0m0.046s
The toughest case within your constraints (n=24, m=9, k=99) takes ~550 ms on my Mac.
The algorithm could be made even faster by some optimization, e.g., by shifting shorter array chunks. Remarkably, in Java I found shifting "up" to be considerably slower than shifting "down".
UPDATED ANSWER
Looking at the example output of Walter Tross's code, I think that generating a random solution can be simplified to this:
Take any vector to start with, e.g. for n=8, m=3, k=5:
A: 01001100
After every step, sum the vectors to get the number of times each position has been used:
SUM: 01001100
Then, for the next vector, place the ones at positions that have been used least (in this case zero times), e.g.:
B: 00110001
to get:
A: 01001100
B: 00110001
SUM: 01111101
Then, there are 2 least-used positions left, so for the 3 ones in the next vector, use those 2 positions, and then put the third one anywhere:
C: 10010010
to get:
A: 01001100
B: 00110001
C: 10010010
SUM: 11121111 (or reset to 00010000 at this point)
Then for the next vector, you have 7 least-used positions (the ones in the sum), so choose any 3, e.g.:
D: 10100010
to get:
A: 01001100
B: 00110001
C: 10010010
D: 10100010
SUM: 21221121
And for the final vector, choose any of the 4 least-used positions, e.g.:
E: 01000101
To generate all solutions, simply generate every possible vector in each step:
A: 11100000, 11010000, 11001000, ... 00000111
Then, e.g. when A and SUM are 11100000:
B: 00011100, 00011010, 00011001, ... 00000111
Then, e.g. when B is 00011100 and SUM is 11111100:
C: 10000011, 01000011, 00100011, 00010011, 00001011, 00000111
Then, e.g. when C is 10000011 and SUM is 21111111:
D: 01110000, 01101000, 01100100, ... 00000111
And finally, e.g. when D is 01110000 and SUM is 22221111:
E: 00001110, 00001101, 00001011, 00000111
This would result in C(8,3) × C(5,3) × C(8,1) × C(7,3) × C(4,3) = 56 × 10 × 8 × 35 × 4 = 627,200 solutions for n=8, m=3, k=5.
Actually, you'd need to add a method to avoid repeating the same vector, and to avoid painting yourself into a corner; so I don't think this will end up simpler than Walter's answer.
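A minimal sketch of this least-used-positions heuristic (function names are mine; as noted, it does not yet avoid duplicate vectors or dead ends):

```python
def next_vector(n, m, usage):
    # pick the m positions used least so far (ties broken by lowest index)
    order = sorted(range(n), key=lambda i: (usage[i], i))
    chosen = set(order[:m])
    return [1 if i in chosen else 0 for i in range(n)]

def greedy_vectors(n, m, k):
    usage = [0] * n  # how often each position has held a 1 so far
    result = []
    for _ in range(k):
        v = next_vector(n, m, usage)
        result.append(v)
        usage = [u + x for u, x in zip(usage, v)]
    return result

# n=4, m=2, k=2 reproduces the question's example solution:
print(greedy_vectors(4, 2, 2))
# → [[1, 1, 0, 0], [0, 0, 1, 1]]
```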
INITIAL ANSWER - HAS MAJOR ISSUES
(I will assume that m is not greater than n/2, i.e. the number of ones is not greater than the number of zeros. Otherwise, use a symmetrical approach.)
When k×m is not greater than n, there obviously are optimal solutions, e.g.:
n=10, m=3, k=3:
A: 1110000000
B: 0001110000
C: 0000001110
where the Hamming distances are all 2×m:
|AB|=6, |AC|=6, |BC|=6, total=18
When k×m is greater than n, solutions where the difference in Hamming distances between consecutive vectors are minimized offer the greatest total distance:
n=8, m=3, k=4:
A: 11100000
B: 00111000
C: 00001110
D: 10000011
|AB|=4, |AC|=6, |AD|=4, |BC|=4, |BD|=6, |CD|=4, total=28
n=8, m=3, k=4:
A: 11100000
B: 00011100
C: 00001110
D: 00000111
|AB|=6, |AC|=6, |AD|=6, |BC|=2, |BD|=4, |CD|=2, total=26
So, practically, you take m×k and see how much greater it is than n; call it x = m×k−n. This x is the number of overlaps, i.e. how often a vector will have a one in the same position as the previous vector. You then spread the overlap over the different vectors as evenly as possible to maximize the total distance.
In the example above, x = 3×4−8 = 4 and we have 4 vectors, so we can spread out the overlap evenly and every vector has 1 one in the same position as the previous vector.
To generate all unique solutions, you could:
Calculate x = m×k−n and generate all partitions of x into k parts, with the lowest possible maximum value:
n=8, m=3, k=5 -> x=7
22111, 21211, 21121, 21112, 12211, 12121, 12112, 11221, 11212, 11122
(discard partitions with value 3)
Generate all vectors to be used as vector A, e.g.:
A: 11100000, 11010000, 11001000, 11000100, ... 00000111
For each of these, generate all vectors B, which are lexicographically smaller than vector A, and have the correct number of overlapping ones with vector A (in the example that is 1 and 2), e.g.:
A: 10100100
overlap=1:
B: 10011000, 10010010, 10010001, 10001010, 10001001, 10000011, 01110000, ... 00000111
overlap=2:
B: 10100010, 10100001, 10010100, 10001100, 10000110, 10000101, 01100100, ... 00100101
For each of these, generate all vectors C, and so on, until you have sets of k vectors. When generating the last vector, you have to take into account the overlapping with the previous as well as the next (i.e. first) vector.
I assume it's best to treat the partitions of x into k as a binary tree:
1 2
11 12 21 22
111 112 121 122 211 212 221
1112 1121 1122 1211 1212 1221 2111 2112 2121 2211
11122 11212 11221 12112 12121 12211 21112 21121 21211 22111
and traverse this tree while creating solutions, so that each vector only needs to be generated once.
I think this method only works for some values of n, m and k; I'm not sure it can be made to work for the general case.

Debugging hackerrank week of code Lazy Sorting

I am doing a question on hackerrank (https://www.hackerrank.com/contests/w21/challenges/lazy-sorting) right now, and I am confused as to why my code doesn't fulfill the requirements. The question asks:
Logan is cleaning his apartment. In particular, he must sort his old favorite sequence, P, of N positive integers in nondecreasing order. He's tired from a long day, so he invented an easy way (in his opinion) to do this job. His algorithm can be described by the following pseudocode:
while isNotSorted(P) do {
    WaitOneMinute();
    RandomShuffle(P)
}
Can you determine the expected number of minutes that Logan will spend waiting for P to be sorted?
Input format:
The first line contains a single integer, N, denoting the size of the permutation, P. The second line contains N space-separated integers describing the respective elements in the sequence's current order, P_0, P_1 ... P_N-1.
Constraints:
2 <= N <= 18
1 <= P_i <= 100
Output format:
Print the expected number of minutes Logan must wait for P to be sorted, rounded to a scale of exactly 6 decimal places (i.e., 1.234567 format).
Sample input:
2
5 2
Sample output:
2.000000
Explanation
There are two permutations possible after a random shuffle, and each of them has probability 0.5. The probability of getting the sequence sorted after the first minute is 0.5. The probability that P will be sorted after the second minute is 0.25, the probability it will be sorted after the third minute is 0.125, and so on. The expected number of minutes hence equals:
sum of i * 2^(-i) for i from 1 to infinity = 2
I wrote my code in c++ as follow:
#include <cmath>
#include <cstdio>
#include <vector>
#include <iostream>
#include <algorithm>
#include <map>
using namespace std;

int main() {
    /* Enter your code here. Read input from STDIN. Print output to STDOUT */
    map<int, int> m; // map storing the number of repetitions of each number
    int N;           // number of elements in list
    // calculate the number of permutations
    cin >> N;
    int j;
    int total_perm = 1;
    int temp;
    for (int i = 0; i < N; i++) {
        cin >> temp;
        // if temp exists, add one to the value of m[temp], else initialize a new key-value pair
        if (m.find(temp) == m.end()) {
            m[temp] = 1;
        } else {
            m[temp] += 1;
        }
        total_perm *= i + 1;
    }
    // calculate permutations taking repetitions into account
    for (map<int, int>::iterator iter = m.begin(); iter != m.end(); ++iter) {
        if (iter->second > 1) {
            temp = iter->second;
            while (temp > 1) {
                total_perm = total_perm / temp;
                temp -= 1;
            }
        }
    }
    float recur = 1 / float(total_perm);
    float prev;
    float current = recur;
    float error = 1;
    int count = 1;
    // print expected number of minutes up to 6 decimal places
    if (total_perm == 1) {
        printf("%6f", recur);
    } else {
        while (error > 0.0000001) {
            count += 1;
            prev = current;
            current = prev + float(count) * float(1 - recur) * pow(recur, count - 1);
            error = abs(current - prev);
        }
        printf("%6f", prev);
    }
    return 0;
}
I don't really care about the competition; it's more about learning for me, so I would really appreciate it if someone could point out where I went wrong.
Unfortunately I am not familiar with C++ so I don't know exactly what your code is doing. I did, however, solve this problem. It's pretty cheeky and I think they posed the problem the way they did just to be confusing. So the important piece of knowledge here is that for an event with probability p, the expected number of trials until a success is 1/p. Since each trial here costs us a minute, that means we can find the expected number of trials and add ".000000" to the end.
So how do you do that? Well, each permutation of the numbers is equally likely to occur, which means that if we can find how many permutations there are, we can find p. Each distinct permutation has probability 1/c of occurring, where c is the total number of distinct permutations, so p = 1/c and E[time] = 1/p = c, the number of permutations. I leave the rest to you.
This is a simple problem.
It looks like bogosort.
How many unique permutations of the given array are possible? In the sample case, there are two permutations possible, so the expected time for any one permutation to occur is 2.000000. Extend this approach to the generic case, taking into account any repeated numbers.
Since the numbers can be repeated, the number of unique permutations is reduced, and thus so is the answer.
Just find the number of unique permutations of the array and print it to 6 decimal places. That is your answer.
Think about what happens if the array is already sorted.
E.g
if test case is
5 5
5 4 3 2 1
then ans would be 120.000000 (5!/1!)
5 5
1 2 3 4 5
then ans would be 0.000000 in your question.
5 5
2 2 2 2 2
then also ans would be 0.000000
5 5
5 1 2 2 3
then ans is 60.000000
In general, if the array is not sorted, the answer is N!/(P!·Q!·...), where P, Q, ... are the multiplicities of the repeated values.
Here is another useful link:
https://math.stackexchange.com/questions/1844133/expectation-over-sequencial-random-shuffles
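The formula from the answers above can be checked directly; a small sketch (`expected_minutes` is my name for it):

```python
from math import factorial
from collections import Counter

def expected_minutes(p):
    # already sorted: Logan waits zero minutes
    if p == sorted(p):
        return 0.0
    # otherwise E[minutes] = number of distinct permutations = N! / (P! * Q! * ...)
    perms = factorial(len(p))
    for count in Counter(p).values():
        perms //= factorial(count)
    return float(perms)

print("%.6f" % expected_minutes([5, 2]))           # → 2.000000
print("%.6f" % expected_minutes([5, 4, 3, 2, 1]))  # → 120.000000
print("%.6f" % expected_minutes([5, 1, 2, 2, 3]))  # → 60.000000
```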

How to count the number of special numbers between two numbers X and Y

In a game you are given two integers (X and Y), and you have to print the number of special numbers between X and Y, both inclusive.
The property of a special number is as follows:
A special number is not divisible by any number of the form Z*Z where Z>1.
Input:
T, the number of testcases. Each testcase consists of two space separated integers denoting X and Y.
Output: The required answer in one line for each testcase.
Constraints:
1 <= T <= 100
1 <= X,Y <= 10^9
0 <= |X-Y| <= 10^6
My problem is that when I try to count all the numbers, I get an out-of-memory error, and the time limit is 3 seconds. Can someone suggest an efficient algorithm for this?
I have written the code like this:
import java.util.Scanner;

public class GameOfNumbers {

    public static void main(String[] args) {
        GameOfNumbers g = new GameOfNumbers();
        Scanner s = new Scanner(System.in);
        int tc = s.nextInt();           // no. of test cases
        for (int i = 1; i <= tc; i++) { // for each test case
            int x = s.nextInt();        // range 1 (lower)
            int y = s.nextInt();        // range 2 (upper)
            g.countSpecialNumbers(x, y);
        }
    }

    private void countSpecialNumbers(int x, int y) {
        int arr_nums[] = new int[y - x + 1];
        int z = 0, l = x, count = 0;
        while (z < arr_nums.length) {
            arr_nums[z] = l++;
            z++;
        }
        int c = (int) Math.sqrt(y);
        for (int i = 2; i <= c; i++) {
            for (int k = 0; k < arr_nums.length; k++) {
                if (arr_nums[k] != -1 && arr_nums[k] % (i * i) == 0) {
                    arr_nums[k] = -1;
                }
            }
        }
        for (int k = 0; k < arr_nums.length; k++) {
            if (arr_nums[k] != -1) count++;
        }
        System.out.println(count);
    }
}
Please note that according to the statement you need not count all the numbers between 1 and 10^9. In each test case you only need to count the numbers in an interval of length no more than 10^6 (see the last constraint).
Another optimization you can include: you only need to iterate over the prime numbers not greater than the square root of Y (try to figure out why). You can precompute the list of primes less than sqrt(10^9) and iterate only over them.
I can see 3 ways you can optimize your code:
You should not create an array with all the numbers between X and Y; that makes no sense. Simply loop from X to Y (this is, BTW, what is causing your memory shortage).
You don't need to test divisibility by every i up to the square root of Y; you only need to test the squares of the primes up to the square root of Y.
As a shortcut, always check first whether the number itself is a perfect square.
Another shortcut would be to look the number up in the precomputed list of primes, which can be done in log(n) time using binary search.
Good luck with the implementation.
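Putting these hints together: sieve the primes up to sqrt(Y) once, then strike out multiples of their squares inside the [X, Y] interval only. A sketch in Python (`count_special` is my name for it, not from the answers):

```python
def count_special(x, y):
    lo, hi = min(x, y), max(x, y)
    special = [True] * (hi - lo + 1)   # one flag per number in [lo, hi]
    limit = int(hi ** 0.5) + 1         # we only need primes up to sqrt(hi)
    is_prime = [True] * (limit + 1)
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, limit + 1, i):
                is_prime[j] = False
    for p in range(2, limit + 1):
        if is_prime[p]:
            sq = p * p
            if sq > hi:
                break
            start = ((lo + sq - 1) // sq) * sq  # first multiple of p*p in [lo, hi]
            for mult in range(start, hi + 1, sq):
                special[mult - lo] = False
    return sum(special)

print(count_special(1, 10))  # → 7 (everything except 4, 8, 9)
print(count_special(1, 20))  # → 13
```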

Minimizing time in transit

[Updates at bottom (including solution source code)]
I have a challenging business problem that a computer can help solve.
Along a mountainous region flows a long winding river with strong currents. Along certain parts of the river are plots of environmentally sensitive land suitable for growing a particular type of rare fruit that is in very high demand. Once field laborers harvest the fruit, the clock starts ticking to get the fruit to a processing plant. It's very costly to try and send the fruits upstream or over land or air. By far the most cost effective mechanism to ship them to the plant is downstream in containers powered solely by the river's constant current. We have the capacity to build 10 processing plants and need to locate these along the river to minimize the total time the fruits spend in transit. The fruits can take however long before reaching the nearest downstream plant but that time directly hurts the price at which they can be sold. Effectively, we want to minimize the sum of the distances to the nearest respective downstream plant. A plant can be located as little as 0 meters downstream from a fruit access point.
The question is: In order to maximize profits, how far up the river should we build the 10 processing plants if we have found 32 fruit growing regions, where the regions' distances upstream from the base of the river are (in meters):
10, 40, 90, 160, 250, 360, 490, ... (n^2)*10 ... 9000, 9610, 10240?
[It is hoped that all work going towards solving this problem and towards creating similar problems and usage scenarios can help raise awareness about and generate popular resistance towards the damaging and stifling nature of software/business method patents (to whatever degree those patents might be believed to be legal within a locality).]
UPDATES
Update1: Forgot to add: I believe this question is a special case of this one.
Update2: One algorithm I wrote gives an answer in a fraction of a second, and I believe is rather good (but it's not yet stable across sample values). I'll give more details later, but the short version is as follows. Place the plants at equal spacings. Cycle over all the inner plants, where at each plant you recalculate its position by testing every location between its two neighbors until the problem is solved within that space (greedy algorithm). So you optimize plant 2 holding 1 and 3 fixed, then plant 3 holding 2 and 4 fixed, and so on. When you reach the end, you cycle back and repeat until you complete a full cycle in which every processing plant's recalculated position stops varying. Also, at the end of each cycle, you try to move processing plants that are crowded next to each other (all near each other's fruit dumps) into a region whose fruit dumps are far from any plant. There are many ways to vary the details, and hence the exact answer produced. I have other candidate algorithms, but all have glitches. [I'll post code later.] Just as Mike Dunlavey mentioned below, we likely just want "good enough".
To give an idea of what might be a "good enough" result:
10010 total length of travel from 32 locations to plants at
{10,490,1210,1960,2890,4000,5290,6760,8410,9610}
Update3: mhum gave the correct exact solution first but did not (yet) post a program or algorithm, so I wrote one up that yields the same values.
/************************************************************
This program can be compiled and run (eg, on Linux):
$ gcc -std=c99 processing-plants.c -o processing-plants
$ ./processing-plants
************************************************************/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
//a: Data set of values. Add extra large number at the end
int a[]={
10,40,90,160,250,360,490,640,810,1000,1210,1440,1690,1960,2250,2560,2890,3240,3610,4000,4410,4840,5290,5760,6250,6760,7290,7840,8410,9000,9610,10240,99999
};
//numofa: size of data set
int numofa=sizeof(a)/sizeof(int);
//a2: will hold (pt to) unique data from a and in sorted order.
int *a2;
//max: size of a2
int max;
//num_fixed_loc: at 10 gives the solution for 10 plants
int num_fixed_loc;
//xx: holds the index values of a2 from the lowest-error winner of each cycle, memoized; accessed via the memoized offset value. The winner is based on the lowest error sum from the left boundary up to the right ending boundary.
//FIX: to be dynamically sized.
int xx[1000000];
//xx_last: how much of xx has been used up
int xx_last=0;
//SavedBundle: data type to hold the memoized values needed (total travel distance and plant locations)
typedef struct _SavedBundle {
long e;
int xx_offset;
} SavedBundle;
//sb: (pts to) lookup table of all calculated values memoized
SavedBundle *sb; //holds winning values being memoized
//Sort in increasing order.
int sortfunc (const void *a, const void *b) {
return (*(int *)a - *(int *)b);
}
/****************************
Most interesting code in here
****************************/
long full_memh(int l, int n) {
long e;
long e_min=-1;
int ti=-1;
if (sb[l*max+n].e) {
return sb[l*max+n].e; //already computed: return the memoized value
}
for (int i=l+1; i<max-1; i++) {
e=0;
//sum first part
for (int j=l+1; j<i; j++) {
e+=a2[j]-a2[l];
}
//sum second part
if (n!=1) //general case, recursively
e+=full_memh(i, n-1);
else //base case, iteratively
for (int j=i+1; j<max-1; j++) {
e+=a2[j]-a2[i];
}
if (e_min==-1 || e<e_min) {
e_min=e;
ti=i;
}
}
sb[l*max+n].e=e_min;
sb[l*max+n].xx_offset=xx_last;
xx[xx_last]=ti; //later add a bounds test or a realloc, etc., if appropriate
for (int i=0; i<n-1; i++) {
xx[xx_last+(i+1)]=xx[sb[ti*max+(n-1)].xx_offset+i];
}
xx_last+=n;
return e_min;
}
/*************************************************************
Call to calculate and print results for given number of plants
*************************************************************/
int full_memoization(int num_fixed_loc) {
char *str;
long errorsum; //for convenience
//Call recursive workhorse
errorsum=full_memh(0, num_fixed_loc-2);
//Now print
str=(char *) malloc(num_fixed_loc*20+100);
sprintf (str,"\n%4d %6ld {%d,",num_fixed_loc-1,errorsum,a2[0]); //%ld: errorsum is a long
for (int i=0; i<num_fixed_loc-2; i++)
sprintf (str+strlen(str),"%d%c",a2[ xx[ sb[0*max+(num_fixed_loc-2)].xx_offset+i ] ], (i<num_fixed_loc-3)?',':'}');
printf ("%s",str);
free(str);
return 0;
}
/**************************************************
Initialize and call for plant numbers of many sizes
**************************************************/
int main (int argc, char **argv) {
int t;
int i2;
qsort(a,numofa,sizeof(int),sortfunc);
t=1;
for (int i=1; i<numofa; i++)
if (a[i]!=a[i-1])
t++;
max=t;
i2=1;
a2=(int *)malloc(sizeof(int)*t);
a2[0]=a[0];
for (int i=1; i<numofa; i++)
if (a[i]!=a[i-1]) {
a2[i2++]=a[i];
}
sb = (SavedBundle *)calloc(max*max,sizeof(SavedBundle)); //calloc takes (count, size)
for (int i=3; i<=max; i++) {
full_memoization(i);
}
free(sb);
return 0;
}
Let me give you a simple example of a Metropolis-Hastings algorithm.
Suppose you have a state vector x, and a goodness-of-fit function P(x), which can be any function you care to write.
Suppose you have a random distribution Q that you can use to modify the vector, such as x' = x + N(0, 1) * sigma, where N is a simple normal distribution about 0, and sigma is a standard deviation of your choosing.
p = P(x);
for (/* a lot of iterations */){
// add x to a sample array
// get the next sample
x' = x + N(0,1) * sigma;
p' = P(x');
// if it is better, accept it
if (p' > p){
x = x';
p = p';
}
// if it is not better
else {
// maybe accept it anyway
if (Uniform(0,1) < (p' / p)){
x = x';
p = p';
}
}
}
Usually it is done with a burn-in time of maybe 1000 cycles, after which you start collecting samples. After another maybe 10,000 cycles, the average of the samples is what you take as an answer.
It requires diagnostics and tuning. Typically the samples are plotted, and what you are looking for is a "fuzzy caterpillar" plot that is stable (doesn't move around much) and has a high acceptance rate (very fuzzy). The main parameter you can play with is sigma.
If sigma is too small, the plot will be fuzzy but it will wander around.
If it is too large, the plot will not be fuzzy - it will have horizontal segments.
Often the starting vector x is chosen at random, and often multiple starting vectors are chosen, to see if they end up in the same place.
It is not necessary to vary all components of the state vector x at the same time. You can cycle through them, varying one at a time, or some such method.
Also, if you don't need the diagnostic plot, it may not be necessary to save the samples, but just calculate the average and variance on the fly.
In the applications I'm familiar with, P(x) is a measure of probability, and it is typically in log-space, so it can vary from 0 to negative infinity.
Then the "maybe accept" step becomes: accept if Uniform(0,1) < exp(logp' - logp).
Unless I've made an error, here are exact solutions (obtained through a dynamic programming approach):
N Dist Sites
2 60950 {10,4840}
3 40910 {10,2890,6760}
4 30270 {10,2250,4840,7840}
5 23650 {10,1690,3610,5760,8410}
6 19170 {10,1210,2560,4410,6250,8410}
7 15840 {10,1000,2250,3610,5290,7290,9000}
8 13330 {10,810,1960,3240,4410,5760,7290,9000}
9 11460 {10,810,1690,2890,4000,5290,6760,8410,9610}
10 9850 {10,640,1440,2250,3240,4410,5760,7290,8410,9610}
11 8460 {10,640,1440,2250,3240,4410,5290,6250,7290,8410,9610}
12 7350 {10,490,1210,1960,2890,3610,4410,5290,6250,7290,8410,9610}
13 6470 {10,490,1000,1690,2250,2890,3610,4410,5290,6250,7290,8410,9610}
14 5800 {10,360,810,1440,1960,2560,3240,4000,4840,5760,6760,7840,9000,10240}
15 5190 {10,360,810,1440,1960,2560,3240,4000,4840,5760,6760,7840,9000,9610,10240}
16 4610 {10,360,810,1210,1690,2250,2890,3610,4410,5290,6250,7290,8410,9000,9610,10240}
17 4060 {10,360,810,1210,1690,2250,2890,3610,4410,5290,6250,7290,7840,8410,9000,9610,10240}
18 3550 {10,360,810,1210,1690,2250,2890,3610,4410,5290,6250,6760,7290,7840,8410,9000,9610,10240}
19 3080 {10,360,810,1210,1690,2250,2890,3610,4410,5290,5760,6250,6760,7290,7840,8410,9000,9610,10240}
20 2640 {10,250,640,1000,1440,1960,2560,3240,4000,4840,5290,5760,6250,6760,7290,7840,8410,9000,9610,10240}
21 2230 {10,250,640,1000,1440,1960,2560,3240,4000,4410,4840,5290,5760,6250,6760,7290,7840,8410,9000,9610,10240}
22 1860 {10,250,640,1000,1440,1960,2560,3240,3610,4000,4410,4840,5290,5760,6250,6760,7290,7840,8410,9000,9610,10240}
23 1520 {10,250,490,810,1210,1690,2250,2890,3240,3610,4000,4410,4840,5290,5760,6250,6760,7290,7840,8410,9000,9610,10240}
24 1210 {10,250,490,810,1210,1690,2250,2560,2890,3240,3610,4000,4410,4840,5290,5760,6250,6760,7290,7840,8410,9000,9610,10240}
25 940 {10,250,490,810,1210,1690,1960,2250,2560,2890,3240,3610,4000,4410,4840,5290,5760,6250,6760,7290,7840,8410,9000,9610,10240}
26 710 {10,160,360,640,1000,1440,1690,1960,2250,2560,2890,3240,3610,4000,4410,4840,5290,5760,6250,6760,7290,7840,8410,9000,9610,10240}
27 500 {10,160,360,640,1000,1210,1440,1690,1960,2250,2560,2890,3240,3610,4000,4410,4840,5290,5760,6250,6760,7290,7840,8410,9000,9610,10240}
28 330 {10,160,360,640,810,1000,1210,1440,1690,1960,2250,2560,2890,3240,3610,4000,4410,4840,5290,5760,6250,6760,7290,7840,8410,9000,9610,10240}
29 200 {10,160,360,490,640,810,1000,1210,1440,1690,1960,2250,2560,2890,3240,3610,4000,4410,4840,5290,5760,6250,6760,7290,7840,8410,9000,9610,10240}
30 100 {10,90,250,360,490,640,810,1000,1210,1440,1690,1960,2250,2560,2890,3240,3610,4000,4410,4840,5290,5760,6250,6760,7290,7840,8410,9000,9610,10240}
31 30 {10,90,160,250,360,490,640,810,1000,1210,1440,1690,1960,2250,2560,2890,3240,3610,4000,4410,4840,5290,5760,6250,6760,7290,7840,8410,9000,9610,10240}
32 0 {10,40,90,160,250,360,490,640,810,1000,1210,1440,1690,1960,2250,2560,2890,3240,3610,4000,4410,4840,5290,5760,6250,6760,7290,7840,8410,9000,9610,10240}

Resources