I came across this question during an interview -
Convert a number source to target in the minimum number of operations.
Allowed operations:
Multiply by 2.
Add 1.
Subtract 1.
0 < source, target <= 1000.
I tried the naive recursive route (O(3^n)), i.e. subtract 1, add 1, and multiply by 2 at each level, hoping to find a solution I could extend to dynamic programming, but couldn't because of an infinite loop.
// Naive approach via recursion
int minMoves(int source, int target) {
    if (source < 1 || source > target) {
        return -1;
    }
    int moves = 0;
    // Potential infinite loop - consider 3,6 -> 2,6 -> 1,6 -> (0,6)x (2,6) -> 1,6 -> (0,6)x (1,6) -> (0,6)x (2,6) -> 1,6 ...
    int movesLeft  = minMoves(source - 1, target) == -1 ? Integer.MAX_VALUE : minMoves(source - 1, target);
    int movesRight = minMoves(source + 1, target) == -1 ? Integer.MAX_VALUE : minMoves(source + 1, target);
    int moves2X    = minMoves(2 * source, target) == -1 ? Integer.MAX_VALUE : minMoves(2 * source, target);
    moves = 1 + Math.min(Math.min(movesRight, movesLeft), moves2X);
    return moves;
}
Any ideas on how I can tweak my solution? Or possibly a better way to solve it?
If you think about your solution as a graph traversal, where each node is an intermediate value you can produce, your recursive solution is like a depth-first search (DFS). You have to fully expand every "branch" of the search space before you can proceed anywhere else. If you have an infinite loop, this means it will never terminate even if a shorter path exists, and even if you don't have an infinite loop, you still have to search the rest of the solution space to make sure it's optimal.
Instead, consider an approach similar to breadth-first search (BFS). You expand outward uniformly, and will never search a path longer than the optimal solution. Just use a FIFO queue to schedule which node to visit next. This is the approach I've taken with my solver.
from queue import Queue

def solve(source, target):
    queue = Queue()
    path = [source]
    queue.put(path)
    while source != target:
        queue.put(path + [source * 2])
        queue.put(path + [source + 1])
        queue.put(path + [source - 1])
        path = queue.get()
        source = path[-1]
    return path

if __name__ == "__main__":
    print(solve(4, 79))
One way to speed up (and possibly fix) this code, while keeping the recursive implementation, is to use memoization.
The issue here is that you are recalculating the same values many times. Instead, you can use a map to store the results that you already calculated, and reuse them when you need them again.
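Note that memoizing this forward recursion directly is tricky: the state graph has cycles (+1 followed by -1 revisits a state), so cached values may not be final when stored. A sketch of one well-founded variant (my own illustration in C++, not the exact suggestion above): recurse backward from target, where an even value is halved and an odd value moves to an adjacent even value before halving, so the recursion has no cycles and every memoized entry is final.
#include <iostream>
#include <unordered_map>
#include <algorithm>

// Backward search: undo a doubling (halve), or undo a +/-1 step.
// The recursion on `target` is cycle-free, so memoization is safe.
int minMoves(int source, int target, std::unordered_map<int, int>& memo) {
    if (target <= source) return source - target;        // only -1 moves remain
    auto it = memo.find(target);
    if (it != memo.end()) return it->second;
    int best;
    if (target % 2 == 0)
        best = 1 + minMoves(source, target / 2, memo);   // undo a doubling
    else
        best = 1 + std::min(minMoves(source, target + 1, memo),   // undo a -1
                            minMoves(source, target - 1, memo));  // undo a +1
    best = std::min(best, target - source);              // or just +1 all the way
    memo[target] = best;
    return best;
}

int main() {
    std::unordered_map<int, int> memo;
    std::cout << minMoves(3, 10, memo) << "\n";  // 3: 3*2=6, 6-1=5, 5*2=10
}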
This problem can be solved constructively. First, the easy cases. If s = t, the answer is 0. If s > t, the answer is s - t, because subtraction by 1 is the only operation that lowers s, and the other two can only increase the number of subtractions required.
Now assume s < t. Since s > 0 is given, doubling will always be the fastest way to increase s (if s is 1, it's tied with incrementing). So if the challenge were just to make s >= t, the answer would always be the number of doublings required to do that. This procedure may overshoot t, but the first doubling greater than t and the last doubling not greater than t must both be within a factor of 2 of t.
Let's look at the effect of inserting an addition or subtraction. First, look only at addition:
(((s*2) * 2) * 2) + 1 = 8s + 1
vs:
((((s+1)*2) * 2) * 2) = 8s + 8
Putting an addition before n doublings makes the final result 2^n bigger. So consider s = 3 and t = 8. The last double not bigger than 8 is 6. This is 2 off, so if we put an addition one double before the last double, we get what we want: (3+1) * 2. Alternatively, we could overshoot to the first double greater than 8, which is 12. This is 4 off, so we need to put a subtraction two doublings before the last: (3-1)*2*2 = 8.
In general if we are x below the target, we need to put a +1 at n doublings before the last if the binary representation of x has a 1 at the nth place.
Similarly, if we are x above the target, we do likewise with -1's.
This procedure won't help for the 1s in x's binary representation that are at a position higher than the number of doublings available. For example, if s = 100 and t = 207, there is only one doubling to do, but x is 7, which is 111 in binary. We can knock out the middle 1 by doing an addition first; the rest we have to do one by one: (s+1)*2 + 1 + 1 + 1 + 1 + 1.
Here is an implementation with a debug flag that also outputs the list of operations when the flag is defined. The run time is O(log t):
#include <iostream>
#include <string>
#include <sstream>

#define DEBUG_INFO

int MinMoves(int s, int t)
{
    if (t <= s)
    {
        return s - t; // Only subtraction will help
    }
    int firstDoubleGreater = s;
    int lastDoubleNotGreater = s;
    int nDouble = 0;
    while (firstDoubleGreater <= t)
    {
        nDouble++;
        lastDoubleNotGreater = firstDoubleGreater;
        firstDoubleGreater *= 2;
    }
    int d1 = t - lastDoubleNotGreater;
    int d2 = firstDoubleGreater - t;
    if (d1 == 0)
        return nDouble - 1;
    int strat1 = nDouble - 1; // Double and increment
    int strat2 = nDouble;     // Double and decrement
#ifdef DEBUG_INFO
    std::cout << "nDouble: " << nDouble << "\n";
    std::stringstream s1Ops;
    std::stringstream s2Ops;
    int s1Tmp = s;
    int s2Tmp = s;
#endif
    int mask = 1 << strat1;
    for (int pos = 0; pos < nDouble - 1; pos++)
    {
#ifdef DEBUG_INFO
        if (d1 & mask)
        {
            s1Ops << s1Tmp << "+1=" << s1Tmp + 1 << "\n" << s1Tmp + 1 << "*2= " << (s1Tmp + 1) * 2 << "\n";
            s1Tmp = (s1Tmp + 1) * 2;
        }
        else
        {
            s1Ops << s1Tmp << "*2= " << s1Tmp * 2 << "\n";
            s1Tmp = s1Tmp * 2;
        }
#endif
        if (d1 & mask)
            strat1++;
        d1 = d1 & ~mask;
        mask = mask >> 1;
    }
    strat1 += d1;
#ifdef DEBUG_INFO
    if (d1 != 0)
        s1Ops << s1Tmp << " +1 " << d1 << " times = " << s1Tmp + d1 << "\n";
#endif
    mask = 1 << strat2;
    for (int pos = 0; pos < nDouble; pos++)
    {
#ifdef DEBUG_INFO
        if (d2 & mask)
        {
            s2Ops << s2Tmp << "-1=" << s2Tmp - 1 << "\n" << s2Tmp - 1 << "*2= " << (s2Tmp - 1) * 2 << "\n";
            s2Tmp = (s2Tmp - 1) * 2;
        }
        else
        {
            s2Ops << s2Tmp << "*2= " << s2Tmp * 2 << "\n";
            s2Tmp = s2Tmp * 2;
        }
#endif
        if (d2 & mask)
            strat2++;
        d2 = d2 & ~mask;
        mask = mask >> 1;
    }
    strat2 += d2;
#ifdef DEBUG_INFO
    if (d2 != 0)
        s2Ops << s2Tmp << " -1 " << d2 << " times = " << s2Tmp - d2 << "\n";
    std::cout << "Strat1: " << strat1 << "\n";
    std::cout << s1Ops.str() << "\n";
    std::cout << "\n\nStrat2: " << strat2 << "\n";
    std::cout << s2Ops.str() << "\n";
#endif
    if (strat1 < strat2)
    {
        return strat1;
    }
    else
    {
        return strat2;
    }
}

int main()
{
    int s = 25;
    int t = 193;
    std::cout << "s = " << s << " t = " << t << "\n";
    std::cout << MinMoves(s, t) << std::endl;
}
A short BFS algorithm. It finds the shortest path in a graph where every vertex x is connected to x + 1, x - 1 and x * 2; it runs in O(n).
#include <bits/stdc++.h>
using namespace std;

const int _MAX_DIS = 2020;
const int _MIN_DIS = 0;

int minMoves(int begin, int end) {
    queue<int> Q;
    int dis[_MAX_DIS];
    fill(dis, dis + _MAX_DIS, -1);
    dis[begin] = 0;
    Q.push(begin);
    while (!Q.empty()) {
        int v = Q.front(); Q.pop();
        int tab[] = {v + 1, v - 1, v * 2};
        for (int i = 0; i < 3; i++) {
            int w = tab[i];
            if (_MIN_DIS <= w && w < _MAX_DIS && dis[w] == -1) { // w < _MAX_DIS keeps the index in bounds
                Q.push(w);
                dis[w] = dis[v] + 1;
            }
        }
    }
    return dis[end];
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    cout.tie(0);
    cout << minMoves(1, 1000);
    return 0;
}
Suppose I have n circles of radius r. I want to place them randomly inside a rectangle of size AxA.
It is guaranteed that they fit. One can suppose that the sum of the area of all circles is about 60% of the area of the rectangle.
I can try backtracking: trying to place a circle, going back when it fails, etc., but there should be a better way to do it.
One possibility is to generate random points inside the rectangle without further constraints, and then move the points/centres iteratively (in little steps) so as to avoid overlapping. If two points are too near each other, each point can exert pressure on the other, pushing it away a little bit. The higher the pressure, the larger the move.
This process was implemented in C++. In the following simple code, to facilitate implementation, points and vectors are represented by the std::complex type.
Note that I used srand and rand for test purposes. You may use better random generators, depending on your constraints.
According to the tests that I have performed, convergence seems guaranteed for a density of 60%. I also made some tests with a density of 70%: sometimes convergence, sometimes not.
Complexity is O(n^2 * n_iter), where n is the number of circles and n_iter the number of iterations.
n_iter is generally between 100 and 300, for a density of 60%. It could be decreased by relaxing the convergence criteria.
It could seem like high complexity, compared to other proposals in the comments. In practice, for n = 15, the work is performed in less than 30 ms on my PC. A huge time or fast enough, depending on the context. I have included a figure to illustrate the algorithm.
#include <cstdlib>
#include <iostream>
#include <fstream>
#include <vector>
#include <ctime>
#include <complex>
#include <cmath>
#include <tuple>
#include <ios>
#include <iomanip>

using dcomplex = std::complex<double>;

void print (const std::vector<dcomplex>& centers) {
    std::cout << std::setprecision (9);
    std::cout << "\ncenters:\n";
    for (auto& z: centers) {
        std::cout << real(z) << ", " << imag(z) << "\n";
    }
}

std::tuple<bool, int, double> process (double A, double R, std::vector<dcomplex>& centers, int n_iter_max = 100) {
    bool check = true;
    int n = centers.size();
    std::vector<dcomplex> moves (n, 0.0);
    double acceleration = 1.0001;   // to accelerate the convergence, if density not too large
                                    // could be made dependent on the iteration index
    double dmin;

    auto limit = [&] (dcomplex& z) {
        double zx = real(z);
        double zi = imag(z);
        if (zx < R) zx = R;
        if (zx > A-R) zx = A-R;
        if (zi < R) zi = R;
        if (zi > A-R) zi = A-R;
        return dcomplex(zx, zi);
    };

    int iter;
    for (iter = 0; iter < n_iter_max; ++iter) {
        for (int i = 0; i < n; ++i) moves[i] = 0.0;
        dmin = A;
        for (int i = 0; i < n; ++i) {
            for (int j = i+1; j < n; ++j) {
                auto vect = centers[i] - centers[j];
                double dist = std::abs(vect);
                if (dist < dmin) dmin = dist;
                double x = std::max (0.0, 2*R*acceleration - dist) / 2.0;
                double coef = x / (dist + R/10000);
                moves[i] += coef * vect;
                moves[j] -= coef * vect;
            }
        }
        std::cout << "iteration " << iter << " dmin = " << dmin << "\n";
        if (dmin/R >= 2.0 - 1.0e-6) break;
        for (int i = 0; i < n; ++i) {
            centers[i] += moves[i];
            centers[i] = limit (centers[i]);
        }
    }

    dmin = A;
    for (int i = 0; i < n; ++i) {
        for (int j = i+1; j < n; ++j) {
            auto vect = centers[i] - centers[j];
            double dist = std::abs(vect);
            if (dist < dmin) dmin = dist;
        }
    }
    std::cout << "Final: dmin/R = " << dmin/R << "\n";
    check = dmin/R >= 2.0 - 1.0e-6;
    return {check, iter, dmin};
}

int main() {
    int n = 15;            // number of circles
    double R = 1.0;        // radius of each circle
    double density = 0.6;  // area of all circles over total area A*A
    double A;              // side of the square
    int n_iter = 1000;

    A = sqrt (n*M_PI*R*R/density);
    std::cout << "number of circles = " << n << "\n";
    std::cout << "density = " << density << "\n";
    std::cout << "A = " << A << std::endl;

    std::vector<dcomplex> centers (n);
    std::srand(std::time(0));
    for (int i = 0; i < n; ++i) {
        double x = R + (A - 2*R) * (double) std::rand()/RAND_MAX;
        double y = R + (A - 2*R) * (double) std::rand()/RAND_MAX;
        centers[i] = {x, y};
    }
    auto [check, n_iter_eff, dmin] = process (A, R, centers, n_iter);
    std::cout << "check = " << check << "\n";
    std::cout << "Relative min distance = " << std::setprecision (9) << dmin/R << "\n";
    std::cout << "nb iterations = " << n_iter_eff << "\n";
    print (centers);
    return 0;
}
Given a set of positive integers and a value X, find a subset S whose sum is >= X, such that sum(S) is the lowest among all such subsets.
Can it be done in polynomial time? What would the solution be?
Checking all subsets is 2^n.
Backtracking is a possibility for this problem.
It allows examining all the possibilities recursively, without the need for a large amount of memory.
It stops as soon as an optimal solution is found: sum = X, up to a given tolerance (for example 10^-10 in the program below).
It allows implementing a simple procedure of premature abandon:
at a given point, if sum plus the sum of all remaining elements is lower than X, then we can give up examining the current path, without examining the remaining elements. This procedure is optimized by sorting the input data in decreasing order.
Here is the code, in C++. The code being quite basic, it should be easy to migrate to another language.
This program tests the algorithm with random (uniform) elements and displays the number of iterations.
The complexity (i.e. the number of iterations) varies greatly with the random elements (of course), but also depends greatly on the tolerance that we accept. With a tolerance of 10^-10 and a size of n = 100, the complexity generally stays quite acceptable. It is no longer the case with a smaller tolerance.
With n = 100 and five runs, I obtained the following iteration counts: 6102, 3672, 8479, 2235, 12926. However, it is clear that there is no guarantee of good performance in all cases. For n = 100, the number of candidate subsets is huge.
// Find min sum greater than a given number X
#include <iostream>
#include <iomanip>
#include <vector>
#include <algorithm>
#include <tuple>
#include <cstdlib>
#include <cmath>
#include <ctime>

std::tuple<double, std::vector<double>> min_sum_greater(std::vector<double> &a, double X) {
    int n = a.size();
    std::vector<bool> parti (n, false);     // current partition studied
    std::vector<bool> parti_opt (n, false); // optimal partition
    std::vector<double> sum_back (n, 0);    // sum of remaining elements
    //std::cout << "n = " << n << " \tX = " << X << "\n";

    std::sort(a.begin(), a.end(), std::greater<double>());
    sum_back[n-1] = a[n-1];
    for (int i = n-2; i >= 0; --i) {
        sum_back[i] = sum_back[i+1] + a[i];
    }

    double sum = 0.0;     // current sum
    int i = 0;            // index of the element being examined
    double best_sum = sum_back[0] + 1.0;
    bool up_down = true;
    double eps = 1.0e-10; // error tolerance
    long long cpt = 0;    // to check the number of iterations

    while (true) {
        //std::cout << "Start of while loop: i = " << i << "\n";
        cpt++;
        if (up_down) { // UP
            bool abandon = (sum + sum_back[i] < X - eps) || (sum > best_sum);
            if (abandon) { // premature abandon
                parti[i] = false;
                up_down = false;
                i--;
                continue;
            }
            parti[i] = true;
            sum += a[i];
            //std::cout << "UP, i = " << i << " \tsum = " << sum << "\n";
            if (fabs(sum - X) < eps) {
                best_sum = sum;
                parti_opt = parti;
                break;
            }
            if (sum >= X) {
                if (sum < best_sum) {
                    best_sum = sum;
                    parti_opt = parti;
                    //std::cout << "i = " << i << " \tbest sum = " << best_sum << "\n";
                }
                parti[i] = false;
                sum -= a[i];
            }
            if (i == (n-1)) { // leaf
                up_down = false;
                i--;
                continue;
            }
            i++;
        } else { // DOWN
            if (i < 0) break;
            if (parti[i]) {
                sum -= a[i];
                parti[i] = false;
                i++;
                up_down = true;
            } else {
                i--;
                up_down = false;
            }
        }
    }

    std::vector<double> answer;
    for (int i = 0; i < n; ++i) {
        if (parti_opt[i]) answer.push_back (a[i]);
    }
    std::cout << "number of iterations = " << cpt << " for n = " << n << "\n";
    return std::make_tuple (best_sum, answer);
}

int main () {
    //std::vector<double> a = {5, 6, 2, 10, 2, 3, 4, 13, 17, 38, 42};
    double X = 33.5;
    srand (time(NULL));
    int n = 100;
    double vmax = 100;
    X = vmax * n / 4;
    std::vector<double> a (n);
    for (int i = 0; i < n; ++i) {
        a[i] = vmax * double(rand())/RAND_MAX;
    }
    double sum;
    std::vector<double> y;
    std::tie (sum, y) = min_sum_greater (a, X);
    std::cout << std::setprecision(15) << "sum = " << sum << "\n";
    if (n < 20) {
        std::cout << "set: ";
        for (auto val: y) {
            std::cout << val << " ";
        }
        std::cout << "\n";
    }
}
You are given an integer N. You have to find the smallest multiple of N which consists of digits 0 and 1 only. Since this multiple could be large, return it in the form of a string.
The returned string should not contain leading zeroes.
For example,
For N = 55, 110 is the smallest such multiple.
For N = 2, 10 is the answer.
I saw several related problems, but I could not find the problem with my code.
Here is my code, which gives TLE on some cases even after using a map instead of a set.
#define ll long long

int getMod(string s, int A)
{
    int res = 0;
    for (int i = 0; i < s.length(); i++)
    {
        res = res*10 + (s[i]-'0');
        res %= A;
    }
    return res;
}

string Solution::multiple(int A) {
    if (A <= 1)
        return to_string(A);
    queue<string> q;
    q.push("1");
    set<int> st;
    string s = "1";
    while (!q.empty())
    {
        s = q.front();
        q.pop();
        int mod = getMod(s, A);
        if (mod == 0)
        {
            return s;
        }
        else if (st.find(mod) == st.end())
        {
            st.insert(mod);
            q.push(s + "0");
            q.push(s + "1");
        }
    }
}
Here is an implementation in Raku.
my $n = 55;
(1 .. Inf).map( *.base(2) ).first( * %% $n );
(1 .. Inf) is a lazy list from one to infinity. The "whatever star" * establishes a closure and stands for the current element in the map.
base is a method of Raku's Num type which returns a string representation of the given number in the wanted base, here a binary string.
first returns the current element when the "whatever star" closure holds true for it.
%% is the divisibility operator; it implicitly casts its left side to Int.
Oh, and to top it off: it's easy to parallelize this, so your code can use multiple CPU cores:
(1 .. Inf).race( :batch(1000), :degree(4) ).map( *.base(2) ).first( * %% $n );
As mentioned in the "math" reference, the result is related to the congruence of the powers of 10 modulo A.
If
n = sum_i a[i] 10^i
then
n modulo A = sum_i a[i] b[i]
Where the a[i] are equal to 0 or 1, and the b[i] = (10^i) modulo A
Then the problem is to find the minimum a[i] sequence, such that the sum is equal to 0 modulo A.
From a graph point of view, we have to find the shortest path to zero modulo A.
A BFS is generally well adapted to find such a path. The issue is the possible exponential increase of the number of nodes to visit. Here, we are sure to get a number of nodes less than A, by rejecting nodes whose sum (modulo A) has already been obtained (see the used vector in the program). Note that this rejection is needed in order to get the minimum number at the end.
Here is a program in C++. The solution being quite simple, it should be easy to understand even for those not familiar with C++.
#include <iostream>
#include <string>
#include <vector>

struct node {
    int sum = 0;
    std::string s;
};

std::string multiple (int A) {
    if (A == 0) return "0"; // checked first: "10 % A" below would divide by zero
    if (A == 1) return "1";
    std::vector<std::vector<node>> nodes (2);
    std::vector<bool> used (A, false);
    int range = 0;
    int ten = 10 % A;
    int pow_ten = 1;
    nodes[range].push_back (node{0, "0"});
    nodes[range].push_back (node{1, "1"});
    used[1] = true;
    while (1) {
        int range_new = (range + 1) % 2;
        nodes[range_new].resize(0);
        pow_ten = (pow_ten * ten) % A;
        for (node &x: nodes[range]) {
            node y = x;
            y.s = "0" + y.s;
            nodes[range_new].push_back(y);
            y = x;
            y.sum = (y.sum + pow_ten) % A;
            if (used[y.sum]) continue;
            used[y.sum] = true;
            y.s = "1" + y.s;
            if (y.sum == 0) return y.s;
            nodes[range_new].push_back(y);
        }
        range = range_new;
    }
}

int main() {
    std::cout << "input number: ";
    int n;
    std::cin >> n;
    std::cout << "Result = " << multiple(n) << "\n";
    return 0;
}
EDIT
The above program uses a kind of memoization in order to speed up the process, but for large inputs the memory becomes too large.
As indicated in a comment, it cannot, for example, handle the case N = 60000007.
I improved the speed and the range a little bit with the following modifications:
A function (reduction) was created to simplify the search when the input number is divisible by 2 or 5.
For the memorization of the nodes (nodes array), only one array is used now instead of two.
A kind of meet-in-the-middle procedure is used: in a first step, a function gene_mem memorizes all relevant 01 sequences up to N_DIGIT_MEM (= 20) digits. Then the main procedure multiple2 generates valid 01 sequences "after the first 20 digits" and looks in the memory for a "complementary sequence" such that the concatenation of both is a valid sequence.
With this new program, the case N = 60000007 gives the correct result (100101000001001010011110111, 27 digits) in about 600 ms on my PC.
EDIT 2
Instead of limiting the number of digits for the memorization in the first step, I now use a threshold on the size of the memory, as this size depends not only on the number of digits but also on the input number. Note that the optimal value of this threshold would depend on the input number. Here, I selected a threshold of 50k as a compromise. With a threshold of 20k, for 60000007, I obtain the correct result in 36 ms. Moreover, with a threshold of 100k, the worst case 99999999 is solved in 5 s.
I made various tests with values less than 10^9. In almost all tested cases, the result is provided in less than 1 s. However, I met a corner case, N = 99999999, whose result consists of 72 consecutive "1"s. In this particular case, the program takes about 6.7 s. For 60000007, the correct result is obtained in 69 ms.
Here is the new program:
#include <iostream>
#include <string>
#include <vector>
#include <map>
#include <unordered_map>
#include <chrono>
#include <cmath>
#include <algorithm>

std::string reverse (std::string s) {
    std::string res {s.rbegin(), s.rend()};
    return res;
}

struct node {
    int sum = 0;
    std::string s;
    node (int sum_ = 0, std::string s_ = ""): sum(sum_), s(s_) {};
};

// This function simplifies the search when the input number is divisible by 2 or 5
node reduction (int &X, long long &pow_ten) {
    node init {0, ""};
    while (1) {
        int digit = X % 10;
        if (digit == 1 || digit == 3 || digit == 7 || digit == 9) break;
        switch (digit) {
            case(0):
                X /= 10;
                break;
            case(2):
            case(4):
            case(6):
            case(8):
                X = (5*X)/10;
                break;
            case(5):
                X = (2*X)/10;
                break;
        }
        init.s.push_back('0');
        pow_ten = (pow_ten * 10) % X;
    }
    return init;
}

const int N_DIGIT_MEM = 30; // 20
const int threshold_size_mem = 50000;

// This function memorizes all relevant 01 sequences up to N_DIGIT_MEM digits
bool gene_mem (int X, long long &pow_ten, int index_max, std::map<int, std::string> &mem, node &result) {
    std::vector<node> nodes;
    std::vector<bool> used (X, false);
    bool start = true;
    for (int index = 0; index < index_max; ++index) {
        if (start) {
            node x = {int(pow_ten), "1"};
            nodes.push_back (x);
        } else {
            for (node &x: nodes) {
                x.s.push_back('0');
            }
            int n = nodes.size();
            for (int i = 0; i < n; ++i) {
                node y = nodes[i];
                y.sum = (y.sum + pow_ten) % X;
                y.s.back() = '1';
                if (used[y.sum]) continue;
                used[y.sum] = true;
                if (y.sum == 0) {
                    result = y;
                    return true;
                }
                nodes.push_back(y);
            }
        }
        pow_ten = (10 * pow_ten) % X;
        start = false;
        int n_mem = nodes.size();
        if (n_mem > threshold_size_mem) {
            break;
        }
    }
    for (auto &x: nodes) {
        mem[x.sum] = x.s;
    }
    //std::cout << "size mem = " << mem.size() << "\n";
    return false;
}

// This function generates valid 01 sequences "after the first 20 digits" and then
// looks in the memory for a "complementary sequence" such that the concatenation
// of both is a valid sequence
std::string multiple2 (int A) {
    if (A == 0) return "0"; // checked first: "10 % A" below would divide by zero
    std::vector<node> nodes;
    std::map<int, std::string> mem;
    int ten = 10 % A;
    long long pow_ten = 1;
    int X = A;
    node init = reduction (X, pow_ten);
    if (X != A) ten = ten % X;
    if (X == 1) {
        init.s.push_back('1');
        return reverse(init.s);
    }
    std::vector<bool> used (X, false);
    node result;
    int index_max = N_DIGIT_MEM;
    if (gene_mem (X, pow_ten, index_max, mem, result)) {
        return reverse(init.s + result.s);
    }
    node init2 {0, ""};
    nodes.push_back(init2);
    while (1) {
        for (node &x: nodes) {
            x.s.push_back('0');
        }
        int n = nodes.size();
        for (int i = 0; i < n; ++i) {
            node y = nodes[i];
            y.sum = (y.sum + pow_ten) % X;
            if (used[y.sum]) continue;
            used[y.sum] = true;
            y.s.back() = '1';
            if (y.sum != 0) {
                int target = X - y.sum;
                auto search = mem.find(target);
                if (search != mem.end()) {
                    //std::cout << "mem size 2nd step = " << nodes.size() << "\n";
                    return reverse(init.s + search->second + y.s);
                }
            }
            nodes.push_back(y);
        }
        pow_ten = (pow_ten * ten) % X;
    }
}

int main() {
    std::cout << "input number: ";
    int n;
    std::cin >> n;
    std::string res;
    auto t1 = std::chrono::high_resolution_clock::now();
    res = multiple2(n);
    std::cout << "Result = " << res << " ndigit = " << res.size() << std::endl;
    auto t2 = std::chrono::high_resolution_clock::now();
    auto duration2 = std::chrono::duration_cast<std::chrono::microseconds>( t2 - t1 ).count();
    std::cout << "time = " << duration2/1000 << " ms" << std::endl;
    return 0;
}
For people more familiar with Python, here is a converted version of @Damien's code. Damien's important insight is to strongly reduce the search tree, taking advantage of the fact that each partial sum only needs to be investigated once, namely the first time it is encountered.
The problem is also described at MathPuzzle, but there they mostly focus on the necessary existence of a solution. There's also code mentioned at the On-Line Encyclopedia of Integer Sequences. The Sage version seems to be somewhat similar.
I made a few changes:
Starting with an empty list helps to correctly solve A=1 while simplifying the code. The multiplication by 10 is moved to the end of the loop. Doing the same for 0 seems to be hard, as log10(0) is minus infinity.
Instead of alternating between nodes[range] and nodes[new_range], two different lists are used.
As Python supports integers of arbitrary precision, the partial results could be stored as decimal or binary numbers instead of as strings. This is not yet done in the code below.
from collections import namedtuple

node = namedtuple('node', 'sum str')

def find_multiple_ones_zeros(A):
    nodes = [node(0, "")]
    used = set()
    pow_ten = 1
    while True:
        new_nodes = []
        for x in nodes:
            y = node(x.sum, "0" + x.str)
            new_nodes.append(y)
            next_sum = (x.sum + pow_ten) % A
            if next_sum in used:
                continue
            used.add(next_sum)
            y = node(next_sum, "1" + x.str)
            if next_sum == 0:
                return y.str
            new_nodes.append(y)
        pow_ten = (pow_ten * 10) % A
        nodes = new_nodes
This is a basic dynamic programming problem - Number of score combinations. I am aware of the bottom-up approach for this problem, which works well.
However, I am unable to work out a top-down approach for the problem. Caching the recursive part gives us more combinations than necessary (the ordering/sequence of scores is also counted), so to avoid that we need to provide a constraint that makes the sequence increase monotonically. Here is a recursive approach for the same: Dynamic Programming - Number of distinct combinations to reach a given score.
Here is my current code:
#include <iostream>
#include <vector>
using namespace std;

int helper(int target, vector<int>& coins, vector<int>& cache, int min) {
    if (target < 0) return 0;
    if (target == 0) return 1;
    if (cache[target] != 0) return cache[target];
    for (auto& c : coins) {
        if (target >= c && min <= c) {
            //cout << min << " " << c << " " << target << endl;
            cache[target] += helper(target - c, coins, cache, c);
            //cout << cache[target] << endl;
        }
    }
    return cache[target];
}

int main() {
    vector<int> coins{2, 3};
    int target = 7;
    vector<int> cache(target + 1, 0);
    cache[0] = 1;
    cache[7] = helper(target, coins, cache, 1);
    for (auto& x : cache) cout << x << endl;
    return 0;
}
Here is a runnable ideone link.
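For illustration only (this sketch is mine, not part of the original post): one way to make the cache respect the monotonicity constraint described above is to key it on both the remaining target and the index of the smallest coin still allowed, so each unordered combination is counted exactly once. Note that -1 is used as the "not computed" sentinel, since 0 is a legitimate count.
#include <iostream>
#include <vector>
using namespace std;

// cache[target][idx] = number of combinations making `target` from coins[idx..].
// -1 marks "not computed yet" (0 is a valid count, so it cannot be the sentinel).
int helper(int target, int idx, const vector<int>& coins, vector<vector<int>>& cache) {
    if (target == 0) return 1;
    if (idx == (int)coins.size()) return 0;
    if (cache[target][idx] != -1) return cache[target][idx];
    int total = helper(target, idx + 1, coins, cache);           // skip coins[idx]
    if (target >= coins[idx])
        total += helper(target - coins[idx], idx, coins, cache); // take coins[idx]
    return cache[target][idx] = total;
}

int main() {
    vector<int> coins{2, 3};
    int target = 7;
    vector<vector<int>> cache(target + 1, vector<int>(coins.size() + 1, -1));
    cout << helper(target, 0, coins, cache) << endl; // 1 (7 = 2 + 2 + 3)
}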
I've implemented Kolakoski's sequence with a low memory footprint, using the reference from Wikipedia.
#include <iostream>
#include <iomanip>
#include <vector>

int IncrementPointer(std::vector<int>& vec, int k)
{
    if (vec.size() <= k)
    {
        vec.push_back(22);
    }
    if (vec[k] == 11)
    {
        vec[k] = 1;
        return 1;
    }
    else if (vec[k] == 22)
    {
        vec[k] = 2;
        return 2;
    }
    else if (vec[k] == 1)
    {
        vec[k] = IncrementPointer(vec, k + 1) == 1 ? 2 : 22;
        return 2;
    }
    else if (vec[k] == 2)
    {
        vec[k] = IncrementPointer(vec, k + 1) == 1 ? 1 : 11;
        return 1;
    }
    return 0;
}

int main()
{
    long long iteration = 2;
    long long nextPowOf10 = 10;
    long long numOf1s = 1;
    std::vector<int> vec;
    std::cout << std::setw(15) << 'n' << std::setw(15) << "#1s" << std::setw(8) << "P(n)\n";
    std::cout << std::setw(15) << 1 << std::setw(15) << numOf1s << '\n';
    while (iteration++ <= 100'000'000'000'000)
    {
        int retvalue = IncrementPointer(vec, 0);
        if (retvalue == 1)
            ++numOf1s;
        if (iteration % nextPowOf10 == 0)
        {
            std::cout << std::setw(15) << nextPowOf10 << std::setw(15) << numOf1s << std::setw(8) << vec.size() << '\n';
            nextPowOf10 *= 10;
        }
    }
    return 0;
}
Now, the program internally calculates the right elements of the sequence in Debug mode and outputs the expected results. So far, so good.
The problem starts in Release mode: the vector gets optimized away (how could it be?), and the calculated elements are now wrong.
The expected sequence is [[1 2] 2 1 1 2 1 2 2 etc.], with the first two preset.
In Release mode, the elements are [1 2] 2 1 1 1 1 1 2 ... Clearly, something went wrong. Subsequently, the output is unexpected, and the program crashes with a call to malloc (so the vector does get reallocated somewhere).
What am I doing wrong? Is it the simultaneous push_back to the vector and update to an element of the vector?
I believe a construct like this exhibits undefined behavior:
vec[k] = IncrementPointer(vec, k + 1) == 1 ? 2 : 22;
It is unspecified whether vec[k] or IncrementPointer(vec, ...) gets evaluated first. vec[k] returns a reference to the corresponding element. If IncrementPointer is called later, it may push new elements into vec, which in turn may cause it to reallocate, whereupon that reference becomes dangling.
Make it
int val = IncrementPointer(vec, k + 1);
vec[k] = val == 1 ? 2 : 22;