Hi everyone,
I was searching for an efficient way to check whether a number is a multiple of 5, so I searched on Google and found this solution on geeksforgeeks.org.
There were 3 solutions to my problem:
The first solution was to subtract 5 repeatedly until reaching zero,
the second solution was to convert the number to a string and check whether the last character is 5 or 0,
and the third solution did some interesting operations at the bit level.
I'm interested in the third solution, as I already fully understand the first and the second.
Here's the code from geeksforgeeks:
bool isMultipleof5(int n)
{
    // If n is a multiple of 5 then we
    // make sure that last digit of n is 0
    if ( (n & 1) == 1 )
        n <<= 1;

    float x = n;
    x = ( (int)(x * 0.1) ) * 10;

    // If last digit of n is 0 then n
    // will be equal to (int)x
    if ( (int)x == n )
        return true;
    return false;
}
I understand only some parts of the logic, and I haven't even tested this code, so I need to understand it before I can use it freely.
As the mentioned article says, this function multiplies the number by 2 if its last bit is set, then checks whether the last bit is 0 and returns true in that case. But after checking the binary representations of some numbers I got confused, since the last bit is 1 for any odd number and 0 for any even number. So...
My actual question is:
What's the logic of this function?
Any answer is appreciated!
Thanks to all!
The most straightforward way to check whether a number is a multiple of 5 is simply:
if (n % 5 == 0) {
    // logic...
}
What the bit manipulation code does is:
If the number is odd, multiply it by two. Notice that for multiples of 5, the ones digit is either 0 or 5, and doubling the number makes it end in 0.
We create a number x that is set to n, but with the ones digit set to 0. (We do this by multiplying n by 0.1, which removes the ones digit, and then multiplying by 10, which appends a 0; the total effect is just changing the ones digit to 0.)
We know that originally, if n was a multiple of 5, it would have a ones digit of 0 after step 1. So we check if x is equal to it, and if so, then we can say n was a multiple of 5.
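To make this concrete, here is a small test harness (my own addition, not part of the geeksforgeeks article) that runs the quoted function on one multiple of 5 and one non-multiple:

#include <cstdio>

// The function quoted in the question, unchanged in behavior.
bool isMultipleof5(int n) {
    if ((n & 1) == 1)
        n <<= 1;                   // odd: double it, so a multiple of 5 now ends in 0
    float x = n;
    x = ((int)(x * 0.1)) * 10;     // zero out the ones digit
    return (int)x == n;            // n survives unchanged only if that digit was 0
}

int main() {
    // 35 is odd, so it is doubled to 70; zeroing the ones digit leaves 70, hence true.
    printf("isMultipleof5(35) = %d\n", isMultipleof5(35));
    // 16 is even; zeroing the ones digit gives 10 != 16, hence false.
    printf("isMultipleof5(16) = %d\n", isMultipleof5(16));
    return 0;
}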
Related
Given two non-negative integers A and B, find the minimum number of operations to make them equal simultaneously. In one operation, you can:
either change A to 2*A
or change B to 2*B
or change both A and B to A-1, B-1
For example: A = 7, B = 25
Sequence of operations would be:
6 24
12 24
24 24
We cannot make them equal in fewer than 3 operations.
I was asked this coding question in a test a week ago. I cannot think of a solution and it is stuck in my head. The inputs A and B were somewhat over 10^12, so it is clear that I cannot use a loop, else it will exceed the time limit.
A slow but working solution:

1. If they are equal, stop.
2. If one of them is 0, stop with failure (there is no solution if negative numbers are not allowed).
3. While both are larger than 1, decrease both.
4. Now the smaller is 1, the other is larger.
5. While the smaller has a shorter binary representation, double the smaller.
6. Continue at step 1.

In step 3, the maximum decreases. In step 5, the absolute difference decreases. Thus the algorithm eventually terminates.
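Here is a minimal sketch of that procedure (my own illustration; names like slowEqualize are invented). It returns the step count of some valid solution, not the minimum:

#include <cstdint>
#include <iostream>

// Number of bits in the binary representation of x (x >= 1).
static int bitLength(uint64_t x) {
    int len = 0;
    while (x > 0) { x >>= 1; ++len; }
    return len;
}

// Follows the numbered steps above; returns -1 if a number hits 0 first.
long long slowEqualize(uint64_t a, uint64_t b) {
    long long steps = 0;
    while (a != b) {                       // step 1: stop when equal
        if (a == 0 || b == 0) return -1;   // step 2: failure
        while (a > 1 && b > 1) {           // step 3: decrease both
            --a; --b; ++steps;
        }                                  // step 4: the smaller is now 1
        uint64_t &smaller = (a < b) ? a : b;
        const uint64_t larger = (a < b) ? b : a;
        while (bitLength(smaller) < bitLength(larger)) {
            smaller *= 2; ++steps;         // step 5: double the smaller
        }
    }                                      // step 6: continue at step 1
    return steps;
}

int main() {
    std::cout << slowEqualize(7, 25) << "\n"; // valid, but well above the optimum of 3
}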
The following should give the optimal solution. We have to compare a few different ways and take the best one.
1. One working solution is to double the smaller number as many times as it stays below the larger number (possibly zero times). Then calculate the difference between double the (possibly repeatedly) doubled smaller number and the larger number, and decrease both numbers that many times. Then double the smaller number one more time. (If the numbers are equal from the beginning, the solution is trivial instead.) This gives an upper bound on the number of steps.

Now try out the following optimizations:

2a) Choose a number n between 0 and the number of steps of the best solution so far.
2b) Choose one of the two numbers as A and the other as B (two possibilities).
2c) Now count the steps applied by the following procedure:

3. Double A n times.
4. Calculate the smallest exponent m for which B * 2^m >= A; m should be at least 1.
5. Calculate the difference between A and the product from step 4 in a mixed-base (correct term?) system where the digit at position i (counting from the least significant digit, i = 0) has the positional value 2^(i+1) - 1, which gives, from right to left: 1, 3, 7, 15, 31, 63, ... Of all possible representations, take the one with the smallest digit sum, e.g. 100 for 7 is correct, 021 is not. (Side note: for the least digit sum there will mostly be digits 0 and 1 and at most one digit 2, no other digits. There will never be a digit 1 to the right of a 2.)
6. Represent that number as m digits by filling the left positions with zeros. If the number does not fit, go back to step 2 for another selection.
7. Take the most significant not-yet-processed digit from step 6 and do that many decreasing steps.
8. Double B.
9. Repeat from step 7 with the next digit; if there are no digits left, the numbers are equal.
10. If the number of steps is less than the best solution so far, choose this as the proposed solution.
11. Go back to step 2 for another selection.

After doing all the selections from step 2, we should have the optimal solution with the minimum number of steps.
The following examples are from an earlier version of the answer, where A is always the larger number and n=0, so we test only one selection.
Example 17 and 65
Power of 2: 2^2=4; 4x17=68
Difference: 68-65=3
3 = 010=10 in base 7/3/1
Start => 17/65
Decrease. Double. => 32/64
Double. => 64/64
Example 18 and 67
Power of 2: 2^2=4; 4x18=72
Difference: 72-67=5
5 = 012=12 in base 7/3/1
Start => 18/67
Decrease. Double. => 34/66
Decrease. Decrease. Double. => 64/64
Example 10 and 137
Power of 2: 2^4=16; 16*10=160
Difference: 160-137=23
23 = 1101 in base 15/7/3/1
Start => 10/137
Decrease. Double. => 18/136
Decrease. Double. => 34/135
Double. => 68/135
Decrease. Double. => 134/134
Here's a breadth-first search that does return the correct answer but may not be an optimal method of finding it. Maybe it can help others detect a pattern.
JavaScript code:
function f(a, b) {
    // BFS queue of [x, y, path-so-far] states
    const q = [[a, b, [a, b]]];
    while (true) {
        const [x, y, path] = q.shift();
        if (x == y) {
            return path;
        }
        if (x > 0 && y > 0) {
            q.push([x - 1, y - 1, path.concat([x - 1, y - 1])]); // decrease both
        }
        q.push([2 * x, y, path.concat([2 * x, y])]); // double x
        q.push([x, 2 * y, path.concat([x, 2 * y])]); // double y
    }
}

function showPath(path) {
    // print the two number sequences in binary, one per line
    let out1 = "";
    let out2 = "";
    for (let i = 0; i < path.length; i += 2) {
        const s1 = path[i].toString(2);
        const s2 = path[i + 1].toString(2);
        const len = Math.max(s1.length, s2.length);
        out1 += s1.padStart(len, "0");
        out2 += s2.padStart(len, "0");
        if (i < path.length - 2) {
            out1 += " --> ";
            out2 += " --> ";
        }
    }
    console.log(out1);
    console.log(out2);
}
showPath(f(89, 7));
I have come up with a divide-and-conquer algorithm for this. I just wanted to know whether it would work or not.
First, mid is calculated from the integer range, i.e. (0 + ((1 << 32) - 1)) >> 1, and then this idea is applied: the count of distinct values from start to mid (or from mid to end) will always be less than the count of input numbers falling into that range, because there will definitely be some numbers which are repeated, as the range of a 32-bit integer is much smaller compared to the billions of numbers we are considering.
def get_duplicate(input, start, end):
    while True:
        mid = (start >> 1) + end - (end >> 1)
        less_to_mid = 0
        more_to_mid = 0
        equal_to_mid = 0
        for data in input:
            data = int(data, 16)
            if data < mid:
                less_to_mid += 1
            elif data == mid:
                equal_to_mid += 1
            else:
                more_to_mid += 1
        if equal_to_mid > 1:
            return mid
        elif mid - start < less_to_mid:
            end = mid - 1
        elif end - mid < more_to_mid:
            start = mid + 1

with open(r"codes\output.txt", 'r+') as f:
    content = f.read().split()
    print(get_duplicate(content, 0, (1 << 32) - 1))
I know we can use a bit array, but I just want to get your views on this solution and whether the implementation is buggy.
Your method is OK, but you will probably need to read the input many times to find the answer.
Here is a variant which needs little memory and reads the input only twice:

1. Initialize an array A[65536] of integers to zero.
2. Read the numbers one by one. Every time a number x is read, add 1 to A[x mod 65536].
3. When the reading ends, there will be at least one i such that A[i] is strictly bigger than 65536. This is because 65536 * 65536 < 4.3 billion. Let us say A[i0] is bigger than 65536.
4. Clear the array A to zero.
5. Read the numbers again, but this time only look at those numbers x such that x mod 65536 = i0. For every such x, add 1 to A[x / 65536].
6. When the reading ends, there will be at least one j such that A[j] is strictly bigger than 1. Then the number 65536 * j + i0 is the final answer.
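A sketch of the two passes in C++ (my own illustration, not the original answerer's code; an in-memory vector stands in for the input that would really be re-read from disk):

#include <algorithm>
#include <cstdint>
#include <vector>

// Assumes nums really holds more than 2^32 values, so the pigeonhole
// argument from the steps above applies.
uint32_t find_duplicate(const std::vector<uint32_t> &nums) {
    std::vector<uint64_t> counts(65536, 0);

    // Pass 1: count the numbers by their value mod 65536.
    for (uint32_t x : nums)
        ++counts[x & 0xFFFF];

    // Some bucket must exceed 65536 entries, since 65536 * 65536 < 4.3 billion.
    uint32_t i0 = 0;
    while (counts[i0] <= 65536)
        ++i0;

    // Pass 2: within bucket i0, count by the high 16 bits.
    std::fill(counts.begin(), counts.end(), 0);
    for (uint32_t x : nums)
        if ((x & 0xFFFF) == i0 && ++counts[x >> 16] > 1)
            return x; // x = 65536 * j + i0, seen at least twice

    return 0; // unreachable under the size assumption above
}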
2^32 bits of memory (512 MB) is nothing special for modern systems, so you can use a bitset. This data structure needs only one bit per number, and all modern languages have an implementation. Here is the idea - you just remember whether a number has already been seen:
#include <bitset>
#include <iostream>
#include <memory>

void print_twice_seen (Iterator &it) // iterates through all numbers
{
    // bitset for 2^32 elements (512 MB); far too big for the stack,
    // so allocate it on the heap (assume a 64-bit system)
    auto seen = std::make_unique<std::bitset<(1ULL << 32)>>();
    while (it.hasNext()) {
        unsigned int val = it.next(); // return current element and move the iterator
        if ((*seen)[val])
            std::cout << "Seen at least twice: " << val << std::endl;
        else
            seen->set(val, true); // remember as seen
    }
}
Can anyone help me with an algorithm for this problem?
We have a big number (19 digits) and, in a loop, we subtract one of the digits of that number from the number itself.
We continue to do this until the number reaches zero. We want to calculate the minimum number of subtractions that makes a given number reach zero.
The algorithm must respond fast, for a 19-digit number (around 10^19), within two seconds. As an example, providing an input of 36 will give 7:
1. 36 - 6 = 30
2. 30 - 3 = 27
3. 27 - 7 = 20
4. 20 - 2 = 18
5. 18 - 8 = 10
6. 10 - 1 = 9
7. 9 - 9 = 0
Thank you.
The minimum number of subtractions to reach zero makes this, I suspect, a very thorny problem, one that will require a great deal of backtracking over potential solutions, making it possibly too expensive for your time limitations.
But the first thing you should do is a sanity check. Since the largest digit is a 9, a 19-digit number will require about 10^18 subtractions to reach zero. Code up a simple program that continuously subtracts 9 from 10^19 until the value becomes less than ten. If you can't do that within the two seconds, you're in trouble.
By way of example, the following program (a):
#include <stdio.h>
#include <stdlib.h>   /* for strtoull */

int main (int argc, char *argv[]) {
    unsigned long long x = strtoull(argv[1], NULL, 10);
    x /= 1000000000;            /* divide by a billion so the test finishes */
    while (x > 9)
        x -= 9;
    return x;
}
when run with the argument 10000000000000000000 (10^19), takes a second and a half of clock time (and CPU time, since it's all calculation) even at gcc's insane optimisation level of -O3:
real 0m1.531s
user 0m1.528s
sys 0m0.000s
And that's with the one-billion divisor just before the while loop, meaning the full number of iterations would take about 48 years.
So a brute-force method isn't going to help here; what you need is some serious mathematical analysis, which probably means you should post a similar question over at https://math.stackexchange.com/ and let the math geniuses have a shot.
(a) If you're wondering why I'm getting the value from the user rather than using a constant of 10000000000000000000ULL, it's to prevent gcc from calculating it at compile time and turning it into something like:
mov $1, %eax
Ditto for the return x, which prevents it from noticing that the final value of x is unused and optimising the loop out of existence altogether.
I don't have a solution that can solve 19 digit numbers in 2 seconds. Not even close. But I did implement a couple of algorithms (including a dynamic programming algorithm that solves for the optimum), and gained some insight that I believe is interesting.
Greedy Algorithm
As a baseline, I implemented a greedy algorithm that simply picks the largest digit in each step:
uint64_t countGreedy(uint64_t inputVal) {
    uint64_t remVal = inputVal;
    uint64_t nStep = 0;
    while (remVal > 0) {
        // find the largest decimal digit of the remaining value
        uint64_t digitVal = remVal;
        uint_fast8_t maxDigit = 0;
        while (digitVal > 0) {
            uint64_t nextDigitVal = digitVal / 10;
            uint_fast8_t digit = digitVal - nextDigitVal * 10;
            if (digit > maxDigit) {
                maxDigit = digit;
            }
            digitVal = nextDigitVal;
        }
        // subtract it and count one step
        remVal -= maxDigit;
        ++nStep;
    }
    return nStep;
}
Dynamic Programming Algorithm
The idea for this is that we can calculate the optimum incrementally. For a given value, we pick a digit, which adds one step to the optimum number of steps for the value with the digit subtracted.
With the target function (optimum number of steps) for a given value named optSteps(val), and the digits of the value named d_i, the following relationship holds:
optSteps(val) = 1 + min(optSteps(val - d_i))
This can be implemented with a dynamic programming algorithm. Since d_i is at most 9, we only need the previous 9 values to build on. In my implementation, I keep a circular buffer of 10 values:
static uint64_t countDynamic(uint64_t inputVal) {
    // circular buffer: minSteps[v % 10] holds optSteps(v) for the last 10 values
    uint64_t minSteps[10] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
    uint_fast8_t digit0 = 0;
    for (uint64_t val = 10; val <= inputVal; ++val) {
        digit0 = val % 10;
        uint64_t digitVal = val;
        uint64_t minPrevStep = 0;
        bool prevStepSet = false;
        // try subtracting each nonzero digit of val, keeping the best predecessor
        while (digitVal > 0) {
            uint64_t nextDigitVal = digitVal / 10;
            uint_fast8_t digit = digitVal - nextDigitVal * 10;
            if (digit > 0) {
                uint64_t prevStep = 0;
                // look up optSteps(val - digit) in the circular buffer
                if (digit > digit0) {
                    prevStep = minSteps[10 + digit0 - digit];
                } else {
                    prevStep = minSteps[digit0 - digit];
                }
                if (!prevStepSet || prevStep < minPrevStep) {
                    minPrevStep = prevStep;
                    prevStepSet = true;
                }
            }
            digitVal = nextDigitVal;
        }
        minSteps[digit0] = minPrevStep + 1;
    }
    return minSteps[digit0];
}
Comparison of Results
This may come as a surprise: I ran both algorithms on all values up to 1,000,000, and the results were absolutely identical. This suggests that the greedy algorithm actually calculates the optimum.
I don't have a formal proof that this is indeed true for all possible values. It intuitively kind of makes sense to me. If in any given step, you choose a smaller digit than the maximum, you compromise the immediate progress with the goal of getting into a more favorable situation that allows you to catch up and pass the greedy approach. But in all the scenarios I thought about, the situation after taking a sub-optimal step just does not get significantly more favorable. It might make the next step bigger, but that is at most enough to get even again.
Complexity
While both algorithms look linear in the size of the value, they also loop over all digits in the value. Since the number of digits corresponds to log(n), I believe the complexity is O(n * log(n)).
I think it's possible to make it linear by keeping counts of the frequency of each digit, and modifying them incrementally. But I doubt it would actually be faster. It requires more logic, and turns a loop over all digits in the value (which is in the range of 2-19 for the values we are looking at) into a fixed loop over 10 possible digits.
Runtimes
Not surprisingly, the greedy algorithm is faster to calculate a single value. For example, for value 1,000,000,000, the runtimes on my MacBook Pro are:
greedy: 3 seconds
dynamic: 36 seconds
On the other hand, the dynamic programming approach is obviously much faster at calculating all the values, since its incremental approach needs them as intermediate results anyway. For calculating all values from 10 to 1,000,000:
greedy: 19 minutes
dynamic: 0.03 seconds
As the runtimes above show, the greedy algorithm gets to about 9-digit input values within the targeted runtime of 2 seconds. The implementations aren't really tuned, and it's certainly possible to squeeze out some more time, but the improvements would be fractional.
Ideas
As already explored in another answer, there's no chance of getting the result for 19 digit numbers in 2 seconds by subtracting digits one by one. Since we subtract at most 9 in each step, completing this for a value of 10^19 needs more than 10^18 steps. We mostly use computers that perform in the rough range of 10^9 operations/second, which suggests that it would take about 10^9 seconds.
Therefore, we need something that can take shortcuts. I can think of scenarios where that's possible, but haven't been able to generalize it to a full strategy so far.
For example, if your current value is 9999, you know that you can subtract 9 until you reach 9000. So you can calculate that you will make 112 steps ((9999 - 9000) / 9 + 1) where you subtract 9, which can be done in a few operations.
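For that specific example the batch can be computed in constant time; here is a tiny sketch (my own, just to illustrate the arithmetic):

#include <cstdint>
#include <iostream>

int main() {
    uint64_t val = 9999;
    // While the value is at least 9000, its largest digit is 9, so all
    // of those subtract-9 steps can be collapsed into one computation.
    uint64_t batch = (val - 9000) / 9 + 1; // 112 steps, as in the text
    val -= batch * 9;                      // lands on 8991
    std::cout << batch << " steps, value now " << val << "\n";
}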
As said in the comments already, and agreeing with @paxdiablo's other answer, I'm not sure there is an algorithm that finds the ideal solution without some backtracking; and the size of the number and the time constraint might be tough as well.
A general consideration though: you might want to find a way to decide between always subtracting the highest digit (which will decrease your current number by the largest possible amount, obviously), and looking at your current digits and subtracting the one that will give you the largest "new" digit.
Say your current number only consists of digits between 0 and 5. Then you might be tempted to subtract the 5 to decrease your number by the highest possible value, and continue with the next step. If the last digit of your current number is 3, however, then you might want to subtract 4 instead, since that will give you a 9 as the new digit at the end of the number, instead of the "only" 8 you would get if you subtracted 5.
Whereas if you already have a 2 and two 9s among your digits, and the last digit is a 1, then you might want to subtract the 9 anyway, since you will be left with the second 9 in the result (at least in most cases; in some edge cases it might get obliterated from the result as well). Subtracting the 2 instead would not have the advantage of giving you a "high" 9 that you would otherwise not have in the next step, and it would have the disadvantage of not lowering your number by as large an amount as subtracting the 9 would...
But every digit you subtract will affect not only the next step directly, but also the following steps indirectly; so again, I doubt there is a way to always choose the ideal digit for the current step without backtracking or similar measures.
Being given a positive integer n for the input, output a list of
consecutive positive integers which, summed up, will make up this
number, or IMPOSSIBLE if it can't be done. In case of multiple possible
answers, output any with the least summands.
It's a problem from the CERC competition held a few days ago and, while I can solve it using some weird theories I made up, I have no idea how to tackle it elegantly.
I know that no number of the form 2^i (where i is a non-negative integer) can be made, and that any odd number can be represented in the two-summand form floor(n/2) + (floor(n/2)+1). But when it comes to even numbers, I'm clueless about an elegant solution, and I heard it can be solved with a single formula. I thought about dividing the number until we're left with something odd and then trying to put the summands of this odd number in the center of the even one, or trying to put the number divided by the odd one in the middle, but it doesn't sound right, not to mention elegant. It was solved in under 10 minutes by some teams, so the route I mention above is almost certainly wrong and I'm overthinking it way too much.
What would be the best and fastest way to solve this?
The sum of the consecutive integers from m to n is
n(n + 1)/2 - m(m - 1)/2
(i.e. the sum of the numbers from 1 to n, less the sum from 1 to m - 1)
which is
((n^2 - m^2) + (n + m))/2
which is
(n + m)(n - m + 1)/2
So if you need to make that equal to x for some x:
2x = (n + m)(n - m + 1)
So look for factorisations of 2x and see which fit that pattern.
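A small sketch of that search (my own illustration; the name consecutiveSum is invented). Writing k = n - m + 1 for the number of summands, 2x = (n + m) * k, and trying k in increasing order yields an answer with the fewest summands first:

#include <cstdint>
#include <iostream>

void consecutiveSum(uint64_t x) {
    uint64_t twoX = 2 * x;
    // k summands starting at m >= 1 sum to at least 1 + 2 + ... + k = k(k+1)/2
    for (uint64_t k = 2; k * (k + 1) <= twoX; ++k) {
        if (twoX % k != 0) continue;
        uint64_t s = twoX / k;               // s = n + m
        if ((s - k + 1) % 2 != 0) continue;  // n + m and n - m + 1 have opposite parity
        uint64_t m = (s - k + 1) / 2;
        std::cout << x << " = " << m << " + ... + " << (s - m) << "\n";
        return;
    }
    std::cout << "IMPOSSIBLE\n";
}

int main() {
    consecutiveSum(30); // prints 30 = 9 + ... + 11
    consecutiveSum(8);  // a power of two: prints IMPOSSIBLE
}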
Okay, since @Pokechu22 was interested in how my team solved it (and it seems at least one more person was, judging by the upvote next to the comment :)), here it goes. It works and it fit the time limit, but beware that it's kinda twisted, and even after those few days I have serious doubts whether I still remember how it worked :D Let's get to it.
So first, we check two special cases: if n is 1, output IMPOSSIBLE, and if n is odd, output floor(n/2) + (floor(n/2)+1). Now for the real fun:
We know that any even number has to be a sum of at least three consecutive integers, because for it to be a sum of two, we'd need two consecutive odd numbers or two consecutive even numbers, and that's impossible. Thus, we derived a sequence of sequences and their corresponding sums in the form of:
Length | Sequence          | Sum
3      | 1-2-3             | 6
4      | 0-1-2-3           | 6
5      | 0-1-2-3-4         | 10
6      | IMPOSSIBLE        | N/A
7      | 1-2-3-4-5-6-7     | 28
8      | 0-1-2-3-4-5-6-7   | 28
9      | 0-1-2-3-4-5-6-7-8 | 36
10     | IMPOSSIBLE        | N/A
...and so on
As it turns out, every possible answer sequence is just one of those, shifted to the right by some number. To find out what that number is, for each row of the table above we check whether n - sum_i is divisible by length_i. If it is, we have our solution: all that's left is to shift the given sequence right by that many places (in other words, add (n - sum_i) / length_i to every element of sequence_i). When using a sequence with a 0 at the beginning, we also have to remember to shift it one additional place to the right (start at the next integer). Thus, we arrive at code looking like this:
bool shouldBeShifted = false;
bool foundSolution = false;
long long sum_i = 6, length_i = 3;
while (n - sum_i >= 0)
{
    long long tmp = n - sum_i;
    // row starting at 1 (lengths 3, 7, 11, ...)
    if (tmp % length_i == 0) { foundSolution = true; break; }
    ++length_i;
    // row starting at 0 with the same sum (lengths 4, 8, ...): shift one extra place
    if (tmp % length_i == 0) { foundSolution = true; shouldBeShifted = true; break; }
    sum_i += length_i;
    ++length_i;
    tmp = n - sum_i;
    // row starting at 0 (lengths 5, 9, ...)
    if (tmp % length_i == 0) { foundSolution = true; break; }
    // the next length is IMPOSSIBLE, so skip ahead to the next group of rows
    length_i += 2;
    sum_i = ((3 * length_i) - 3) + sum_i;
    shouldBeShifted = false;
}
If, after the loop, foundSolution turned out to be true, we've found a solution of length length_i which starts at the number (n / length_i) - (length_i / 2) + (shouldBeShifted ? 1 : 0) :) Believe it or not, it does work and finds the optimal solution... :D
I would like an algorithm that uses only shift, add, or subtract operations to find whether a number is a multiple of 6. So, basically just binary operations.
So far I think I should logically right-shift the number twice to divide by 4 and then subtract 6 once from it. But I know something is wrong with my approach and cannot figure out what.
1) A simple (N & 1) == 0 checks whether the number is divisible by 2.
2) Use the bit-hack answer (from this thread) to check for divisibility by 3.
If both are true, your number is divisible by 6.
How about subtracting 6 from the number repeatedly until it reaches zero?
If you reach exactly zero, the number is divisible by 6; otherwise it is not.
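For instance (a minimal sketch of my own, assuming a non-negative n):

bool isMultipleOf6(unsigned long n) {
    while (n >= 6)
        n -= 6;        // subtract 6 until we drop below 6
    return n == 0;     // landing exactly on zero means divisible by 6
}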
OR
Keep dividing the number by 2 (a shift operation in binary) until the number is less than 12,
then subtract 6 from it. If the result is less than zero, it is not divisible;
if it is zero, it is divisible.
If not, subtract 3:
if the result is less than zero, it is not divisible;
if it is zero, it is divisible.
You could try implementing a division algorithm with the primitive operations available to you. The basic long-division algorithm from 4th grade might be enough (just do things in base 2 instead of base 10, with bit-shifting instead of multiplication).
OK, this is how I would go about it (just a first thought):
A multiple of 6 is both a multiple of 2 and a multiple of 3, so it should satisfy the divisibility criteria of 2 and 3 at the same time... So...
Check divisibility by 2:
1. Right-shift the number.
2. If the remainder > 1, repeat step 1.
3. If the remainder = 1, then FALSE, else continue.
Checking divisibility by 2 could obviously also be implemented as (N & 1) == 0, as stated above. This simply checks the last digit of N's binary representation: if it's 1, N is odd (thus NOT divisible by 2); if it's 0, it's perfectly divisible...
Check divisibility by 3:
1. Subtract 3.
2. If the remainder > 3, repeat step 1.
3. If the remainder > 0, then FALSE, else TRUE.
Reference: http://wiki.answers.com/Q/How_can_you_tell_if_a_number_is_a_multiple_of_6
It is a multiple of six if BOTH of the following statements are true:
1) The last digit (ones place) is 0, 2, 4, 6, or 8.
2) When you add all the digits together, you get a multiple of 3.
Reference: http://wiki.answers.com/Q/How_can_you_tell_if_a_number_is_a_multiple_of_3
1) Start with a number N.
2) Sum the digits of the number, and get M.
3) If M is 10 or larger, set N=M and return to stage 2.
4) Otherwise, M is now smaller than 10. If M is 0, 3, 6 or 9, then N is a multiple of 3.
If we extend the range of operations to bit-masking and bit-shifting, it's simple.
As quite a few have stated, divisibility by two is equivalent to (n & 1) == 0. Divisibility by 3 is (relatively) easy in binary. Initialize an accumulator a to 0, then repeat a += (n & 3); n = (n >> 2); until n is 0. Since 4 ≡ 1 (mod 3), this leaves a ≡ n (mod 3). Apply the same reduction to a until a single base-4 digit remains; n is divisible by 3 if (and only if) that digit is 0 or 3.
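Written out as code, the approach might look like this (my own sketch, with invented names):

#include <cstdint>

// Divisibility by 3 via base-4 digit sums: since 4 == 1 (mod 3),
// summing the base-4 digits preserves the value mod 3.
bool isMultipleOf3(uint64_t n) {
    while (n > 3) {            // reduce until a single base-4 digit remains
        uint64_t a = 0;
        while (n > 0) {
            a += n & 3;        // add the lowest base-4 digit
            n >>= 2;           // shift it away
        }
        n = a;                 // a is congruent to the old n mod 3, but smaller
    }
    return n == 0 || n == 3;
}

bool isMultipleOf6(uint64_t n) {
    return (n & 1) == 0 && isMultipleOf3(n); // even and divisible by 3
}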