I have to write a recursive function that counts the number of ones that appear in the numbers from 0 to n.
So f(16) = 9
(1, 10, 11, 12, 13, 14, 15, 16; note that 11 contains two 1s)
This is obviously homework, so I would appreciate it if you did NOT post any code, just the reasoning behind it.
What I've reasoned so far is that taking a number % 10 tells you whether its least significant digit is a one, and that integer division by 10 drops that digit.
So I'm guessing the approach is to check whether number % 10 == 1 and then call the function with f(n/10), but then I get lost in the actual implementation.
I would appreciate it if you could comment on what approach you would use. It has to be recursive only because it's homework; the procedural approach was trivial.
For these types of problems, I find it helps to write some sort of diagram showing the patterns. For instance, if you count by tens, you know that the first set (0-9) contains one 1.
(0-9)       -- 1
(10-19)     -- 11
(20-29)     -- 1
  ...       -- 1 each
(100-109)   -- 11
(110-119)   -- 21
(120-129)   -- 11
  ...       -- 11 each
(200-209)   -- 1
(210-219)   -- 11
(220-229)   -- 1
  ...       -- 1 each
...
(1000-1009) etc...
It may take a while, but this will help you find patterns so you can come up with a more systematic answer. I don't want to give you too much help since it's a homework problem, but that's the approach I take when I'm solving creative math problems.
You've got two parts to your problem.
Find the number of ones in the number itself.
Find the number of ones in all the numbers less than it (but greater than zero).
Part one first:
If the right-most digit is 1, then number % 10 == 1.
If the number is > 9, you need to check the other digits; you can do this by running the same test on the number after an integer division by 10. If the number is <= 9, that division gives you zero.
So your OnesInNumber function is a bit like:
If number == 0, return 0.
Otherwise call OnesInNumber on number / 10.
If number % 10 == 1 add 1 to that result.
Return the result.
This will, for example, give you 1 when called on 10, 1, 12, or 303212, give you 2 when called on 11, and so on.
Your OnesInZeroUntil function is then like:
If number <= 0, return 0.
Otherwise call OnesInZeroUntil on number - 1.
Add OnesInNumber(number) to this.
Return the result.
So you have a recursive function that works out the number of 1s in a number, and another recursive function that works out the number of 1s in every number up to that one, building on the first function.
That'd be enough to write the two functions quickly, had you not requested that we not post code.
(Tip: If your teacher isn't already requiring it, see if you can work out how to do this without recursion. Every recursive function can be rewritten in a non-recursive form, and being able to do that is a practical skill that some people teaching recursion don't seem to cover.)
You are quite right in your approach. For every number you want a function that returns the number of ones in its decimal representation. A recursive formulation of this follows (note you could also do this iteratively).
Like all recursive functions, you need your base case, i.e., if the input is 0, return 0. Besides that (without giving it all away), you just need to add your current result to the sub-result:
if number == 0
    return 0
if number % 10 == 1
    return myFunc(number / 10) + 1
else
    return myFunc(number / 10)
However, as I said before, there is no need to use recursion. An iterative solution is probably better here since the function is linear with respect to the number of digits.
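For later reference, here is a minimal sketch in Ruby of the two recursive pieces described in these answers; the question doesn't name a language, and the method names are my own:

def ones_in_number(n)
  return 0 if n == 0
  ones_in_number(n / 10) + (n % 10 == 1 ? 1 : 0)
end

def ones_up_to(n)
  return 0 if n <= 0
  ones_up_to(n - 1) + ones_in_number(n)
end

ones_up_to(16)  # => 9, matching the f(16) example in the question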
Given a sorted array, I believe an equation can be created to determine the index where any given number would be inserted.
For instance, given the sorted array of [ -1, 0, 1 ], there is an input/output table for my desired function like this:
  x   | f(x)
------+------
 -2   | 0
 -1.5 | 0
 -1   | 0, 1
 -0.5 | 1
  0   | 1, 2
  0.5 | 2
  1   | 2, 3
  1.5 | 3
  2   | 3
I have chosen to use x as the number I wish to insert into the array, and the function would output the indices that an insert function could use to insert x into the array sorted.
What interests me is that given this simplification of the problem, I notice two things:
The output of the function must be an integer
There are cases where the function could return 2 different values
And this is where I leave my thoughts to those who have more experience than me...
My first thought is that the output reminds me of Karnaugh mapping. In some cases there are two values the output can take, but it doesn't matter which result is chosen.
My second thought is of quantum computing. I am not experienced enough to be specific, but if two functional outputs can be mapped to the qubit and processed quantumly, what opportunities does that hold in such a context? Could a quantum computer help derive this formula I'm looking for?
My example is very simple, but I just wanted to share this here in case anyone was interested.
A polynomial of degree n can be uniquely defined by n+1 points. However, you'll want a polynomial that can be fit to your n+1 points while remaining monotonic. I'm not entirely certain how to accomplish this, but I'm sure curve-fitting libraries have already solved it for us. It probably just means adding a few more degrees of freedom to the polynomial and fitting subject to some constraints.
Regarding your note on superpositioning: I doubt it has many implications for the world of quantum computing. Actually, I would argue that F shouldn't map to more than one value, as that would violate the definition of a function. If there are two indices it could map to, it's because the values are equal, and hence order doesn't matter, so you should just pick one (insert before, or insert after, an equal value), as you'd have to do in the implementation anyway.
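For what it's worth, the usual way to get a single index without any curve fitting is a plain binary search for the leftmost insertion point, i.e., the "insert before equal values" convention. A minimal Ruby sketch, with names of my own choosing:

def insertion_index(sorted, x)
  lo, hi = 0, sorted.length
  while lo < hi
    mid = (lo + hi) / 2
    if sorted[mid] < x
      lo = mid + 1   # x belongs somewhere to the right of mid
    else
      hi = mid       # x belongs at mid or to its left
    end
  end
  lo
end

insertion_index([-1, 0, 1], -0.5)  # => 1
insertion_index([-1, 0, 1], 0)     # => 1 (picking the smaller of the two valid answers)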
Given a binary string or a binary number (one is free to take it either way), I need to find the next smaller binary number while retaining the number of 0s and 1s of the original binary string or number.
For example:
If the given binary number or string was 11100000, the required output would be 11010000.
If the given binary number or string was 11010000, the required output would be 11001000.
Of course, I can do this with a brute-force approach, but I need a better solution. What would be an optimal way of doing it? I was wondering if someone could help me reach a solution to this in O(1) using bitwise operations.
This is an elaboration on Setzer22's answer, which was close but which lacked one vital piece.
FindNextSmallestWithSameNumberOfBits(string[1...n])
1. for i = n - 1 to 1 do
2.     if string[i+1] = 0 and string[i] = 1 then
3.         string[i] := 0
4.         string[i+1] := 1
5.         sort(string[i+2...n], descending)
6.         return string[1...n]
7. return "no solution"
This is an O(n) algorithm, which is a provably optimal asymptotic bound for this problem when the input size is unrestricted; while this is "bitwise" in the sense that it operates on bits, it clearly doesn't use what one would typically think of as "bitwise operations." Luckily, for inputs which can be of arbitrary length, there can be no asymptotic advantage to using traditional "bitwise operations" over this method. For inputs of fixed length, to which asymptotic analysis does not readily apply, one might do better using a technique such as those linked to by Asuka in the other answer to this question.
Note, based on comments, that sorting on line 5 can be replaced with simply reversing the string. The reason for this is that this substring is guaranteed to be of the form 0...01...1 (that is, any 0s to the left of any 1s) since, if it weren't, we'd have already found an occurrence of the string 10 and satisfied the condition on line 2.
The key that was missing in Setzer22's answer is that, once you move the rightmost 1 that has a 0 to its right one position to the right, you then need to shift all the 1s that lie even further right as far left as they will go. The reason is that the 1 bit you shifted right is more significant than the bits to its right, so shifting any of those less significant 1s to the left gives a larger number, but not large enough to undo the effect of reducing the more significant bit.
Clarification based on comments: notice that in line 7 of the pseudocode presented above, it's possible that the algorithm won't return a valid string. The reason is that, sometimes, there is no string with the same number of 1s that represents a smaller number. This occurs if and only if the substring 10 does not appear anywhere in the input string (in which case the condition on line 2 is never satisfied).
This isn't the clearest explanation of all time, so please let me know if it needs more work. Here's an example:
10011 // input
01011 // right-shift the right-most 1 bit with a 0 to the right of it
01110 // left-shift all 1 bits to the right of the right-shifted bit as far as possible
1010100011 // input
1010010011 // right-shift the right-most 1 bit with a 0 to the right of it
1010011100 // left-shift all 1 bits to the right of the right-shifted bit as far as possible
One way to clarify this which just occurred to me: right-shifting the 1 bit guarantees that the result will be smaller than the original number; left-shifting the 1s to its right guarantees that the result will be no smaller than necessary.
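If it helps, here is a rough Ruby translation of the pseudocode above (the method name is mine), using the reversal trick from the comments in place of the sort:

def next_smaller_same_ones(bits)
  s = bits.chars
  (s.length - 2).downto(0) do |i|
    next unless s[i] == '1' && s[i + 1] == '0'
    s[i], s[i + 1] = '0', '1'
    # The suffix is of the form 0...01...1, so reversing it moves its 1s
    # as far left as they can go.
    s[(i + 2)..-1] = s[(i + 2)..-1].reverse
    return s.join
  end
  nil  # no smaller number with the same number of 1s exists
end

next_smaller_same_ones("11100000")  # => "11010000"
next_smaller_same_ones("10011")     # => "01110"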
Maybe this is what you are looking for:
https://github.com/hcs0/Hackers-Delight/blob/master/snoob.c.txt
The functions snoob(), snoob1(), snoob2(), snoob3(), snoob4() and next_set_of_n_elements() are various implementations.
These functions are helper functions which are called by the above functions:
ntz() stands for "number of trailing zeros"
nlz() stands for "number of leading zeros"
pop() stands for "population count" (the number of bits set, i.e., the number of 1s, in the word)
This is very efficient, but it only works on fixed-size integers (e.g., 32-bit or 64-bit).
I'd like to be able to have a variable "num" that I can increment and have it be understood what number base it's in.
For example, if num is base 7 and is equal to 66, if I do num+= 1, num should be set to 100.
One solution involves to_s and to_i; however, there's so much converting going on that it doesn't seem like it would be very efficient.
def increment_with_base(number, base)
  number_base_ten = number.to_s.to_i(base)
  number_base_ten += 1
  number_base_ten.to_s(base).to_i
end
Is there something more appropriate than this? Is it possible to tell Ruby which number base I'm using so I don't have to do so many conversions?
As I mentioned in a comment below, I'm very familiar with number bases, just not number bases in Ruby. I actually need to display each incremented number (and I'll be incrementing a lot).
Reading this next part isn't necessary if you know the answer to the question. However, I've added it to clarify why I'm doing what I'm doing. No need to read what follows unless you're looking for more information.
For further information, what I'm doing is generating a set of graphs where each node only has either 0 or 1 transition. Each node is represented by a digit, and the specific digit represents which other node there is a directed edge towards. For example, the number 4.3.1.0 is a graph with four nodes where the first node has an edge to the fourth node, the second has an edge to the third, the third has an edge to the first, and the fourth node does not have any transition.
So if I wanted to generate all four node graphs, where each node only has one exiting edge, I'd need to count from 0.0.0.0 to 4.4.4.4.
Bases don't matter for arithmetic. Numbers are just numbers; all bases are just slightly different ways of representing them. For example, 66_7 (where x_b means x written in base b) is 48_10, and regardless of what base you think in while adding one to it, the result is always the same number: 100_7 = 49_10 = 31_16 = ... and so on for any other base (even the really weird ones like the golden ratio base). It's the same operation with the same result.
Just do your arithmetic on numbers and only worry about base when it actually matters (e.g. when rendering it for a user). Even if you need to display it after every update, you still save a couple conversions, which is not only faster but also much simpler and cleaner.
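A minimal sketch of that idea in Ruby: parse the base-7 representation once, do plain integer arithmetic, and convert back only for display:

count = Integer("66", 7)   # parse the base-7 representation once => 48
count += 1                 # ordinary integer arithmetic, no base involved
count.to_s(7)              # => "100", converted only when you display it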
If you simply want to enumerate in a different base, you don't need to explicitly increment:
BASE = 4
MAX = 100

(0..MAX).each do |x|
  puts x.to_s(BASE)
end
This isn't much code, and it's pretty fast. Is that suitable for your requirements?
And to better match your underlying problem (as far as I understand it):
NODES = 4
BASE = NODES + 1
MAX = BASE ** NODES

(0...MAX).each do |x|
  puts(("0" * NODES + x.to_s(BASE)).chars.to_a[-NODES..-1].join('.'))
end
I timed the above at 0.01 seconds on my laptop. But if you try with 9 nodes, it takes much much longer (no surprise, you would be looping one billion times!)
Continued from my last question "Range Minimum Query approach (from tree to restricted RMQ)" (It's recommended to give it a read)
Again, from this tutorial on TopCoder, I have a few questions here and there, and I hope someone can clear them up.
So if I transform an RMQ (Range Minimum Query) problem into an LCA (Lowest Common Ancestor) problem and then transform it back, I get an array that's simplified. (Both transforms can be found in the tutorial, and the simplified array is the array L discussed in "From LCA to RMQ".)
Anyway, I can get that array by using an Euler tour, and that's the core part of the whole calculation.
First, I need to make it even simpler by making the whole array consist of only 1s and -1s, so this is what I do: Ls[i] = L[i] - L[i-1].
The second step is actually partition, and that's simple enough, but there's this third step that confuses me.
Let A'[i] be the minimum value for the i-th block in A and B[i] be the
position of this minimum value in A.
A refers to the L array in this sentence, so the minimum value would always be 1 or -1, and there's gonna be multiple 1s and -1s. It confuses me since I don't think this makes calculation easier.
The fourth step,
Now, we preprocess A' using the ST algorithm described in Section 1.
This will take O(N/l * log(N/l)) = O(N) time and space.
If A' only keeps records of 1s and -1s, it would seem useless to do anything with it.
The last step,
To index table P, preprocess the type of each block in A and store it
in array T[1, N/l]. The block type is a binary number obtained by
replacing -1 with 0 and +1 with 1.
What does this mean? To calculate each kind of combination, like 000, 001, ...?
It looks like multiple questions, but I was hoping that someone could just walk me through these last steps. Thanks!
Hopefully this helps explain things.
A refers to the L array in this sentence, so the minimum value would always be 1 or -1, and there's gonna be multiple 1s and -1s. It confuses me since I don't think this makes calculation easier.
I think that the author is mixing up terms here. In this case, I believe that array A refers to the array of original values before they've been preprocessed into -1's and +1's. These values are good to have lying around, since having the minimum value computed for each block of the original array makes it a lot faster to do RMQ. More on that later. For now, don't worry about the +1 and -1 values. They come into play later.
If A' only keep records of 1s and -1s, it would seemed useless to do anything on it.
That's true. However, here A' holds the minimum values from each block before they've been preprocessed into -1 and +1 values, so this actually is an interesting problem to solve. Again, the -1 and +1 steps haven't come into play yet.
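To make that interpretation concrete, here is a rough Ruby sketch (the names are mine) of computing A'[i], the minimum of each block, and B[i], the position of that minimum within A, for a block size l:

def block_minima(a, l)
  a_min = []   # A': minimum value of each block
  b_pos = []   # B : index in A where that minimum occurs
  a.each_slice(l).with_index do |block, k|
    m = block.min
    a_min << m
    b_pos << k * l + block.index(m)
  end
  [a_min, b_pos]
end

block_minima([3, 1, 2, 0, 4, 5, 2, 2], 4)  # => [[0, 2], [3, 6]]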
To index table P, preprocess the type of each block in A and store it in array T[1, N/l]. The block type is a binary number obtained by replacing -1 with 0 and +1 with 1.
This is where the -1 and +1 values come in. The key idea behind this step is that with small block sizes, there aren't very many possible combinations of -1's and +1's in a block. For example, if the block size is 3, then the possible blocks are
---
--+
-+-
-++
+--
+-+
++-
+++
Here, I'm using + and - to mean +1 and -1.
The article you're reading gives the following trick. Rather than using -1 and +1, use binary 0 and 1. This means the possible blocks are
000 = 0
001 = 1
010 = 2
011 = 3
100 = 4
101 = 5
110 = 6
111 = 7
The advantage of this scheme is twofold. First, since there are only finitely many blocks, it's possible to precompute, for each possible block, the RMQ answer for any pair of indices within that block. Second, since each block can be interpreted as an integer, it's possible to store the answers to these questions in an array keyed by integers, where each integer is what you get by converting the block's -1 and +1 values into 0s and 1s.
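As a concrete illustration of that encoding, here is a small Ruby sketch (the name is mine) that turns a block of +1/-1 values into its integer type, treating the leftmost value as the most significant bit:

def block_type(block)
  block.reduce(0) { |type, step| (type << 1) | (step == 1 ? 1 : 0) }
end

block_type([-1, -1, -1])  # => 0  ("000")
block_type([+1, -1, +1])  # => 5  ("101")
block_type([+1, +1, +1])  # => 7  ("111")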
Hope this helps!
I was asked the following question in an interview. I don't have any idea about how to solve this. Any suggestions?
Given a starting and an ending integer as user input,
generate all integers with the following property.
Example:
123:    1+2 = 3,           valid number
121224: 12+12 = 24,        valid number
1235:   1+2 = 3, 2+3 = 5,  valid number
125:    1+2 < 5,           invalid number
A couple of ways to accomplish this are:
Test every number in the input range to see if it qualifies.
Generate only those numbers that qualify. Use a nested loop for the two starting values, and append the sum of the loop indexes to the concatenated loop indexes to come up with a qualifying number. Exit the inner loop when the appended number is past the upper limit.
The second method might be more computationally efficient, but the first method is simpler to write and maintain and is O(n).
I don't know what the interviewer is looking for, but I would suspect the ability to communicate is more important than the answer.
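As a sketch of the first approach, here is one way to test a single number in Ruby; the question doesn't pin down exactly how the digits split into components, so this version tries every possible pair of leading components, which is my own reading of the rule:

def valid?(n)
  s = n.to_s
  (1..s.length - 2).each do |i|        # digits in the first component
    (i + 1..s.length - 1).each do |j|  # end of the second component
      a, b, rest = s[0...i].to_i, s[i...j].to_i, s[j..-1]
      until rest.empty?
        sum = (a + b).to_s
        break unless rest.start_with?(sum)
        rest = rest[sum.length..-1]
        a, b = b, a + b
      end
      return true if rest.empty?
    end
  end
  false
end

(100..130).select { |n| valid?(n) }  # => [101, 112, 123] under this split rule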
The naive way to solve this problem is by iterating over the numbers in the given range, parsing each number into a digit sequence, and then testing the sequence against the rule. There is an optimization in that the problem essentially asks you to find Fibonacci-like sequences, so you can use two variables or registers and add them sequentially.
It is unclear from your question whether the component numbers have to have the same number of digits. If not, then you will have to generate all the combinations of the component number arrangements.