What might be a simple Ruby way to round numbers using probability, i.e., based on how close the value is to one boundary or the other (floor or ceiling)?
For example, given a current price value of 28.33, I need to add 0.014.
That is equivalent to starting with 28.34 and needing to add 0.004, but the final value must be rounded to two decimal places (which can be provided as a parameter, or fixed for now).
The final value should therefore be:
28.34 with 60% chance, since it is that much closer, OR
28.35 with 40% random chance
The reason it occurred to me that this could serve best is that the application is stateless and independent across runs, but it still needs to approximate the net effect of accumulating the less significant digits that are normally rounded into oblivion (e.g. micropenny values that do have an impact over time). For example, reducing a stop-loss by some variable increment every day (a subtraction like -0.014, instead of the addition above).
It would be useful to extend this method to the Float class directly.
How about:
rand(lower..upper) < current ? upper.round(2) : lower.round(2)
EDIT:
The above only works on Ruby 1.9.3 or later (earlier versions do not support rand with a Range).
Otherwise:
random_number = rand * (upper-lower) + lower
random_number < current ? upper.round(2) : lower.round(2)
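For the example from the question, here is a minimal sketch of how lower, upper, and current could be derived (this derivation is my own; the answer leaves it implicit):
current = 28.33 + 0.014                  # ~28.344
lower   = (current * 100).floor / 100.0  # 28.34
upper   = (current * 100).ceil / 100.0   # 28.35
# random_number is uniform over the interval, so it falls below current with
# probability (current - lower) / (upper - lower) = 0.4, selecting the farther
# boundary 28.35; the closer boundary 28.34 wins the remaining 60%.
random_number = rand * (upper - lower) + lower
random_number < current ? upper.round(2) : lower.round(2)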
Wound up using this method:
class Float
  def roundProb(delta, prec = 2)
    ivalue = self
    chance = rand # uniform in 0..1, nominally averaging 0.5
    # lower = ((ivalue + delta) * 10**prec - 0.5).round / 10.0**prec # aka floor
    # upper = ((ivalue + delta) * 10**prec + 0.5).round / 10.0**prec # ceiling
    ((ivalue + delta) * 10**prec + chance - 0.5).round / 10.0**prec # proportional probability
  rescue StandardError
    puts $!, $@ # error message and backtrace
  end
end
28.33.roundProb(0.0533)
=> 28.39
Maybe not the most elegant approach, but it seems to work for the general case of any precision (default 2). It even works on Ruby 1.8.7, which I'm stuck with in one case and whose round() lacks a precision parameter.
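As a quick sanity check (my own addition, using the roundProb above): averaged over many runs, the probabilistic rounding should approach the exact, unrounded accumulation.
trials  = 100_000
results = (1..trials).map { 28.33.roundProb(0.0533) }
p results.inject(:+) / trials  #=> ~28.3833, the exact value of 28.33 + 0.0533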
I'm getting a confusing result when doing an exponentiation in Ruby:
shifted_x = -97.0
exponent = 1.5
shifted_x**exponent
# Result: (0.0-955.3392067742221i)
-97.0**1.5
# Result: -955.3392067742221
My expectation is that the results would be the same but they are not. What changes when using the variable that causes ruby to return an imaginary (or complex) number?
Operator precedence.
-97.0**1.5 is equivalent to -(97.0**1.5)
shifted_x**exponent is, of course, equivalent to (-97.0)**1.5
Note that (-97.0)**1.5 is equivalent to sqrt((-97)^3), and taking the square root of a negative real number gives you a complex number.
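To make the precedence explicit with parentheses:
-(97.0 ** 1.5)  #=> -955.3392067742221        (how -97.0**1.5 parses)
(-97.0) ** 1.5  #=> (0.0-955.3392067742221i)  (negative base, fractional exponent: Complex)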
I'm trying to find the decimal value from a percentage that a user inputs.
For example, if a user inputs "15", I will need to do a calculation of 0.15 * number.
I've tried using .to_f, but it returns 15.0:
15.to_f
#=> 15.0
I also tried to add 0. to the beginning of the percentage, but it just returns 0:
15.to_s.rjust(4, "0.").to_i
#=> 0
Divide by 100.0
The easiest way to do what you're trying to do is to divide your input value by a Float (keeping in mind the inherent inaccuracy of floating point values). For example:
percentage = 15
percentage / 100.0
#=> 0.15
One benefit of this approach (among others) is that it can handle fractional percentages just as easily. Consider:
percentage = 15.6
percentage / 100.0
#=> 0.156
If floating point precision isn't sufficient for your use case, then you should consider using Rational or BigDecimal numbers instead of a Float. Your mileage will very much depend on your semantic intent and accuracy requirements.
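For instance, a minimal sketch of the exact alternatives:
require 'bigdecimal'
Rational(15, 100)       #=> (3/20), exact rational arithmetic
BigDecimal("15") / 100  #=> 0.15e0, arbitrary-precision decimal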
Caveats
Make sure you have ahold of a valid Integer in the first place. While others might steer you towards String#to_i, a more robust validation is to use Kernel#Integer so that an exception will be raised if the value can't be coerced into a valid Integer. For example:
print "Enter integer: "
percentage = Integer gets
If you enter 15\n then:
percentage.class
#=> Integer
If you enter something that can't be coerced to an Integer, like foo\n, then:
ArgumentError (invalid value for Integer(): "foo\n")
Using String#to_i is much more permissive, and can return 0 when you aren't expecting it, such as when called on nil, an empty string, or alphanumeric values that don't start with an integer. It has other interesting edge cases as well, so it's not always the best option for validating input.
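A few of those edge cases, for illustration:
"15".to_i     #=> 15
"12abc".to_i  #=> 12  (leading digits parsed, the rest silently ignored)
"abc".to_i    #=> 0
"".to_i       #=> 0
nil.to_i      #=> 0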
I'm trying to find the amount from a percentage that a user inputs
If you retrieve the input via gets, you typically convert it to a numeric value first, e.g.
percentage = gets.to_i
#=> 15
Ruby is not aware that this 15 is a percentage. And since there's no Percentage class, you have to convert it into one of the existing numeric classes.
15% is equal to the fraction 15/100, the ratio 15:100, or the decimal number 0.15.
If you want the number as a (maybe inexact) Float, you can divide it by 100 via fdiv:
15.fdiv(100)
#=> 0.15
If you prefer a Rational, you can use quo (though it may also return an Integer):
15.quo(100)
#=> (3/20)
Or maybe BigDecimal for an arbitrary-precision decimal number:
require 'bigdecimal'
BigDecimal(15) / 100
#=> 0.15e0
BigDecimal also accepts strings, so you could pass the input without prior conversion:
input = gets
BigDecimal(input) / 100
#=> 0.15e0
I want to get the most precise time possible using Ruby. For example:
3.times.map do
Thread.new do
# Expect 3 different results, one from each thread
p Time.now.precis_time
end
end.each(&:join)
However, even using strftime, I still cannot achieve this. Is there any other way to get it?
The most precise timer available to Ruby is Process::clock_gettime. To avoid losing precision to float rounding, use the :nanosecond unit:
3.times { p Process.clock_gettime(Process::CLOCK_REALTIME, :nanosecond) }
# => 1491185078101717000
# => 1491185078101741000
# => 1491185078101747000
EDIT: This is the same time that is available by Time.now. On Linux, the two have nanosecond precision. However, there is another clock that has nanosecond precision even on OSX: CLOCK_MONOTONIC. This clock does not track time from epoch, but time from "some event", this event normally being your computer's boot time. To get the most precise time, one can take the difference between CLOCK_REALTIME and CLOCK_MONOTONIC and apply it later:
clock_diff = Process.clock_gettime(Process::CLOCK_REALTIME, :nanosecond) -
             Process.clock_gettime(Process::CLOCK_MONOTONIC, :nanosecond)
3.times {
  nsec = Process.clock_gettime(Process::CLOCK_MONOTONIC, :nanosecond) + clock_diff
  time = Time.at(nsec / 1_000_000_000, nsec % 1_000_000_000 / 1_000.0)
  p time.strftime("%Y-%m-%d %H:%M:%S.%N")
}
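As an aside (my own illustration, not part of the question), the monotonic clock is also the right tool for measuring elapsed time, because unlike wall-clock time it never jumps backwards:
t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
sleep 0.25  # stand-in for the work being timed
t1 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
puts "elapsed: #{t1 - t0} seconds"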
On Linux, I think the most precise time is just Time.now. The to_r method "is intended to be used to get an accurate value representing the nanoseconds since the Epoch" (from the docs).
t = Time.now
p t.to_r           #=> (1491206113721862629/1000000000)
p [t.to_i, t.nsec] #=> [1491206113, 721862629]
On JRuby, you can use java.lang.System.nano_time:
java.lang.System.nano_time - java.lang.System.nano_time
# -15607
to get nanoseconds since a fixed but arbitrary origin. From the documentation:
This method can only be used to measure elapsed time and is not
related to any other notion of system or wall-clock time. The value
returned represents nanoseconds since some fixed but arbitrary origin
time (perhaps in the future, so values may be negative). The same
origin is used by all invocations of this method in an instance of a
Java virtual machine; other virtual machine instances are likely to
use a different origin.
If you want a precise Time with Java < 9, you could use currentTimeMillis:
java.lang.System.current_time_millis
#=> 1491214503112
But then, you wouldn't get more information than from Time.now:
Time.now.to_f
#=> 1491214592.562
So Time.now might be your best bet: it will work on any Ruby version on any system. Note that nanosecond precision doesn't mean nanosecond accuracy.
I dare say you can ignore any digits corresponding to less than a millisecond. You could output the distance between NYC and Los Angeles in micrometers; it doesn't mean it would be useful.
I am trying to transfer money from one "account" to another:
puts ("\nTransfer how much?")
require 'bigdecimal'
amount = gets.chomp
amount = BigDecimal(amount) #<--Does BigDecimal have to work with strings???
puts ("\nTransfer to which account?")
acct_to = gets.chomp.to_i #<--Accounts are numbered previously for the user.
acct_to = acct_to - 1 #<--Since the account numbers are stored in an array...
#having a problem with #{amount} in the string below. It is showing something like
#1.2E0???
puts ("\nTransfer #{amount} from #{names[acct_from]} to #{names[acct_to]}? [1 - yes] / [2 - no]")
#Makes transfer==============================
e = gets.chomp.to_i
if e == 1
puts ("\nTransferring!")
sum1 = 0
sum2 = 0
sum1 = BigDecimal(ac[names[acct_from]].to_s) #<-- ac is a hash
sum2 = BigDecimal(ac[names[acct_to]].to_s)
ac[names[acct_from]] = sum1 - amount
ac[names[acct_to]] = sum2 + amount
puts ("\n#{names[acct_from]}'s new balance is #{ac[names[acct_from]]}.")
puts ("\n#{names[acct_to]}'s new balance is #{ac[names[acct_to]]}.")
end
end
OK, I have this working really well with the numbers as Floats; however, as you know, Floats are causing precision problems.
Please help me get an introductory grasp of how bigdecimal works.
Also, if you are really awesome, help me get it to work in this specific situation.
First, if it is working with Floats, you can get it to work with BigDecimal as well, and you should, for the obvious reasons.
So, to answer the first question in the comments of your code: yes, BigDecimal instantiation should work with strings, and the reason is quite simple: stringified number values are not prone to any inaccuracies and do not share the limits of float representation:
# Think of this number:
float = 23.12323423142342348273498721348923748712340982137490823714089374

# Ruby will truncate its precision to about 17 significant digits, because
# Floats are made for fast, not precise, calculation:
float #=> 23.123234231423424

# If BigDecimal were initialized with a float value, the precision would get
# lost on the way, too. Therefore, BigDecimal needs strings:
big_decimal_from_float  = BigDecimal(23.12323423142342348273498721348923748712340982137490823714089374.to_s)
big_decimal_from_string = BigDecimal("23.12323423142342348273498721348923748712340982137490823714089374")

# Now you'll see that the BigDecimal initialized "with a float value" has
# lost some precision, while the one from the string has not.
To answer your second question, 1.2E0 is just scientific notation for 1.2. BigDecimal uses scientific notation by default since it is intended for really precise calculations in science and financial math.
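If you want the conventional notation for display, BigDecimal#to_s accepts an "F" flag (the exact casing of the default output varies by Ruby version):
require 'bigdecimal'
BigDecimal("1.2").to_s       #=> "0.12e1" (scientific notation, the default)
BigDecimal("1.2").to_s("F")  #=> "1.2"    (plain decimal notation)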
To comment on your example, using BigDecimal is surely the right way to go, but you have to use it throughout and store your values accordingly. That means that if you write to an SQL database, you will have to use a decimal column with the right precision, and all values instantiated therefrom have to be BigDecimal, never Float. One Float in your entire finance application can rain on your parade if you intend to do financial math with really tiny fractions or large values.
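For example, in a hypothetical Rails migration (assuming ActiveRecord; your schema will differ), the column would be declared as a decimal with explicit precision and scale, never a float:
# Hypothetical migration: a DECIMAL(12, 2) column maps back to BigDecimal in Ruby
add_column :accounts, :balance, :decimal, precision: 12, scale: 2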
To relieve you of some of the pitfalls of money handling, have a look at the Exchange gem. I wrote it to have a way of representing money in a Ruby application using BigDecimal with ISO 4217-compatible instantiation. It may help you handle money throughout an application and avoid some of the pitfalls involved.
I suggest you use this gem: github.com/RubyMoney/money
Read a bit more about it; it works out of the box. It uses neither Float nor BigDecimal, just integers, so there is no precision loss at all.
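A minimal sketch of what that looks like (API as of recent versions of the gem):
require 'money'  # gem install money
price = Money.new(28_33, "USD")  # amounts are stored as integer cents
fee   = Money.new(1_50, "USD")
(price + fee).cents              #=> 2983, exact integer arithmetic throughout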
Ok, so say you have a really big Range in ruby. I want to find a way to get the max value in the Range.
The Range is exclusive (defined with three dots), meaning that it does not include the end object in its results. It could be made up of Integer, String, Time, or really any object that responds to #<=> and #succ (which are the only requirements for the start/end objects of a Range).
Here's an example of an exclusive range:
past = Time.local(2010, 1, 1, 0, 0, 0)
now = Time.now
range = past...now
range.include?(now) # => false
Now I know I could just do something like this to get the max value:
range.max # => returns 1 second before "now" using Enumerable#max
But this will take a non-trivial amount of time to execute. I also know that I could subtract 1 second from whatever the end object is. However, the object may be something other than Time, and it may not even support #-. I would prefer to find an efficient general solution, but I am willing to combine special-case code with a fallback to a general solution (more on that later).
As mentioned above, using Range#last won't work either, because it's an exclusive range and does not include the last value in its results.
The fastest approach I could think of was this:
max = nil
range.each { |value| max = value }
# max now contains nil if the range is empty, or the max value
This is similar to what Enumerable#max does (which Range inherits), except that it exploits the fact that each value is going to be greater than the previous one, so we can skip using #<=> to compare each value with the previous (the way Range#max does), saving a tiny bit of time.
The other approach I was thinking about was to have special case code for common ruby types like Integer, String, Time, Date, DateTime, and then use the above code as a fallback. It'd be a bit ugly, but probably much more efficient when those object types are encountered because I could use subtraction from Range#last to get the max value without any iterating.
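A sketch of that special-casing idea (my own illustration, not exhaustive):
require 'date'
# O(1) for types with a known predecessor, O(n) fallback for everything else
def max_of(range)
  last = range.last
  return last unless range.exclude_end?
  case last
  when Integer    then last - 1  # integer predecessor
  when Date       then last - 1  # one day earlier
  when Time       then last - 1  # one second earlier, matching Time#succ's step
  else
    max = nil
    range.each { |value| max = value }  # generic fallback
    max
  end
end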
Can anyone think of a more efficient/faster approach than this?
The simplest solution that I can think of, which will work for inclusive as well as exclusive ranges:
range.max
Some other possible solutions:
range.entries.last
range.entries[-1]
These solutions are all O(n), and will be very slow for large ranges. The problem in principle is that range values in Ruby are enumerated using the succ method iteratively on all values, starting at the beginning. The elements do not have to implement a method to return the previous value (i.e. pred).
The fastest method would be to find the predecessor of the last item (an O(1) solution):
range.exclude_end? ? range.last.pred : range.last
This works only for ranges whose elements implement pred. Later versions of Ruby implement pred for integers. You have to add the method yourself if it does not exist (essentially equivalent to the special-case code you suggested, but slightly simpler to implement).
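For example, a monkey-patch along these lines would let the O(1) expression work for Time ranges (assuming one-second granularity, which is the step Time#succ used):
class Time
  def pred
    self - 1  # one second earlier
  end
end
range = Time.local(2010, 1, 1)...Time.now
range.exclude_end? ? range.last.pred : range.last  #=> one second before "now"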
Some quick benchmarking shows that this last method is the fastest by many orders of magnitude for large ranges (in this case range = 1...1000000), because it is O(1):
user system total real
r.entries.last 11.760000 0.880000 12.640000 ( 12.963178)
r.entries[-1] 11.650000 0.800000 12.450000 ( 12.627440)
last = nil; r.each { |v| last = v } 20.750000 0.020000 20.770000 ( 20.910416)
r.max 17.590000 0.010000 17.600000 ( 17.633006)
r.exclude_end? ? r.last.pred : r.last 0.000000 0.000000 0.000000 ( 0.000062)
Benchmark code is here.
In the comments it is suggested to use range.last - (range.exclude_end? ? 1 : 0). It does work for dates without additional methods, but will never work for non-numeric ranges: String#- does not exist and makes no sense with integer arguments. String#pred, however, can be implemented.
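For what it's worth, here is a deliberately limited String#pred for single characters; a full inverse of String#succ, with carrying, is considerably harder:
class String
  def pred
    raise ArgumentError, "only single-character strings supported" unless length == 1
    (ord - 1).chr
  end
end
range = "a"..."e"
range.exclude_end? ? range.last.pred : range.last  #=> "d"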
I'm not sure about the speed (and initial tests don't seem incredibly fast), but the following might do what you need:
past = Time.local(2010, 1, 1, 0, 0, 0)
now = Time.now
range = past...now
range.to_a[-1]
Very basic testing (counting in my head) showed that it took about 4 seconds while the method you provided took about 5-6. Hope this helps.
Edit 1: Removed second solution as it was totally wrong.
I can't think of any way to achieve this that doesn't involve enumerating the range, unless, as already mentioned, you have other information about how the range was constructed and can therefore infer the desired value without enumeration. Of all the suggestions, I'd go with #max, since it seems the most expressive.
require 'benchmark'
N = 20
Benchmark.bm(30) do |r|
  past, now = Time.local(2010, 2, 1, 0, 0, 0), Time.now
  range = past...now

  r.report("range.max") do
    N.times { last_in_range = range.max }
  end
  r.report("explicit enumeration") do
    N.times { range.each { |value| last_in_range = value } }
  end
  r.report("range.entries.last") do
    N.times { last_in_range = range.entries.last }
  end
  r.report("range.to_a[-1]") do
    N.times { last_in_range = range.to_a[-1] }
  end
end
user system total real
range.max 49.406000 1.515000 50.921000 ( 50.985000)
explicit enumeration 52.250000 1.719000 53.969000 ( 54.156000)
range.entries.last 53.422000 4.844000 58.266000 ( 58.390000)
range.to_a[-1] 49.187000 5.234000 54.421000 ( 54.500000)
I notice that the 3rd and 4th options have significantly increased system time. I expect that's related to the explicit creation of an array, which seems like a good reason to avoid them, even if they're not obviously more expensive in elapsed time.