I'm building a simple JSON API using Rails 3.2.1 and Jbuilder on Ruby 1.8.7 (1.9.x might help me here, but my hosting provider only has 1.8.7).
Since the API consumer expects timestamps as floats, I'm currently just doing a simple to_f on the time attributes:
json.updated_at record.updated_at.to_f #=> 1328242368.02242
But to_f incurs a precision loss. This causes some issues when the client requests records that have been modified since a given point in time, as the SQL query finds the same record that the client uses for reference. I.e. when trying to find "newer" records than the example above, the SQL query (e.g. updated_at > Time.at(1328242368.02242)) returns that same record, since the actual value of updated_at is more precise and fractionally larger than the given timestamp.
In fact, record.updated_at.usec #=> 22425, i.e. 0.022425 seconds. Notice the extra decimal.
So optimally, the timestamp should be JSON'ified with 1 extra decimal, e.g. 1328242368.022425, but I can't find a way to make that happen.
updated_at.to_i #=> 1328242368 # seconds
updated_at.usec #=> 22425 # microseconds
updated_at.to_f #=> 1328242368.02242 # precision loss
# Hacking around `to_f` doesn't help
decimals = updated_at.usec / 1000000.0 #=> 0.022425 # yay, decimals!
updated_at.to_i + decimals #=> 1328242368.02242 # dammit!
I've looked around for ways to set the default float precision, but I'm stumped. Any ideas?
Edit: I should add that the API consumer isn't running JavaScript, so the float can have higher precision. It would break JS-compatibility (and thus the JSON spec) to add another digit (decimal or otherwise), since JS floats can't handle that, I believe. So perhaps I need an entirely different approach...
After the deliberation in the comments, the best option seems to be monkeypatching ActiveSupport::JSON to make it handle BigDecimals the same as Numerics:
class BigDecimal
def as_json(options = nil) self end #:nodoc:
def encode_json(encoder) to_s end #:nodoc:
end
This overrides the Rails team's decision to prevent serialised BigDecimals from being parsed as floats (and losing precision) in JSON deserialisers with no support for decimal numbers.
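With that patch in place, the timestamp can be built as a BigDecimal from the integer seconds plus the zero-padded microseconds. A minimal sketch, using the example time from the question (the `stamp` variable name is mine):

```ruby
require 'bigdecimal'

t = Time.at(1328242368, 22425)   # second argument is microseconds

# Zero-pad usec to six digits so e.g. 22425 becomes "022425"
stamp = BigDecimal("%d.%06d" % [t.to_i, t.usec])
stamp.to_s("F")  #=> "1328242368.022425"
```

In the Jbuilder template, `json.updated_at stamp` then serialises as a bare number with the full six digits, no quotes and no precision loss.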
Related
I am having real difficulty with this, and every answer I have seen doesn't seem to work. I can pass a value such as 1.44, but it arrives as 1.00: the two decimal places are being lost. I have a number of values passed from a form, which I then want to submit to an API via a call. The code is below:
IncomeWagesWeekly = params[:WagesWeekly].to_i
How do I ensure that the two decimal places are present when this is passed? Thanks for any help.
Ruby has no such thing as a fixed number of digits after the decimal point.
As was rightly mentioned in the comments, 1 is (almost) the same as 1.00.
Unless you check its type like this:
1.is_a? Integer # => true
1.is_a? Float # => false
it is all the same.
Just use 1.44.to_i.
If for some reason you want an instance of Float instead, use the to_f method. To round the number explicitly, use the round, ceil, or floor methods:
1.44.round.to_f # => 1.0
1.55.round.to_f # => 2.0
1.44.ceil.to_f # => 2.0
1.55.ceil.to_f # => 2.0
1.44.floor.to_f # => 1.0
1.55.floor.to_f # => 1.0
So perhaps don't convert to an integer at all and use to_f instead. Also consider sprintf("%.2f", your_number), though note that it returns a string.
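To make the difference concrete, a sketch assuming the form parameter arrives as the string "1.44":

```ruby
params = { WagesWeekly: "1.44" }  # hypothetical form input

params[:WagesWeekly].to_i                   #=> 1      (decimals lost)
params[:WagesWeekly].to_f                   #=> 1.44   (decimals kept)
sprintf("%.2f", params[:WagesWeekly].to_f)  #=> "1.44" (string, always two places)
```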
First, it seems that you want to do fixed-point arithmetic here. So, using a floating point number is not the right choice, since floating point calculations can produce mathematically incorrect results.
A solution for this would be either to stick with Integers (as has been suggested already), or to use the data type BigDecimal, which is defined in the Ruby standard library, and in particular its methods fix, frac and to_digits.
Now to your database: you didn't say what database you are using, or how you pass the values to it, but in general it is a bad idea to store a non-integer value in a database field that is supposed to accept integers. As you observed, the fractional part was dropped; that is correct behaviour.
You could redefine your database schema, or you can convert your decimal value yourself into something that matches the field definition in the database. Which way to go depends on what you actually want to do with the value afterwards. For instance, if you just want to display it, but not perform any calculations, you could use a string. Or, if you know that the number of fractional digits doesn't exceed a certain limit, you could define a suitable numeric format for the database column, and so on.
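A sketch of the BigDecimal route mentioned above ('bigdecimal/util' is needed for to_digits; the value 1.44 is taken from the question):

```ruby
require 'bigdecimal'
require 'bigdecimal/util'

d = BigDecimal("1.44")
d.fix.to_i        #=> 1       (integer part)
d.frac.to_s("F")  #=> "0.44"  (fractional part)
d.to_digits       #=> "1.44"  (plain decimal string, no exponent notation)
```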
RubyMonk makes a point on how you can use underscores for convenience to write large numbers that can become difficult to read without demarcation.
Their task is: Try using underscores to make a huge, readable number. They provide this code:
def describe(something)
puts "I am a: #{something.class} and I look like: #{something}"
end
def big_num
# create a huge number
end
describe(big_num)
Could anyone explain how I would go about creating a huge number? According to the error messages below, I have to use underscores in the code to make it pass.
RubyMonk expects an object of class Bignum, a core Ruby class. From the documentation:
Bignum objects hold integers outside the range of Fixnum. Bignum
objects are created automatically when integer calculations would
otherwise overflow a Fixnum.
So you just have to create a number that is bigger than what Fixnum can handle. For example, this will pass RubyMonk's spec:
def big_num
5_000_000_000_000_000_000_000
end
Because the number is bigger than Fixnum can handle, Ruby automagically returns a Bignum instead. For example, try running this:
5_000_000_000.class
# => Fixnum
5_000_000_000_000_000_000_000.class
# => Bignum
Ruby allows using underscores as placeholders (i.e. they are only to increase readability for humans and are otherwise ignored). So your big_num method can simply have one line:
return 1_000_000_000_000_000_000_000_000_000_000_000
And calling that will return 1000000000000000000000000000000000
(the return keyword is optional)
Ruby allows you to put underscores while writing literal numbers so that large numbers can be easier to read.
By convention, you don't put underscores in numbers of four or fewer digits; in longer numbers, you place one every three digits, counting from the right:
10_000_000 # => 10000000
It works for floating-point numbers too:
10_000.0 # => 10000.0
The underscores are ignored by the interpreter if you put them between two digits:
1_2_3_4_5 # => 12345
After looking at the error message, it is clear that RubyMonk expects a Bignum. This is another bit of magic the interpreter performs transparently: if the number is small enough to fit in the architecture's native integer, it is an instance of Fixnum:
100.class # => Fixnum
If that is not the case, Ruby automagically uses a dedicated class (Bignum):
(10_000_000**100).class # => Bignum
# 10_000_000 to the power of 100,
# which is a very big number and
# thus stored in Bignum
I have a little problem that I would really like to understand. I am using assert_equal to compare two BigDecimal numbers that are supposed to be identical. They actually are, except for a tiny fraction; see below:
-#<BigDecimal:7f4b40e8de78,'0.4021666666 6666666666 666666667E2',36(45)>
+#<BigDecimal:7f4b40e85db8,'0.4021666666 6666666666 6666668E2',36(63)>
I use assert_in_delta so the test cases don't fail, which gives me a reasonable workaround. I do wonder, though, whether it would be possible to make them exactly equal:
assert_equal (241.30.to_d/6), model.division_function
The model's division_function does exactly the same. It divides a BigDecimal of value 241.3 by the length of an array, which is 6.
There seems to be a very tiny difference in the precision.
I would like to know where that might come from?
Is there a way I can control precision more accurately?
EDIT
I am using Mongoid. It is worth noting that Mongoid offers BigDecimal as a field type, but stores it as a string. However, I don't think this is the problem; I believe it is a Ruby thing.
EDIT
I got a little further with an example which hints that it is a Ruby issue and not directly related to Rails. Please see below:
irb(main):041:0* amount1 = BigDecimal("241.3")
=> #<BigDecimal:7f49bcb03558,'0.2413E3',18(18)>
irb(main):042:0> amount2 = BigDecimal("1800")
=> #<BigDecimal:7f49bcaf3400,'0.18E4',9(18)>
irb(main):043:0> rate = amount1 / amount2
=> #<BigDecimal:7f49bcae8398,'0.1340555555 5555555555 5555556E0',27(45)>
irb(main):044:0> rate * amount2 #should return amount1 = 241.3, but it does not :-(
=> #<BigDecimal:7f49bcad6a30,'0.2413000000 0000000000 00000008E3',36(45)>
irb(main):045:0>
I reported this as a bug to the Ruby core team. However, as you can see in the rejection response, it is not a bug.
BigDecimal, though it offers arbitrary precision, cannot represent numbers like 1/3 exactly, so you can encounter imprecision during some arithmetic.
You can use Rational in Ruby for exact numbers. Exercise caution when doing arithmetic if you wish to keep the result exact.
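A sketch with the numbers from the question: 241.3 is exactly representable as a Rational, so the divide-then-multiply round trip comes back unchanged:

```ruby
amount = Rational(2413, 10)  # exactly 241.3
rate   = amount / 1800       #=> (2413/18000), still exact

rate * 1800 == amount        #=> true, no drift
(rate * 1800).to_f           #=> 241.3
```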
If I decide to use Rationals in Ruby for a console application, but don't want them displayed as a fraction, is there an idiomatic way to specify they should be displayed in decimal notation?
Options I know about include:
fraction = Rational(1, 2)
puts "Use to_f: #{fraction.to_f}"
puts(sprintf("Use sprintf %f", fraction))
class Rational
def to_s
to_f.to_s
end
end
puts "Monkey patch Rational#to_s: #{fraction}"
Are there any alternatives?
You can use %g in the sprintf to get the 'right' precision:
puts "%g" % Rational(1,2)
#=> 0.5
I'd personally monkey patch Rational as you have, but then I'm a fan of monkey patching. Overriding the built-in to_s behavior seems entirely appropriate for a console application.
Perhaps you have another alternative where you write your own console that defines Object#to_console to call to_s, and then you could monkey patch in your own Rational#to_console instead. That extra effort would make it safer for the 0.1% chance that some library is using Rational#to_s already in a way that will break with your patch.
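A minimal sketch of that idea (Object#to_console is a hypothetical method, not part of Ruby; your console loop would call it instead of to_s):

```ruby
class Object
  # Default: fall back to the ordinary string form
  def to_console
    to_s
  end
end

class Rational
  # Console-only decimal display; Rational#to_s stays untouched
  def to_console
    to_f.to_s
  end
end

Rational(1, 2).to_console  #=> "0.5"
Rational(1, 2).to_s        #=> "1/2", unchanged for library code
```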
I'm working with UTF-8 strings. I need to get a slice using byte-based indexes, not char-based.
I found references on the web to String#subseq, which is supposed to be like String#[], but for bytes. Alas, it seems not to have made it to 1.9.1.
Now, why would I want to do that? There's a chance I'll end up with an invalid string should I slice in the middle of a multi-byte char. This sounds like a terrible idea.
Well, I'm working with StringScanner, and it turns out its internal pointers are byte-based. I accept other options here.
Here's what I'm working with right now, but it's rather verbose:
s.dup.force_encoding("ASCII-8BIT")[ix...pos].force_encoding("UTF-8")
Both ix and pos come from StringScanner, so are byte-based.
You can do this too: s.bytes.to_a[ix...pos].pack("C*").force_encoding("UTF-8") (bytes yields integers, so they have to be packed back into a string; join("") would just concatenate the numbers' digits), but that looks even more esoteric to me.
If you're calling the line several times, a nicer way to do it could be this:
class String
def byteslice(*args)
self.dup.force_encoding("ASCII-8BIT").slice(*args).force_encoding("UTF-8")
end
end
s.byteslice(ix...pos)
Doesn't String#bytes do what you want? It returns an enumerator over the bytes in a string (as numbers, since they might not be valid characters, as you pointed out). You then need to pack the sliced bytes back into a string:
str.bytes.to_a.slice(...).pack('C*')
Use this monkeypatch until String#byteslice() is added to Ruby 1.9.
class String
unless method_defined? :byteslice
##
# Does the same thing as String#slice but
# operates on bytes instead of characters.
#
def byteslice(*args)
  unpack('C*').slice(*args).pack('C*').force_encoding(encoding)
end
end
end
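With the patch in place (or on 1.9.3+, where byteslice is built in, and the method_defined? guard skips the patch), usage looks like this; "héllo" is a hypothetical example string whose "é" occupies two bytes in UTF-8:

```ruby
s = "héllo"        # "é" is a two-byte character
s.bytesize         #=> 6
s.byteslice(0, 3)  #=> "hé"  (bytes 0..2: "h" plus both bytes of "é")
```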