Where is BigDecimal's "/" defined? - ruby

I thought '3.0'.to_d.div(2) was the same as '3.0'.to_d / 2, but the former returns 1 while the latter returns 1.5.
I searched for def / in BigDecimal's GitHub repository, but I couldn't find it.
https://github.com/ruby/bigdecimal/search?utf8=%E2%9C%93&q=def+%2F&type=Code
Where can I find the definition? And which method is equivalent to / in BigDecimal?
Float has an fdiv method. Is there a similar one in BigDecimal?

You can find it in the source code of the bigdecimal library, in the repository you linked to. On line 3403 of ext/bigdecimal/bigdecimal.c, BigDecimal#/ is bound to the function BigDecimal_div:
rb_define_method(rb_cBigDecimal, "/", BigDecimal_div, 1);
This function looks like this:
static VALUE
BigDecimal_div(VALUE self, VALUE r)
/* For c = self/r: with round operation */
{
    ENTER(5);
    Real *c=NULL, *res=NULL, *div = NULL;
    r = BigDecimal_divide(&c, &res, &div, self, r);
    if (!NIL_P(r)) return r; /* coerced by other */
    SAVE(c); SAVE(res); SAVE(div);
    /* a/b = c + r/b */
    /* c xxxxx
       r 00000yyyyy ==> (y/b)*BASE >= HALF_BASE
     */
    /* Round */
    if (VpHasVal(div)) { /* frac[0] must be zero for NaN,INF,Zero */
        VpInternalRound(c, 0, c->frac[c->Prec-1], (BDIGIT)(VpBaseVal() * (BDIGIT_DBL)res->frac[0] / div->frac[0]));
    }
    return ToValue(c);
}

This is because BigDecimal#div takes a second argument, precision, which defaults to 1.
irb(main):017:0> '3.0'.to_d.div(2, 2)
=> 0.15e1
However, when / is defined on BigDecimal,
rb_define_method(rb_cBigDecimal, "/", BigDecimal_div, 1);
They used 1 for the number of arguments, rather than -1, which means "a variable number of arguments". So BigDecimal#div is set up to take one required argument plus one optional argument, whereas BigDecimal#/ takes only the one required argument and the optional precision is ignored. Because the optional argument is ignored, it is never initialized correctly and effectively ends up as an empty int, i.e. 0.
This may be considered a bug. You should consider opening an issue with the ruby devs.
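For a side-by-side comparison, here is a quick irb sketch (output shown as on a recent Ruby; BigDecimal#quo is documented as behaving like /, which answers the fdiv part of the question, but verify on your version):
require 'bigdecimal'
require 'bigdecimal/util'

'3.0'.to_d / 2        #=> 0.15e1  (1.5)
'3.0'.to_d.div(2)     #=> 1       (no precision given: integer-style division)
'3.0'.to_d.div(2, 2)  #=> 0.15e1  (precision of 2 significant digits)
'3.0'.to_d.quo(2)     #=> 0.15e1  (quo behaves like /)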

Related

Pass arbitrary number of lambdas (or procs) in Ruby

I am getting my head around the functional model in Ruby and ran into a problem. I am able to successfully pass any given number of arguments to an arbitrary function as follows:
add = ->(x, y) { return x + y }
mul = ->(x, y) { return x * y }
def call_binop(a, b, &func)
  return func.call(a, b)
end
res = call_binop(2, 3, &add)
print("#{res}\n") #5
res = call_binop(3, 4, &mul)
print("#{res}\n") #12
However, I am not able to pass an arbitrary number of functions:
dbl = ->(x) { return 2 * x }
sqr = ->(x) { return x * x }
def call_funccomp(a, &func1, &func2)
  return func2.call(func1.call(a))
end
res = call_funccomp(3, &dbl, &sqr)
print("#{res}\n") #Expect 36 but compiler error
The compiler error is syntax error, unexpected ',', expecting ')'
I have already added both lambdas and procs to an array and then executed the elements of the array, so I know I can get around this by passing such an array as an argument, but for simple cases this seems like a contortion for something that (I hope) is legal in the language. Does Ruby actually limit the number of lambdas one can pass in the argument list? It seems to have a reasonably modern, flexible functional model (the notation is a little weird) where things can just execute via a call method.
Does Ruby actually limit the number of lambdas one can pass in the argument list?
No, you can pass as many procs / lambdas as you like. You just cannot pass them as block arguments.
Prefixing the proc with & triggers Ruby's proc-to-block conversion, i.e. your proc becomes the block argument. And Ruby allows at most one block argument.
Attempting to call call_funccomp(3, &dbl, &sqr) is equivalent to passing two blocks:
call_funccomp(3) { 2 * x } { x * x }
something that Ruby doesn't allow.
The fix is to omit &, i.e. to pass the procs / lambdas as positional arguments:
dbl = ->(x) { 2 * x }
sqr = ->(x) { x * x }
def call_funccomp(a, func1, func2)
  func2.call(func1.call(a))
end
res = call_funccomp(3, dbl, sqr)
print("#{res}\n")
There's also Proc#>> which combines two procs:
def call_funccomp(a, func1, func2)
  (func1 >> func2).call(a)
end
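For completeness, a small usage sketch of the composed version; Proc#>> composes left to right and requires Ruby 2.6 or newer:
dbl = ->(x) { 2 * x }
sqr = ->(x) { x * x }

def call_funccomp(a, func1, func2)
  (func1 >> func2).call(a)
end

puts call_funccomp(3, dbl, sqr) #=> 36, i.e. sqr.call(dbl.call(3))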

Referencing / dereferencing a vector element in a for loop

In the code below, I want to retain number_list after iterating over it, since the .into_iter() that for uses by default would consume it. Thus, I am assuming that n: &i32 and that I can get the value of n by dereferencing.
fn main() {
    let number_list = vec![24, 34, 100, 65];
    let mut largest = number_list[0];
    for n in &number_list {
        if *n > largest {
            largest = *n;
        }
    }
    println!("{}", largest);
}
It was revealed to me that instead of this, we can use &n as a 'pattern':
fn main() {
    let number_list = vec![24, 34, 100, 65];
    let mut largest = number_list[0];
    for &n in &number_list {
        if n > largest {
            largest = n;
        }
    }
    println!("{}", largest);
    number_list;
}
My confusion (and bear in mind I haven't covered patterns) is that I would expect that since n: &i32, then &n: &&i32 rather than it resolving to the value (if a double ref is even possible). Why does this happen, and does the meaning of & differ depending on context?
It can help to think of a reference as a kind of container. For comparison, consider Option, where we can "unwrap" the value using pattern-matching, for example in an if let statement:
let n = 100;
let opt = Some(n);
if let Some(p) = opt {
    // do something with p
}
We call Some and None constructors for Option, because they each produce a value of type Option. In the same way, you can think of & as a constructor for a reference. And the syntax is symmetric:
let n = 100;
let reference = &n;
if let &p = reference {
    // do something with p
}
You can use this feature in any place where you are binding a value to a variable, which happens all over the place. For example:
if let, as above
match expressions:
match opt {
    Some(1) => { ... },
    Some(p) => { ... },
    None => { ... },
}
match reference {
    &1 => { ... },
    &p => { ... },
}
In function arguments:
fn foo(&p: &i32) { ... }
Loops:
for &p in iter_of_i32_refs {
    ...
}
And probably more.
Note that the last two won't work for Option, because those positions require an irrefutable pattern and a None could slip through, but that can't happen with references because they only have one constructor, &.
does the meaning of & differ depending on context?
Hopefully, if you can interpret & as a constructor instead of an operator, then you'll see that its meaning doesn't change. It's a pretty cool feature of Rust that you can use constructors on the right hand side of an expression for creating values and on the left hand side for taking them apart (destructuring).
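Here is a minimal sketch of that symmetry: & on the right-hand side constructs a reference, and & in a pattern takes it apart (binding the value out works here because i32 is Copy):
fn main() {
    let n: i32 = 100;
    let r: &i32 = &n;   // `&` in an expression: construct a reference
    let &m = r;         // `&` in a pattern: destructure it, so m: i32
    assert_eq!(m, 100);
}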
Unlike in other languages (e.g. C++), &n in this case isn't taking a reference; it is a pattern match, which means that it expects a reference and peels it off.
The opposite of this would be ref n, which would give you &&i32 as the type.
This is also the case for closures, e.g.
(0..).filter(|&idx| idx < 10)...
Please note that this will move (copy) the value out, so you cannot do this with types that don't implement the Copy trait.
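A small sketch of that restriction, using a Vec<String> purely for illustration: a & pattern would try to move each String out from behind the reference, which the compiler rejects for non-Copy types:
fn main() {
    let words = vec![String::from("a"), String::from("b")];

    // This would not compile: `&w` tries to move each String out of the
    // vector, and String does not implement Copy.
    // for &w in &words { println!("{}", w); }

    // Binding without the `&` pattern keeps a reference instead:
    for w in &words {
        println!("{}", w); // w: &String
    }
}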
My confusion (and bear in mind I haven't covered patterns) is that I would expect that since n: &i32, then &n: &&i32 rather than it resolving to the value (if a double ref is even possible). Why does this happen, and does the meaning of & differ depending on context?
When you do pattern matching (for example when you write for &n in &number_list), you're not saying that n is an &i32, instead you are saying that &n (the pattern) is an &i32 (the expression) from which the compiler infers that n is an i32.
Similar things happen for all kinds of patterns; for example, when pattern-matching in if let Some(x) = Some(42) { /* … */ }, we are saying that Some(x) is Some(42), therefore x is 42.

FXML using variables in labels [duplicate]

I am getting an error with this constructor, and I have no idea how to fix it. I am a beginner at Java. This is from an example exercise that I was trying to learn from:
/**
 * Create an array of size n and store a copy of the contents of the
 * input argument
 * @param intArray array of elements to copy
 */
public IntArray11(int[] intArray)
{
    int i = 0;
    String [] Array = new String[intArray.length];
    for(i=0; i<intArray.length; ++i)
    {
        Array[i] = intArray[i].toString();
    }
}
int is not an object in Java (it's a primitive), so you cannot invoke methods on it.
One simple way to solve it is to use
Integer.toString(intArray[i])
I would write it more like this:
public String[] convertToStrings(int... ints) {
    String[] ret = new String[ints.length];
    for (int i = 0; i < ints.length; ++i)
        ret[i] = "" + ints[i];
    return ret;
}
Or in Java 8 you might write
public List<String> convertToStrings(int... ints) {
    return IntStream.of(ints).mapToObj(Integer::toString).collect(toList());
}
This uses:
Java coding conventions,
limited scope for the variable i,
we do something with the String[],
give the method a meaningful name,
try to use consistent formatting.
If we were worried about efficiency it is likely we could do away with the method entirely.
String.valueOf(int) is not faster than Integer.toString(int). From the code in String you can see that the String implementation just calls Integer.toString
/**
 * Returns the string representation of the {@code int} argument.
 * <p>
 * The representation is exactly the one returned by the
 * {@code Integer.toString} method of one argument.
 *
 * @param i an {@code int}.
 * @return a string representation of the {@code int} argument.
 * @see java.lang.Integer#toString(int, int)
 */
public static String valueOf(int i) {
    return Integer.toString(i);
}
Your code tries to call the toString() method of an int value. In Java, int is a primitive type and has no methods. Change the line:
Array[i] = intArray[i].toString();
to
Array[i] = String.valueOf(intArray[i]);
and the code should run. By the way, you should use lowerCamelCase for variables and fields.
Edit: For what it's worth, String.valueOf(int) is a bit faster than Integer.toString(int) on my system (Java 1.7).
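Putting the pieces together, here is a minimal sketch of a corrected constructor; the strings field is hypothetical, since the original snippet never stores the converted array anywhere:
public class IntArray11 {
    private final String[] strings; // hypothetical field to hold the copy

    /**
     * Create an array of size n and store a copy of the contents of the
     * input argument as strings.
     * @param intArray array of elements to copy
     */
    public IntArray11(int[] intArray) {
        strings = new String[intArray.length];
        for (int i = 0; i < intArray.length; i++) {
            strings[i] = String.valueOf(intArray[i]); // or Integer.toString(intArray[i])
        }
    }
}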

Memory allocation of a string literal in C

I am having a strange issue with memory allocation in C. The file is fairly complicated, so I cannot include it all here, but perhaps you can point me in the right direction as to why this may be happening.
I am trying to create a string literal as such:
char * p = "root";
But when I look at the value of this variable at runtime (at the line directly after the declaration) I get this:
$1 = 0x7001260c "me"
and when I look at the contents of the memory at 0x7001260c it indeed holds the string "me".
EDIT:
To give more context: when I run the following code, the value of p on the last line is "root".
create_directory("root/home");
char * p = "root";
char * q = "foo";
And when I run the following code, the value of p is "io":
create_directory("io/home");
char * p = "root";
char * q = "foo";
The create_directory function:
void create_directory(char * path) {
    directory d;
    directory * dir = &d;
    //Browse to closest directory
    path = find_directory(path, dir);
    //Create remaining directories
    char component[20];
    path = next_component(path, component);
    while (strlen(component) > 0) {
        add_dir_entry(dir, component, inode_next);
        write_dir_entry(dir, inode_to_loc(dir->inode));
        directory new;
        new.type = DIRECTORY;
        new.inode = inode_next;
        write_dir_entry(&new, inode_to_loc(inode_next));
        inode_next++;
        dir = &new;
        path = next_component(path, component);
    }
}
Almost certainly, there's a bug somewhere in your program that causes a constant to be modified, which is, of course, illegal. Perhaps you're doing something like this:
void to_lower(char *j)
{
    while (*j != 0) { *j = tolower(*j); j++; }
}
...
bool is_yes(char *k)
{
    to_lower(k);
    return strcmp(k, "yes") == 0;
}
void someFunc(char *k)
{
    if (is_yes(k)) // ...
        ...
}
someFunc("testing");
See what this does? We pass a pointer to a constant to someFunc, but it flows down to to_lower, which modifies the thing it points to -- modifying a constant.
Somehow, your code probably does something like that.
Start by changing code like char * p = "root" to code like char const* p = "root". That will give you a better chance of catching this kind of problem at compile time.
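As a minimal sketch of the safe pattern (the variable names below are illustrative, not from the original program): keep pointers to string literals const, and copy a literal into a writable buffer before modifying it:
#include <ctype.h>
#include <stdio.h>
#include <string.h>

static void to_lower(char *s) {
    for (; *s; s++) *s = (char)tolower((unsigned char)*s);
}

int main(void) {
    const char *literal = "Root";     /* points at read-only string storage */

    char buf[16];                     /* writable copy that we own */
    strcpy(buf, literal);
    to_lower(buf);                    /* safe: modifies the copy, not the literal */

    printf("%s %s\n", literal, buf);  /* prints: Root root */
    return 0;
}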

How to check for a Not a Number (NaN) in Swift 2

The following method calculates the percentage using two variables.
func casePercentage() {
    let percentage = Int(Double(cases) / Double(calls) * 100)
    percentageLabel.stringValue = String(percentage) + "%"
}
The above method is functioning well except when cases = 1 and calls = 0.
This gives a fatal error: floating point value can not be converted to Int because it is either infinite or NaN
So I created this workaround:
func casePercentage() {
    if calls != 0 {
        let percentage = Int(Double(cases) / Double(calls) * 100)
        percentageLabel.stringValue = String(percentage) + "%"
    } else {
        percentageLabel.stringValue = "0%"
    }
}
This will give no errors, but in other languages you can check a variable with an .isNaN() method. How does this work within Swift 2?
You can "force unwrap" the optional type using the ! operator:
calls! //asserts that calls is NOT nil and gives a non-optional type
However, this will result in a runtime error if it is nil.
One option to prevent using nil or 0 is to do what you have done and check if it's 0.
The second option is to nil-check:
if calls != nil
The third (and most Swift-y) option is to use the if let structure:
if let nonNilCalls = calls {
    //...
}
The inside of the if block won't run if calls is nil.
Note that nil-checking and if let will NOT protect you from dividing by 0. You will have to check for that separately.
Combining the second option with your method:
//calls can neither be nil nor <= 0
if calls != nil && calls > 0
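Here is a minimal sketch of the combined guard, assuming calls is an optional Int as this answer implies (Swift 2 still allowed comparing an optional with >). Double also has an isNaN property if you ever need to test the result of a division directly:
func casePercentage() {
    // Guard against a nil or zero divisor before converting to Int
    if calls != nil && calls > 0 {
        let percentage = Int(Double(cases) / Double(calls!) * 100)
        percentageLabel.stringValue = String(percentage) + "%"
    } else {
        percentageLabel.stringValue = "0%" // covers nil and 0, so no NaN/infinite value reaches Int()
    }
}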
