Converting a positive integer to a negative - Ruby

I am new to programming and was curious to know why the following returns a negative integer:
-num.abs
My understanding is that abs returns the value as a positive number, yet this turns any positive number into a negative one?

why the following returns a negative integer: -num.abs
Because it parses as -(num.abs), not (-num).abs: the method call binds more tightly than the unary minus. First you take the absolute value, then you negate that value, resulting in a negative number.
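To see the grouping, compare the two parses with an example value (illustrative IRB session):
num = -5
num.abs       # => 5
-num.abs      # => -5  (parsed as -(num.abs))
(-num).abs    # => 5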

Related

How to compute random bfloat number in Maxima CAS

Maxima CAS's random function takes a floating-point number as input and gives a floating-point number as output.
I need a floating-point number with more digits, so I use bfloat with increased precision.
I have tried:
random(1.0b0)
bfloat(random(1.0));
The best result was:
bfloat(%pi)/6.000000000000000000000000000000000000000000b0
5.235987755982988730771072305465838140328615665625176368291574320513027343810348331046724708903528447b-1
but it is not random.
One way to generate a random bigfloat is to generate an integer with the appropriate number of bits and then rescale it to get a number in the range 0 to 1.
Note that random(n) returns an integer in the range 0 to n - 1 when n is an integer, therefore: bfloat(random(10^fpprec) / 10^fpprec).
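For comparison only, here is the same generate-then-rescale idea sketched in Ruby with BigDecimal (the variable names and digit count are illustrative, not part of Maxima):
require 'bigdecimal'

digits = 100                        # desired number of decimal digits, analogous to fpprec
n = rand(10 ** digits)              # random integer in the range 0 to 10^digits - 1
r = BigDecimal(n) / (10 ** digits)  # rescale into the range [0, 1)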

The result of Sum is floor(Col + Row + 1) is never an integer and I don't know why

I have to write a piece of Prolog where I have to calculate which position in an array is used to store a value. The result of these calculations should be an integer, so I use the floor/1 predicate to get the integer part of the value, but this doesn't work in my code: it keeps returning a number with a decimal point, for example 3.0 instead of 3.
The following is my code:
assign_value(El, NumberArray, RowNumber, I) :-
    ground(El),
    Number is NumberArray[El],
    Col is I/3,
    Row is RowNumber/3*3,
    Sum is floor(Col + Row + 1),
    subscript(Number, [Sum], El).
assign_value(_, _, _, _).
The result of Sum is floor(Col + Row + 1) is never an integer and I don't know why. Can anyone help me with this?
In ISO Prolog, the evaluable functor floor/1 has the following signature (9.1.1 in ISO/IEC 13211-1):
floor(F) → I
So it expects a float F and returns an integer I.
However, I do not believe that first creating floats out of integers and then flooring them back to integers is what you want. Instead, consider using (div)/2 in place of (/)/2, thereby staying with integers the whole time: for example, Col is I div 3 instead of Col is I/3.
From the documentation of floor/2 (http://www.eclipseclp.org/doc/bips/kernel/arithmetic/floor-2.html)
The result type is the same as the argument type. To convert the type to integer, use integer/2.
For example:
...,
Floor is floor(Col+Row+1), Sum is integer(Floor).
Reading the documentation for floor/2, we see that
[floor/2] works on all numeric types. The result value is the largest integral value that is smaller than Number (rounding down towards minus infinity). The result type is the same as the argument type. To convert the type to integer, use integer/2.
So you get the same type you supplied as the argument. Looking further at your predicate, we see the use of the / operator. Reading the documentation further, we see that
'/'/3 is used by the ECLiPSe compiler to expand evaluable arithmetic expressions.
So the call to /(Number1, Number2, Result) is equivalent to
Result is Number1 / Number2
which should be preferred for portability.
The result type of the division depends on the value of the global flag prefer_rationals.
When it is off, the result is a float, when it is on, the result is a rational.
Your division operation never returns an integer, meaning that things get upcast to floating point.
If you want to perform integer division, you should use the operators // or div.
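As a cross-language aside (this is Ruby, not ECLiPSe code), the same distinction shows up wherever integer and float division operators coexist:
7 / 3    # => 2   (Integer operands: the result stays an Integer)
-7 / 3   # => -3  (Ruby's Integer division rounds towards minus infinity, like floor)
7.0 / 3  # => 2.3333333333333335  (one Float operand makes the result a Float)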

Algorithm for 0 and any other x

I need to write an algorithm that takes a positive integer x. If integer x is 0, the algorithm returns 0. If it's any other number, the algorithm returns 1.
Here's the catch: I need to condense the algorithm into one equation, i.e. no conditionals. Basically, I need a single expression that evaluates to 0 if x is zero and 1 if x > 0.
EDIT: As per my comment below, I realize that I wasn't clear enough. I am entering the formula into a system that I don't have control over, hence the strange restrictions.
However, I learned a couple tricks that could be useful in the future!
In C and C++, you can use this trick:
!!x
In those languages, !x evaluates to 1 if x is zero and 0 otherwise. Therefore, !!x evaluates to 1 if x is nonzero and 0 otherwise.
Hope this helps!
Try return (int)(x > 0)
In C and most languages that allow converting a boolean to an integer, (int)(TRUE) == 1 and (int)(FALSE) == 0
Assuming 32-bit integers:
unsigned int negX = (unsigned int) -x;
return negX >> 31;
Negating a positive x sets the highest bit (the number becomes negative in two's complement). Shifting right by 31 places moves that bit to the lowest position and fills with 0s; note the unsigned type, which makes the shift logical (on a signed int the shift would typically be arithmetic and smear the sign bit, giving -1 instead of 1). This does nothing to a 0, but converts all positive integers to 1.
This is basically the sign function, but since you specified a positive integer input, you can drop the part that converts negative numbers to -1.
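The same trick can be sketched in Ruby; the method name is made up for illustration, and since Ruby integers are arbitrary-precision with an arithmetic right shift, the result is masked down to a single bit (assumes 0 <= x < 2**31):
def zero_or_one(x)
  (-x >> 31) & 1  # negative values shift down to -1 (all one bits), so mask with & 1
end

zero_or_one(0)   # => 0
zero_or_one(42)  # => 1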
Since virtually every system I know of uses IEEE-754 representation for floating-point numbers, you could just rely on its behavior (namely, that 0.0 / 0.0 is NaN, and NaN != NaN). Pseudo-C (-Java, ...) follows:
float oneOrNAN = (float)(x) / (float)(x);
return oneOrNAN == oneOrNAN;
Like I said, I wasn't clear enough in my problem description. When I said equation, I meant a purely algebraic equation.
I did find an acceptable solution: Y = X/(X - .001)
If X is zero you get 0/-.001, which is just 0. For any other number, say 5, you get 5/4.999, which is close enough to 1 for my particular situation.
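A quick numeric check of that formula (Ruby, purely illustrative):
x = 0; x / (x - 0.001)  # => -0.0, which compares equal to 0
x = 5; x / (x - 0.001)  # => 1.00020004..., close to 1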
However, this is interesting:
!!x
Thanks for the tip!

How do I get Math.sqrt to return a Bignum and not a Float?

I'm trying to calculate the square root of a really big number in Ruby. The problem I have is that the Math.sqrt function looks like this
sqrt(numeric) → float
If I feed it a really big number, it will give me FloatDomainError: Infinity.
What is the best way to get sqrt() to return a BigNum? Is there perhaps a gem for this or will I have to write my own function to calculate the square root?
In that case, what is the easiest way to go about doing this? Taylor series? The square roots of the numbers will always be integers.
There is a simple way to calculate the square root of an integer, which results in an integer:
1. To find the square root of a number, set M and P to that number.
2. Calculate (M + P/M)/2, rounding each division down.
3. If M is less than or equal to the result, use M as the square root; otherwise, set M to the result and repeat from step 2.
This approach may be inefficient for big numbers, though, so try it and see.
EDIT:
Here's the Ruby implementation:
def mysqrt(x)
  return 0 if x == 0
  m = x
  p = x                  # P stays fixed at the input value
  loop do
    r = (m + p / m) / 2  # integer divisions, rounding down as required
    return m if m <= r   # M has stopped shrinking: it is the integer root
    m = r
  end
end
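For example (inputs chosen for illustration):
mysqrt(10)                         # => 3
mysqrt(12345678901234567890 ** 2)  # => 12345678901234567890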

Using bit manipulation to tell if an unsigned integer can be expressed in the form 2^n-1

To test if an unsigned integer is of the form 2^n-1 we use:
x&(x+1)
What is that supposed to equal? That is,
x&(x+1) == ?
A number of the form 2^n-1 will have all of the bits up to the nth bit set. For example, 2^3-1 (7) is:
0b0111
If we add one to this, we get 8:
0b1000
Then, performing a bitwise AND, we see that we get zero, because no bit is set in both numbers. If we start with a number not of the form 2^n-1, the result will be nonzero.
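The test is easy to play with in Ruby (the method name is just for illustration):
# true when x == 2**n - 1 for some n >= 0 (note that x == 0 also qualifies)
def all_low_bits_set?(x)
  (x & (x + 1)) == 0
end

(0..8).select { |x| all_low_bits_set?(x) }  # => [0, 1, 3, 7]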
In complement to the existing answers, here is a short explanation of why numbers x that are not of the form 0b00000 (zero) or 0b0111..11 (all lowest digits set, these are all the numbers 2^n-1 for n>0) do not have the property x&(x+1) == 0.
For a number x of the form 0b????1000..00, x+1 has the same digits as x except for the least significant bit, so x & (x+1) has at least one bit set: the bit shown as set in x. In shorter form:
x        0b????1000..00
x+1      0b????1000..01
x&(x+1)  0b????1000..00
For a number x of the form 0b????10111..11:
x        0b????10111..11
x+1      0b????11000..00
x&(x+1)  0b????10000..00
In conclusion, if x is not either zero or written in binary with all lowest digits set, then x&(x+1) is not zero.
Zero. If X is 2^N-1, it is an unbroken string of 1s in binary. One more than that is a 1 followed by a string of zeroes the same length as X, so the two numbers have no 1 bits in common in any position, and the AND of the two is zero.
