How to represent -infinity in programming - algorithm

How can I represent -infinity in C++, Java, etc.?
In my exercise, I need to initialize a variable with -infinity to show that it is a very small number.
Computing -infinity - 3 or -infinity + 5 should also result in -infinity.
I tried initializing it with INT_MIN, but when I compute INT_MIN - 1 the value wraps around to the upper limit, so I can't write a test like: if (value < INT_MIN) value = INT_MIN;
So how can I do that?

You cannot represent infinity with integers[1]. However, you can do so with floating point numbers, i.e., float and double.
You list several languages in the tags, and they all have different ways of obtaining the infinity value (e.g., C99 defines INFINITY in math.h, if infinity is available with that implementation, while Java has POSITIVE_INFINITY and NEGATIVE_INFINITY in Float and Double classes). It is also often (but not always) possible to obtain infinity values by dividing floating point numbers by zero.
[1] Excepting the possibility that you could wrap every arithmetic operation on your integers with code that checks for a special value that you treat as infinity. I wouldn't recommend this.
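The question mentions C++ and Java, but the same IEEE 754 behaviour is easy to demonstrate in Python, whose float is a double:

```python
import math

neg_inf = float('-inf')   # equivalently: -math.inf

# arithmetic saturates exactly as the exercise requires
assert neg_inf - 3 == neg_inf
assert neg_inf + 5 == neg_inf

# it compares below every finite number, no matter how small
assert neg_inf < -10**100
assert math.isinf(neg_inf)
```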

You can get -Infinity as a floating point value (at least in Java):
double negInf = Double.NEGATIVE_INFINITY;
It is implemented according to the IEEE754 floating point spec.

If there were the possibility that a number was not there, instead of picking a number from its domain to represent 'not there', I would pick a type with both every integer I care about and a 'not there' state.
The C++ proposal for optional (deferred at the time, eventually standardized as std::optional in C++17) is an example of that: an optional<int> is either absent or an integer. To access the integer, you first ask if it is there, and if it is, you 'dereference' the optional to get it.
Making infectious optionals (ones that, on almost any binary operation, infect the result if either value is absent) should be an easy extension of this idea.
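The 'infectious optional' idea can be sketched in Python, with None standing in for the absent state (opt_add is a hypothetical helper, not a library API):

```python
from typing import Optional

def opt_add(a: Optional[int], b: Optional[int]) -> Optional[int]:
    """An 'infectious' addition: absence of either operand infects the result."""
    if a is None or b is None:
        return None
    return a + b
```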

You could designate one value as -infinity and, when adding or subtracting something from a number, first check whether the variable equals that pseudo-number. If so, you just leave it as it was.
But you might find library types giving you that functionality already implemented, e.g. Double in Java or std::numeric_limits<double>::infinity() in C++.
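A minimal sketch of that hand-rolled sentinel approach in Python (NEG_INF and the helper names are ours, not a library API; every operation has to go through such a wrapper for this to work):

```python
# A value we agree to treat as "minus infinity"; any arithmetic
# that touches it must check for it first.
NEG_INF = -2**31

def add(a, b):
    """Addition that leaves the pseudo -infinity untouched."""
    if a == NEG_INF or b == NEG_INF:
        return NEG_INF
    return a + b

def sub(a, b):
    """Subtraction with the same sentinel check (b is a plain number)."""
    if a == NEG_INF:
        return NEG_INF
    return a - b
```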

If you want to represent the minimum value you can use, for example for integers,
int a = Integer.MIN_VALUE;
For floats and doubles, be careful: in Java, Double.MIN_VALUE is the smallest positive value, not the most negative one. The most negative finite double is -Double.MAX_VALUE.

You can't truly represent an infinite value because you've got to store it in a finite number of bits. There are symbolic versions of infinity in certain types (e.g. in the typical floating point specification), but it won't behave exactly like infinity in the strict sense. You'll need to include some additional logic in your program to emulate the behaviour you need.

Related

Generate random 128 bit decimal in given range in go

Let's say that we have a random number generator that can generate random 32 or 64 bit integers (like rand.Rand in the standard library).
Generating a random int64 in a given range [a,b] is fairly easy:
rand.Seed(time.Now().UnixNano())
n := rand.Int63n(b-a+1) + a // +1 because Int63n is half-open and b should be included
Is it possible to generate random 128 bit decimal (as defined in specification IEEE 754-2008) in a given range from a combination of 32 or 64 bit random integers?
It is possible, but the solution is far from trivial. For a correct solution, there are several things to consider.
For one thing, values with exponent E are 10 times more likely than values with exponent E - 1.
Other issues include subnormal numbers and ranges that straddle zero.
I am aware of the Rademacher Floating-Point Library, which tackled this problem for binary floating-point numbers, but the solution there is complicated and its author has not yet written up how his algorithm works.
EDIT (May 11):
I have now specified an algorithm for generating random "uniform" floating-point numbers—
In any range,
with full coverage, and
regardless of the digit base (such as binary or decimal).
Possible, but by no means easy. Here is a sketch of a solution that might be acceptable — writing and debugging it would probably be at least a day of concerted effort.
Let min and max be primitive.Decimal128 objects from go.mongodb.org/mongo-driver/bson. Let MAXBITS be a multiple of 32; 128 is likely to be adequate.
Get the significand (as big.Int) and exponent (as int) of min and max using the BigInt method.
Align min and max so that they have the same exponent. As far as possible, left-justify the value with the larger exponent by decreasing its exponent and adding a corresponding number of zeroes to the right side of its significand. If this would cause the absolute value of the significand to become >= 2**(MAXBITS-1), then either
(a) Right-shift the value with the smaller exponent by dropping digits from the right side of its significand and increasing its exponent, causing precision loss.
(b) Dynamically increase MAXBITS.
(c) Throw an error.
At this point both exponents will be the same, and both significands will be aligned big integers. Set aside the exponents for now, and let range (a new big.Int) be maxSignificand - minSignificand. It will be between 0 and 2**MAXBITS.
Let k be the number of significant bits in range, obtained with big/Int.BitLen.
Generate k random bits, 32 at a time using rand.Uint32, masking off the excess high bits of the most significant word so that exactly k bits remain.
Combine the generated words into a big.Int by building a []byte and using big/Int.SetBytes, or by multiplication and addition, as convenient.
If the combined value is greater than range, discard it and repeat the previous two steps; this is ordinary rejection sampling. Because range is at least 2**(k-1), each attempt succeeds with probability greater than 1/2, so on average fewer than two attempts are needed.
Add the accepted value to minSignificand to obtain the significand of the result.
Use ParseDecimal128FromBigInt with the result significand and the exponent from steps 2-3 to obtain the result.
The heart of the algorithm is the generate-and-reject loop, which produces a uniform random unsigned integer of arbitrary length 32 bits at a time. The alignment in step 2 reduces the problem from a floating-point one to an integer one, and the subtraction in step 3 reduces it to an unsigned one, so that we only have to think about one bound instead of two.
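The uniform-integer step can be sketched in Python with bit-level rejection sampling (random.getrandbits stands in for assembling rand.Uint32 words; uniform_upto is our name for the helper):

```python
import random

def uniform_upto(bound, rng=random):
    """Uniform random integer in [0, bound]; bound may be arbitrarily large.

    Rejection sampling: draw bit_length(bound) random bits and throw away
    any result that exceeds bound, which happens with probability < 1/2."""
    if bound < 0:
        raise ValueError("bound must be non-negative")
    k = bound.bit_length()
    while True:
        x = rng.getrandbits(k) if k else 0
        if x <= bound:
            return x

# offsetting by the minimum significand, as in the subtraction step
lo, hi = 12345, 10**34
r = lo + uniform_upto(hi - lo)
assert lo <= r <= hi
```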
Caveats:
I haven't written this, let alone tested it. I may have gotten it quite wrong. A sanity check by someone who does more numerical computation work than me would be welcome.
Generating numbers across a large dynamic range (including crossing zero) will lose some precision and omit some possible output values with smaller exponents unless a ludicrously large MAXBITS is used; however, 128 bits should give a result at least as good as a naive algorithm implemented in terms of decimal128.
The performance is probably pretty bad.
Go has an arbitrary-precision arithmetic package that can handle integers of any length: https://golang.org/pkg/math/big/
It has a pseudo random number generator, https://golang.org/pkg/math/big/#Int.Rand, and the crypto package also has https://golang.org/pkg/crypto/rand/#Int
You'd want to specify the max, using https://golang.org/pkg/math/big/#Int.Exp, as 2^128.
Can't speak to performance, though, or whether this is compliant with the IEEE standard, but large random numbers like the ones you'd use for UUIDs are possible.
It depends how many values you want to generate. If it's enough to have no more than 10^34 values in the specified range, it's quite simple.
As I see the problem, a random value in the range min..max can be calculated as random(0..1)*(max-min)+min.
It looks like we only need to generate a decimal128 value in the range 0..1, i.e. a random value in the range 0..10^34-1 with exponent -34. This value can be generated with the standard Go random package.
To multiply, add and subtract float128 values, the Go math/big package can be used, with normalization of the values.
This is definitely what you are looking for.
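A sketch of this naive approach in Python, with the standard decimal module standing in for decimal128 (rand_decimal is our name; note that any scale-a-uniform-[0,1) scheme undersamples values with small exponents, as the longer answer's caveats describe):

```python
import random
from decimal import Decimal, getcontext

getcontext().prec = 34  # decimal128 carries 34 significant decimal digits

def rand_decimal(lo, hi, rng=random):
    """Naive: a random 34-digit value in [0, 1), scaled into [lo, hi]."""
    u = Decimal(rng.randrange(10**34)).scaleb(-34)  # 0 <= u < 1, exponent -34
    return lo + u * (hi - lo)
```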

Random String generation from a given string, and inverse transform

I am working on a requirement where a function f will use string s as a seed and generate n strings y0..yn-1. I can easily do this, but I also want the inverse, i.e., f^-1(yi) of any generated string should give me back s.
y0 = f(s) # first time I call f(s) it gives me y0
y1 = f(s) # second time I call f(s) it gives me y1
...
yi = f(s) # ith time I call f(s) it gives me yi
and so on.
The inverse function:
s = f^-1(yi)
How can I find the functions f and f^-1? The other constraint is that these strings cannot be too long: say, at most 20-25 characters.
Any suggestions, please?
OK, this will get too channel-coding specific if I cover it in full breadth here, but:
These are mathematical concepts, so let's map strings to numbers and look at them algebraically:
Your 20-character string space, assuming we're just using the 128 common ASCII characters, has (2^7)^20 = 2^140 elements. That's pretty many elements.
However, communication technology has a method called scrambling, which is a reversible process of mingling the bits in a sequence in a way that spreads the per-bit energy over the whole sequence. That leads to pretty random-looking bit streams. It's typically implemented using feedback shift registers.
It's possible to find a 2^140-state LFSR that fulfills your scrambling needs, and you can interpret the output of a multiplicative scrambler as the next element in your sequence.
However, please be aware that your problem is a hard one, which I hope I've illustrated sufficiently -- getting something that has good random properties is a harsh thing, and I can't recommend implementing something like that yourself -- it's going to make problems as soon as you need to rely on mathematical properties of your pseudorandom string.
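To make the scrambling idea concrete, here is a toy multiplicative (self-synchronizing) scrambler and its inverse in Python. The 15-bit register and the tap positions are arbitrary choices for illustration, nothing like the 2^140-state scrambler the answer envisions:

```python
def lfsr_scramble(bits, taps=(14, 15)):
    """Each output bit is the input bit XORed with earlier *output* bits
    at the tap positions (register starts as all zeros)."""
    reg = [0] * max(taps)       # shift register of past output bits
    out = []
    for b in bits:
        y = b
        for t in taps:
            y ^= reg[t - 1]
        out.append(y)
        reg = [y] + reg[:-1]    # shift in the output bit
    return out

def lfsr_descramble(bits, taps=(14, 15)):
    """Inverse: identical feedback, but the register is fed with the
    *received* (scrambled) bits, so the original stream pops out."""
    reg = [0] * max(taps)
    out = []
    for b in bits:
        y = b
        for t in taps:
            y ^= reg[t - 1]
        out.append(y)
        reg = [b] + reg[:-1]    # shift in the received bit, not the output
    return out
```

Because the descrambler's register sees exactly the bits the scrambler emitted, descrambling is a perfect inverse without any shared key material, which is the property the answer is pointing at.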

Impossible to achieve certain float value

I am testing my code. I have boolean conditions with float numbers.
I assign a float value to test my code:
float testFloatValue = 0.9;
And the boolean condition is not met:
if (testFloatValue == 0.9) {
}
Because when I debug, I see that my float number has changed from 0.9 to 0.899999976.
I do not understand anything!
Due to the nature of floating point numbers, certain values cannot be represented exactly, so you never want to do a direct check for equality. There are numerous articles about this on the web, and if you search you will find many. However, here is a quick routine you can use in Objective-C to check for ALMOST equal.
bool areAlmostEqual(double result, double expectedResult)
{
    // the tolerance must be loose enough for single-precision inputs:
    // (double)0.9f differs from 0.9 by roughly 3e-8
    return fabs(result - expectedResult) < 0.000001;
}
You would use it like so as per the values in your original question:
if (areAlmostEqual(testFloatValue, 0.9)) {
// Do something
}
This is a very common misconception. A floating point number is an approximate representation of a real number. The most common standard for floating point (IEEE 754) uses base 2, and base 2 cannot directly represent all base 10 numbers.
This has nothing to do with Xcode.
When you wrote 0.9, which is 9 * 10^-1, the computer stored the closest binary equivalent, expressed in base 2. When this binary (base 2) approximation is converted back to decimal (base 10) for display, you get 0.899999976, which is as close as single-precision floating point can get to your number.
The standard way to compare floating point numbers is to choose a precision or tolerance, often called epsilon, which is how close two numbers are to be considered equal (ie. "close enough"). And because the closest approximation might be slightly lower or slightly higher than your number, you would take the absolute difference and compare to the tolerance. Thus:
const float eps = 0.00001f;
if (fabs(a - b) < eps)
{
    // a and b are approximately equal
}
Floating point is a large and complicated topic, and is definitely worth researching to get a good grasp. Start here:
Floating Point on Wikipedia
You should definitely read this fantastic introduction to floating point:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
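The same pitfall and the tolerance-based fix can be demonstrated in Python, where floats are IEEE 754 doubles and math.isclose provides a ready-made comparison:

```python
import math

# 0.9 is stored as the nearest representable binary double, not exactly 0.9
print(f"{0.9:.20f}")   # shows the stored approximation, e.g. 0.90000000000000002220

# direct equality on computed floats is fragile:
assert 0.1 + 0.2 != 0.3

# compare with a tolerance instead (math.isclose, available since Python 3.5):
assert math.isclose(0.1 + 0.2, 0.3, rel_tol=1e-9)
```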

Is there a better way to generate all equal arithmetic sequences using numbers 1 to 10?

Problem:
The numbers from 1 to 10 are given. Put the equal sign (somewhere between them) and any arithmetic operators {+ - * /} so that a perfect integer equality is obtained (both the final result and the partial results must be integers).
Example:
1*2*3*4*5/6+7=8+9+10
1*2*3*4*5/6+7-8=9+10
My first idea for solving this was backtracking:
Generate all possibilities of putting operators between the numbers
For one such possibility replace all the operators, one by one, with the equal sign and check if we have two equal results
But this solution takes a lot of time.
So, my question is: Is there a faster solution, maybe something that uses the operator properties or some other cool math trick ?
I'd start with the equals sign. Pick a possible location for that, and split your sequence there. For left and right side independently, find all possible results you could get for each, and store them in a dict. Then match them up later on.
Finding all 226 solutions took my Python program, based on this approach, less than 0.15 seconds. So there certainly is no need to optimize further, is there? Along the way, I computed a total of 20683 subexpressions for a single side of one equation. They are fairly well balanced: 10327 expressions for left hand sides and 10356 expressions for right hand sides.
If you want to be a bit more clever, you can try to reduce the places where you even attempt division. In order to allow for division without remainder, the prime factors of the divisor must be contained in those of the dividend. So the dividend must be some product, and that product must contain the factors of the number by which you divide. 2, 3, 5 and 7 are prime numbers, so they can never be such divisors. 4 will never have two even numbers before it. So the only possible ways are 2*3*4*5/6, 4*5*6*7/8 and 3*4*5*6*7*8/9. But I'd say it's far easier to check whether a given division is possible as you go, without any need for cleverness.
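A sketch of the dictionary-matching approach in Python, assuming strict left-to-right evaluation with every running value required to be an integer (one reasonable reading of "partial results"; the original program's exact semantics and its count of 226 are not reproduced here):

```python
from itertools import product

def eval_lr(nums, ops):
    """Evaluate left to right; return None if any partial result is non-integer."""
    acc = nums[0]
    for op, n in zip(ops, nums[1:]):
        if op == '+':
            acc += n
        elif op == '-':
            acc -= n
        elif op == '*':
            acc *= n
        else:                      # division must leave no remainder
            if acc % n:
                return None
            acc //= n
    return acc

def solve(numbers=range(1, 11)):
    nums = list(numbers)
    solutions = []
    for split in range(1, len(nums)):          # '=' goes after position split
        left, right = nums[:split], nums[split:]

        def side_values(side):
            # map reachable value -> list of operator tuples producing it
            table = {}
            for ops in product('+-*/', repeat=len(side) - 1):
                v = eval_lr(side, ops)
                if v is not None:
                    table.setdefault(v, []).append(ops)
            return table

        lt, rt = side_values(left), side_values(right)
        for v in lt.keys() & rt.keys():        # match the two sides up
            for lops in lt[v]:
                for rops in rt[v]:
                    lhs = ''.join(str(n) + o for n, o in zip(left, lops + ('',)))
                    rhs = ''.join(str(n) + o for n, o in zip(right, rops + ('',)))
                    solutions.append(lhs + '=' + rhs)
    return solutions
```

Splitting at the equals sign first means each side is enumerated independently (at most 4^8 operator choices per side), instead of enumerating all operator strings and then trying every position for the equals sign.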

Map two numbers to one to achieve a particular sort?

I have a list in which each item has 2 integer attributes, n and m. I would like to map these two integer attributes to a single new attribute so that when the list is sorted on the new attribute, it is sorted on n first and then ties are broken with m.
I came up with n - 1/m. So the two integers are mapped to a single real number. I think this works. Any better ideas?
That's clever, so I hate to break it to you, but it won't work. Try it (with a computer) using n = 1,000,000,000 and values of m between 999,999,990 and 1,000,000,010. You'll find that n - 1/m is the same value for all of those cases.
It would work if floating point numbers had infinite precision, or even if they had twice as much precision as an int (although even there you might run into some issues), but they don't: a double precision floating point number has 53 bits of precision. An integer is (probably) 32 bits, so you'd need at least 64 bits to encode two of them. But then you could just use a 64-bit (long long) integer, encoding the pair as n*2^32 + m.
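The 64-bit encoding can be sketched in Python (assuming non-negative 32-bit values for simplicity; signed values would need an offset added before packing):

```python
def key(n, m):
    """Pack two 32-bit non-negative ints into one 64-bit sort key:
    sorting by the key sorts by n first, with ties broken by m."""
    assert 0 <= n < 2**32 and 0 <= m < 2**32
    return (n << 32) | m

pairs = [(1, 5), (0, 9), (1, 2), (0, 3)]
# the packed key reproduces lexicographic (n, then m) order exactly
assert sorted(pairs) == sorted(pairs, key=lambda p: key(*p))

# whereas the n - 1/m trick collapses nearby values, as the answer warns:
n = 1_000_000_000
assert n - 1/999_999_990 == n - 1/1_000_000_010
```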
