Linear equation, incompatible types BOOLEAN/LONGINT - Pascal

I've got an exercise about linear equations in Pascal. I've written simple code that compares the input numbers, but when I try to compile it I get errors about incompatible types: got BOOLEAN, expected LONGINT.
program LinearEquation;
var
    a, b: real;
begin
    readln(a, b);
    if (b = 0 and a = 0) then
        writeln('INFINITY')
    else if (b = 0 and a <> 0) then
        writeln(1)
    else if (a = 0 and b <> 0) then
        writeln(0)
    else if (b mod a = 0) then
        writeln(1);
    readln;
end.
and the compiler reports:
rownan~1.pas(13,9) Error: Incompatible types: got "BOOLEAN" expected "LONGINT"
rownan~1.pas(15,14) Error: Incompatible types: got "BOOLEAN" expected "LONGINT"
rownan~1.pas(17,14) Error: Incompatible types: got "BOOLEAN" expected "LONGINT"
rownan~1.pas(17,14) Error: Incompatible types: got "BOOLEAN" expected "LONGINT"

At least in modern Delphi, and has higher precedence than =, so
a = 0 and b = 0
is interpreted as
(a = (0 and b)) = 0.
But the and operator cannot accept an integer and a floating-point value as operands (two integers would have been OK, though). Hence the error.
Had a and b been integers, 0 and b would have been the bitwise conjunction of 0 and b, that is, 0. Thus, we would have had
(a = 0) = 0.
This reads either true = 0 (if a is equal to 0) or false = 0 (if a is different from 0). But a boolean cannot be compared to an integer, so the compiler would have complained about that.
Still, this was just an academic exercise. Clearly, your intention was
(a = 0) and (b = 0).
Just add the parentheses:
if (b = 0) and (a = 0) then
writeln('INFINITY')

Related

How to coerce math.Inf to an integer?

I've got some code I'm using to do comparisons, and I want to start with infinite values. Here's a snippet of my code.
import (
    "fmt"
    "math"
)

func snippet(arr []int) {
    least := int(math.Inf(1))
    greatest := int(math.Inf(-1))
    fmt.Println("least", math.Inf(1), least)
    fmt.Println("greatest", math.Inf(-1), greatest)
}
and here's the output I get from the console
least +Inf -9223372036854775808
greatest -Inf -9223372036854775808
Why is +Inf coerced into a negative int?
Infinity is not representable by int.
According to the Go spec,
In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent.
Maybe you are looking for the largest representable int? How to get it is explained here.
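For example, here is a minimal, self-contained sketch of seeding the comparison with the extreme representable ints instead of infinities (assuming Go 1.17+, which added math.MaxInt and math.MinInt; the loop body is my addition, not part of the original snippet):

package main

import (
    "fmt"
    "math"
)

// snippet starts from the extreme representable ints rather than
// converting +/-Inf, whose result is implementation-dependent.
func snippet(arr []int) {
    least := math.MaxInt
    greatest := math.MinInt
    for _, v := range arr {
        if v < least {
            least = v
        }
        if v > greatest {
            greatest = v
        }
    }
    fmt.Println("least", least, "greatest", greatest)
}

func main() {
    snippet([]int{3, -7, 42}) // prints: least -7 greatest 42
}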
math.Inf() returns an IEEE double-precision float representing positive infinity if the sign of the argument is >= 0, and negative infinity if the sign is < 0, so your code is incorrect.
But, the Go language specification (always good to read the specifications) says this:
Conversions between numeric types
...
In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent.
Two's complement integer values don't have the concept of infinity, so the result is implementation dependent.
Myself, I'd have expected to get the largest or smallest integer value for the integer type the cast is targeting, but apparently that's not the case.
Look at the runtime source file responsible for the conversion, https://go.dev/src/runtime/softfloat64.go. The relevant source code is quoted below.
Note that an IEEE-754 double-precision float is a 64-bit double word, consisting of
a sign bit, the high-order (most significant/leftmost) bit, 0 indicating positive, 1 indicating negative,
an exponent (biased), consisting of the next 11 bits, and
a mantissa, consisting of the remaining 52 bits, which can be denormalized.
Positive infinity is a special value with a sign bit of 0, an exponent of all 1 bits, and a mantissa of all 0 bits:
0 11111111111 0000000000000000000000000000000000000000000000000000
or 0x7FF0000000000000.
Negative infinity is the same, with the exception that the sign bit is 1:
1 11111111111 0000000000000000000000000000000000000000000000000000
or 0xFFF0000000000000.
Looks like funpack64() returns 5 values:
a uint64 representing the sign (0 or the very large non-zero value 0x8000000000000000),
a uint64 representing the normalized mantissa,
an int representing the exponent,
a bool indicating whether or not this is +/- infinity, and
a bool indicating whether or not this is NaN.
From that, you should be able to figure out why it returns the value it does.
[Frankly, I'm surprised that f64toint() doesn't short-circuit when funpack64() returns fi = true.]
const mantbits64 uint = 52
const expbits64 uint = 11
const bias64 = -1<<(expbits64-1) + 1

func f64toint(f uint64) (val int64, ok bool) {
    fs, fm, fe, fi, fn := funpack64(f)

    switch {
    case fi, fn: // NaN
        return 0, false

    case fe < -1: // f < 0.5
        return 0, false

    case fe > 63: // f >= 2^63
        if fs != 0 && fm == 0 { // f == -2^63
            return -1 << 63, true
        }
        if fs != 0 {
            return 0, false
        }
        return 0, false
    }

    for fe > int(mantbits64) {
        fe--
        fm <<= 1
    }
    for fe < int(mantbits64) {
        fe++
        fm >>= 1
    }
    val = int64(fm)
    if fs != 0 {
        val = -val
    }
    return val, true
}

func funpack64(f uint64) (sign, mant uint64, exp int, inf, nan bool) {
    sign = f & (1 << (mantbits64 + expbits64))
    mant = f & (1<<mantbits64 - 1)
    exp = int(f>>mantbits64) & (1<<expbits64 - 1)

    switch exp {
    case 1<<expbits64 - 1:
        if mant != 0 {
            nan = true
            return
        }
        inf = true
        return

    case 0:
        // denormalized
        if mant != 0 {
            exp += bias64 + 1
            for mant < 1<<mantbits64 {
                mant <<= 1
                exp--
            }
        }

    default:
        // add implicit top bit
        mant |= 1 << mantbits64
        exp += bias64
    }
    return
}
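Given that behavior, a caller who needs a predictable result can test for the special values before converting. A minimal sketch, assuming a clamping policy (clampToInt64 is my own helper name, not a standard library function):

package main

import (
    "fmt"
    "math"
)

// clampToInt64 converts f to int64, pinning +Inf (and anything at or
// above 2^63) to MaxInt64, -Inf (and anything at or below -2^63) to
// MinInt64, and mapping NaN to 0.
func clampToInt64(f float64) int64 {
    switch {
    case math.IsNaN(f):
        return 0
    case f >= float64(math.MaxInt64): // includes +Inf
        return math.MaxInt64
    case f <= float64(math.MinInt64): // includes -Inf
        return math.MinInt64
    }
    return int64(f)
}

func main() {
    fmt.Println(clampToInt64(math.Inf(1)))  // 9223372036854775807
    fmt.Println(clampToInt64(math.Inf(-1))) // -9223372036854775808
}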

Different behavior of len() with const or non-const value

The code below
const s = "golang.go"
var a byte = 1 << len(s) / 128
The result of a is 4. However, after changing const s to var s as follows:
var s = "golang.go"
var a byte = 1 << len(s) / 128
The result of a is 0 now.
Here are some other test cases:
const s = "golang.go"
var a byte = 1 << len(s) / 128 // the result of a is 4
var b byte = 1 << len(s[:]) / 128 // the result of b is 0
var ss = "golang.go"
var aa byte = 1 << len(ss) / 128 // the result of aa is 0
var bb byte = 1 << len(ss[:]) / 128 // the result of bb is 0
It is weird that b is 0 when the length of s[:] is evaluated. I tried to understand it from the Go spec:
The expression len(s) is constant if s is a string constant. The expressions len(s) and cap(s) are constants if the type of s is an array or pointer to an array and the expression s does not contain channel receives or (non-constant) function calls
But I failed. Could someone explain it more clearly to me?
The difference is that when s is constant, the expression is interpreted and executed as a constant expression, using untyped integer type and resulting in int type. When s is a variable, the expression is interpreted and executed as a non-constant expression, using byte type.
Spec: Operators:
The right operand in a shift expression must have integer type or be an untyped constant representable by a value of type uint. If the left operand of a non-constant shift expression is an untyped constant, it is first implicitly converted to the type it would assume if the shift expression were replaced by its left operand alone.
The quoted part applies when s is a variable. The expression is a non-constant shift expression (1 << len(s)) because s is a variable (so len(s) is non-constant), and the left operand is an untyped constant (1). So 1 is converted to a type it would assume if the shift expression were replaced by its left operand alone:
var a byte = 1 << len(s) / 128
replaced to
var a byte = 1 / 128
In this variable declaration byte type will be used because that type is used for the variable a. So back to the original: byte(1) shifted left by 9 will be 0, dividing it by 128 will also be 0.
And when s is constant, int will be used, because Spec: Constant expressions says:
If the left operand of a constant shift expression is an untyped constant, the result is an integer constant; otherwise it is a constant of the same type as the left operand, which must be of integer type.
Here 1 will not be converted to byte; 1 << len(s) => 1 << 9 is 512, which divided by 128 is 4.
Constants in Go behave differently than you might expect. They are of arbitrary precision and untyped.
With const consts = "golang.go" the expression 1 << len(consts) / 128 is a constant expression and evaluated as a constant expression with arbitrary precision resulting in an untyped integer 4 which can be assigned to a byte resulting in a == 4.
With var vars = "golang.go" the expression 1 << len(vars) / 128 is no longer a constant expression but has to be evaluated with some concrete integer type. How this happens is defined in https://go.dev/ref/spec#Operators:
The right operand in a shift expression must have integer type or be an untyped constant representable by a value of type uint. If the left operand of a non-constant shift expression is an untyped constant, it is first implicitly converted to the type it would assume if the shift expression were replaced by its left operand alone.
The second sentence applies to your problem. The 1 is converted to "the type it would [read: will] assume". Spelled out, this is byte(1) << len(vars), which is 0.
https://go.dev/blog/constants
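To see both behaviors side by side, here is a small self-contained program (a sketch; the names cs and vs are mine):

package main

import "fmt"

const cs = "golang.go" // len(cs) is the constant 9

var vs = "golang.go" // len(vs) is not a constant expression

func main() {
    // Constant shift expression: evaluated with arbitrary precision,
    // 1<<9 = 512, 512/128 = 4, and 4 is representable as a byte.
    var a byte = 1 << len(cs) / 128

    // Non-constant shift expression: 1 is first converted to byte,
    // byte(1)<<9 overflows to 0, and 0/128 = 0.
    var b byte = 1 << len(vs) / 128

    fmt.Println(a, b) // prints: 4 0
}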

Swift: Why does this valid expression fail when put in ternary form?

I've written an expression that finds the largest of two variables and returns true if it is equal to or above a limit variable. They are all integers.
if max(num1, num2) >= limit {
    result = 3
} else {
    result = 2
}
This works fine, but if I try to put it in the more compact ternary format it is rejected with: 'could not find an overload for '>=' that accepts the supplied arguments'.
max(num1, num2) >= limit ? result = 3 : result = 2
I've tried putting the conditional in various bracket configurations but it still fails. Any ideas?
Further experimentation reveals that the problem is related to the limit var being declared as an implicitly unwrapped optional, although it is set in the init block:
var limit: Int!

init(num: Int) {
    limit = num
}
Many thanks.
The problem here is precedence; it should "work" if you had written
max(num1, num2) >= limit ? (result = 3) : (result = 2)
//                         ^              ^
Note the precedence of each operator:
?:  associativity: right, precedence: 100 (higherThan: AssignmentPrecedence)
=   associativity: right, precedence: 90
so a ? b : c = d will be evaluated as (a ? b : c) = d, since the ternary conditional operator has higher precedence than assignment.
Here, in max(num1, num2) >= limit ? result = 3 : result, the type of result = 3 is () and the type of result is Int, so there is a type mismatch between the two arms, which causes the error.
(However I'm not sure why the message talks about >=, please check if your num1 and num2 are also Int.)
That said, it is very unconventional (i.e. bad style) to put an assignment inside a ?:, typically you want to write this instead:
result = max(num1, num2) >= limit ? 3 : 2

Rational number denominator

public Rational(long numerator, long denominator) {
    long gcd = gcd(numerator, denominator);
    this.numerator = ((denominator > 0) ? 1 : -1) * numerator / gcd;
    this.denominator = Math.abs(denominator) / gcd;
}
Hello, I'm wondering about the 3rd line, where it says ((denominator > 0) ? 1 : -1) * numerator / gcd. What is this expression doing?
This format:
x = denominator > 0 ? 1 : -1
is similar to an if statement:
if denominator is greater than zero, x will be set to 1;
otherwise, x will be set to -1.
A more generalized form would be:
condition ? value_if_true : value_if_false
This form of expression exists in several languages, such as C, Java, Swift, Objective-C, ...
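Go, for instance, has no ternary operator, so the same logic has to be spelled out with an if. A minimal sketch of what the constructor above accomplishes (newRational and the gcd helper are my names, mirroring the Java snippet):

package main

import "fmt"

// gcd mirrors the gcd() call in the Java snippet: Euclid's algorithm
// on absolute values.
func gcd(a, b int64) int64 {
    if a < 0 {
        a = -a
    }
    if b < 0 {
        b = -b
    }
    for b != 0 {
        a, b = b, a%b
    }
    return a
}

// newRational reduces the fraction and moves the sign onto the
// numerator, keeping the denominator positive, which is exactly what
// the ternary in the Java constructor does.
func newRational(numerator, denominator int64) (num, den int64) {
    g := gcd(numerator, denominator)
    sign := int64(1)
    if denominator < 0 { // the ternary's job: pick 1 or -1
        sign = -1
    }
    num = sign * numerator / g
    den = denominator / g
    if den < 0 { // the equivalent of Math.abs(denominator) / gcd
        den = -den
    }
    return num, den
}

func main() {
    fmt.Println(newRational(6, -4)) // prints: -3 2
}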

The opposite of 2 ^ n

The function a = 2 ^ b can quickly be calculated for any b by doing a = 1 << b.
What about the other way round, getting the value of b for any given a? It should be relatively fast, so logs are out of the question. Anything that's not O(1) is also bad.
I'd be happy with "can't be done" too, if it's simply not possible to do without logs or a search type thing.
Build a look-up table. For 32-bit integers, there are only 32 entries so it is O(1).
Most architectures also have an instruction to find the position of the most significant bit of a number a, which is the value b. (gcc provides the __builtin_clz function for this.)
For a BigInt, it can be computed in O(log a) by repeatedly dividing by 2.
int b = -1;
while (a != 0) {
    a >>= 1;
    ++b;
}
For this sort of thing I usually refer to this page with bit hacks:
Bit Twiddling Hacks
For example:
Find the log base 2 of an integer with a lookup table:
static const char LogTable256[256] =
{
#define LT(n) n, n, n, n, n, n, n, n, n, n, n, n, n, n, n, n
    -1, 0, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3,
    LT(4), LT(5), LT(5), LT(6), LT(6), LT(6), LT(6),
    LT(7), LT(7), LT(7), LT(7), LT(7), LT(7), LT(7), LT(7)
};

unsigned int v; // 32-bit word to find the log of
unsigned r;     // r will be lg(v)
register unsigned int t, tt; // temporaries

if (tt = v >> 16)
{
    r = (t = tt >> 8) ? 24 + LogTable256[t] : 16 + LogTable256[tt];
}
else
{
    r = (t = v >> 8) ? 8 + LogTable256[t] : LogTable256[v];
}
There are also a couple of O(log(n)) algorithms given on that page.
Some architectures have a "count leading zeros" instruction. For example, on ARM:
MOV R0,#0x80 # load R0 with (binary) 10000000
CLZ R1,R0 # R1 = number of leading zeros in R0, i.e. 7
This is O(1).
Or you can write:
while ((a >>= 1) > 0) b++;
This is O(1) for a fixed-width integer. One could imagine this to be expanded to:
b = (((a >> 1) > 0) ? 1 : 0) + (((a >> 2) > 0) ? 1 : 0) + ... + (((a >> 31) > 0) ? 1 : 0);
With compiler optimization, once (a >> x) > 0 returns false, the rest won't be calculated. Also, comparing with 0 is faster than any other comparison. As for the bound: f(n) = O(g(n)) means f(n) <= k * g(n) for some constant k, and here k is at maximum 32 and g(n) is 1, so the loop is O(1).
Reference: Big O notation
But in case you were using BigInteger, my code example would look like:
int b = 0;
String numberS = "306180206916083902309240650087602475282639486413"
        + "866622577088471913520022894784390350900738050555138105"
        + "234536857820245071373614031482942161565170086143298589"
        + "738273508330367307539078392896587187265470464";
BigInteger a = new BigInteger(numberS);
while ((a = a.shiftRight(1)).compareTo(BigInteger.ZERO) > 0) b++;
System.out.println("b is: " + b);
If a is a double rather than an int then it will be represented as mantissa and exponent. The exponent is the part you are looking for, as this is the logarithm of the number.
If you can hack the binary representation then you can get the exponent out. Look up the IEEE standard to see where and how the exponent is stored.
For an integral value, if some method of getting the most significant bit position is not available then you can binary-search the bits for the upper-most 1 which is therefore O(log numbits). Doing this may well actually perform faster than converting to a double first.
In Java you can use Integer.numberOfLeadingZeros to compute the binary logarithm. It returns the number of leading zeros in the binary representation, so
floor(log2(x)) = 31 - numberOfLeadingZeros(x)
ceil(log2(x)) = 32 - numberOfLeadingZeros(x - 1)
It can't be done without testing the high bit, but most modern FPUs support log2 so all is not lost.
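In Go, the same operation is exposed by the math/bits package, which compiles down to a count-leading-zeros style instruction where the hardware has one. A short sketch:

package main

import (
    "fmt"
    "math/bits"
)

func main() {
    const b = 9
    a := 1 << b // a = 2^b = 512

    // bits.Len returns how many bits are needed to represent a,
    // so for a power of two, bits.Len(a) - 1 recovers b.
    fmt.Println(bits.Len(uint(a)) - 1) // prints: 9

    // Equivalently, via the leading-zeros count (cf. CLZ above):
    fmt.Println(63 - bits.LeadingZeros64(uint64(a))) // prints: 9
}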
