I am trying to make a program that generates random numbers from 0 to 255 inclusive. It seems so simple! I did this:
extern crate rand;
use rand::Rng;
fn main() {
    println!("Guess the number!");
    let random_number: u8 = rand::thread_rng().gen_range(0, 255);
    println!("Your random number is {}", random_number);
}
This works fine, but the problem with this approach is that the number 255 will not be included:
The gen_range method takes two numbers as arguments and generates a random number between them. It’s inclusive on the lower bound but exclusive on the upper bound.
When I try to do this:
let random_number: u8 = rand::thread_rng().gen_range(0, 256);
Rust will generate a warning because u8 only accepts values from 0 to 255.
warning: literal out of range for u8
--> src/main.rs:6:61
|
6 | let random_number: u8 = rand::thread_rng().gen_range(0, 256);
| ^^^
|
= note: #[warn(overflowing_literals)] on by default
How do I work around this without having to change the type of the random_number variable?
Use the gen method instead. This method will generate a random value from the whole set of possible values for the specified type.
extern crate rand;
use rand::Rng;
fn main() {
    println!("Guess the number!");
    let random_number: u8 = rand::thread_rng().gen();
    println!("Your random number is {}", random_number);
}
I stumbled on this same issue and wondered why I could not use .gen_range(0, 256); for a u8.
The .gen_range() function signature looks like this (docs):
fn gen_range<T: SampleUniform, B1, B2>(&mut self, low: B1, high: B2) -> T where
    B1: SampleBorrow<T> + Sized,
    B2: SampleBorrow<T> + Sized,
The arguments we are trying to use are 0 and 256, which enter the function as integer literals; the default integer type is i32. However, because this line
let random_number: u8 = rand::thread_rng().gen_range(0, 256);
assigns the result of .gen_range() to a u8, the compiler tries to resolve B1 and B2 as u8, making the generics in the function call evaluate like this:
gen_range<u8>(B1, B2) -> u8 where
    B1: SampleBorrow<u8> + Sized,
    B2: SampleBorrow<u8> + Sized,
256 will not fit into a u8, causing the compiler to emit the overflowing_literals warning.
The correct way to generate a random number spanning the full range of a primitive type is the one pointed out above: let the explicit type annotation define the generic constraints.
Example:
use rand::Rng;
let random_with_range: u8 = rand::thread_rng().gen();
Related
Why does the code below fail to compile?
package main

import (
    "fmt"
    "unsafe"
)

var x int = 1

const (
    ONE     int = 1
    MIN_INT int = ONE << (unsafe.Sizeof(x)*8 - 1)
)

func main() {
    fmt.Println(MIN_INT)
}
I get an error:
main.go:12: constant 2147483648 overflows int
The statement is correct: 2147483648 does overflow int (on a 32-bit architecture). But the shift operation should result in a negative value, i.e. -2147483648.
But the same code works if I change the constants into variables, and I get the expected output:
package main

import (
    "fmt"
    "unsafe"
)

var x int = 1

var (
    ONE     int = 1
    MIN_INT int = ONE << (unsafe.Sizeof(x)*8 - 1)
)

func main() {
    fmt.Println(MIN_INT)
}
There is a difference in evaluation between constant and non-constant expressions that arises from constants being exact:
Numeric constants represent exact values of arbitrary precision and do not overflow.
Typed constant expressions cannot overflow; if the result cannot be represented by its type, it's a compile-time error (this can be detected at compile-time).
The same thing does not apply to non-constant expressions, as this can't be detected at compile-time (it could only be detected at runtime). Operations on variables can overflow.
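As a quick illustration of that last point, here is a minimal sketch showing a variable silently wrapping around at runtime, while the same operation written as a constant expression would be rejected at compile time:
package main

import (
    "fmt"
    "math"
)

func main() {
    var n int32 = math.MaxInt32 // 2147483647, the largest value an int32 can hold
    n++                         // non-constant operation: wraps around at runtime, no error
    fmt.Println(n)              // prints -2147483648

    // const c int32 = math.MaxInt32 + 1 // constant expression: compile-time error
}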
In your first example ONE is a typed constant with type int. This constant expression:
ONE << (unsafe.Sizeof(x)*8 - 1)
is a constant shift expression, to which the following applies (Spec: Constant expressions):
If the left operand of a constant shift expression is an untyped constant, the result is an integer constant; otherwise it is a constant of the same type as the left operand, which must be of integer type.
So the result of the shift expression must fit into an int because this is a constant expression; but since it doesn't, it's a compile-time error.
In your second example ONE is not a constant, it's a variable of type int. So the shift expression here may, and will, overflow, resulting in the expected negative value.
Notes:
Should you change ONE in the second example to a constant instead of a variable, you'd get the same error (the expression in the initializer would again be a constant expression). Should you change ONE to a variable in the first example, it wouldn't compile, because variables cannot be used in constant expressions (and the initializer of a constant must be a constant expression).
Constant expressions to find min-max values
You may use the following solution which yields the max and min values of uint and int types:
const (
    MaxUint = ^uint(0)
    MinUint = 0
    MaxInt  = int(MaxUint >> 1)
    MinInt  = -MaxInt - 1
)

func main() {
    fmt.Printf("uint: %d..%d\n", MinUint, MaxUint)
    fmt.Printf("int: %d..%d\n", MinInt, MaxInt)
}
Output (try it on the Go Playground):
uint: 0..4294967295
int: -2147483648..2147483647
The logic behind it lies in the Spec: Constant expressions:
The mask used by the unary bitwise complement operator ^ matches the rule for non-constants: the mask is all 1s for unsigned constants and -1 for signed and untyped constants.
So the typed constant expression ^uint(0) is of type uint and is the max value of uint: it has all its bits set to 1. Given that integers are represented using 2's complement, shifting this to the right by 1 gives the max int value, from which the min int value is -MaxInt - 1 (the -1 accounts for the 0 value).
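To make the bit patterns easier to follow, here is a small sketch applying the same trick to the fixed-width 8-bit types:
package main

import "fmt"

const (
    MaxUint8 = ^uint8(0)           // 1111 1111 = 255
    MaxInt8  = int8(MaxUint8 >> 1) // 0111 1111 = 127
    MinInt8  = -MaxInt8 - 1        //             -128
)

func main() {
    fmt.Println(MaxUint8, MaxInt8, MinInt8) // 255 127 -128
}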
Reasoning for the different behavior
Why is there no overflow for constant expressions and overflow for non-constant expressions?
The latter is easy: in most other (programming) languages there is overflow. So this behavior is consistent with other languages and it has its benefits.
The real question is the first: why isn't overflow allowed for constant expressions?
Constants in Go are more than values of typed variables: they represent exact values of arbitrary precision. Sticking with the word exact: if you have a value that you want to assign to a typed constant, allowing overflow and ending up with a completely different value doesn't really live up to exact.
Moreover, this type checking and the disallowing of overflow can catch mistakes like this one:
type Char byte
var c1 Char = 'a' // OK
var c2 Char = '世' // Compile-time error: constant 19990 overflows Char
What happens here? var c1 Char = 'a' works because 'a' is a rune constant, rune is an alias for int32, and 'a' has the numeric value 97, which fits into byte's valid range (0..255).
But var c2 Char = '世' results in a compile-time error because the rune '世' has the numeric value 19990, which doesn't fit into a byte. If overflow were allowed, your code would compile and assign the numeric value 22 ('\x16') to c2, but obviously this wasn't your intent. By disallowing overflow this mistake is easily caught, and at compile time.
To verify the results:
var c1 Char = 'a'
fmt.Printf("%d %q %c\n", c1, c1, c1)
// var c2 Char = '世' // Compile-time error: constant 19990 overflows Char
r := '世'
var c2 Char = Char(r)
fmt.Printf("%d %q %c\n", c2, c2, c2)
Output (try it on the Go Playground):
97 'a' a
22 '\x16'
To read more about constants and their philosophy, read the blog post: The Go Blog: Constants
And a couple more related and / or interesting questions (and answers):
Golang: on-purpose int overflow
How does Go perform arithmetic on constants?
Find address of constant in go
Why do these two float64s have different values?
How to change a float64 number to uint64 in a right way?
Writing powers of 10 as constants compactly
I cannot understand how, in Go, 1<<s returns 0 if var s uint = 33,
but 1<<33 returns 8589934592.
How does a shift operation end up with a value of 0?
I'm reading the language specification and am stuck on this section:
https://golang.org/ref/spec#Operators
Specifically, this paragraph from the docs:
"The right operand in a shift expression must have unsigned integer
type or be an untyped constant representable by a value of type uint.
If the left operand of a non-constant shift expression is an untyped
constant, it is first implicitly converted to the type it would assume
if the shift expression were replaced by its left operand alone."
An example from the official Go docs:
var s uint = 33
var i = 1<<s // 1 has type int
var j int32 = 1<<s // 1 has type int32; j == 0
var k = uint64(1<<s) // 1 has type uint64; k == 1<<33
Update:
Another very related question, with an example:
package main

import (
    "fmt"
)

func main() {
    v := int16(4336)
    fmt.Println(int8(v))
}
This program returns -16.
How does the number 4336 become -16 when converting from int16 to int8?
If you have this:
var s uint = 33
fmt.Println(1 << s)
Then the quoted part applies:
If the left operand of a non-constant shift expression is an untyped constant, it is first implicitly converted to the type it would assume if the shift expression were replaced by its left operand alone.
Because s is not a constant (it's a variable), 1 << s is a non-constant shift expression. And the left operand is 1, which is an untyped constant (e.g. int(1) would be a typed constant), so it is converted to the type it would get if the expression were simply 1 instead of 1 << s:
fmt.Println(1)
In the above, the untyped constant 1 is converted to int, because that is its default type. The default type of a constant is given in Spec: Constants:
An untyped constant has a default type which is the type to which the constant is implicitly converted in contexts where a typed value is required, for instance, in a short variable declaration such as i := 0 where there is no explicit type. The default type of an untyped constant is bool, rune, int, float64, complex128 or string respectively, depending on whether it is a boolean, rune, integer, floating-point, complex, or string constant.
And the result of the above is architecture dependent: if int is 32 bits, it will be 0 (shifting the 1 bit 33 places shifts it out of a 32-bit int); if int is 64 bits, it will be 8589934592.
On the Go Playground, the size of int is 32 bits (4 bytes). See this example:
fmt.Println("int size:", unsafe.Sizeof(int(0)))
var s uint = 33
fmt.Println(1 << s)
fmt.Println(int32(1) << s)
fmt.Println(int64(1) << s)
The above outputs (try it on the Go Playground):
int size: 4
0
0
8589934592
If I run the above app on my 64-bit computer, the output is:
int size: 8
8589934592
0
8589934592
Also see The Go Blog: Constants for how constants work in Go.
Note that if you write 1 << 33, that is not the same: it is not a non-constant shift expression, which is what your quote applies to ("the left operand of a non-constant shift expression"). 1 << 33 is a constant shift expression, which is evaluated exactly, in "constant space", and the result is then converted to int; on a platform where int is 32 bits wide it does not fit, hence the compile-time error. It works with variables, because variables can overflow. Constants do not overflow:
Numeric constants represent exact values of arbitrary precision and do not overflow.
See How does Go perform arithmetic on constants?
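As a small sketch of how constant shift expressions behave (assuming a build target where int is 64 bits wide, so the first Println compiles):
package main

import "fmt"

func main() {
    // Constant shift expressions are computed exactly, with arbitrary precision,
    // and only then converted to the target type.
    fmt.Println(1 << 33) // fine where int is 64 bits; compile-time error where int is 32 bits

    var k int64 = 1 << 33 // always fine: the exact constant 8589934592 fits in int64
    fmt.Println(k)

    // var j int32 = 1 << 33 // compile-time error: constant 8589934592 overflows int32
}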
Update:
Answering your addition: converting from int16 to int8 simply keeps the lowest 8 bits. And integers are represented using the 2's complement format, where the highest bit is 1 if the number is negative.
This is detailed in Spec: Conversions:
When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended. It is then truncated to fit in the result type's size. For example, if v := uint16(0x10F0), then uint32(int8(v)) == 0xFFFFFFF0. The conversion always yields a valid value; there is no indication of overflow.
So when you convert an int16 value to int8, if the source number has a 1 in bit position 7 (the 8th bit), the result will be negative even if the source wasn't negative. Similarly, if the source has a 0 at bit position 7, the result will be non-negative even if the source is negative.
See this example:
for _, v := range []int16{4336, -129, 8079} {
    fmt.Printf("Source : %v\n", v)
    fmt.Printf("Source hex: %4x\n", uint16(v))
    fmt.Printf("Result hex: %4x\n", uint8(int8(v)))
    fmt.Printf("Result : %4v\n", int8(v))
    fmt.Println()
}
Output (try it on the Go Playground):
Source : 4336
Source hex: 10f0
Result hex: f0
Result : -16
Source : -129
Source hex: ff7f
Result hex: 7f
Result : 127
Source : 8079
Source hex: 1f8f
Result hex: 8f
Result : -113
See related questions:
When casting an int64 to uint64, is the sign retained?
Format printing the 64bit integer -1 as hexadecimal deviates between golang and C
You're building and running the program in 32-bit mode (the Go Playground?). There, int is 32 bits wide and behaves the same as int32.
In the Go color package, there is a method to get the r, g, b, a values from an RGBA object:
func (c RGBA) RGBA() (r, g, b, a uint32) {
    r = uint32(c.R)
    r |= r << 8
    g = uint32(c.G)
    g |= g << 8
    b = uint32(c.B)
    b |= b << 8
    a = uint32(c.A)
    a |= a << 8
    return
}
If I were to implement this simple function, I would just write this:
func (c RGBA) RGBA() (r, g, b, a uint32) {
    r = uint32(c.R)
    g = uint32(c.G)
    b = uint32(c.B)
    a = uint32(c.A)
    return
}
What's the reason r |= r << 8 is used?
From the excellent "The Go image package" blog post:
[...] the channels have a 16-bit effective range: 100% red is represented by
RGBA returning an r of 65535, not 255, so that converting from CMYK or
YCbCr is not as lossy. Third, the type returned is uint32, even though
the maximum value is 65535, to guarantee that multiplying two values
together won't overflow.
and
Note that the R field of an RGBA is an 8-bit alpha-premultiplied color in the range [0, 255]. RGBA satisfies the Color interface by multiplying that value by 0x101 to generate a 16-bit alpha-premultiplied color in the range [0, 65535]
So if we look at the bit representation of a color with the value c.R = 10101010, then this operation
r = uint32(c.R)
r |= r << 8
effectively copies the low byte into the byte above it:
00000000000000000000000010101010 (r)
| 00000000000000001010101000000000 (r << 8)
--------------------------------------
00000000000000001010101010101010 (r |= r << 8)
This is equivalent to a multiplication with the factor 0x101 and distributes all 256 possible values evenly across the range [0, 65535].
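A quick way to convince yourself of both claims is to check every possible byte value; here is a short illustrative sketch:
package main

import "fmt"

func main() {
    // For every 8-bit value, duplicating the low byte into the next byte
    // is the same as multiplying by 0x101.
    for v := uint32(0); v <= 255; v++ {
        if v|v<<8 != v*0x101 {
            fmt.Println("mismatch at", v)
        }
    }
    // The endpoints of [0, 255] map to the endpoints of [0, 65535].
    fmt.Println(0*0x101, 128*0x101, 255*0x101) // 0 32896 65535
}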
The color.RGBA type implements the RGBA method to satisfy the color.Color interface:
type Color interface {
    // RGBA returns the alpha-premultiplied red, green, blue and alpha values
    // for the color. Each value ranges within [0, 0xffff], but is represented
    // by a uint32 so that multiplying by a blend factor up to 0xffff will not
    // overflow.
    //
    // An alpha-premultiplied color component c has been scaled by alpha (a),
    // so has valid values 0 <= c <= a.
    RGBA() (r, g, b, a uint32)
}
Now the RGBA type represents the colour channels with the uint8 type, giving a range of [0, 0xff]. Simply converting these values to uint32 would not extend the range up to [0, 0xffff].
An appropriate conversion would be something like:
r = uint32((float64(c.R) / 0xff) * 0xffff)
However, they want to avoid floating-point arithmetic. Luckily 0xffff / 0xff is exactly 0x0101, so we can simplify the expression (ignoring the type conversions for now):
r = c.R * 0x0101
= c.R * 0x0100 + c.R
= (c.R << 8) + c.R # multiply by power of 2 is equivalent to shift
= (c.R << 8) | c.R # equivalent, since bottom 8 bits of first operand are 0
And that's essentially what the code in the standard library is doing.
Converting a value in the range 0 to 255 (an 8-bit RGB component) to a value in the range 0 to 65535 (a 16-bit RGB component) is done by multiplying the 8-bit value by 65535/255; 65535/255 is exactly 257, which is hex 101, so multiplying a one-byte value by 65535/255 can be done by shifting that byte left 8 bits and ORing the result with the original value.
(There's nothing Go-specific about this; similar tricks are done elsewhere, in other languages, when converting 8-bit RGB/RGBA components to 16-bit RGB/RGBA components.)
To convert from 8- to 16-bits per RGB component, copy the byte into the high byte of the 16-bit value. e.g., 0x03 becomes 0x0303, 0xFE becomes 0xFEFE, so that the 8-bit values 0 through 255 (0xFF) produce 16-bit values 0 to 65,535 (0xFFFF) with an even distribution of values.
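For instance, a tiny sketch printing a few of these byte-duplicating conversions:
package main

import "fmt"

func main() {
    // Copy each 8-bit value into both bytes of the 16-bit result.
    for _, b := range []uint16{0x00, 0x03, 0xFE, 0xFF} {
        fmt.Printf("0x%02X -> 0x%04X\n", b, b<<8|b)
    }
}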
I'm new to Rust and looking to understand concepts like borrowing. I'm trying to create a simple two-dimensional array using standard input. The code:
use std::io;

fn main() {
    let mut values = [["0"; 6]; 6]; // 6 * 6 array
    // iterate 6 times for user input
    for i in 0..6 {
        let mut outputs = String::new();
        io::stdin().read_line(&mut outputs).expect(
            "failed to read line",
        );
        // read a space-separated list of 6 numbers, e.g.: 5 7 8 4 3 9
        let values_itr = outputs.trim().split(' ');
        let mut j = 0;
        for (_, value) in values_itr.enumerate() {
            values[i][j] = value;
            j += 1;
        }
    }
}
This won't compile because the outputs variable doesn't live long enough:
error[E0597]: `outputs` does not live long enough
--> src/main.rs:20:5
|
14 | let values_itr = outputs.trim().split(' ');
| ------- borrow occurs here
...
20 | }
| ^ `outputs` dropped here while still borrowed
21 | }
| - borrowed value needs to live until here
How can I get the iterated values out of the block into values array?
split() gives you substrings (string slices) borrowed from the original string, and here the original string is outputs, which is declared inside the loop body.
The string slices can't outlive the scope of outputs: when a loop iteration ends, outputs is deallocated.
Since values is longer lived, the slices can't be stored there.
We can't borrow slices of outputs across a modification of outputs. So even if the String outputs itself was defined before values, we couldn't easily put the string slices from .split() into values; modifying the string (reading into it) invalidates the slices.
A solution needs to either
Use a nested array of String, and when you assign an element from the split iterator, make a String from the &str using .to_string(). I would recommend this solution. (However, an array of String is not as easy to work with; this may already be a reason to use Vec instead.) [1]
Read all input before constructing a nested array of &str that borrows from the input String. This is good if the nested array is something that you only need temporarily.
[1]: You can use something like vec![vec![String::new(); 6]; 6] instead.
This answer was moved from the question, where it solved the OP's needs.
use std::io;

fn main() {
    let mut values = vec![vec![String::new(); 6]; 6];
    for i in 0..6 {
        let mut outputs = String::new();
        io::stdin().read_line(&mut outputs)
            .expect("failed to read line");
        let values_itr = outputs.trim().split(' ');
        let mut j = 0;
        for (_, value) in values_itr.enumerate() {
            values[i][j] = value.to_string();
            j += 1;
        }
    }
}
I have just started to learn Rust. During my first steps with this language, I found some strange behaviour when an iteration is performed inside main versus in another function, as in the following example:
fn myfunc(x: &Vec<f64>) {
    let n = x.len();
    println!(" n: {:?}", n);
    for i in -1 .. n {
        println!(" i: {}", i);
    }
}

fn main() {
    for j in -1 .. 6 {
        println!("j: {}", j);
    }
    let field = vec![1.; 6];
    myfunc(&field);
}
While the loop in main is displayed correctly, nothing is printed for the loop inside myfunc, and I get the following output:
j: -1
j: 0
j: 1
j: 2
j: 3
j: 4
j: 5
n: 6
What is the cause of this behaviour?
Type inference is causing both of the numbers in your range to be usize, which cannot represent negative numbers. Thus, the range is from usize::MAX to n, which never has any members.
To find this out, I used a trick to print out the types of things:
let () = -1 .. x.len();
Which has this error:
error: mismatched types:
expected `core::ops::Range<usize>`,
found `()`
(expected struct `core::ops::Range`,
found ()) [E0308]
let () = -1 .. x.len();
^~
Diving into the details, slice::len returns a usize. Your -1 is an untyped integral value, which will conform to fit whatever context it needs (if there's nothing for it to conform to, it will fall back to an i32).
In this case, it's as if you actually typed (-1 as usize)..x.len().
The good news is that you probably don't want to start at -1 anyway. Slices are zero-indexed:
fn myfunc(x: &[f64]) {
    let n = x.len();
    println!(" n: {:?}", n);
    for i in 0..n {
        println!(" i: {}", i);
    }
}
The extra good news is that this annoyance was addressed in newer versions of Rust. It causes a warning and will eventually be an error:
warning: unary negation of unsigned integers will be feature gated in the future
for i in -1 .. n {
^~
Also note that you should never accept a &Vec<T> as a parameter. Always use a &[T] as it's more flexible and you lose nothing.