Method bounded by a const generic expression does not satisfy trait bound - matrix

I'm currently implementing a ray tracer following along the book "The Ray Tracer Challenge" by Jamis Buck.
I've arrived at the part where I have to implement a few methods on matrices, and since these matrices have a compile-time-known size, I chose to implement them using const generic expressions (which are still only available on the nightly channel):
#![feature(generic_const_exprs)]
The Matrix type is defined as the following:
#[derive(Debug)]
pub struct Matrix<const N: usize>(pub [[f64; N]; N]);
I implemented the following methods:
impl<const N: usize> Matrix<N> {
    fn submatrix(&self, index: Idx) -> Matrix<{ N - 1 }> {
        //...
    }

    fn minor(&self, index: Idx) -> f64
    where
        [(); N - 1]:,
    {
        let submatrix = self.submatrix(index);
        submatrix.determinant()
    }

    fn cofactor(&self, index: Idx) -> f64
    where
        [(); N - 1]:,
    {
        let minor = self.minor(index);
        //...
    }

    fn determinant(&self) -> f64
    where
        [(); N - 1]:,
    {
        //...
    }
}
... but I'm running into an issue with the submatrix method, which returns a Matrix<{ N - 1 }>.
When calling the determinant method on a submatrix, which is what we do in the minor method:
fn minor(&self, index: Idx) -> f64
where
    [(); N - 1]:,
{
    let submatrix: Matrix<{ N - 1 }> = self.submatrix(index);
    submatrix.determinant()
}
... even though the minor method is bounded by the following constraint:
where [(); N - 1]:,
... the compiler still complains and suggests to add the same where bound using this expression:
error: unconstrained generic constant
--> src/rt/matrix.rs:109:19
|
109 | submatrix.determinant()
| ^^^^^^^^^^^
|
= help: try adding a `where` bound using this expression: `where [(); N - 1]:`
note: required by a bound in `Matrix::<N>::determinant`
--> src/rt/matrix.rs:128:14
|
126 | fn determinant(&self) -> f64
| ----------- required by a bound in this
127 | where
128 | [(); N - 1]:,
| ^^^^^ required by this bound in `Matrix::<N>::determinant`
I tried to implement the minor method for Matrix<{ N - 1 }> but it does not seem to work either (or rather I have no idea how to do that, or if that's even possible):
impl<const N: usize> Matrix<{ N - 1 }> {
    fn minor(&self, index: Idx) -> f64 {
        let submatrix = self.submatrix(index);
        submatrix.determinant()
    }
}
... which gives the following error:
error[E0207]: the const parameter `N` is not constrained by the impl trait, self type, or predicates
--> src/rt/matrix.rs:138:12
|
138 | impl<const N: usize> Matrix<{ N - 1 }> {
| ^ unconstrained const parameter
|
= note: expressions using a const parameter must map each value to a distinct output value
= note: proving the result of expressions other than the parameter are unique is not supported
I'm wondering if what I'm trying to do here with the submatrix is even possible. It's not a big deal since I can define these methods for each possible N, which in my case are limited to 2_usize, 3_usize and 4_usize, but implementing these methods only for Matrix<N> would be a lot cleaner!

Warning: The generic_const_exprs feature is extremely unstable! Do not use it in production!
You call determinant() on submatrix, which is already a Matrix<{ N - 1 }> (returned from submatrix()), so you also need to restrict it with where [(); N - 1 - 1]:,. Note that the Rust compiler is not smart enough to understand that this is the same as where [(); N - 2]:,, nor is it able to conclude that if this bound holds then where [(); N - 1]:, holds as well. You have to write both: where [(); N - 1]:, [(); N - 1 - 1]:,:
fn minor(&self, index: Idx) -> f64
where
    [(); N - 1]:,
    [(); N - 1 - 1]:,
{
    let submatrix = self.submatrix(index);
    submatrix.determinant()
}

fn cofactor(&self, index: Idx) -> f64
where
    [(); N - 1]:,
    [(); N - 1 - 1]:,
{
    let minor = self.minor(index);
    //...
}


Rust sorting uses surprisingly few comparisons

I am currently learning Rust (using the Rust book), and one page mentions counting the number of times the sorting key was used while sorting an array. I modified the code in order to count this for arbitrary sizes, and here is the code:
fn main() {
    const MAX: i32 = 10000;
    for n in 1..MAX {
        let mut v: Vec<i32> = (1..n).collect();
        let mut ops = 0;
        v.sort_by(|x, y| {
            ops += 1;
            x.cmp(y)
        });
        if n - 2 >= 0 {
            assert_eq!(n - 2, ops);
        }
        // println!("A list of {n} elements is sorted in {ops} operations");
    }
}
However, it seems that in order to sort a vector of n elements, Rust only needs n-2 comparisons (the code above runs without panicking).
How can this be possible? Aren't sorts supposed to be O(n*log(n))?
Is it because Rust somehow "noticed" that my input vector was already sorted?
Even in that case, how can a vector of length 2 be sorted without any comparisons? Shouldn't it be at least n-1?
The biggest misconception you have, I think, is here:
fn main() {
    const SIZE: i32 = 5;
    let v: Vec<i32> = (1..SIZE).collect();
    println!("{}", v.len());
}
4
The range 1..SIZE does not include SIZE and contains SIZE-1 elements.
Further, it will already be sorted, so it's as simple as iterating through it once.
See here:
fn main() {
    const SIZE: i32 = 5;
    let mut v: Vec<i32> = (1..SIZE).collect();
    let mut ops = 0;
    v.sort_by(|x, y| {
        ops += 1;
        let result = x.cmp(y);
        println!(" - cmp {} vs {} => {:?}", x, y, result);
        result
    });
    println!("Total comparisons: {}", ops);
}
- cmp 4 vs 3 => Greater
- cmp 3 vs 2 => Greater
- cmp 2 vs 1 => Greater
Total comparisons: 3
it seems that in order to sort a vector of n elements, Rust only needs n-2 comparisons
That is incorrect. In order to sort an already sorted vector (which yours are), Rust needs n-1 comparisons. It doesn't detect that; it's just an inherent property of the merge sort implementation that Rust uses.
If it isn't already sorted, it will be more:
fn main() {
    let mut v: Vec<i32> = vec![2, 4, 1, 3];
    let mut ops = 0;
    v.sort_by(|x, y| {
        ops += 1;
        let result = x.cmp(y);
        println!(" - cmp {} vs {} => {:?}", x, y, result);
        result
    });
    println!("Total comparisons: {}", ops);
}
- cmp 3 vs 1 => Greater
- cmp 1 vs 4 => Less
- cmp 3 vs 4 => Less
- cmp 1 vs 2 => Less
- cmp 3 vs 2 => Greater
Total comparisons: 5
FYI sort_by:
pub fn sort_by<F>(&mut self, mut compare: F)
where
    F: FnMut(&T, &T) -> Ordering,
{
    merge_sort(self, |a, b| compare(a, b) == Less);
}
and it actually invokes merge_sort:
/// This merge sort borrows some (but not all) ideas from TimSort, which is described in detail
/// [here](https://github.com/python/cpython/blob/main/Objects/listsort.txt).
///
/// The algorithm identifies strictly descending and non-descending subsequences, which are called
/// natural runs. There is a stack of pending runs yet to be merged. Each newly found run is pushed
/// onto the stack, and then some pairs of adjacent runs are merged until these two invariants are
/// satisfied:
///
/// 1. for every `i` in `1..runs.len()`: `runs[i - 1].len > runs[i].len`
/// 2. for every `i` in `2..runs.len()`: `runs[i - 2].len > runs[i - 1].len + runs[i].len`
///
/// The invariants ensure that the total running time is *O*(*n* \* log(*n*)) worst-case.
#[cfg(not(no_global_oom_handling))]
fn merge_sort<T, F>(v: &mut [T], mut is_less: F)
how can a vector of length 2 be sorted without any comparisons? Shouldn't it at least be n-1?
(1..2) yields just one element (it starts at 1 and stops before 2). So when n == 2 in your code, note that the vector has length one.
Let me demonstrate what actually happens in merge_sort when the input is a slice of length 2 or less.
// MAX_INSERTION: 20
if len <= MAX_INSERTION {
    // if len is less than 2, it never calls the `is_less` closure, so no comparison is counted.
    if len >= 2 {
        for i in (0..len - 1).rev() {
            insert_head(&mut v[i..], &mut is_less); // <- go into `insert_head`.
        }
    }
    return;
}
fn insert_head<T, F>(v: &mut [T], is_less: &mut F)
where
    F: FnMut(&T, &T) -> bool,
{
    if v.len() >= 2 && is_less(&v[1], &v[0]) // <- here it uses the closure to make the comparison.
So if your input is shorter than 21 elements, it gets sorted in place via insertion sort to avoid allocations.
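Putting the off-by-one fix together: a sketch of the question's counting loop using an inclusive range so the vector really has n elements. The exact count depends on the standard library's sort implementation, so it only asserts the minimum any correct comparison sort must perform:

```rust
// Count the comparisons `sort_by` makes for a vector of exactly n elements.
fn count_comparisons(n: i32) -> i32 {
    let mut v: Vec<i32> = (1..=n).collect(); // inclusive range: n elements, not n - 1
    let mut ops = 0;
    v.sort_by(|x, y| {
        ops += 1;
        x.cmp(y)
    });
    ops
}

fn main() {
    for n in 2..100 {
        // Any correct comparison sort needs at least n - 1 comparisons to
        // confirm that n distinct elements are in order.
        assert!(count_comparisons(n) >= n - 1);
    }
    println!("ok");
}
```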

Calculate sum of chars in parallel [duplicate]

This question already has answers here:
How can I pass a reference to a stack variable to a thread?
(1 answer)
How can I sum up using concurrency from 1 to 1000000 with Rust?
(1 answer)
Closed 3 years ago.
I have an array of strings, and I would like to count the total number of chars using threads for parallelisation (the original problem is not this, but it is similar).
use std::thread;

pub fn frequency<'a>(input: &'a [&'a str], worker_count: usize) -> usize {
    let handlers: Vec<thread::JoinHandle<usize>> = input
        .chunks(worker_count)
        .map(|chunk| thread::spawn(calculate(chunk)))
        .collect();
    let hashes = handlers.into_iter().map(|handler| handler.join().unwrap());
    let mut sum = 0;
    for h in hashes {
        sum += h
    }
    sum
}

fn calculate<'a>(input: &'a [&'a str]) -> impl Fn() -> usize + 'a {
    move || input.iter().map(|s| s.len()).sum()
}
The compiler tells me this:
error[E0495]: cannot infer an appropriate lifetime for autoref due to conflicting requirements
--> src/lib.rs:5:10
|
5 | .chunks(worker_count)
| ^^^^^^
|
note: first, the lifetime cannot outlive the lifetime 'a as defined on the function body at 3:18...
--> src/lib.rs:3:18
|
3 | pub fn frequency<'a>(input: &'a [&'a str], worker_count: usize) -> usize {
| ^^
note: ...so that reference does not outlive borrowed content
--> src/lib.rs:4:52
|
4 | let handlers: Vec<thread::JoinHandle<usize>> = input
| ^^^^^
= note: but, the lifetime must be valid for the static lifetime...
note: ...so that the type `impl std::ops::Fn<()>` will meet its required lifetime bounds
--> src/lib.rs:6:22
|
6 | .map(|chunk| thread::spawn(calculate(chunk)))
| ^^^^^^^^^^^^^
I've tried to remove all lifetimes, use different lifetimes for str and the slice, and explicitly invoke calculate::<'a> but none of those solutions compile.
The input lifetime is the same everywhere: frequency declares 'a that is used in calculate, so the closure is bound to 'a because the captured variables live for 'a.
Where am I wrong?
NB: I would like not to use 'static.
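One way to avoid 'static entirely is scoped threads: std::thread::scope (stable since Rust 1.63) guarantees the workers finish before the borrow of input ends. A sketch (note also that chunks(worker_count) makes chunks of worker_count elements each, not worker_count chunks):

```rust
use std::thread;

// Sketch: scoped threads may borrow `input`, so no 'static bound is needed.
pub fn frequency(input: &[&str], worker_count: usize) -> usize {
    thread::scope(|s| {
        let handles: Vec<_> = input
            // Caution: this is `worker_count` *elements per chunk*, mirroring
            // the question; splitting into `worker_count` chunks needs the
            // chunk size computed from `input.len()` instead.
            .chunks(worker_count.max(1))
            .map(|chunk| s.spawn(move || chunk.iter().map(|w| w.len()).sum::<usize>()))
            .collect();
        // All handles are joined before the scope ends and the borrow expires.
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

fn main() {
    let words = ["alpha", "beta", "gamma"]; // 5 + 4 + 5 chars
    assert_eq!(frequency(&words, 2), 14);
    println!("{}", frequency(&words, 2));
}
```

With plain thread::spawn this cannot work, because the spawned closure may outlive frequency's stack frame; that is exactly what the 'static requirement in the error is protecting against.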

Why does a range that starts at a negative number not iterate?

I have just started to learn Rust. During my first steps with this language, I found strange behaviour when an iteration is performed inside main or in another function, as in the following example:
fn myfunc(x: &Vec<f64>) {
    let n = x.len();
    println!(" n: {:?}", n);
    for i in -1 .. n {
        println!(" i: {}", i);
    }
}

fn main() {
    for j in -1 .. 6 {
        println!("j: {}", j);
    }
    let field = vec![1.; 6];
    myfunc(&field);
}
While the loop in main is correctly displayed, nothing is printed for the loop inside myfunc and I get following output:
j: -1
j: 0
j: 1
j: 2
j: 3
j: 4
j: 5
n: 6
What is the cause of this behaviour?
Type inference is causing both of the numbers in your range to be usize, which cannot represent negative numbers. Thus, the range is from usize::MAX to n, which never has any members.
To find this out, I used a trick to print out the types of things:
let () = -1 .. x.len();
Which has this error:
error: mismatched types:
expected `core::ops::Range<usize>`,
found `()`
(expected struct `core::ops::Range`,
found ()) [E0308]
let () = -1 .. x.len();
^~
Diving into the details, slice::len returns a usize. Your -1 is an untyped integral value, which will conform to fit whatever context it needs (if there's nothing for it to conform to, it will fall back to an i32).
In this case, it's as if you actually typed (-1 as usize)..x.len().
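The wraparound is easy to see by forcing the cast explicitly (a small sketch):

```rust
fn main() {
    // `-1` coerced into a usize context wraps around to usize::MAX...
    let start = -1i64 as usize;
    assert_eq!(start, usize::MAX);
    // ...so the "range" is usize::MAX..6, which contains no members:
    assert_eq!((start..6).count(), 0);
    println!("range is empty: {}", (start..6).count() == 0);
}
```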
The good news is that you probably don't want to start at -1 anyway. Slices are zero-indexed:
fn myfunc(x: &[f64]) {
    let n = x.len();
    println!(" n: {:?}", n);
    for i in 0..n {
        println!(" i: {}", i);
    }
}
Extra good news is that this annoyance was fixed in the newest versions of Rust. It will cause a warning and then eventually an error:
warning: unary negation of unsigned integers will be feature gated in the future
for i in -1 .. n {
^~
Also note that you should never accept a &Vec<T> as a parameter. Always use a &[T] as it's more flexible and you lose nothing.
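If a signed start really is needed, one option (a sketch, using a hypothetical helper signed_indices) is to do the range arithmetic in a signed type and cast the length:

```rust
// Cast the slice length to a signed type so the range can start below zero.
fn signed_indices(x: &[f64]) -> Vec<i64> {
    (-1..x.len() as i64).collect()
}

fn main() {
    let field = vec![1.0; 6];
    // -1, 0, 1, 2, 3, 4, 5
    assert_eq!(signed_indices(&field), vec![-1, 0, 1, 2, 3, 4, 5]);
    println!("{:?}", signed_indices(&field));
}
```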

What is the exact definition of the for loop in Rust?

I'm coming from a C (and to a lesser extent, C++) background. I wrote the following code snippet:
fn main() {
    let my_array = [1, 2, 3];
    let print_me = |j| println!("= {}", j);
    for k in my_array.iter() {
        print_me(k);
    }
}
This compiled and ran as expected, but then I specified the type of the argument passed to the closure print_me thus:
fn main() {
    let my_array = [1, 2, 3];
    let print_me = |j: i32| println!("= {}", j);
    for k in my_array.iter() {
        print_me(k);
    }
}
I got a compilation error:
error[E0308]: mismatched types
--> src/main.rs:6:22
|
6 | print_me(k);
| ^
| |
| expected i32, found &{integer}
| help: consider dereferencing the borrow: `*k`
|
= note: expected type `i32`
found type `&{integer}`
Now this confused me until I changed k to &k in the for statement, which worked fine:
fn main() {
    let my_array = [1, 2, 3];
    let print_me = |j: i32| println!("= {}", j);
    for &k in my_array.iter() {
        print_me(k);
    }
}
It seems that I misunderstood the for syntax itself -- or maybe the exact workings of an iterator -- or maybe the usage syntax of a reference vis-a-vis a pointer [which are related but distinct in C++].
In the construct for A in B { C1; C2; ... Cn }, what exactly are A and B supposed to be?
First of all, here's a link to the definition of for in the reference.
To summarise, B is any expression which evaluates to something that can be converted into a value that implements the Iterator<T> trait, whilst A is an irrefutable pattern that binds values of type T.
In your specific case, slice::iter returns an Iter<i32>, which implements Iterator<Item = &i32>. That is, it doesn't yield i32s, it yields &i32s.
Thus, in both the first and second examples, k is actually binding to &i32s, not i32s. When you specified the type of the closure, you were actually specifying the wrong type. The reason the final example works is because A is a pattern, not a variable name. What &k is actually doing is "de-structuring" the &i32, binding the i32 part to a variable named k.
The "irrefutable" part simply means that the pattern must always work. For example, you can't do for Some(x) in thingy where thingy implements Iterator<Option<_>>; Some(x) would not necessarily be valid for every element in the iterator; thus, it's a refutable pattern.
Many iterators actually return a reference rather than a value. To be sure, you have to check the return type of .iter(), which should be of the form Iterator<Item = X>: X will be the type of the variable returned.
So here:
fn main() {
    let my_array = [1, 2, 3];
    let print_me = |j: i32| println!("= {}", j);
    for k in my_array.iter() {
        print_me(k);
    }
}
This X is &i32 (a reference to i32), and therefore k has type &i32.
This is why, when calling print_me, there is an error: &i32 is passed where i32 is expected.
There are multiple possible fixes here:
specify a different type to print_me:
let print_me = |j: &i32| println!("= {}", j);
dereference the value of k:
print_me(*k);
change the type of k by destructuring in the loop:
for &k in my_array.iter() { ... }
The destructuring occurs because for .. in accepts an irrefutable pattern, so you can pattern match like you would do in a match expression, except that the variable's type has to match (otherwise you get a compiler time error).
To better illustrate it, we can use a slightly more complicated example:
fn main() {
    let my_array = [(1, 2), (2, 3), (3, 4)];
    let print_me = |a: i32, b: i32| println!("= {} {}", a, b);
    for &(j, k) in my_array.iter() {
        print_me(j, k)
    }
}
The type of my_array is [(i32, i32)]: an array of tuples of 2 i32. The result of .iter() is therefore of type Iterator<Item = &(i32, i32)>: an iterator to a reference to a tuple of 2 i32 aka &(i32, i32).
When we use the irrefutable pattern &(j, k) what happens is that we destructure the tuple so that:
the first element binds to j (inferred to be of type i32; this only works because i32 is Copy)
the second element binds to k (inferred to be of type i32)
j and k thus become temporary copies of the i32 inside this element.

SML finding a sum of squares/halves using other functions

So I want to find the sum of applying a function f to the first n integers, where the sum of applying f to the first n integers is f applied to n, plus the sum of applying f to the first n-1 integers, as in:
- fun inc x = x+1;
> val inc = fn : int -> int
- sumF inc 3;
> 9 : int    (* i.e. inc 3 + inc 2 + inc 1 + 0 *)
and my sumF looks like this now:
- fun sumF 0 = 0 | sumF n = sumF(n) + sumF(n-1);
val sumF = fn : int -> int
- sumF inc 3;
stdIn:15.1-15.11 Error: operator and operand don't agree [tycon mismatch]
operator domain: int
operand: int -> int
in expression:
sumF inc
As you can see I get an error, so maybe anyone knows how to fix it? Because later on I want to write sumSq (sumSq n = (n*n) + sumSq(n-1)) and sumHalf (sumHalf n = (n/2) + sumHalf(n-1)) using sumF.
Thanks!
Here is a possible solution:
fun sumF f 0 = 0
  | sumF f n = f n + sumF f (n - 1)
Your problem is that you don't actually use the inc function in the definition of sumF. You define sumF with only one argument of type int, but then you call it with "two arguments": the call sumF inc 3 is parsed as ((sumF inc) 3), and since the type of inc is int -> int while the function expects an argument of type int, you get the error. Besides this, your function has an additional problem: it creates an infinite loop, since you call sumF n in the definition of sumF n.
You can call the function above as
- fun inc x = x + 1;
val inc = fn : int -> int
- sumF inc 3;
val it = 9 : int
or a bit shorter with anonymous function:
- sumF (fn x => x + 1) 3;
val it = 9 : int
Regards, Špela
