What does :: mean in Rust? - syntax

What does the :: syntax in Rust, as seen here, mean:
fn chunk(n: uint, idx: uint) -> uint {
    let sh = uint::BITS - (SHIFT * (idx + 1));
    (n >> sh) & MASK
}
In languages like Haskell, :: is a type annotation, but here the compiler already has an annotation of that value's type, so it seems more likely to be a type cast.

Please review Appendix B: Operators and Symbols of The Rust Programming Language.
In this case, the double colon (::) is the path separator. Paths are made up of crates, modules, and items.
The full path for your example item, updated for 1.0 is:
std::usize::BITS
Here, std is the crate, usize is a module, and BITS is the specific item — in this case a constant.
If you scroll up in your file, you'll see use core::usize. use adds the path to the set of items to look in. That's how you can get away with just saying usize::BITS. The core crate is an implementation detail of the façade that is the std crate, so you can just substitute std for core in normal code.
:: can also be used as a way to specify generic types when they cannot otherwise be inferred; this is called the turbofish.
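As a small, self-contained sketch of both uses (the constant std::f64::consts::PI and the parse call are just illustrative stand-ins, not items from the question):

use std::f64::consts; // adds this module to the set of places the compiler will look in

fn main() {
    // Full path: crate `std`, modules `f64` and `consts`, item `PI`.
    let full = std::f64::consts::PI;
    // Shorter path, possible because of the `use` above.
    let short = consts::PI;
    // Turbofish: `::<i32>` spells out the generic type where it cannot be inferred.
    let parsed = "42".parse::<i32>().unwrap();
    println!("{} {} {}", full, short, parsed);
}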
See also:
What is the syntax: `instance.method::<SomeThing>()`?

Oops. I wasn't reading very clearly. In this case, it's just the normal way of referring to anything under a module. uint::BITS is a constant, it seems.

Related

Is F# Constructed Type syntax special?

I was curious about F#'s "constructed type" syntax. It's documented here.
type-argument generic-type-name
or
generic-type-name<type-argument-list>
With the following examples:
int option
string list
int ref
option<int>
list<string>
ref<int>
Dictionary<int, string>
I was curious if there's anything special about the "backwards" syntax, with the parameter before the type, or if it's just sugar for generic types with one parameter. The following is valid:
type 'a MyOption = // MyOption<'a> also works
    | MySome of 'a
    | MyNone
But I could not get it to work with multiple type parameters. Why do F# developers prefer this syntax for types with one parameter? Is it possible or desirable to make it work with two?
The backwards syntax is a legacy from OCaml. Personally, I never use it. If you really want to, you can make it work with multiple type arguments like this:
type MyMap = (int, string) Map
However, this generates a pointed warning (that might soon become an error):
This construct is for ML compatibility. The syntax '(typ,...,typ) ident' is not used in F# code. Consider using 'ident<typ,...,typ>' instead. You can disable this warning by using '--mlcompatibility' or '--nowarn:62'.
Bottom line, I would recommend always using .NET syntax instead: MyOption<'a> instead of 'a MyOption.
Why do F# developers prefer this syntax for types with one parameter?
Not all of us do. I love F# and am in awe of it, but find the OCaml style distracting.
It gets especially confusing when the two styles are mixed - compare the readability of Async<Result<int,string list>> list with that of List<Async<Result<int,List<string>>>>.
Here is a thread with some arguments from both sides from fslang suggestions, which I think led to the deprecation of OCaml-style for everything but list, option and a few others.
I find it regrettable that the OCaml style is specified as the preferred option (for these types) in the various style guides, and used throughout the core libraries, while there is such a strong drive to make the language more accessible to newcomers. It definitely adds to the learning curve, as documented in this question, and here, here, here, here, and here.
Is it possible or desirable to make [OCaml style naming] work with two [type parameters]?
I think a better question is: "Is it possible to only use .NET style?".
Unfortunately the tooling shows types the way they are declared, and the core libraries consistently use OCaml style. I have asked the Rider team about always showing declarations in .NET style in code vision; they referred me to FSharp Compiler Services. I have not (yet) investigated that avenue further.
In our own code we have taken to overriding the OCaml signatures of functions that ship with F# and other libraries as we come across them, for example:
[<AutoOpen>]
module NoCaml =
    module List =
        /// Returns a new collection containing only the elements of the collection for which the given predicate returns "true"
        let filter = List.filter : ('a -> bool) -> List<'a> -> List<'a>
        /// val groupBy : projection:('T -> 'Key) -> list:'T list -> ('Key * 'T list) list (requires equality)
        /// ('T is 'a, 'Key is 'b)
        /// Applies a key-generating function to each element of a list and yields a list of unique keys.
        /// Each unique key contains a list of all elements that match to this key.
        let groupBy = List.groupBy : ('a -> 'b) -> List<'a> -> List<'b * List<'a>>
        // etc.
This solves the problem in almost all cases (some exceptions like list construction using [] remain, and need to be overridden at the point of declaration).
I'm not sure what influence this has on performance at runtime - hopefully the extra function calls are optimised away.

Can CLion evaluate a function when debugging Rust code?

A snippet of Rust code:
pub fn main() {
    let a = "hello";
    let b = a.len();
    let c = b;
    println!("len:{}", c)
}
When debugging in CLion, is it possible to evaluate a function? For example, step through the code until execution stops at the last line (println!...); adding the expression a.len() to the watch window fails because the IDE can't evaluate a.len(). It says: error: no field named len
This is the same reason you can't make conditional breakpoints for Rust code:
Can't create a conditional breakpoint in VSCode-LLDB with Rust
I hope I'm not too late to answer this, but Rust debugging capability is currently rather constrained with both lldb and gdb.
Expressions that are straightforward work; anything complex is likely to produce issues.
My observation from trying this in rust-lldb is that the expression parser understands only a small portion of Rust.
There is no support for macros.
Unused functions are not included in the final binary.
For instance, you cannot call capacity() on a HashMap in the debugger, because that method is not included in the binary.
Methods must be called as follows:
struct_value.method(&struct_value)
There is no technique that I've discovered to call monomorphized functions on generic structs (like HashMap).
For example, "hello" is a const char [5] including the trailing NUL byte. String constants "..." in lldb expressions are produced as C-style string constants.
Therefore, they are not valid functions
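Given those limits, one workaround (just a sketch, not a CLion-specific feature) is to bind the expression you want to inspect to a local variable before the breakpoint, so the debugger only has to read a plain value instead of evaluating a method call:

pub fn main() {
    let a = "hello";
    // Compute the value ahead of time: watching the plain local `a_len`
    // works even where evaluating `a.len()` in the watch window fails.
    let a_len = a.len();
    println!("len:{}", a_len); // stop here and watch `a_len` instead of `a.len()`
}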

Why can't enum variants be inferred in match arms?

This example fails to compile:
extern crate nix;
use std::os::unix::io::RawFd;
fn func(fd: RawFd, buf: &mut [u8]) -> Result<(), nix::Error> {
    let (size, nix_addr) = nix::sys::socket::recvfrom(fd, buf)?;
    let addr = match nix_addr {
        //nix::sys::socket::SockAddr::Inet(addr) => addr.to_std(),
        Inet(addr) => addr.to_std(),
        _ => panic!(),
    };
    Ok(())
}
fn main() {}
The error, in this version, is:
error[E0531]: unresolved tuple struct/variant `Inet`
  --> match_arms.rs:14:3
   |
14 |     Inet(addr) => addr.to_std(),
   |     ^^^^
Swapping the Inet line for the commented out one successfully compiles.
The compiler seems to be requiring me to specify the enum type itself, I suppose so that it knows that the variant I'm specifying in the match arm is legit. But why? Can't the enum be inferred? Doesn't the compiler have enough information here to realize that nix_addr is a nix::…::SocketAddr, and thus, that Inet is a valid variant (and one with data)?
Why do I have to type the whole thing out, or drag the name into the current scope with a use?
I also tried _::Inet, which also failed.
But why? Can't the enum be inferred?
According to RFC 390, which introduced enum namespacing, this inference is considered a hack, and not having it is better design-wise. From the RFC's alternatives section:
We can implement enum namespacing after 1.0 by adding a "fallback" case to resolve, where variants can be referenced from their "flat" definition location if no other definition would conflict in that namespace. In the grand scheme of hacks to preserve backwards compatibility, this is not that bad, but still decidedly worse than not having to worry about fallback at all.
The official reason inference was not considered after RFC 390 is that no one cared enough to propose the change:
@sfackler:
@netvl Java's an interesting case in that you can only refer to variants in the "bare" form (FOO, not MyEnum.FOO) in switch statements. The situation in Rust is a bit more complex as match allows more powerful pattern matching than traditional switch statements. The closest analogue would probably be to implicitly treat all the relevant stuff as imported in a pattern. That seems like something sufficiently orthogonal to this proposal that it'd probably deserve its own RFC.
(and no one has written an RFC for this since then.)
The chance such an RFC will pass is slim, though. After all, you just need to add one line use nix::sys::socket::SockAddr::* somewhere to make it work. Adding a feature to the language requires so many considerations about the proper specification and corner cases (e.g. what happens when you use nix::sys::socket::SockAddr::Unix as Inet) that it may not be worth the time.
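For comparison, here is a minimal, self-contained sketch with a stand-in enum (not the real nix::sys::socket::SockAddr) showing how that one-line use makes the short pattern resolve:

#[allow(dead_code)]
enum SockAddr {
    Inet(u16),
    Unix(String),
}

// One `use` brings the variants into scope, mirroring the answer's
// `use nix::sys::socket::SockAddr::*` suggestion (the `self::` prefix just
// anchors the path in this file and works on every edition).
use self::SockAddr::*;

fn port(addr: SockAddr) -> u16 {
    match addr {
        // With the `use` above, the bare variant name resolves; without it,
        // this arm would need `SockAddr::Inet(p)`, or it fails with E0531
        // exactly as in the question.
        Inet(p) => p,
        Unix(_) => panic!("no port for a unix socket"),
    }
}

fn main() {
    println!("{}", port(Inet(8080)));
}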

Swift Generics: More specialized than generic?

Xcode 6 Beta 5:
Goal:
I am trying to write generic code for types that are semantically compatible but do not share (or appear to share) sufficient protocols to base my generics on a subset of shared protocols. So far, I have not been able to find a solution, and am wondering if I am missing something or if it is a limitation of the language - any insight is appreciated.
Problem:
I have some functions that differ only by type and not by semantics and seem like a natural fit for generics. The problem that I am having, is that from what I can tell, Swift does what seems like parse-time binding of generics, failing if there could conceivably be a problem, and not when there actually is one.
Example:
Consider the following generic functions in a contrived example:
func defVal<T where T:FloatingPointType, T:FloatLiteralConvertible>(T.Type) -> T {
    return 0.0
}
func defVal<T:IntegerLiteralConvertible>(T.Type) -> T {
    return 0
}
Note that I have provided functions that should span the cases of integers and floats, and intentionally did not want to provide an exhaustive list of all possible variations that are of no relevance to me.
I then want to define generic code that spans types - in this example, int and float types. Note that this code fails to compile even in the absence of any code that calls it:
func doSomethingGeneric<T>(t:T) -> [T]
{
    let a = defVal(T) // Type 'T' does not conform to protocol FloatLiteralConvertible
    let b = a * a // works
    return [t]
}
In my recollection, this would compile in C++ until you called it with an incompatible type, at which point the compiler would catch it.
I also tried other variants of diminished utility:
func doSomethingWithFloats<T
    where T:FloatingPointType, T:FloatLiteralConvertible>(t:T) -> [T]
{
    let a = defVal(T)   // works
    let b = a * a       // T is not convertible to UInt8
                        // - need a floating point arithmetic type?
    let c = -a          // T is not convertible to Float
    let f:Float = -a    // T is not convertible to Float
    return [t]
}
Given the sense that Swift provides protocols as a way of grouping concrete instances (specialized, not generic), I concocted a protocludge:
protocol Initializable {}
extension Float : Initializable {}
extension Double : Initializable {}
extension CGFloat : Initializable {}
func doSomethingWithInitializable<T:Initializable>(t:T) -> [T]
{
    let a = defVal(T) // Type 'T' does not conform to protocol FloatLiteralConvertible
    let b = a * a // works
    return [t]
}
Note that this fails even though FloatLiteralConvertible is implemented across the set of all Initializable types. Put another way, Swift seems to be binding the generic types too early, and treating generic types as if they were specialized concrete instances instead of a greater pattern that would compile out further down the chain. Furthermore, note that while I could derive from FloatLiteralConvertible, this would preclude me from supporting int types etc. If there was a common ArithmeticType protocol, that could conceivably work, but I do not see anything of the sort. And this is the crux of the problem - there is no common protocol that works for both, even though both ints and floating types are semantically compatible (have the same rules of math).
So in summary, how do you write generic functions for which the types are semantically compatible, but for which there are not enough spanning protocols to filter by protocol (in this case - FloatingPointType does not implement IntegerArithmeticType, but is semantically capable of arithmetic).
Thanks in advance.
Unlike C++, Swift does not deal with generics by substituting the concrete types at the call site and making a non-generic copy (at least in general it doesn't - as an optimization that's allowed, but I digress).
Swift deals with genericity by passing metadata describing the actual types into one master function at each invocation; that function then uses metadata-provided entry points to manipulate your objects.
In your example, Initializable does not provide any operations, so when the compiler tries to execute defVal(T) it has no clue what to do (how can it ensure that there is an overload of defVal for your type?).
What you want to do is actually define defVal as a static function on the Initializable protocol and then implement it in the extensions; then Swift will know that T.defVal() means something akin to
metadata[Initializable]->defVal(metadata)
Oh, since you're trying to execute a *(T,T), you might also want to make a Multipliable protocol and then your T will be typed as
<T: protocol<Initializable,Multipliable>>

Is there a builtin func named "int32"?

The snippet below works fine.
In this case, what is "int32"? A func?
I know there is a type named "int32".
This could be a stupid question. I've just finished A Tour of Go but I could not find the answer (it's possible I'm missing something).
package main

import "fmt"

func main() {
    var number = int32(5)
    fmt.Println(number) // 5
}
It is a type conversion, which is required for numeric types.
Conversions are required when different numeric types are mixed in an expression or assignment. For instance, int32 and int are not the same type even though they may have the same size on a particular architecture.
Since you do a variable declaration, you need to specify the type of '5'.
Another option, as mentioned by rightfold in the comments, is: var number int32 = 5
(as opposed to a short variable declaration like number := 5)
See also Go FAQ:
The convenience of automatic conversion between numeric types in C is outweighed by the confusion it causes.
When is an expression unsigned? How big is the value? Does it overflow? Is the result portable, independent of the machine on which it executes?
It also complicates the compiler; “the usual arithmetic conversions” are not easy to implement and inconsistent across architectures.
For reasons of portability, we decided to make things clear and straightforward at the cost of some explicit conversions in the code. The definition of constants in Go—arbitrary precision values free of signedness and size annotations—ameliorates matters considerably, though.
A related detail is that, unlike in C, int and int64 are distinct types even if int is a 64-bit type.
The int type is generic; if you care about how many bits an integer holds, Go encourages you to be explicit.
