How to have multiple (alternate) return types in Nim? - syntax

I can declare a proc to return a "union type", but cannot actually return values of more than one type:
proc test(b: bool): int|string =
  if b: 1 else: "hello"
echo test true
echo test false
Expected:
1
hello
Actual:
Error: type mismatch: got 'string' for '"hello"' but expected 'int literal(1)'
Even if I swap the return types (string|int) the error is the same. I am only allowed to return an int. I tried putting the return type in parens; and I tried using or instead of |. No dice.
What am I missing? (I don't want to use a variant object.)
The code can be tested online at the Nim Playground. I've scoured google and the Nim documentation, and come up empty.

Return types have to be known at compile time in Nim. Imagine you tried to assign the result of that procedure to a string variable: one return value would work, but the other would be a compilation error. To decide whether to report an error, Nim must be able to figure out which type will be returned at compile time. The solution here is to use a static[bool] parameter and a when in place of the if. If you actually need a type that can hold different types at runtime, you have to use variant objects.
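A minimal sketch of that suggestion, assuming the flag is known at compile time so it can be passed as a static parameter:
proc test(b: static[bool]): auto =
  # `when` is resolved at compile time, so each instantiation
  # compiles only one branch and has a single concrete return type
  when b:
    1
  else:
    "hello"

echo test(true)   # 1
echo test(false)  # hello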

Rego: Set comprehension undefined

I am trying to understand why the following examples on using set comprehension give different results:
https://play.openpolicyagent.org/p/5x5mXmsyr0
https://play.openpolicyagent.org/p/IVQlTYcVpD
In the first example, rlt is evaluated to an empty set even though foo["c"] is undefined. I expect rlt to also be undefined.
In the second example, I removed the function but directly set rlt2 to the result of a set comprehension. This time it does return undefined.
Can someone explain the difference here?
I think what you see here is the type checker doing the best it can.
The error you get is because, at compile time, the type checker knows which keys exist.
For the function call, foo is a function argument,
myFunc(foo) = rlt {
    rlt := {f | f := foo["c"]}
}
and the compiler cannot tell whether foo["c"] exists or not -- that depends on the actual call. You might define the function like that, but use it in other ways, like
$ echo '{"c": 123}' | opa eval -I -d policy.rego 'data.play.myFunc(input)'
So it doesn't do any kind of deeper flow analysis.
Now rlt is not undefined because set (object, array) comprehensions are never undefined -- if their bodies are always undefined, the overall collection becomes an empty set (object, array).
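To illustrate that rule with a made-up key (input.does_not_exist is assumed to be absent from the input document):
package example

# The body never produces a binding because input.does_not_exist is
# undefined, yet the comprehension itself is defined: it evaluates
# to the empty set rather than to undefined.
empty_set := {x | x := input.does_not_exist[_]}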

linter err113: do not define dynamic errors, use wrapped static errors instead

I am using err113 as part of golangci-lint.
It is complaining about ...
foo_test.go:55:61: err113: do not define dynamic errors, use wrapped static errors instead: "errors.New(\"repo gave err\")" (goerr113)
repoMock.EXPECT().Save(gomock.Eq(&foooBarBar)).Return(nil, errors.New("repo gave err")),
^
foo_test.go:22:42: err113: do not define dynamic errors, use wrapped static errors instead: "errors.New(\"oops\")" (goerr113)
repoMock.EXPECT().FindAll().Return(nil, errors.New("oops"))
^
What is the best way to fix this?
Quoting https://github.com/Djarvur/go-err113
Also, any call of errors.New() and fmt.Errorf() methods are reported
except the calls used to initialise package-level variables and the
fmt.Errorf() calls wrapping the other errors.
I am trying to get an idiomatic example for this.
Declare a package-level variable as suggested:
var repoGaveErr = errors.New("repo gave err")

func someFunc() {
    repoMock.EXPECT().Save(gomock.Eq(&foooBarBar)).Return(nil, repoGaveErr)
}
Every call to errors.New allocates a new unique error value. The application creates a single value representing the error by declaring the package-level variable.
There are two motivations for the single value:
The application can compare values for equality to check for a specific error condition.
It reduces memory allocations (though probably not a big deal in practice).
The value io.EOF is a canonical example.
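For illustration, a small runnable sketch of that equality check with io.EOF:
package main

import (
    "errors"
    "fmt"
    "io"
    "strings"
)

func main() {
    r := strings.NewReader("")
    _, err := r.Read(make([]byte, 1))
    // io.EOF is a single package-level value, so both checks hold.
    fmt.Println(err == io.EOF)          // true
    fmt.Println(errors.Is(err, io.EOF)) // true, and also sees through wrapping
}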
Since Go 1.13 you can define a new error, wrap it, and use it.
For example, if you want to return "operation not permitted" plus the operation,
you need to implement something like:
var OperationNotPermit = errors.New("operation not permitted")

func OperationNotFoundError(op string) error {
    return fmt.Errorf("OperationNotPermit %w : %s", OperationNotPermit, op)
}
Then in your code, when you want to return the error:
return nil, OperationNotFoundError(Op)
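A caller can then detect the sentinel through the wrapping with errors.Is (the operation name "delete" below is just an example):
err := OperationNotFoundError("delete")
if errors.Is(err, OperationNotPermit) {
    // the %w verb preserved the sentinel, so this branch is taken
}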
Back to the question's case:
First, define the custom error and the wrapper:
var repoError = errors.New("repositoryError")

func RepositoryError(msg string) error {
    return fmt.Errorf("%w: %s", repoError, msg)
}
Then in your code:
repoMock.EXPECT().Save(gomock.Eq(&foooBarBar)).Return(nil, RepositoryError("YOUR CUSTOM ERROR MESSAGE"))
Since it hasn't been said before: you probably don't need to define package-level errors for tests. Given that the idea is to wrap errors so they can be compared and unwrapped in the caller, returning a dynamic error in a test is fine as long as the purposes of your test are served.

Is that one argument or none for a Perl 6 block?

What is the Perl 6 way to tell the difference between an argument and no argument in a block with no explicit signature? I don't have any practical use for this, but I'm curious.
A block with no explicit signature puts the value into $_:
my &block := { put "The argument was $_" };
The signature is actually ;; $_? is raw. That's one optional argument. The @_ variable isn't defined in the block because there is no explicit signature.
There's the no argument, where $_ will be undefined:
&block(); # no argument
But there's also a one argument situation where $_ will be undefined. A type object is always undefined:
&block(Int);
But, an $_ with nothing in it is actually an Any (rather than, say, Nil). I can't tell the difference between these two cases:
&block();
&block(Any);
Here's a longer example:
my $block := {
    say "\t.perl is {$_.perl}";
    if $_ ~~ Nil {
        put "\tArgument is Nil"
    }
    elsif ! .defined and $_.^name eq 'Any' {
        put "\tArgument is an Any type object"
    }
    elsif $_ ~~ Any {
        put "\tArgument is {$_.^name} type object"
    }
    else {
        put "\tArgument is $_";
    }
};
put "No argument: "; $block();
put "Empty argument: "; $block(Empty);
put "Nil argument: "; $block(Nil);
put "Any argument: "; $block(Any);
put "Int argument: "; $block(Int);
Notice the no argument and Any argument forms show the same things:
No argument:
    .perl is Any
    Argument is an Any type object
Empty argument:
    .perl is Empty
    Argument is Slip type object
Nil argument:
    .perl is Nil
    Argument is Nil
Any argument:
    .perl is Any
    Argument is an Any type object
Int argument:
    .perl is Int
    Argument is Int type object
As far as I know, the only way to know the number of parameters passed without an explicit signature is to use @_ inside the body, which will generate a :(*@_) signature.
my &block := { say "Got @_.elems() parameter(s)" };
block;              # Got 0 parameter(s)
block 42;           # Got 1 parameter(s)
dd block.signature; # :(*@_)
Yeah, the good old @_ is still there, if you want it :-)
{ put $_.perl }
is sort of similar to this (which doesn't work):
-> ;; $_? is raw = CALLERS::<$_> { put $_.perl }
Since the default for $_ outside of the block is Any, if you don't place anything into $_ before you call the function, you get Any.
To get something at all similar where you can tell the difference, use a Capture:
my &foo = -> ;; |C ($_? is raw) {
    unless C.elems {
        # pretend it was defined like the first Block above
        CALLER::<$_> := CALLER::CALLERS::<$_>
    }

    my $called-with-arguments := C.elems.Bool;
    if $called-with-arguments {
        say 'called with arguments';
    } else {
        say 'not called with arguments';
    }
}
Here's how I solved this. I'd love to do this in a cleaner way but the cleverness of the language gets in the way and I have to work around it. This works for positional parameters but there are deeper shenanigans for named parameters and I won't deal with those here.
I had another question, Why does constraining a Perl 6 named parameter to a definite value make it a required value?, where the answers clarified that there are actually no optional parameters. There are merely parameters that have a default value and that there is an implicit default value if I don't explicitly assign one.
The crux of my problem is that I want to know when I gave the parameter a value and when I didn't. I give it a value through an argument or an explicit default. An implicit default is a type object of the right type. That's Any if I didn't specify a type. That implicit default must satisfy any constraint I specify.
The first goal is to tightly constrain the values a user can supply when they call code. If an undefined value is not valid then they shouldn't be allowed to specify one.
The second goal is to easily distinguish special cases in the code. I want to reduce the amount of special knowledge some part of the deeper code needs to know.
I can get the third case (where I know there was no argument or suitable default) by explicitly assigning a special value that I know can't be any other meaningful thing. There's a value that's even more meaningless than Any. That's Mu. It's the most undefined value of all undefined values. Any is one of the two subtypes of Mu (the other is Junction), but you should almost never see a Mu end up in one of your values in normal code. Undefined things in user code start at Any.
I can create a constraint that checks for the type I want or for Mu and set a default of Mu. If I see a Mu I know there was no argument and that it's Mu because my constraint set that.
Since I'm using Mu there are some things I can't do, like use the === operator. Smart matching won't work because I don't want to test the inheritance chain. I can check the object name directly:
my $block := ->
    $i where { $^a.^name eq 'Mu' or $^a ~~ Int:D } = Mu
    {
    say "\t.perl is {$i.perl}";
    put do given $i {
        when .^name eq 'Mu' { "\tThere was no argument" }
        when Nil            { "\tArgument is Nil" }
        when (! .defined and .^name eq 'Any') {
            "\tArgument is an Any type object"
        }
        when .defined {
            "\tArgument is defined {.^name}"
        }
        default { "\tArgument is {.^name}" }
    }
};
put "No argument: "; $block();
put "Empty argument: "; try $block(Empty); # fails
put "Nil argument: "; try $block(Nil); # fails
put "Any type argument: "; try $block(Any); # fails
put "Int type argument: "; try $block(Int); # fails
put "Int type argument: "; $block(5);
Now most of those invocations fail because they don't specify the right things.
If these were routines I could make multis for a small number of cases, but that's an even worse solution in the end. If I had two such parameters I'd need four multis. With three I'd need eight. That's a lot of boilerplate code. But blocks aren't routines, so that's moot here.

Intellisense with Union Types

I find that intellisense is missing when assigning to a var whose type is a union type. This makes sense - the compiler doesn't know which of the unioned types you are assigning (although at some point it could deduce the type once it has enough information, it does not do this either...).
Fine - so I can be explicit and cast the assignment to the type I intend, and the intellisense returns. But this leads to a second problem - for some reason it seems that TypeScript will allow the cast of an empty object literal to any interface, but as soon as a single property is added, the object literal must satisfy the entire interface.
I have two direct questions about this behavior; they are in the comments in the following code example. I realize I could declare the test vars with more specific types - that is not the point of this topic. Thanks for your help.
interface ITestOne {
    a: string;
    b?: string;
}

interface ITestTwo {
    c: string;
}

type EitherType = ITestOne | ITestTwo;

var test1: EitherType = {};                        // ERROR, no intellisense to help fill out the required properties in the object literal
var test2: EitherType = {} as ITestOne;            // ALLOWED - Why is this allowed?
var test3: EitherType = { b: 'blah' } as ITestOne; // ERROR: property a is missing. Why ISN'T this allowed if the line above is allowed?
UPDATE 2017-01-31
Reply from a bug report I opened on the TypeScript project on this topic:
What type assertion does, it tells the compiler to "shut up" and trust you. The operator behaves both as an upcast and as a downcast operator. The only check is that the one of the types is assignable to the other.
In the example above, for test: {a: string, b?:string} is assignable to {} (which requires no arguments); for test2 {a: string, b?:string} is assignable to {b:string}, since the type of the only required argument in the target b matches. for test3 neither {a: string, b?:string} is assignable to {b:string, x:string} since it is missing x nor {b:string, x:string} to {a: string, b?:string} since it is missing a.
So, when casting, the source and the target are only verified not to be two completely unrelated types (e.g. number and string); otherwise the assignment is allowed. My test3 case produced the described result in TypeScript 1.7, but it is now allowed in TypeScript 2.1.
My question about how to get meaningful intellisense in this scenario still stands. However, I suspect the answer is that it is simply not supported without the use of a type guard block.
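For what it's worth, here is a hedged sketch of such a type guard (isTestOne and value are made-up names); inside each narrowed branch the editor knows the exact type again and can offer its members:
function isTestOne(x: EitherType): x is ITestOne {
    return (x as ITestOne).a !== undefined;
}

declare const value: EitherType;

if (isTestOne(value)) {
    value.a;  // narrowed to ITestOne: 'a' and 'b?' are offered
} else {
    value.c;  // narrowed to ITestTwo: 'c' is offered
}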

Coping with misleading error messages of the Swift compiler (context dependence, type inference)

While the Swift compiler (Xcode 7.2) seems perfectly correct in diagnosing an error for source text equivalent to the following, it took a long time to detect the actual mistake. Reason: the programmer needs to look not at the text marked, but elsewhere, and is thus misled, wondering why an optional string and a non-optional string cannot be operands of ??...
struct Outer {
    var text : String
}

var opt : String?
var context : Outer

context = opt ?? "abc"
Obviously, the last line should have had context.text as the variable to be assigned. This is diagnosed:
confusion2.swift:9:19: error: binary operator '??' cannot be applied\
to operands of type 'String?' and 'String'
context = opt ?? "abc"
~~~ ^ ~~~~~
The message is formally correct. (I am assuming that type checking the left hand side establishes an expected type (Outer) for the right hand side, and this, then, renders the expression as not working, type-wise.) Taken literally, though, the diagnosis is wrong, as is seen when fixing the left hand side: ?? can be applied to operands of type String? and String.
Now, if this is as good as it gets, currently, in terms of compiler messages, what are good coping strategies? Is remembering
Type inference!
Context!
…
a start? Is there a more systematic approach? A checklist?
Update (I'm adding to the list as answers come in. Thanks!)
Break statements apart, so as to have several lines checked separately (@vacawama)
Beware of optionals (such as values obtained from dictionaries); see testSwitchOpt below
Another one
enum T {
    case Str(String)
    case Integer(Int)
}

func testSwitchOpt(x : T?) -> Int {
    switch x {
    case .Integer(let r): return r
    default: return 0
    }
}
The compiler says
optandswitch.swift:8:15: error: enum case 'Integer' not found in type 'T?'
case .Integer(let r): return r
A fix is to write switch x! (or, more cautiously, an if let), so as to make type checking address the proper type, I guess.
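In current Swift the optional can also be unwrapped in the pattern itself, which keeps the switch on T? well-typed; a sketch reusing the enum above:
func testSwitchOpt(x : T?) -> Int {
    switch x {
    case .Integer(let r)?: return r   // '?' matches .some(.Integer(r))
    default: return 0
    }
}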
I could, perhaps should, file some report at Apple, but the issue seems to represent a recurring subject—I have seen this with other compilers—and I was hoping for some general and re-usable hints, if you don't mind sharing them.
Swift's type inference system is great in general, but it can lead to confusing or outright wrong error messages.
When you get one of these Swift error messages that makes no sense, a good strategy is to break the line into parts. This will allow Swift to return a better error message before it goes too far down the wrong path.
For example, in your case, if you introduce a temporary variable, the real problem becomes clear:
// context = opt ?? "abc"
let temp = opt ?? "abc"
context = temp
Now the error message reads:
Cannot assign value of type 'String' to type 'Outer'
