Implicitly lazy gather/take not considered a "lazy" object

The documentation for gather/take mentions:
"Binding to a scalar or sigilless container will also force laziness."
However,
my \result = gather { for 1..3 { take $_² } };
say result.is-lazy # OUTPUT: «False␤»
The same happens if you use a scalar and bind with :=. Is there some way to create implicitly lazy gather/take statements?
Update: It's actually lazy, only it does not respond to the is-lazy method in the expected way:
my $result := gather { for 1..3 { say "Hey"; take $_² } };
say $result[0] # OUTPUT: «Hey␤1␤»
So the question is "What are the conditions for is-lazy to consider things actually lazy?"

I think the problem is really that you cannot actually tell what's going on inside a gather block. So that's why that Seq object tells you it is not lazy.
Perhaps it's more a matter of documentation: if is-lazy returns True, then you can be sure that the Seq (well, in fact its underlying Iterator) is not going to end by itself. If is-lazy returns False, it basically means that we cannot be sure.
One could argue that in that case is-lazy should return the Bool type object, which will also be interpreted as being false (as all type objects are considered to be False in boolean context). But that would at least give some indication that it is really undecided / undecidable.
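For comparison, here is a quick sketch of when is-lazy does say True (assuming current Rakudo behavior): sequences known to be infinite report True, and you can force the flag on a gather explicitly with .lazy:
say (1..Inf).is-lazy;                  # OUTPUT: «True␤» (known to be infinite)
say (gather { take 1 }).is-lazy;       # OUTPUT: «False␤» (the block is opaque)
say (gather { take 1 }).lazy.is-lazy;  # OUTPUT: «True␤» (.lazy forces laziness)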

What is the term for "to asynchronously return"?

Many programming languages have the concept of asynchronous functions. The return value of an async function is not the required type itself, but a promise/future that will eventually contain the value.
I'm doing technical writing and I'm struggling to come up with a succinct term for the act of asynchronously returning a value. Consider the following synchronous function:
function foo(): boolean { ... }
Here, I'd write "If foo returns true, ..." This is perfectly understandable.
Now consider this asynchronous function:
function bar(): Promise<boolean> { ... }
I could write: "If bar returns a promise that resolves to true, ..." This is technically correct, but rather awkward, especially if used repeatedly.
I could write: "If bar asynchronously returns true, ..." This is shorter, but I'm not quite happy with it. To me, it puts the emphasis on "asynchronously", not on "true".
I could write: "If bar returns true, ..." This is short, but not technically correct.
I would like to write something like: "If bar yields true, ...", but the verb to yield already has a different meaning in the context of iterables (at least in some languages).
Is there a succinct way of expressing this concept?
I think yield (or its synonym produce) is totally fine. It should be clear from the context that a promise is meant. When it is not clear, I would use the long and technically precise version ("returns a promise that fulfills with …").
Another option would be when bar() resolves to true, omitting the implicit "the promise returned by the call" but using bar() instead of bar to refer to the result of the call, not the function itself (which is not a promise that can resolve).
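For what it's worth, the "resolves to true" phrasing also lines up with how the value is consumed at a call site; a minimal TypeScript sketch (bar is assumed from the question, caller is a hypothetical name):
declare function bar(): Promise<boolean>;

async function caller(): Promise<void> {
  // "If bar() resolves to true, ..." reads naturally against this line:
  if (await bar()) {
    console.log("bar() resolved to true");
  }
}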

Read from an enum without pattern matching

The Rust documentation gives this example where we have an instance of Result<T, E> named some_value:
match some_value {
Ok(value) => println!("got a value: {}", value),
Err(_) => println!("an error occurred"),
}
Is there any way to read from some_value without pattern matching? What about without even checking the type of the contents at runtime? Perhaps we somehow know with absolute certainty what type is contained, or perhaps we're just being bad programmers. In either case, I'm just curious to know if it's at all possible, not if it's a good idea.
It strikes me as a really interesting language feature that this branch is so difficult (or impossible?) to avoid.
At the lowest level, no, you can't read enum fields without a match¹.
Methods on an enum can provide more convenient access to data within the enum (e.g. Result::unwrap), but under the hood, they're always implemented with a match.
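For instance, a convenience accessor in the spirit of Result::unwrap can be sketched as nothing more than a match (a simplified illustration, not the standard library's exact code):
fn unwrap_or_panic<T, E: std::fmt::Debug>(r: Result<T, E>) -> T {
    match r {
        Ok(value) => value,
        Err(e) => panic!("called unwrap on an Err value: {:?}", e),
    }
}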
If you know that a particular case in a match is unreachable, a common practice is to write unreachable!() on that branch (unreachable!() simply expands to a panic!() with a specific message).
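A minimal sketch of that practice:
fn main() {
    let n: u32 = 7;
    match n.checked_add(1) {
        Some(v) => println!("{}", v),
        // 7 + 1 cannot overflow a u32, so this branch is impossible:
        None => unreachable!("7 + 1 cannot overflow a u32"),
    }
}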
¹ If you have an enum with only one variant, you could also write a simple let statement to deconstruct the enum. Patterns in let and match statements must be exhaustive, and pattern matching the single variant from an enum is exhaustive. But enums with only one variant are pretty much never used; a struct would do the job just fine. And if you intend to add variants later, you're better off writing a match right away.
enum Single {
S(i32),
}
fn main() {
let a = Single::S(1);
let Single::S(b) = a;
println!("{}", b);
}
On the other hand, if you have an enum with more than one variant, you can also use if let and while let if you're interested in the data from a single variant only. Whereas let and match require exhaustive patterns, if let and while let accept non-exhaustive patterns. You'll often see them used with Option:
fn main() {
if let Some(x) = std::env::args().len().checked_add(1) {
println!("{}", x);
} else {
println!("too many args :(");
}
}

Which closure implementation is faster between two examples

I'm writing some training material for the Groovy language and I'm preparing an example which would explain Closures.
The example is a simple caching closure for "expensive" methods, withCache
def expensiveMethod( Long a ) {
withCache (a) {
sleep(rnd())
a*5
}
}
So, now my question is: which of the following two implementations is faster and more idiomatic in Groovy?
def withCache = {key, Closure operation ->
if (!cacheMap.containsKey(key)) {
cacheMap.put(key, operation())
}
cacheMap.get(key)
}
or
def withCache = {key, Closure operation ->
def cached = cacheMap.get(key)
if (cached) return cached
def res = operation()
cacheMap.put(key, res)
res
}
I prefer the first example, as it doesn't use any variables, but I wonder if accessing the Map's get method is slower than returning the variable containing the computed result.
Obviously the answer is "it depends on the size of the Map" but, out of curiosity, I would like to have the opinion of the community.
Thanks!
Firstly, I agree with OverZealous that worrying about two get operations is a premature optimization. The second example is also not equivalent to the first: the first allows null, for example, while the second uses Groovy truth in the if, which means that null evaluates to false, as do, for example, an empty list, array, or map. So if you want to show calling a Closure, I would go with the first one. If you want something more idiomatic, I would do this instead for your case:
def expensiveMethod( Long a ) {
sleep(rnd())
a*5
}
def cache = [:].withDefault(this.&expensiveMethod)
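With that in place, the map itself becomes the cache; a hypothetical usage sketch (withDefault computes and stores the value on first access):
assert cache[4L] == 20   // first access: sleeps, computes, stores
assert cache[4L] == 20   // second access: read straight from the map, no sleep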

Why does using Linq's .Select() return IEnumerable<dynamic> even though the type is clearly defined?

I'm using Dapper to return dynamic objects and sometimes mapping them manually. Everything's working fine, but I was wondering what the laws of casting were and why the following examples hold true.
(for these examples I used 'StringBuilder' as my known type, though it is usually something like 'Product')
Example1: Why does this return an IEnumerable<dynamic> even though 'makeStringBuilder' clearly returns a StringBuilder object?
Example2: Why does this build, but 'Example1' wouldn't if it was IEnumerable<StringBuilder>?
Example3: Same question as Example2?
private void test()
{
List<dynamic> dynamicObjects = new List<dynamic>(); // some list of dynamic objects
IEnumerable<dynamic> example1 = dynamicObjects.Select(s => makeStringBuilder(s));
IEnumerable<StringBuilder> example2 = dynamicObjects.Select(s => (StringBuilder)makeStringBuilder(s));
IEnumerable<StringBuilder> example3 = dynamicObjects.Select(s => makeStringBuilder(s)).Cast<StringBuilder>();
}
private StringBuilder makeStringBuilder(dynamic s)
{
return new StringBuilder(s);
}
With the above examples, is there a recommended way of handling this? And does casting like this hurt performance? Thanks!
When you use dynamic, even as a parameter, the entire expression is handled via dynamic binding, so its compile-time type is dynamic (binding is based on the run-time type). This is covered in section 7.2.2 of the C# spec:
However, if an expression is a dynamic expression (i.e. has the type dynamic) this indicates that any binding that it participates in should be based on its run-time type (i.e. the actual type of the object it denotes at run-time) rather than the type it has at compile-time. The binding of such an operation is therefore deferred until the time where the operation is to be executed during the running of the program. This is referred to as dynamic binding.
In your case, using the cast will safely convert this to an IEnumerable<StringBuilder>, and should have very little impact on performance. The example2 version is very slightly more efficient than the example3 version, but both have very little overhead when used in this way.
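To see that rule in isolation (a hypothetical snippet using the question's makeStringBuilder): passing a dynamic argument makes the whole call expression dynamic, regardless of the method's declared return type:
dynamic d = "abc";
var sb = makeStringBuilder(d); // 'sb' is typed dynamic at compile time,
                               // even though makeStringBuilder declares StringBuilder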
While I can't speak very well to the "why", I think you should be able to write example1 as:
IEnumerable<StringBuilder> example1 = dynamicObjects.Select<dynamic, StringBuilder>(s => makeStringBuilder(s));
You need to tell the compiler what type the projection should take, though I'm sure someone else can clarify why it can't infer the correct type. But I believe by specifying the projection type, you can avoid having to actually cast, which should yield some performance benefit.

Which syntax is better for return value?

I've been doing a massive code review and one pattern I notice all over the place is this:
public bool MethodName()
{
bool returnValue = false;
if (expression)
{
// do something
returnValue = MethodCall();
}
else
{
// do something else
returnValue = Expression;
}
return returnValue;
}
This is not how I would have done it; I would have just returned the value as soon as I knew what it was (see the sketch below). Which of these two patterns is more correct?
I should stress that the logic always seems to be structured such that the return value is assigned in one place only and no code is executed after it's assigned.
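For reference, the early-return alternative would look like this (a sketch keeping the question's placeholder names):
public bool MethodName()
{
    if (expression)
    {
        // do something
        return MethodCall();
    }
    // do something else
    return Expression;
}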
A lot of people recommend having only one exit point from your methods. The pattern you describe above follows that recommendation.
The main gist of that recommendation is that if you have to clean up some memory or state before returning from the method, it's better to have that code in one place only. Having multiple exit points leads either to duplicated cleanup code or to potential problems due to missing cleanup code at one or more of the exit points.
Of course, if your method is a couple of lines long, or doesn't need any cleanup, you could have multiple returns.
I would have used ternary, to reduce control structures...
return expression ? MethodCall() : Expression;
I suspect I will be in the minority but I like the style presented in the example. It is easy to add a log statement and set a breakpoint, IMO. Plus, when used in a consistent way, it seems easier to "pattern match" than having multiple returns.
I'm not sure there is a "correct" answer on this, however.
Some learning institutes and books advocate the single return practice.
Whether it's better or not is subjective.
That looks like a part of a bad OOP design. Perhaps it should be refactored on the higher level than inside of a single method.
Otherwise, I prefer using a ternary operator, like this:
return expression ? MethodCall() : Expression;
It is shorter and more readable.
Return from a method right away in any of these situations:
You've found a boundary condition and need to return a unique or sentinel value: if (node.next == null) return NO_VALUE_FOUND;
A required value/state is false, so the rest of the method does not apply (aka a guard clause). E.g.: if (listeners == null) return null;
The method's purpose is to find and return a specific value, e.g.: if (nodes[i].value == searchValue) return i;
You're in a clause which returns a unique value from the method not used elsewhere in the method: if (userNameFromDb.equals(SUPER_USER)) return getSuperUserAccount();
Otherwise, it is useful to have only one return statement so that it's easier to add debug logging, resource cleanup and follow the logic. I try to handle all the above 4 cases first, if they apply, then declare a variable named result(s) as late as possible and assign values to that as needed.
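A short sketch of that combination, with hypothetical names: guard clause first, then a single result declared as late as possible:
public int indexOf(int[] values, int searchValue) {
    if (values == null) return -1;       // guard clause: the rest does not apply
    int result = -1;                     // declared as late as possible
    for (int i = 0; i < values.length; i++) {
        if (values[i] == searchValue) {
            result = i;                  // assigned in one place; single exit below
            break;
        }
    }
    return result;
}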
They both accomplish the same task. Some say that a method should only have one entry and one exit point.
I use this, too. The idea is that resources can be freed in the normal flow of the program. If you jump out of a method at 20 different places and you need to call cleanUp() before returning, you'll have to add that cleanup call 20 times (or refactor everything).
I guess that the coder has taken the design of defining an object toReturn at the top of the method (e.g., List<Foo> toReturn = new ArrayList<Foo>();) and then populating it during the method call, and somehow decided to apply it to a boolean return type, which is odd.
Could also be a side effect of a coding standard that states that you can't return in the middle of a method body, only at the end.
Even if no code is executed after the return value is assigned now, that does not mean some code will not have to be added later.
It's not the smallest piece of code that could be used, but it is very refactoring-friendly.
Delphi forces this pattern by automatically creating a variable called "Result" which will be of the function's return type. Whatever "Result" is when the function exits, is your return value. So there's no "return" keyword at all.
function MethodName : boolean;
begin
Result := False;
if Expression then begin
//do something
Result := MethodCall;
end
else begin
//do something else
Result := Expression;
end;
//possibly more code
end;
The pattern used is verbose, but it's also easier to debug if you want to know the return value without opening the Registers window and checking EAX.
