So I'm trying to write a higher-order function that adds Mapped Diagnostic Context (MDC) to a closure passed as a parameter.
There are two ways to make it work: one for non-suspending functions (MDC.put) and one for suspending functions (MDCContext)... But is there a way to make it work for any kind of function?
I know there's no "suspend variance" yet, other than what the inline modifier offers...
Just curious!
What's the usual best practice to split up a really long match on an enum with dozens of variants to handle, each with dozens or hundreds of lines of code?
I've started to create helper functions for each case and just call those functions, passing in the enum variant's fields. But it seems a bit redundant to have MyEnum::MyCase { a, b, c } => handle_mycase(a, b, c) many times.
And if that is the best practice, is it possible to destructure MyEnum::MyCase directly in that helper function's parameters, even though the pattern is technically refutable, since realistically I already know I'm calling it with the right variant?
The crate enum_dispatch may help you here.
IIRC, at a high level: it assumes that all your enum variants implement a trait with a function handle_mycase. Then handle_mycase can be called on the enum directly and will be dispatched to the concrete struct.
As stated in the Microsoft docs, the Flags parameter of LdrRegisterDllNotification must be zero, but no further explanation is provided. What's the purpose of defining this parameter at all if the only accepted value is zero? And what happens if a non-zero value is passed instead?
There are two common reasons for a parameter where the documentation tells you to pass zero:
The parameter is unused in all existing Windows versions but might be used for something in the future. The developers might have envisioned extra features but never had the time to implement them.
The parameter is used to pass undocumented information or flags that trigger private functionality inside the function. Windows 95, for example, supports undocumented flags in its *Alloc functions that cause them to allocate shared memory visible to all processes.
Either way, the best practice is to just follow the documentation and pass zero.
Can someone provide a better explanation of the xdmp:eval() and xdmp:value() functions?
I tried to follow the developer API documentation, but I'm not really satisfied with the examples there, and the descriptions are a bit vague to me. I would really appreciate it if someone could help me understand those functions and their differences, with examples.
Both functions execute strings of code dynamically, but xdmp:value is evaluated against the current context: if you have variables defined or modules declared in the current scope, you can reference them without redeclaring them.
xdmp:eval necessitates the creation of an entirely new context that has no knowledge of the context calling xdmp:eval. One must define a new XQuery prolog, and variables from the main context are passed to the xdmp:eval call as parameters and declared as external variables in the eval script.
Generally, if you can use xdmp:value, it's probably the best choice; however, xdmp:eval has some capabilities that xdmp:value doesn't, namely everything defined in the <options> argument. Through these options, it's possible to control the user executing the query, the database it's executed against, transaction mode, etc.
There is another function for executing dynamic strings: xdmp:unpath, and it's similar to xdmp:value, but more limited in that it can only execute XPath.
In Ruby you can throw :label, as long as you've wrapped everything in a catch(:label) do block.
I want to add this to a custom lispy language but I'm not sure how it's implemented under the hood. Any pointers?
This is an example of a non-local exit. If you are using your host language's (in this case C) call stack for function calls in your target language (so e.g. a function call in your lisp equates to a function call in C), then the easiest way is to use your host language's form of non-local exit. In C that means setjmp/longjmp.
If, however, you are maintaining your target language's call stack separately, then you have many options. One really simple way would be to have each lexical-scope exit yield two values: the actual value returned, and an exception state, if any. Then you check for the exception at runtime and propagate it upward. This has the downside of adding overhead to every function call even when no condition is signaled, but it may be sufficient for a toy language.
The book "Lisp In Small Pieces" covers about a half-dozen ways of handling this, if you're interested.
Some time in the past year, many functions in the V8 API were changed to have an explicit Isolate parameter. E.g. whereas you used to write ObjectTemplate::New(), now you must pass in an Isolate argument: ObjectTemplate::New(Isolate::GetCurrent()).
Is there any reason you would ever pass an isolate other than the one returned from GetCurrent()? If you were to do so, would that even work?
The reason I ask is that I'm writing bindings to use V8 for another programming language. If the Isolate parameter is always the current isolate, I might as well omit that parameter and hardcode the call to GetCurrent in the glue layer.