Usefulness of explicit Isolate parameter in V8 API

Some time in the past year, many functions in the V8 API were changed to have an explicit Isolate parameter. E.g. whereas you used to write ObjectTemplate::New(), now you must pass in an Isolate argument: ObjectTemplate::New(Isolate::GetCurrent()).
Is there any reason you would ever pass an isolate other than the one returned from GetCurrent()? If you were to do so, would that even work?
The reason I ask is that I'm writing bindings to use V8 for another programming language. If the Isolate parameter is always the current isolate, I might as well omit that parameter and hardcode the call to GetCurrent in the glue layer.
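The glue-layer idea in the question can be sketched in plain Python (not the real V8 API — every name below, Isolate, current_isolate, new_object_template, is made up for illustration): the binding defaults to whatever isolate is "current", mirroring a hardcoded Isolate::GetCurrent(), while still letting a caller pass one explicitly.

```python
# Hypothetical sketch of the glue-layer choice discussed above; none of these
# names belong to the real V8 API.

_current_isolate = None  # stand-in for V8's per-thread "current isolate"

class Isolate:
    def enter(self):
        global _current_isolate
        _current_isolate = self
        return self

def current_isolate():
    return _current_isolate

def new_object_template(isolate=None):
    # Hardcoding the GetCurrent() fallback in the glue layer, as the question
    # proposes, keeps the language-facing API one parameter smaller.
    isolate = isolate if isolate is not None else current_isolate()
    return ("ObjectTemplate", isolate)

iso = Isolate().enter()
tmpl = new_object_template()   # no explicit isolate needed
assert tmpl[1] is iso
```

The trade-off is the one the question hints at: the default is convenient as long as a single isolate per thread is the only scenario the bindings need to support.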

Related

Higher order inline functions, and MDCContext

So I'm trying to write a higher order function that adds Mapped Diagnostic Context (MDC) to a closure that is passed as parameter.
There are two ways to make it work: one for non-suspending functions (MDC.put) and one for suspending functions (MDCContext)... But is there a way to make it work for any kind of function?
I know there's no "suspending variance" other than using the inline modifier, yet...
Just curious!

Why does the first parameter of the function LdrRegisterDllNotification have to be zero?

As stated in the Microsoft docs, the parameter Flags of the LdrRegisterDllNotification must be zero, but no further explanation is provided. What's the purpose of defining this parameter at all if the only accepted value is zero? What happens if a non-zero value is passed instead?
There are two possible reasons a parameter's documentation tells you to pass zero:
The parameter is unused in all existing Windows versions but might be used for something in the future. The developer might have envisioned extra features but did not have time to implement them, etc.
The parameter is used to pass undocumented information/flags that trigger some private functionality inside the function. Windows 95, for example, supports undocumented flags in its *Alloc functions that cause them to allocate shared memory visible to all processes.
Either way, the best practice is to just follow the documentation and pass zero.
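The future-proofing point can be sketched in plain Python (not the real loader API — the function name and behavior below are made up, and whether LdrRegisterDllNotification itself rejects nonzero flags this way is an assumption, not verified): by rejecting anything but the documented value today, the vendor keeps the door open to give those bits a meaning later without breaking existing callers.

```python
# Hypothetical sketch of a reserved, must-be-zero parameter.

def register_dll_notification(flags, callback):
    if flags != 0:
        # Analogous to an API returning an invalid-parameter status
        # (assumed behavior, not verified against Windows).
        raise ValueError("flags is reserved and must be zero")
    return ("registered", callback)

cookie = register_dll_notification(0, print)
assert cookie[0] == "registered"

try:
    register_dll_notification(1, print)
    rejected = False
except ValueError:
    rejected = True
assert rejected
```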

Looking for difference between xdmp:value() vs xdmp:eval()

Can someone provide a better explanation of the xdmp:eval() and xdmp:value() functions?
I had tried to follow the Developer API documentation; however, I am not really satisfied with the examples, and it's still a bit vague for me. I would really appreciate it if someone could help me understand these functions and their differences, with examples.
Both functions are for executing strings of code dynamically, but xdmp:value is evaluated against the current context, such that if you have variables defined in the current scope or modules declared, you can reference them without redeclaring them.
xdmp:eval necessitates the creation of an entirely new context that has no knowledge of the context calling xdmp:eval. One must define a new XQuery prolog, and variables from the main context are passed to the xdmp:eval call as parameters and declared as external variables in the eval script.
Generally, if you can use xdmp:value, it's probably the best choice; however, xdmp:eval has some capabilities that xdmp:value doesn't, namely everything defined in the <options> argument. Through these options, it's possible to control the user executing the query, the database it's executed against, transaction mode, etc.
There is another function for executing dynamic strings: xdmp:unpath, and it's similar to xdmp:value, but more limited in that it can only execute XPath.
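A loose analogy in Python (not MarkLogic XQuery) may help make the context difference concrete: eval() against the caller's scope behaves like xdmp:value, while exec() in a fresh, empty namespace behaves like xdmp:eval, where values from the caller must be handed over explicitly, much like external variables.

```python
# Analogy only: Python's eval/exec standing in for xdmp:value/xdmp:eval.

x = 21

# Like xdmp:value: the dynamic string can see variables already in scope.
value_result = eval("x * 2")
assert value_result == 42

# Like xdmp:eval: a brand-new context that knows nothing about the caller;
# the caller's data has to be passed in explicitly.
fresh_context = {"ext_x": x}   # analogous to declaring an external variable
exec("result = ext_x * 2", fresh_context)
assert fresh_context["result"] == 42
```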

How are ChildWindowFromPointEx and ChildWindowFromPoint different except the "flags" parameter?

Windows API has ChildWindowFromPoint() and ChildWindowFromPointEx() functions and they differ in that the latter has uFlags parameter specifying which windows to skip.
It looks like if I pass CWP_ALL into ChildWindowFromPointEx() I'll get exactly the same effect as I would have with ChildWindowFromPoint().
Is the only difference in uFlags parameter? Can I just use ChildWindowFromPointEx() everywhere and pass CWP_ALL when I need ChildWindowFromPoint() behavior?
If it helps at all, I hacked up a quick test application that calls both functions and stepped into the disassembled USER32.DLL to see where the calls go.
For ChildWindowFromPoint, after some preamble, I reached this point:
The main processing was delegated to the call at 75612495.
Then, for ChildWindowFromPointEx, I step into the assembly and get this:
As that entry point is the target of the call from the first function, it seems pretty clear to me that ChildWindowFromPoint calls ChildWindowFromPointEx, presumably with uFlags set to CWP_ALL (my assembler knowledge is limited but I'm looking hard at that push 0 before the call - CWP_ALL is defined as zero).
If you intend to always use ChildWindowFromPointEx with CWP_ALL, you could just use ChildWindowFromPoint().
If you intend to always use ChildWindowFromPoint, you could just use ChildWindowFromPointEx with CWP_ALL.
ChildWindowFromPoint is equivalent to ChildWindowFromPointEx with CWP_ALL.
Advice: use ChildWindowFromPointEx (you may one day have a use for other flag values).
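The relationship the disassembly suggests can be sketched in plain Python (not the Windows API — the two functions below are stand-ins, though the CWP_* values are the real WinUser.h constants): the non-Ex function is just the Ex function with flags hardwired to CWP_ALL, whose numeric value is 0, matching the "push 0" observed before the call.

```python
# Illustrative sketch of a thin non-Ex wrapper over an Ex function.

CWP_ALL = 0x0000            # real value of CWP_ALL
CWP_SKIPINVISIBLE = 0x0001  # example of a flag only the Ex version can honor

def child_window_from_point_ex(point, flags):
    # Stand-in for the real ChildWindowFromPointEx; here we just record
    # what we were asked to do.
    return ("hit-test", point, flags)

def child_window_from_point(point):
    # What the "push 0" before the call in the disassembly implies:
    return child_window_from_point_ex(point, CWP_ALL)

assert child_window_from_point((10, 20)) == child_window_from_point_ex((10, 20), CWP_ALL)
```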

Should I make sure arguments aren't null before using them in a function?

The title may not really explain what I'm trying to get at; I couldn't really think of a way to describe what I mean.
I was wondering if it is good practice to check the arguments that a function accepts for null or empty values before using them. I have this function, which just wraps some hash creation, like so:
Public Shared Function GenerateHash(ByVal FilePath As IO.FileInfo) As String
    If FilePath Is Nothing Then
        Throw New ArgumentNullException("FilePath")
    End If
    Using _sha As New Security.Cryptography.MD5CryptoServiceProvider
        Using _stream As New IO.FileStream(FilePath.FullName, IO.FileMode.Open, IO.FileAccess.Read)
            Return Convert.ToBase64String(_sha.ComputeHash(_stream))
        End Using
    End Using
End Function
As you can see, it just takes an IO.FileInfo as an argument, and at the start of the function I check to make sure that it is not Nothing.
I'm wondering: is this good practice, or should I just let it get to the actual hasher and let that throw the exception because the argument is null?
Thanks.
In general, I'd suggest it's good practice to validate all of the arguments to public functions/methods before using them, and fail early rather than after executing half of the function. In this case, you're right to throw the exception.
Depending on what your method is doing, failing early could be important. If your method was altering instance data on your class, you don't want it to alter half of the data, then encounter the null and throw an exception, as your object's data might then be in an intermediate and possibly invalid state.
If you're using an OO language then I'd suggest it's essential to validate the arguments to public methods, but less important with private and protected methods. My rationale here is that you don't know what the inputs to a public method will be - any other code could create an instance of your class and call its public methods, and pass in unexpected/invalid data. Private methods, however, are called from inside the class, and the class should already have validated any data passing around internally.
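The fail-early point above can be sketched in a few lines of Python (illustrative names only): validate every argument before mutating any instance state, so a bad call can never leave the object half-updated.

```python
# Validate up front, mutate only afterwards.

class Account:
    def __init__(self):
        self.owner = None
        self.balance = 0

    def configure(self, owner, balance):
        # Check everything first...
        if owner is None:
            raise ValueError("owner must not be None")
        if balance < 0:
            raise ValueError("balance must be non-negative")
        # ...and only then touch the object's state.
        self.owner = owner
        self.balance = balance

acct = Account()
try:
    acct.configure("alice", -5)   # invalid: negative balance
except ValueError:
    pass
# Because validation ran first, the failed call left no partial update behind.
assert acct.owner is None and acct.balance == 0
```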
One of my favourite techniques in C++ was to DEBUG_ASSERT on NULL pointers. This was drilled into me by senior programmers (along with const correctness) and is one of the things I was most strict on during code reviews. We never dereferenced a pointer without first asserting it wasn't null.
A debug assert is only active for debug targets (it gets stripped in release) so you don't have the extra overhead in production to test for thousands of if's. Generally it would either throw an exception or trigger a hardware breakpoint. We even had systems that would throw up a debug console with the file/line info and an option to ignore the assert (once or indefinitely for the session). That was such a great debug and QA tool (we'd get screenshots with the assert on the testers screen and information on whether the program continued if ignored).
I suggest asserting all invariants in your code including unexpected nulls. If performance of the if's becomes a concern find a way to conditionally compile and keep them active in debug targets. Like source control, this is a technique that has saved my ass more often than it has caused me grief (the most important litmus test of any development technique).
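Python has a rough analogue of the DEBUG_ASSERT technique described above: plain assert statements are compiled out when the interpreter runs with -O, so, like a debug-only assert, they cost nothing in a "release" run, while still catching unexpected nulls during development. (The function below is illustrative, not from any real API.)

```python
# assert fires in a normal run, but is stripped under `python -O`.

def hash_value(val):
    # Active only in non-optimized runs, like a C++ DEBUG_ASSERT:
    assert val is not None, "val must not be None"
    return hash(val)

try:
    hash_value(None)    # under `python -O` this would NOT raise
    raised = False
except AssertionError:
    raised = True

# In a normal (non -O) run the assert fires:
assert raised
```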
Yes, it's good practice to validate all arguments at the beginning of a method and throw appropriate exceptions like ArgumentException, ArgumentNullException, or ArgumentOutOfRangeException.
If the method is private such that only you the programmer could pass invalid arguments, then you may choose to assert each argument is valid (Debug.Assert) instead of throw.
If NULL is an unacceptable input, throw an exception. Do it yourself, as you did in your sample, so that the message is helpful.
Another way of handling NULL inputs is just to respond with NULL in turn. It depends on the type of function -- for the example above, I would keep the exception.
If it's for an externally facing API, then I would say you want to check every parameter, as the input cannot be trusted.
However, if it is only going to be used internally then the input should be able to be trusted and you can save yourself a bunch of code that's not adding value to the software.
You should check all arguments against the set of assumptions that you make in that function about their values.
As in your example, if a null argument to your function doesn't make any sense, and you're assuming that anyone using your function will know this, then being passed a null argument indicates some sort of error, and some sort of action should be taken (e.g. throwing an exception). And if you use asserts (as James Fassett got in and said before me ;-) ), they cost you nothing in a release version. (They cost you almost nothing in a debug version, either.)
The same thing applies to any other assumption.
And it's going to be easier to trace the error if you generate it than if you leave it to some standard library routine to throw the exception. You will be able to provide much more useful contextual information.
It's outside the bounds of this question, but you do need to expose the assumptions that your function makes - for example, through the comment header to your function.
According to The Pragmatic Programmer by Andrew Hunt and David Thomas, it is the responsibility of the caller to make sure it gives valid input. So, you must now choose whether you consider a null input to be valid. Unless it makes specific sense to consider null to be a valid input (e.g. it is probably a good idea to consider null to be a legal input if you're testing for equality), I would consider it invalid. That way your program, when it hits incorrect input, will fail sooner. If your program is going to encounter an error condition, you want it to happen as soon as possible. In the event your function does inadvertently get passed a null, you should consider it to be a bug, and react accordingly (i.e. instead of throwing an exception, you should consider making use of an assertion that kills the program, until you are releasing the program).
Classic design by contract: If input is right, output will be right. If input is wrong, there is a bug. (if input is right but output is wrong, there is a bug. That's a gimme.)
I'll add a couple of elaborations to the excellent design-by-contract advice offered by Brian earlier...
The principles of "design by contract" require that you define what is acceptable for the caller to pass in (the valid domain of input values) and then, for any valid input, what the method/provider will do.
For an internal method, you can define NULLs as outside the domain of valid input parameters. In this case, you would immediately assert that the input parameter value is NOT NULL. The key insight in this contract specification is that any call passing in a NULL value IS A CALLER'S BUG and the error thrown by the assert statement is the proper behavior.
Now, while very well defined and parsimonious, if you're exposing the method to external/public callers, you should ask yourself: is that the contract I/we really want?
Probably not. In a public interface, you'd probably accept the NULL (as technically in the domain of inputs that the method accepts), but then gracefully decline to process it, with a return message. (More work to meet the naturally more complex customer-facing requirement.)
In either case, what you're after is a protocol that handles all of the cases from both the perspective of the caller and the provider, not lots of scattershot tests that can make it difficult to assess the completeness or lack of completeness of the contractual condition coverage.
Most of the time, letting it just throw the exception is pretty reasonable as long as you are sure the exception won't be ignored.
If you can add something to it, however, it doesn't hurt to wrap the exception with one that is more accurate and rethrow it. Decoding "NullPointerException" is going to take a bit longer than "IllegalArgumentException("FilePath MUST be supplied")" (Or whatever).
Lately I've been working on a platform where you have to run an obfuscator before you test. Every stack trace looks like monkeys typing random crap, so I got in the habit of checking my arguments all the time.
I'd love to see a "nullable" or "nonull" modifier on variables and arguments so the compiler can check for you.
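The wrap-and-rethrow idea from the answer above can be sketched in Python (illustrative names only): catch the low-level error deep in the call and re-raise it as an argument error that names the offending parameter, so the stack trace explains itself instead of showing a bare null-reference failure.

```python
# Re-raise a low-level error as a descriptive argument error.

def generate_hash(file_path):
    try:
        return hash(file_path.lower())     # .lower() fails on None
    except AttributeError as exc:
        raise ValueError("file_path MUST be supplied") from exc

try:
    generate_hash(None)
except ValueError as exc:
    message = str(exc)

assert message == "file_path MUST be supplied"
```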
If you're writing a public API, do your caller the favor of helping them find their bugs quickly, and check for valid inputs.
If you're writing an API where the caller might be untrusted (or the caller of the caller), check for valid inputs, because it's good security.
If your APIs are only reachable by trusted callers, like "internal" in C#, then don't feel like you have to write all that extra code. It won't be useful to anyone.
