Session info in global variables

I have a function in PowerShell, which establishes a session and stores the info in global variables:
$global:transport = new-object Fix.ServerSocketTransport $FixHost, $FixPort
Now, within the application the $global:transport variable is used to send and receive the data.
After the script execution ends will the session be closed? Will the $global:transport value be reset? (I have commented out the part where we disconnect the session)
After the script ends, even though I do not create a new session, it sends and receives data through $global:transport variable. Why does this happen?

Globals are indeed global to the session. After your script executes (and creates the global), that variable and its value persist. Note that PowerShell / .NET do not automatically close objects. If an object implements a finalizer, then when it is collected by the garbage collector (at some indeterminate time in the future) the finalizer will run and close or release the associated native resources. If the object implements IDisposable, or otherwise has a Close() or Dispose() method, you should call that method when you're done with the object. Also, to keep PowerShell from hanging onto the object forever (you did put it in a global), you can either (a) set the global variable to $null or (b), even better, remove the variable altogether using Remove-Variable.
Another option is to create a script-scope variable in your outermost script (startup script). This script variable will be visible to any other scripts you execute and will go away when the script finishes. However, as above, if the object implements Close() or Dispose(), you should call that on the object when you're done with it.
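A short sketch of the cleanup described above (hypothetical, assuming $global:transport wraps a disposable socket object):

```powershell
# Dispose the underlying object first, if it supports IDisposable ...
if ($global:transport -is [System.IDisposable]) {
    $global:transport.Dispose()
}
# ... then drop the global variable so PowerShell releases its reference
Remove-Variable -Name transport -Scope Global
```

If the transport type exposes a Close() method instead of Dispose(), call that before removing the variable.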

Related

Using alternative event loop without setting global policy

I'm using uvloop with websockets as:
import uvloop
import websockets

coro = websockets.serve(handler, host, port) # creates new server
loop = uvloop.new_event_loop()
loop.create_task(coro)
loop.run_forever()
It works fine; I'm just wondering whether I could run into some unexpected problems without setting the global asyncio policy to uvloop. As far as I understand, not setting the global policy should work as long as nothing down the line uses the global asyncio methods, but instead works with the passed-down event loop directly. Is that correct?
There are three main global objects in asyncio:
the policy (common to all threads)
the default loop (specific to the current thread)
the running loop (specific to the current thread)
All the attempts to get the current context in asyncio go through a single function, asyncio.get_event_loop.
One thing to remember is that since Python 3.6 (and Python 3.5.3+), get_event_loop has a specific behavior:
If it's called while a loop is running (e.g within a coroutine), the running loop is returned.
Otherwise, the default loop is returned by the policy.
Example 1:
import asyncio
import uvloop

asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
loop = asyncio.get_event_loop()
loop.run_forever()
Here the policy is the uvloop policy. The loop returned by get_event_loop is a uvloop, and it is set as the default loop for this thread. When this loop is running, it is registered as the running loop.
In this example, calling get_event_loop() anywhere in this thread returns the right loop.
Example 2:
import asyncio
import uvloop

loop = uvloop.new_event_loop()
asyncio.set_event_loop(loop)
loop.run_forever()
Here the policy is still the default policy. The loop returned by new_event_loop is a uvloop, and it is set as the default loop for this thread explicitly using asyncio.set_event_loop. When this loop is running, it is registered as the running loop.
In this example, calling get_event_loop() anywhere in this thread returns the right loop.
Example 3:
import uvloop
loop = uvloop.new_event_loop()
loop.run_forever()
Here the policy is still the default policy. The loop returned by new_event_loop is a uvloop, but it is not set as the default loop for this thread. When this loop is running, it is registered as the running loop.
In this example, calling get_event_loop() within a coroutine returns the right loop (the running uvloop). But calling get_event_loop() outside a coroutine will result in a new standard asyncio loop, set as the default loop for this thread.
So the first two approaches are fine, but the third one is discouraged.
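The running-loop rule behind these examples can be checked with the standard library alone; here's a minimal sketch using a plain asyncio loop as a stand-in for uvloop (the get_event_loop() mechanics are identical):

```python
import asyncio

# Stand-in for uvloop.new_event_loop(); the get_event_loop() mechanics are the same.
custom = asyncio.new_event_loop()
asyncio.set_event_loop(custom)  # register as this thread's default loop (Example 2)

async def current_loop():
    # Inside a coroutine, get_event_loop() returns the *running* loop.
    return asyncio.get_event_loop()

result = custom.run_until_complete(current_loop())
print(result is custom)  # True
custom.close()
```

Since a uvloop behaves like any other loop with respect to get_event_loop(), the same check passes when `custom` is created with `uvloop.new_event_loop()`.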
Custom event loop should be passed as param
If you want to use a custom event loop without using asyncio.set_event_loop(loop), you'll have to pass loop as a param to every relevant asyncio coroutine or object, for example:
await asyncio.sleep(1, loop=loop)
or
fut = asyncio.Future(loop=loop)
You may notice that almost every coroutine/object in the asyncio module accepts this param.
The same thing also applies to the websockets library, as you can see from its source code. So you'll need to write:
loop = uvloop.new_event_loop()
coro = websockets.serve(handler, host, port, loop=loop) # pass loop as param
There's no guarantee that your program will work correctly if you don't pass your event loop as a param like that.
Possible, but uncomfortable
While theoretically you can use a custom event loop without changing the policy, I find it extremely uncomfortable:
You'll have to write loop=loop almost everywhere, which is annoying
There's no guarantee that some third-party library will allow you to pass loop as a param and won't just use asyncio.get_event_loop()
Based on that, I advise you to reconsider your decision and use a global event loop.
I understand that it may feel "wrong" to use a global event loop, but the "right" way of passing loop as a param everywhere is worse in practice (in my opinion).

Asynchronous node.js calls fail when run in a VM script

I've managed to write an extension to my custom node.js server, that allows an AJAX script to be run using the VM module. I've also succeeded in persuading VM to acknowledge the existence of require, which is necessary for my scripts to be of any use.
But I've hit a truly weird and intractable problem.
When I run the AJAX script "normally", by handing it off to a child process using child_process.spawn(), all my asynchronous API calls work as expected, and my callback chain runs to completion. In the case of the script I'm currently testing, this uses fs.readdir() to launch the callback chain, and it works in child process mode, returning directory listings to the HTML client that makes the initial AJAX request.
But when I run the AJAX script inside VM using new vm.Script(), followed by script.runInContext(), something weird happens. The script reaches my fs.readdir() call, but the callback is never called! The entire script stalls at this point!
Since the same V8 engine is running the code in both child process and VM mode, I'm more than a little puzzled as to why the AJAX script works perfectly if launched in child process mode, but fails to launch the callback chain at all when launched in VM mode.
The only reason I can think of, why VM doesn't like my AJAX script, is because the callbacks are defined as inner functions of a closure, and the outer function comes to an end after the fs.readdir() call. But if that's the case, why does the script work perfectly, both when run in the node.js debugger, and passed to a child process by my server after an AJAX request?
The documentation for VM makes no mention of fs calls being blocked if called within code running under VM, and I wouldn't have embarked upon this development path, if I'd been informed beforehand that such blocking had been implemented.
So what is going on?
EDIT: Here's the code that processes the arguments passed to the script. It calls a function getCmdArgs(), defined earlier, which I don't need to list here because it's been tested and works. It works in both child process mode and VM mode, and conducts a stringent test to see whether the argument-passing variables needed to supply the script with its data actually exist, before returning a suitable return argument (the args in question, if they exist). Once again, this part works.
//Get the arguments passed to the script - new dual mode version! See function notes above.
var args = getCmdArgs();
//Check to see if the script is being run in a VM sandbox by the server ...
if (args[0] == "RunInVM")
runMode = "V";
else
runMode = "P";
//End if/else
Next, once the script arguments have been passed to the script, it's time to process them, and find out what directory is being referenced. I use an anonymised path reference, which begins with the word "root", the idea being that the client does not need to know where on the server it has had its permitted storage allocated, only that it has storage on disc that it's permitted to access. The client, once given this, can manipulate that directory tree at will, and the child process versions of the scripts that permit this all work. So, this is how the arguments are processed:
//Set our root directory, to which all other directory references will be attached ...
//ROOTPATH is set at the beginning of the script, and is known only to the script, not the client
var dirRoot = ROOTPATH;
//Now get the arguments that were passed to the script ...
var tmpChk = args[2];
var queryMarker = tmpChk.indexOf("?");
if (queryMarker != -1)
{
    //Strip off the initial "?"
    var actualQuery = tmpChk.substring(queryMarker+1);
    //Separate the x=y parts from each other
    var queryParts = actualQuery.split("=");
    var argType = queryParts[0];
    var argData = queryParts[1];
    if (argType === "dir")
    {
        requestedDir = argData;
        //Here, we have an argument of the form "dir=xxx". Find out what "xxx" is.
        if (argData === "root")
            dirName = dirRoot;
        else
        {
            //Here, we have an argument of the form "dir=root/xxx". Find out what "xxx" is,
            //and make it a subdirectory of our chosen root directory.
            var subIndex = argData.indexOf("root/");
            var subDir = argData.substring(subIndex+5);
            subDir = subDir.replace(/\x2F/g, path.sep);
            subDir = qs.unescape(subDir);
            dirName = dirRoot+path.sep+subDir;
            //Also, insert an entry for the parent directory into our custom DirEntry array ...
            var newEntry = new DirEntry();
            newEntry.fileName = "(Parent Directory)";
            newEntry.fileType = "DIR";
            //Remember we're using the UNIX separator for our GET call!
            var pdIdx = requestedDir.lastIndexOf("/");
            var pd = requestedDir.substring(0,pdIdx);
            //This is the secure path to the parent directory
            newEntry.filePath = pd;
            myDirEntries[idx2++] = newEntry;
        } //End if/else
    } //End if
}
The above generates the initial directory entry to be returned (via a custom DirEntry() object), and again, this all works. It also determines what path is to be referenced when searching the server's storage.
At this point, we're ready to roll, and we call fs.readdir() accordingly:
//Get entire contents of dir in one go!
fs.readdir(dirName, DirContentsCallback);
Now, when running in child process mode, the above function executes, and the callback function DirContentsCallback(), which is the first part of the callback chain, is duly launched. But when I try to run it in VM mode, this doesn't happen. The callback is never called. If fs.readdir() had actually executed and an error had been generated, then surely my callback would have been called with a non-null error argument? Instead, the whole script just grinds to a halt and hangs.
I inserted a series of console.log() calls to check the progress of the script, to see if any parts of the callback chain were ever executed, and NONE of the console.log() calls fired. My entire callback chain stalled. This does not happen if I run the script in child process mode, by passing it to cp.spawn() - instead, I end up with a completed AJAX transaction, and my directory contents are nicely displayed in the client, once the client receives the JSON data generated by the script. What's even more bizarre, is that the SAME DATA is being passed to the script in both modes - I've checked this repeatedly. Yet the data that worked perfectly in child process mode, and generates a target directory path that is duly examined for entries, comes crashing to a silent halt, the moment I try executing fs.readdir() in VM mode.
Now, to be perfectly honest, I don't see why I needed to bother posting a host of prior code that all works, when the part that DOESN'T work in VM mode is the last line of the lot. What I opened this question with seemed to me perfectly sufficient to alert people to the problem. But if it means I receive meaningful answers, then so be it - waste time posting code that works I shall.

Can methods of objects be called from ABAP debugger script?

I'm just discovering the new (to my system) ABAP Debugger Script.
Say this is my program:
* Assume I have the class LCL_SMTH with public methods INCREMENT and REFRESH
DATA: lo_smth TYPE REF TO lcl_smth.
CREATE OBJECT lo_smth.
lo_smth->increment( ).
WRITE 'Nothing happened'.
Could I get my script to call the REFRESH method after it exits INCREMENT?
I set the script to execute on the calling of the INCREMENT method, and it does so. Next, I know I have to STEP OUT (F7), which I also do; I just don't know how to invoke the REFRESH method.
A debugger script can do exactly what you could do manually in the debugger, and no more. Since you can jump to a statement manually in the debugger, a debugger script can as well. So if there is a suitable call to REFRESH somewhere in the code, you can jump there and back; otherwise you can't.

boost::unique_lock/upgrade_to_unique_lock and boost::shared_lock can exist at the same time? It worries me

I did experiments with boost::upgrade_to_unique_lock/unique_lock and boost::shared_lock; the scenario is:
1 write thread, which holds a boost::unique_lock on a boost::shared_mutex; in this thread I write to a global AClass
3 read threads, each of which holds a boost::shared_lock on the same boost::shared_mutex; they have a loop reading the global AClass
I observed that all the threads hold their locks (1 unique, 3 shared) at the same time, and they are all running their data-access loops.
My concern is that AClass is not thread-safe; if I can read and write it at the same time in different threads, a read could crash. Even if it's not AClass but primitive types, reading them surely will not crash, but the data could be dirty, couldn't it?
boost::shared_lock<boost::shared_mutex>(gmutex);
This is not an "unnamed lock." This creates a temporary shared_lock object which locks gmutex, then that temporary shared_lock object is destroyed, unlocking gmutex. You need to name the object, making it a variable, for example:
boost::shared_lock<boost::shared_mutex> my_awesome_lock(gmutex);
my_awesome_lock will then be destroyed at the end of the block in which it is declared, which is the behavior you want.

Can somebody explain this remark in the MSDN CreateMutex() documentation about the bInitialOwner flag?

The MSDN CreateMutex() documentation (http://msdn.microsoft.com/en-us/library/ms682411%28VS.85%29.aspx) contains the following remark near the end:
Two or more processes can call CreateMutex to create the same named mutex. The first process actually creates the mutex, and subsequent processes with sufficient access rights simply open a handle to the existing mutex. This enables multiple processes to get handles of the same mutex, while relieving the user of the responsibility of ensuring that the creating process is started first. When using this technique, you should set the bInitialOwner flag to FALSE; otherwise, it can be difficult to be certain which process has initial ownership.
Can somebody explain the problem with using bInitialOwner = TRUE?
Earlier in the same documentation it suggests a call to GetLastError() will allow you to determine whether a call to CreateMutex() created the mutex or just returned a new handle to an existing mutex:
Return Value
If the function succeeds, the return value is a handle to the newly created mutex object.
If the function fails, the return value is NULL. To get extended error information, call GetLastError.
If the mutex is a named mutex and the object existed before this function call, the return value is a handle to the existing object, GetLastError returns ERROR_ALREADY_EXISTS, bInitialOwner is ignored, and the calling thread is not granted ownership. However, if the caller has limited access rights, the function will fail with ERROR_ACCESS_DENIED and the caller should use the OpenMutex function.
Using bInitialOwner combines two steps into one: creating the mutex and acquiring the mutex. If multiple people can be creating the mutex at once, the first step can fail while the second step can succeed.
As the other answerers mentioned, this isn't strictly a problem, since you'll get ERROR_ALREADY_EXISTS if someone else creates it first. But then you have to differentiate between the cases of "failed to create or find the mutex" and "failed to acquire the mutex; try again later" just by using the error code. It'll make your code hard to read and easier to screw up.
In contrast, when bInitialOwner is FALSE, the flow is much simpler:
result = create mutex()
if result == error:
    // die
result = try to acquire mutex()
if result == error:
    // try again later
else:
    // it worked!
Well, I'm not sure there's a real problem. But if you set the argument to TRUE in both processes, then you have to check the value of GetLastError() to see whether you actually ended up with ownership. It will be first-come, first-served. It is perhaps useful only if you use a named mutex to implement a singleton process instance.
The flag is used to create the mutex in an owned state: the successful caller atomically creates the synchronisation object and acquires the lock before returning, for the case where the caller needs to be certain that no race condition can form between creating the object and acquiring it.
Your protocol will determine whether you ever need to do this in one atomic operation.
