Asynchronous node.js calls fail when run in a VM script - ajax

I've managed to write an extension to my custom node.js server, that allows an AJAX script to be run using the VM module. I've also succeeded in persuading VM to acknowledge the existence of require, which is necessary for my scripts to be of any use.
But I've hit a truly weird and intractable problem.
When I run the AJAX script "normally", by handing it off to a child process using child_process.spawn(), all my asynchronous API calls work as expected, and my callback chain runs to completion. In the case of the script I'm currently testing, this uses fs.readdir() to launch the callback chain, and it works in child process mode, returning directory listings to the HTML client that makes the initial AJAX request.
But when I run the AJAX script inside VM using new vm.Script(), followed by script.runInContext(), something weird happens. The script reaches my fs.readdir() call, but the callback is never called! The entire script stalls at this point!
Since the same V8 engine is running the code in both child process and VM mode, I'm more than a little puzzled as to why the AJAX script works perfectly if launched in child process mode, but fails to launch the callback chain at all when launched in VM mode.
The only reason I can think of why VM doesn't like my AJAX script is that the callbacks are defined as inner functions of a closure, and the outer function comes to an end after the fs.readdir() call. But if that's the case, why does the script work perfectly, both when run in the node.js debugger, and when passed to a child process by my server after an AJAX request?
The documentation for VM makes no mention of fs calls being blocked if called within code running under VM, and I wouldn't have embarked upon this development path, if I'd been informed beforehand that such blocking had been implemented.
So what is going on?
EDIT: Here's the code that processes the arguments passed to the script. It calls a function getCmdArgs(), defined earlier, which I don't need to list here because it's been tested and works in both child process mode and VM mode. It conducts a stringent test to see whether the argument-passing variables needed to supply the script with its data actually exist, before returning a suitable return value (the arguments in question, if they exist). Once again, this part works.
//Get the arguments passed to the script - new dual mode version! See function notes above.
var args = getCmdArgs();

//Check to see if the script is being run in a VM sandbox by the server ...
if (args[0] == "RunInVM")
    runMode = "V";
else
    runMode = "P";
//End if/else
Next, once the script arguments have been passed to the script, it's time to process them and find out what directory is being referenced. I use an anonymised path reference, which begins with the word "root". The idea is that the client does not need to know where on the server its permitted storage has been allocated, only that it has storage on disc that it's permitted to access. The client, once given this, can manipulate that directory tree at will, and the child process versions of the scripts that permit this all work. So, this is how the arguments are processed:
//Set our root directory, to which all other directory references will be attached ...
//ROOTPATH is set at the beginning of the script, and is known only to the script, not the client
var dirRoot = ROOTPATH;
//Now get the arguments that were passed to the script ...
var tmpChk = args[2];
var queryMarker = tmpChk.indexOf("?");
if (queryMarker != -1)
{
    //Keep only the query string that follows the "?"
    var actualQuery = tmpChk.substring(queryMarker+1);
    //Separate the x=y parts from each other
    var queryParts = actualQuery.split("=");
    var argType = queryParts[0];
    var argData = queryParts[1];
    if (argType === "dir")
    {
        requestedDir = argData;
        //Here, we have an argument of the form "dir=xxx". Find out what "xxx" is.
        if (argData === "root")
            dirName = dirRoot;
        else
        {
            //Here, we have an argument of the form "dir=root/xxx". Find out what "xxx" is,
            //and make it a subdirectory of our chosen root directory.
            var subIndex = argData.indexOf("root/");
            var subDir = argData.substring(subIndex+5);
            subDir = subDir.replace(/\x2F/g, path.sep);
            subDir = qs.unescape(subDir);
            dirName = dirRoot+path.sep+subDir;
            //Also, insert an entry for the parent directory into our custom DirEntry array ...
            var newEntry = new DirEntry();
            newEntry.fileName = "(Parent Directory)";
            newEntry.fileType = "DIR";
            //Remember we're using the UNIX separator for our GET call!
            var pdIdx = requestedDir.lastIndexOf("/");
            var pd = requestedDir.substring(0,pdIdx);
            //This is the secure path to the parent directory
            newEntry.filePath = pd;
            myDirEntries[idx2++] = newEntry;
        } //End if/else
    } //End if
}
The above generates the initial directory entry to be returned (via a custom DirEntry() object), and again, this all works. It also determines what path is to be referenced when searching the server's storage.
At this point, we're ready to roll, and we call fs.readdir() accordingly:
//Get entire contents of dir in one go!
fs.readdir(dirName, DirContentsCallback);
Now when running in child process mode, the above function executes, and the callback function DirContentsCallback(), which is the first part of the callback chain, is duly launched. But when I try to run it in VM mode, this doesn't happen. The callback is never called. Now, if fs.readdir() had actually executed and an error had been generated, then surely my callback would have been called with a non-null error argument? Instead, the whole script just grinds to a halt and hangs.
I inserted a series of console.log() calls to check the progress of the script, to see if any parts of the callback chain were ever executed, and NONE of the console.log() calls fired. My entire callback chain stalled. This does not happen if I run the script in child process mode, by passing it to cp.spawn() - instead, I end up with a completed AJAX transaction, and my directory contents are nicely displayed in the client, once the client receives the JSON data generated by the script. What's even more bizarre is that the SAME DATA is being passed to the script in both modes - I've checked this repeatedly. Yet the data that worked perfectly in child process mode, and generated a target directory path that was duly examined for entries, comes crashing to a silent halt the moment I try executing fs.readdir() in VM mode.
Now, to be perfectly honest, I don't see why I needed to bother posting a host of prior code that all works, when the part that DOESN'T work in VM mode is the last line of the lot. What I opened this question with seemed to me perfectly sufficient to alert people to the problem. But if it means I receive meaningful answers, then so be it - waste time posting code that works I shall.


With rbx.lua, how do you edit GUI properties?

Changing UI_ELEMENT.Visible to true and then false shows and hides the UI element; however, when I switch it to true again it doesn't reappear. I believe this may be an issue of how I'm doing it rather than what I'm doing.
Hi,
I'm new to Roblox Lua (but I have JavaScript and C# experience). I am working on making a 'Garage' or 'Parts' GUI. I am trying to make a click detector on a text object set the UI_ELEMENT.Visible of a UI element to true, and a text button (part of the previously mentioned UI element) set that UI_ELEMENT.Visible back to false.
This process works fine until I run through it multiple times (e.g setting to true, then false, and then true again). The UI_ELEMENT.Visible is 'locked' at true (as in setting it to false just results in it being set back to true next frame) but the UI doesn't show.
Code:
click_detector1.MouseClick:connect(function(player) -- When clicked
    _G.PlayerInfo[player.Name].status = "In Garage" -- set player status to in garage (works fine no issues)
    _G.PlayerInfo[player.Name].topbar = "" -- reset topbar (works)
    print("this is only supposed to happen once") -- a check to see if this is running more than once
    game.Players[tostring(player.Name)].PlayerGui.Garage.menu.Visible = true -- one way that should work
    --.Enabled = true -- another way that should work
    --.menu.Position = UDim2.new(0.5, 0, 0, 0) -- another way that should work (setting position to center of screen)
end)
The above is in a server script (let's call it script #1).
button = script.Parent

local function onButtonActivated()
    local Players = game:GetService("Players")
    local player = Players.LocalPlayer -- get the local player
    print("I am only running once") -- test to see if this is running more than once
    game.Players[tostring(player.Name)].PlayerGui.Garage.menu.Visible = false -- one way that should work
    --.Enabled = false -- another way that should work
    --.menu.Position = UDim2.new(10, 0, 0, 0) -- another way that should work (change x scale to off screen)
end

button.Activated:Connect(onButtonActivated)
The above is in a local script (let's call this script #2).
The interesting thing is that none of the methods I proposed in the 'another way that should work' comments actually function beyond the initial first cycle of the loop (e.g. setting to true, then false, and then true again).
Also logging the tests to see if they run multiple times only runs once each time it is cycled through (Like it should). However, this means the code for setting it to visible is also running, but not logging an error or doing what it should do.
Thanks, Daniel Morgan
I believe the issue lies with your use of a server script and a local script. LocalScripts only change what's on the player's client. Non-local scripts change what's on the server. In other words, non-local scripts affect everyone, while LocalScripts only affect individual players.
Setting the visibility of the GUI to false through a LocalScript will only change the GUI on the player's client. However, the server will still see the player's GUI visibility as true. This discrepancy may be causing you issues.
I would suggest an alternative method using RemoteEvents. Instead of changing the GUI visibility to false in a LocalScript as you do in script #2, I would use a RemoteEvent to do that instead. In your case it would look something like the following:
The following is what would be in a non-local script:
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local remoteEvent = Instance.new("RemoteEvent", ReplicatedStorage)
remoteEvent.Name = "MyRemoteEventName"

-- player is the player object, visibility is a boolean (true or false)
local function ChangeGuiVisibility(player, visibility)
    player.PlayerGui.Garage.menu.Visible = visibility
end

-- Call ChangeGuiVisibility() when the client fires the remote event
remoteEvent.OnServerEvent:Connect(ChangeGuiVisibility)
You can Fire this remote event from a local script like this:
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local visibility = false
local remoteEvent = ReplicatedStorage:WaitForChild("MyRemoteEventName")

-- Fire the remote event.
remoteEvent:FireServer(visibility) -- NOTE: the player object is automatically passed in as the first argument to FireServer()
Here is a very good guide on RemoteEvents: https://developer.roblox.com/en-us/articles/Remote-Functions-and-Events
Basically, RemoteEvents allow you to bridge the gap between the client and the server. The client can fire an event to which the server will respond. A while ago, this was not the case with Roblox: changes from the client were visible to the server, which made exploiting games very easy.
Also, I'd like to suggest some changes to your first script, which may be helpful for you in the future.
Instead of:
game.Players[tostring(player.Name)].PlayerGui.Garage.menu.Visible = true
try
if player.PlayerGui:FindFirstChild("Garage") then
    player.PlayerGui.Garage.menu.Visible = true
end
There is no need to access game.Players, since the MouseClick event already returns the player object that clicked the button. I use FindFirstChild, which checks whether the Garage GUI exists. This can prevent potential errors from occurring and is often good practice.
Hopefully this helps, please follow up if you're still having issues.

In Cypress when to use Custom Command vs Task?

What is the difference between a Custom Command and a Task? I am trying to understand how they should each be used.
Custom Command documentation: https://docs.cypress.io/api/cypress-api/custom-commands.html
Task documentation: https://docs.cypress.io/api/commands/task.html
A command (most methods on the global cy object) is a function that enqueues (pushes) an action onto a queue of currently-executing commands. The queue executes serially and asynchronously (that's why the return value of a command is an object having a .then method --- but despite that, and the fact that it behaves like a promise, it's not a promise). Until the previous command is finished, the next command doesn't execute.
Commands are defined and executed directly in the browser.
A custom command is a regular command, but defined by you as opposed to the default commands that Cypress supplies out of the box. Custom commands are useful for automating a workflow you repeat in your tests over and over (e.g. by grouping several default cy commands together).
Commands are used to interact with your web app under test (AUT) --- most notably with the DOM (e.g. via cy.get(selector) to query the DOM), and to make assertions.
It's also important to realize that while commands are being executed serially, they are enqueued immediately (in the same event loop tick), and any expressions you pass to them are evaluated then and there. This isn't a Cypress-specific behavior, just plain JavaScript. That's why you can't do things like these:
// INCORRECT USAGE
let value;
cy.get('.myInput').invoke('val').then(val => value = val);
cy.get('.mySecondInput').type(value); // ✗ value is undefined here
Nor can you use async/await:
// INCORRECT USAGE
// ✗ await doesn't work on Cypress commands
const value = await cy.get('.myInput').invoke('val');
cy.get('.mySecondInput').type(value);
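The enqueue-now, run-later behavior can be demonstrated without Cypress at all. The following toy queue is a sketch of the principle, not of Cypress internals; it shows why a value captured at enqueue time is still undefined:

```javascript
// Toy serial command queue: commands are enqueued immediately, but
// only executed later, one after another.
const queue = [];
const enqueue = fn => queue.push(fn);
async function run() {
  for (const fn of queue) await fn();
}

let value;
enqueue(async () => { value = 'hello'; });   // like cy.invoke('val').then(...)
const captured = value;                      // evaluated at enqueue time: undefined
enqueue(async () => console.log('captured =', captured));
const done = run();                          // prints "captured = undefined"
```

By the time the second command runs, the first has already set value, but captured was read before either command executed. This is exactly why dependent values must be consumed inside a .then() callback rather than captured in an outer variable.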
A task is a function defined and executed on the Cypress backend process (Node.js), not in the browser.
To execute a task (which you previously defined in your cypress/plugins/index.js file), you need to first enqueue it as a regular command in your test via cy.task(taskName, data). Cypress then (when the command takes its turn to execute) sends a message to the backend process where the task is executed.
Data your task returns is serialized (via JSON.stringify or something similar) and sent back to the browser where it's passed to a callback you potentially chained to your cy.task() command using .then(callback).
Tasks are mainly used to communicate with your own server backend, e.g. to seed the database; or for I/O such as reading/writing to a file (although Cypress supplies commands for these, such as cy.exec() or cy.writeFile()).
There are no default tasks --- every task you execute you first need to define yourself.
Another important point is that the messages that are sent between processes (the Cypress browser process, and the Cypress node process) are sent via an IPC channel, and must be serializable. That means that the data you pass to cy.task(taskName, data) are stringified, as well as is the response returned from the task itself. Thus, sending e.g. an object containing a method will not work (that is, the method won't be transferred at all).
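The effect of that serialization is easy to see with plain JSON round-tripping, which is roughly (a sketch, not Cypress's exact wire format) what the IPC channel does to cy.task() payloads:

```javascript
// A method on the payload does not survive a JSON round trip,
// which is approximately what happens to data crossing the IPC
// channel between the browser and the Node process.
const payload = { id: 1, greet() { return 'hi'; } };
const overTheWire = JSON.parse(JSON.stringify(payload));

console.log(typeof payload.greet);      // 'function'
console.log(typeof overTheWire.greet);  // 'undefined' -- the method was dropped
```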
Great answers, but to sum up, here are the two main differences that help you choose whether you need a Cypress command or a task:
If you need to run a promise or interact with your backend, go with a task.
If you are interacting with the DOM and making assertions, go with a command.
Taken from this article
Cypress commands, in general, do not return promises. The documentation refers to them as 'thenable': the results are only obtained via the .then(result => {}) construct.
This is why the comment above is true. (Shown here for reference.)
// INCORRECT USAGE
// ✗ await doesn't work on Cypress commands
const value = await cy.get('.myInput').invoke('val');
cy.get('.mySecondInput').type(value);
However, there is a native way to wrap a Cypress command to get true async/await behavior, as shown here:
function getAsync(query, cb) {
    let prom = new Promise((resolve, reject) => {
        cy.get(query).then((elements) => {
            let objArray = [];
            if (elements === undefined) {
                reject();
                return;
            }
            elements.each((index) => {
                let element = elements[index];
                let obj = cb(element);
                objArray.push(obj);
            });
            resolve(objArray);
        });
    });
    return prom;
}
To call this function above:
it('Gets specific DOM Elements', async () => {
let data = await getAsync('form', getForm);
...

Is writing customised logs into a file with cy.writeFile a good idea in Cypress?

I would like to write some customised logs into files with cypress. Such as 'current test case name', 'user's id used for this test', 'fail or pass', etc.
I googled a while and found that cy.writeFile meets my needs. But it seems most people would recommend cy.task for logging.
So is cy.writeFile for logging a good idea? If not, what's the reason?
Thanks!
BTW, here's the code, very simple:
function logger (log) {
    cy.writeFile('logs/combined.log', log + '\n', { flag: 'a+' })
}

module.exports = {
    logger
}
The cy.task() command is generally used when you do not want the tests themselves to interrupt the logging, or when you need to interact with the Node process itself, whereas cy.writeFile() has to be called from within a test and cannot interact with the Node process. You can add handlers in your plugins file so that continuous logs are produced regardless of the test being run, and concatenate them into the same file.
// in plugins/index.js file
const path = require('path')
const fsExtra = require('fs-extra')

on('task', {
    readJson: (filename) => {
        // reads the file relative to current working directory
        return fsExtra.readJson(path.join(process.cwd(), filename))
    }
})
https://docs.cypress.io/api/commands/task.html#Command
cy.task() provides an escape hatch for running arbitrary Node code, so you can take actions necessary for your tests outside of the scope of Cypress. This is great for:

- Seeding your test database.
- Storing state in Node that you want persisted between spec files.
- Performing parallel tasks, like making multiple http requests outside of Cypress.
- Running an external process.

In the task plugin event, the command will fail if undefined is returned. This helps catch typos or cases where the task event is not handled.
If you do not need to return a value, explicitly return null to signal that the given event has been handled.

Can methods of objects be called from ABAP debugger script?

I'm just discovering the new (to my system) ABAP Debugger Script.
Say this is my program:
* Assume I have the class LCL_SMTH with public methods INCREMENT and REFRESH
DATA: lo_smth TYPE REF TO lcl_smth.
CREATE OBJECT lo_smth.
lo_smth->increment( ).
WRITE 'Nothing happened'.
Could I get my script to call the REFRESH method after it exits INCREMENT?
I set the script to execute on calling of the INCREMENT method, and it does so. Next, I know I have to STEP OUT (F7), which I also do. I just don't know how to invoke the REFRESH method.
A debugger script can do exactly what you could do manually in the debugger, and nothing more. So you can't invoke REFRESH directly ... unless you could do so manually. Since you can jump manually in the debugger (changing the point of execution), a debugger script can as well. So if there is a suitable call to REFRESH somewhere in the code, then you can jump there and back.

Uploading a file using post() method of QNetworkAccessManager

I'm having some trouble with a Qt application; specifically with the QNetworkAccessManager class. I'm attempting to perform a simple HTTP upload of a binary file using the post() method of the QNetworkAccessManager. The documentation states that I can give a pointer to a QIODevice to post(), and that the class will transmit the data found in the QIODevice. This suggests to me that I ought to be able to give post() a pointer to a QFile. For example:
QFile compressedFile("temp");
compressedFile.open(QIODevice::ReadOnly);
netManager.post(QNetworkRequest(QUrl("http://mywebsite.com/upload") ), &compressedFile);
What seems to happen on the Windows system where I'm developing this is that my Qt application pushes the data from the QFile, but then doesn't complete the request; it seems to be sitting there waiting for more data to show up from the file. The post request isn't "closed" until I manually kill the application, at which point the whole file shows up at my server end.
From some debugging and research, I think this is happening because the read() operation of QFile doesn't return -1 when you reach the end of the file. I think that QNetworkAccessManager is trying to read from the QIODevice until it gets a -1 from read(), at which point it assumes there is no more data and closes the request. If it keeps getting a return code of zero from read(), QNetworkAccessManager assumes that there might be more data coming, and so it keeps waiting for that hypothetical data.
I've confirmed with some test code that the read() operation of QFile just returns zero after you've read to the end of the file. This seems to be incompatible with the way that the post() method of QNetworkAccessManager expects a QIODevice to behave. My questions are:
Is this some sort of limitation with the way that QFile works under Windows?
Is there some other way I should be using either QFile or QNetworkAccessManager to push a file via post()?
Is this not going to work at all, and will I have to find some other way to upload my file?
Any suggestions or hints would be appreciated.
Update: It turns out that I had two different problems: one on the client side and one on the server side. On the client side, I had to ensure that my QFile object stayed around for the duration of the network transaction. The post() method of QNetworkAccessManager returns immediately but isn't actually finished immediately. You need to attach a slot to the finished() signal of QNetworkAccessManager to determine when the POST is actually finished. In my case it was easy enough to keep the QFile around more or less permanently, but I also attached a slot to the finished() signal in order to check for error responses from the server.
I attached the signal to the slot like this:
connect(&netManager, SIGNAL(finished(QNetworkReply*)), this, SLOT(postFinished(QNetworkReply*)));
When it was time to send my file, I wrote the post code like this (note that compressedFile is a member of my class and so does not go out of scope after this code):
compressedFile.open(QIODevice::ReadOnly);
netManager.post(QNetworkRequest(QUrl(httpDestination.getCString())), &compressedFile);
The finished(QNetworkReply*) signal from QNetworkAccessManager triggers my postFinished(QNetworkReply*) method. When this happens, it's safe for me to close compressedFile and to delete the data file represented by compressedFile. For debugging purposes I also added a few printf() statements to confirm that the transaction is complete:
void CL_QtLogCompressor::postFinished(QNetworkReply* reply)
{
    QByteArray response = reply->readAll();
    printf("response: %s\n", response.data());
    printf("reply error %d\n", reply->error());
    reply->deleteLater();
    compressedFile.close();
    compressedFile.remove();
}
Since compressedFile isn't closed immediately and doesn't go out of scope, the QNetworkAccessManager is able to take as much time as it likes to transmit my file. Eventually the transaction is complete and my postFinished() method gets called.
My other problem (which also contributed to the behavior I was seeing where the transaction never completed) was that the Python code for my web server wasn't fielding the POST correctly, but that's outside the scope of my original Qt question.
You're creating compressedFile on the stack, and passing a pointer to it to your QNetworkRequest (and ultimately your QNetworkAccessManager). As soon as you leave the method you're in, compressedFile is going out of scope. I'm surprised it's not crashing on you, though the behavior is undefined.
You need to create the QFile on the heap:
QFile *compressedFile = new QFile("temp");
You will of course need to keep track of it and then delete it once the post has completed, or set it as the child of the QNetworkReply so that it gets destroyed when the reply gets destroyed later:
QFile *compressedFile = new QFile("temp");
compressedFile->open(QIODevice::ReadOnly);
QNetworkReply *reply = netManager.post(QNetworkRequest(QUrl("http://mywebsite.com/upload") ), compressedFile);
compressedFile->setParent(reply);
You can also schedule automatic deletion of a heap-allocated file using signals/slots
QFile* compressedFile = new QFile(...);
QNetworkReply* reply = Manager.post(...);
// This is where the trick is
connect(reply, SIGNAL(finished()), reply, SLOT(deleteLater()));
connect(reply, SIGNAL(destroyed()), compressedFile, SLOT(deleteLater()));
IMHO, it is much more localized and encapsulated than having to keep around your file in the outer class.
Note that you must remove the first connect() if you have your own postFinished(QNetworkReply*) slot, in which case you must not forget to call reply->deleteLater() inside it for the above to work.
