I'm just discovering the new (to my system) ABAP Debugger Script.
Say this is my program:
* Assume I have the class LCL_SMTH with public methods INCREMENT and REFRESH
DATA: lo_smth TYPE REF TO lcl_smth.
CREATE OBJECT LO_SMTH.
lo_smth->increment( ).
WRITE 'Nothing happened'.
Could I get my script to call the REFRESH method after it exits INCREMENT?
I set the script to execute when the INCREMENT method is called, and it does so. I know I then have to STEP OUT (F7), which I also do; I just don't know how to invoke the REFRESH method.
A debugger script can do exactly what you could do manually in the debugger, and nothing more. So you can't, unless you could do it manually. But since you can jump to another statement manually in the debugger, a debugger script can do so as well: if there is a suitable call to REFRESH somewhere in the code, the script can jump there and back.
I've managed to write an extension to my custom node.js server that allows an AJAX script to be run using the VM module. I've also succeeded in persuading VM to acknowledge the existence of require, which is necessary for my scripts to be of any use.
But I've hit a truly weird and intractable problem.
When I run the AJAX script "normally", by handing it off to a child process using child_process.spawn(), all my asynchronous API calls work as expected, and my callback chain runs to completion. In the case of the script I'm currently testing, this uses fs.readdir() to launch the callback chain, and it works in child process mode, returning directory listings to the HTML client that makes the initial AJAX request.
But when I run the AJAX script inside VM using vm.Script(), followed by script.runInContext(), something weird happens. The script reaches my fs.readdir() call, but the callback is never called! The entire script stalls at this point!
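For reference, the VM-mode launch looks roughly like this (a simplified sketch - the file name, sandbox contents and variable names are placeholders, not the real server code):
var vm = require('vm');
var fs = require('fs');

//Read the AJAX script from disk (placeholder file name)
var scriptSource = fs.readFileSync('./ajaxScript.js', 'utf8');

//Expose require and the other globals the script expects
var sandbox = { require: require, console: console, process: process };
vm.createContext(sandbox);

//Compile the script and run it inside the sandboxed context
var script = new vm.Script(scriptSource);
script.runInContext(sandbox);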
Since the same V8 engine is running the code in both child process and VM mode, I'm more than a little puzzled as to why the AJAX script works perfectly if launched in child process mode, but fails to launch the callback chain at all when launched in VM mode.
The only reason I can think of why VM doesn't like my AJAX script is that the callbacks are defined as inner functions of a closure, and the outer function comes to an end after the fs.readdir() call. But if that's the case, why does the script work perfectly, both when run in the node.js debugger and when passed to a child process by my server after an AJAX request?
The documentation for VM makes no mention of fs calls being blocked when called within code running under VM, and I wouldn't have embarked upon this development path if I'd been informed beforehand that such blocking had been implemented.
So what is going on?
EDIT: Here's the code that processes the arguments passed to the script. It calls a function getCmdArgs(), defined earlier, which I don't need to list here because it's been tested and works. It works in both child process mode and VM mode, and performs a stringent check that the argument-passing variables needed to supply the script with its data actually exist, before returning a suitable return value (the args in question, if they exist). Once again, this part works.
//Get the arguments passed to the script - new dual mode version! See function notes above.
var args = getCmdArgs();
//Check to see if the script is being run in a VM sandbox by the server ...
if (args[0] == "RunInVM")
runMode = "V";
else
runMode = "P";
//End if/else
Next, once the script arguments have been passed to the script, it's time to process them, and find out what directory is being referenced. I use an anonymised path reference, which begins with the word "root", the idea being that the client does not need to know where on the server it has had its permitted storage allocated, only that it has storage on disc that it's permitted to access. The client, once given this, can manipulate that directory tree at will, and the child process versions of the scripts that permit this all work. So, this is how the arguments are processed:
//Set our root directory, to which all other directory references will be attached ...
//ROOTPATH is set at the beginning of the script, and is known only to the script, not the client
var dirRoot = ROOTPATH;
//Now get the arguments that were passed to the script ...
var tmpChk = args[2];
var queryMarker = tmpChk.indexOf("?");
if (queryMarker != -1)
{
//Strip off the initial "?"
var actualQuery = tmpChk.substring(queryMarker+1);
//Separate the x=y parts from each other
var queryParts = actualQuery.split("=");
var argType = queryParts[0];
var argData = queryParts[1];
if (argType === "dir")
{
requestedDir = argData;
//Here, we have an argument of the form "dir=xxx". Find out what "xxx" is.
if (argData === "root")
dirName = dirRoot;
else
{
//Here, we have an argument of the form "dir=root/xxx". Find out what "xxx" is, and make it a subdirectory of our
//chosen root directory.
var subIndex = argData.indexOf("root/");
var subDir = argData.substring(subIndex+5);
subDir = subDir.replace(/\x2F/g, path.sep);
subDir = qs.unescape(subDir);
dirName = dirRoot+path.sep+subDir;
//Also, insert an entry for the parent directory into our custom DirEntry array ...
var newEntry = new DirEntry();
newEntry.fileName = "(Parent Directory)";
newEntry.fileType = "DIR";
//Remember we're using the UNIX separator for our GET call!
var pdIdx = requestedDir.lastIndexOf("/");
var pd = requestedDir.substring(0,pdIdx);
//This is the secure path to the parent directory
newEntry.filePath = pd;
myDirEntries[idx2++] = newEntry;
//End if/else
}
//End if
}
}
The above generates the initial directory entry to be returned (via a custom DirEntry() object), and again, this all works. It also determines what path is to be referenced when searching the server's storage.
At this point, we're ready to roll, and we call fs.readdir() accordingly:
//Get entire contents of dir in one go!
fs.readdir(dirName, DirContentsCallback);
Now when running in child process mode, the above call executes, and the callback function DirContentsCallback(), which is the first part of the callback chain, is duly launched. But when I try to run it in VM mode, this doesn't happen: the callback is never called. If fs.readdir() had actually executed and an error had been generated, surely my callback would have been called with a non-null error argument? Instead, the whole script just grinds to a halt and hangs.
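For clarity, the callback assumed above follows the standard Node callback convention; a minimal sketch of its shape (the body here is illustrative - the real DirContentsCallback() goes on to build the JSON response):
function DirContentsCallback(err, entries) {
    //Even a failed readdir should land here with a non-null err,
    //so a silent stall means the callback never ran at all
    if (err) {
        console.log("readdir failed: " + err);
        return;
    }
    console.log("readdir returned " + entries.length + " entries");
}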
I inserted a series of console.log() calls to check the progress of the script, to see if any parts of the callback chain were ever executed, and NONE of the console.log() calls fired. My entire callback chain stalled. This does not happen if I run the script in child process mode by passing it to cp.spawn(); instead, I end up with a completed AJAX transaction, and my directory contents are nicely displayed in the client once it receives the JSON data generated by the script. What's even more bizarre is that the SAME DATA is being passed to the script in both modes - I've checked this repeatedly. Yet the data that worked perfectly in child process mode, and generated a target directory path that was duly examined for entries, comes crashing to a silent halt the moment I try executing fs.readdir() in VM mode.
Now, to be perfectly honest, I don't see why I needed to bother posting a host of prior code that all works, when the part that DOESN'T work in VM mode is the last line of the lot. What I opened this question with seemed to me perfectly sufficient to alert people to the problem. But if it means I receive meaningful answers, then so be it - waste time posting code that works I shall.
I'm currently using the Calabash framework to automate functional testing for a native Android and iOS application. While studying it, I stumbled upon this example project from Xamarin that uses the page objects design pattern, which I find to be a much better way to organize the code, in a Selenium-like fashion.
I have made a few adjustments to the original project, adding a file called page_utils.rb in the support directory of the calabash project structure. This file has this method:
def change_page(next_page)
sleep 2
puts "current page is #{current_page_name} changing to #{next_page}"
@current_page = page(next_page).await(PAGE_TRANSITION_PARAMETERS)
sleep 1
capture_screenshot
@current_page.assert_info_present
end
So in my custom steps implementation, when I want to change the page, I trigger the event that changes the page in the UI and update the reference for Calabash by calling this method, for example:
@current_page.click_to_home_page
change_page(HomePage)
PAGE_TRANSITION_PARAMETERS is a hash with parameters such as timeout:
PAGE_TRANSITION_PARAMETERS = {
timeout: 10,
screenshot_on_error: true
}
It just so happens that whenever I have a timeout waiting for any element in any screen during a test run, I get a generic error message such as:
Timeout waiting for elements: * id:'btn_ok' (Calabash::Android::WaitHelpers::WaitError)
./features/support/utils/page_utils.rb:14:in `change_page'
./features/step_definitions/login_steps.rb:49:in `/^I enter my valid credentials$/'
features/04_support_and_settings.feature:9:in `And I enter my valid credentials'
btn_ok is the id defined for the trait of the first screen in my application. I don't understand why this keeps popping up even in steps past that screen, masking the real problem.
Can anyone help me get rid of this annoyance? It makes debugging test failures really hard, especially on the test cloud.
Welcome to Calabash!
As you might be aware, you'll get a Timeout waiting for elements: exception when you attempt to query/wait for an element which can't be found on the screen. When you call page.await(opts), it is actually calling wait_for_elements_exist([trait], opts), which means in your case that after 10 seconds of waiting, the view with id btn_ok can't be found on the screen.
What is assert_info_present? Does it call wait_for_element_exists or something similar? More importantly, what method is actually being called in page_utils.rb:14?
And does your app actually return to the home screen when you invoke click_to_home_page ?
Unfortunately it's difficult to diagnose the issue without some more info, but I'll throw out a few suggestions:
My first guess without seeing your application or your step definitions is that #current_page.click_to_home_page is taking longer than 10 seconds to actually bring the home page back. If that's the case, simply try increasing the timeout (or remove it altogether, since the default is 30 seconds. See source).
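For instance (a sketch only - tweak or drop the value to suit):
# A more generous transition timeout; omitting :timeout falls back to the default
PAGE_TRANSITION_PARAMETERS = {
  timeout: 30,
  screenshot_on_error: true
}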
My second guess is that the element with id btn_ok is not actually visible on screen when your app returns to the home screen. If that's the case, you could try changing the trait definition from * id:'btn_ok' to all * id:'btn_ok' (the all operator will include views that aren't actually visible on screen). Again, I have no idea what your app looks like so it's hard to say.
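In page-object terms that would be something like the following (a hypothetical page class modelled on the Xamarin sample; only the trait string changes):
require 'calabash-android/abase'

class HomePage < Calabash::ABase
  # "all" also matches views that are not currently visible on screen
  def trait
    "all * id:'btn_ok'"
  end
end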
My third guess is it's something related to assert_info_present, but it's hard to say without seeing the step defs.
On an unrelated note, I apologize if our sample code is a bit outdated, but at the time of writing we generally don't encourage the use of @current_page to keep track of a page. Calabash was written in a more or less stateless manner and we generally encourage step definitions to avoid using state wherever possible.
Hope this helps! Best of luck.
I have a function in PowerShell, which establishes a session and stores the info in global variables:
$global:transport = new-object Fix.ServerSocketTransport $FixHost, $FixPort
Now, within the application the $global:transport variable is used to send and receive the data.
After the script execution ends, will the session be closed? Will the $global:transport value be reset? (I have commented out the part where we disconnect the session.)
After the script ends, even though I do not create a new session, it sends and receives data through $global:transport variable. Why does this happen?
Globals are indeed global to the session. After your script executes (and creates the global), that variable and its value persist. By the way, PowerShell / .NET do not automatically close objects. If the object implements a finalizer, then when it is collected via garbage collection (at some indeterminate time in the future) the finalizer will run and close or release the associated native resources. If the object implements IDisposable or otherwise has a Close() or Dispose() method, you should call that method when you're done with the object. Also, in order to keep PowerShell from hanging onto the object forever (you did put it in a global), you can either A) set the global variable to $null or B) (even better) remove the variable altogether using Remove-Variable.
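A sketch of that clean-up, once you are done with the session (whether the transport exposes Dispose(), Close() or something else depends on the Fix library, so check before calling):
if ($global:transport -ne $null) {
    if ($global:transport -is [System.IDisposable]) {
        $global:transport.Dispose()   # or Close(), if that is what the type provides
    }
    # Drop the global so PowerShell no longer holds a reference
    Remove-Variable -Name transport -Scope Global
}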
Another option is to create a script-scoped variable in your outermost script (the startup script). This script variable should be visible to any other scripts you execute and will go away when the script is finished. However, as in the case above, if the object implements Close() or Dispose(), you should call that on the object when you're done with it.
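For example, in the outermost (startup) script, declare the variable at script scope instead (this mirrors the line from the question):
# Script-scoped: visible to scripts called from here, gone when this script finishes
$script:transport = New-Object Fix.ServerSocketTransport $FixHost, $FixPort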
Currently developing a connector DLL to HP's Quality Center. I'm using their (insert expletive) COM API to connect to the server. An Interop wrapper gets created automatically by VStudio.
My solution has 2 projects: the DLL and a tester application - essentially a form with buttons that call functions in the DLL. Everything works well - I can create defects, update them and delete them. When I close the main form, the application stops nicely.
But when I call a function that returns a list of all available projects (to fill a combo box), if I close the main form, VStudio still shows the solution as running and I have to stop it.
I've managed to pinpoint a single function in my code: when I call it, the solution remains "hung", and when I don't, it closes fine. It's a call to a property of the TDC object, get_VisibleProjects, which returns a List (not the .NET one, but a type in the COM library) - I just iterate over it and return a proper list (that I later use to fill the combo box):
public List<string> GetAvailableProjects()
{
List<string> projects = new List<string>();
foreach (string project in this.tdc.get_VisibleProjects(qcDomain))
{
projects.Add(project);
}
return projects;
}
My assumption is that something gets retained in memory. If I run the EXE outside of VStudio it closes - but who knows what gets left behind in memory?
My question is - how do I get rid of whatever calling this property returns? Shouldn't the GC handle this? Do I need to delve into pointers?
Things I've tried:
getting the list into a variable and setting it to null at the end of the function
Adding a destructor to the class and nulling the tdc object
Stepping through the tester application all the way out; when the form closes and the Main function ends, it closes, but VStudio still shows I'm running.
Thanks for your assistance!
Try adding these two lines to the post-build event:
call "$(DevEnvDir)..\Tools\vsvars32.bat"
editbin.exe /NXCOMPAT:NO "$(TargetPath)"
Have you tried manually releasing the List object using System.Runtime.InteropServices.Marshal.ReleaseComObject when you are finished with it?
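For example, a sketch of how GetAvailableProjects() might do that (the visibleProjects local is an illustrative name, not from the original code):
var visibleProjects = this.tdc.get_VisibleProjects(qcDomain);
try
{
    foreach (string project in visibleProjects)
    {
        projects.Add(project);
    }
}
finally
{
    // Drop the runtime callable wrapper's reference to the underlying COM object
    System.Runtime.InteropServices.Marshal.ReleaseComObject(visibleProjects);
}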
I suspect some dangling threads.
When this happens, pause the process in the debugger and see what threads are still around.
Maybe try iterating the list manually using its Count and Item properties instead of its iterator, something like:
for (int i=1; i <= lst.Count ; ++i)
{
string projectName = lst.Item(i);
}
It might be the iterator that keeps it alive and not the list object itself; if so, avoiding the iterator might avoid the problem.
Both QWebFrame and QWebPage have void loadFinished(bool ok) signal which can be used to detect when a web page is completely loaded. The problem is when a web page has some content loaded asynchronously (ajax). How to know when the page is completely loaded in this case?
I haven't actually done this, but I think you may be able to achieve your solution using QNetworkAccessManager.
You can get the QNetworkAccessManager from your QWebPage using the networkAccessManager() function. QNetworkAccessManager has a signal finished(QNetworkReply *reply), which is emitted whenever a network request made by the QWebPage instance finishes.
The finished signal gives you a QNetworkReply instance, from which you can get a copy of the original request made, in order to identify the request.
So, create a slot to attach to the finished signal, use the passed-in QNetworkReply's methods to figure out which file has just finished downloading and if it's your Ajax request, do whatever processing you need to do.
My only caveat is that I've never done this before, so I'm not 100% sure that it would work.
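In rough C++ terms the wiring would look something like this (a sketch only, per the caveat above; onRequestFinished() is an assumed slot name):
// Watch every network request the page makes, including Ajax calls
connect(page->networkAccessManager(),
        SIGNAL(finished(QNetworkReply*)),
        this,
        SLOT(onRequestFinished(QNetworkReply*)));

// In onRequestFinished(), inspect reply->url() or reply->request()
// to decide whether the transfer that just finished was your Ajax request.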
Another alternative might be to use QWebFrame's methods to insert objects into the page's object model and also insert some JavaScript which then notifies your object when the Ajax request is complete. This is a slightly hackier way of doing it, but should definitely work.
EDIT:
The second option seems better to me. The workflow is as follows:
Attach a slot to the QWebFrame::javaScriptWindowObjectCleared() signal. At this point, call QWebFrame::evaluateJavaScript() to add code similar to the following:
window.onload = function() { /* page has fully loaded */ };
Put whatever code you need in that function. You might want to add a QObject to the page via QWebFrame::addToJavaScriptWindowObject() and then call a function on that object. This code will only execute when the page is fully loaded.
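A rough sketch of that wiring, run from the slot connected to javaScriptWindowObjectCleared() (here notifier is an assumed QObject exposing a pageFullyLoaded() slot):
// Expose a C++ object to the page, then have window.onload call back into it
frame->addToJavaScriptWindowObject("notifier", notifier);
frame->evaluateJavaScript(
    "window.onload = function() { notifier.pageFullyLoaded(); };");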
Hopefully this answers the question!
To check for the load of a specific element, you can use a QTimer. Something like this in Python:
@pyqtSlot()
def on_webView_loadFinished(self):
    self.tObject = QTimer()
    self.tObject.setInterval(1000)
    self.tObject.setSingleShot(True)
    self.tObject.timeout.connect(self.on_tObject_timeout)
    self.tObject.start()

@pyqtSlot()
def on_tObject_timeout(self):
    dElement = self.webView.page().currentFrame().documentElement()
    element = dElement.findFirst("css selector")
    if element.isNull():
        self.tObject.start()
    else:
        print "Page loaded"
When your initial html/images/etc finishes loading, that's it. It is completely loaded. This fact doesn't change if you then decide to use some javascript to get some extra data, page views or whatever after the fact.
That said, what I suspect you want to do here is expose a QtScript object/interface to your view that you can invoke from your page's script, effectively providing a "callback" into your C++ once you've decided (from the page script) that you have "completely loaded".
Hope this helps give you a direction to try...
The OP thought it was due to delayed AJAX requests, but there could be another reason, one that also explains why a very short time delay fixes the problem. There is a bug that causes the described behaviour:
https://bugreports.qt-project.org/browse/QTBUG-37377
To work around this problem, the loadFinished() signal must be connected using a queued connection.
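Something along these lines (a sketch; webView and the onLoadFinished(bool) slot are assumed names):
// Force the loadFinished() delivery onto the event loop as a queued call
connect(webView->page(), SIGNAL(loadFinished(bool)),
        this, SLOT(onLoadFinished(bool)),
        Qt::QueuedConnection);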