We have a huge number of tests running against our web application, and I have come across a very strange error.
We have a function that uploads a file into the application: it clicks the Browse button, enters a location, and clicks OK followed by Upload.
This works in 90% of the tests, and it is the same function being called from all the separate scripts, but in some tests it fails because it is unable to locate the object (in this case the Browse button on the dialog box).
It is being tested on multiple machines against the same target server, with the same IE version, but we are getting different results, and I am running out of ideas.
And yet, when you map the object in TestComplete and compare it against what the test is looking for, they are identical.
Mapped object (using the Object Spy):
Aliases.browser.pageModspace.panelMangoentryformC.panelMangoentryformAddFile.panelBd.panelEntryformcontent.panelModspacedialog.formEntryform.tableFilesourceTable.cellFilesourceOptionFile.fileFilesourceinputfield
The object it failed to find:
Aliases.browser.pageModspace.panelMangoentryformC.panelMangoentryformAddFile.panelBd.panelEntryformcontent.panelModspacedialog.formEntryform.tableFilesourceTable.cellFilesourceOptionFile.fileFilesourceinputfield
Does anyone have any ideas?
Objects can take different amounts of time to load on different computers. You can try the following:
Modify your test to get the problematic object via the WaitAliasChild method. In this case, TestComplete will wait for the object for the specified time:
Aliases.browser.pageModspace.panelMangoentryformC.panelMangoentryformAddFile.
panelBd.panelEntryformcontent.panelModspacedialog.formEntryform.tableFilesourceTable.
cellFilesourceOptionFile.WaitAliasChild("fileFilesourceinputfield", 20000)
Details: http://smartbear.com/viewarticle/55413/
Increase the Auto-wait timeout project option. This will make TestComplete wait longer for objects. However, you need to use this option carefully, as it can affect the total execution time. Details: http://smartbear.com/viewarticle/55316/
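The wait-and-retry idea behind WaitAliasChild is worth understanding on its own: poll for the object until it appears or a timeout expires, instead of failing on the first miss. A minimal sketch of that pattern in plain Python (hypothetical helper names, not TestComplete's API):

```python
import time

def wait_for(condition, timeout_ms=20000, poll_ms=250):
    """Poll `condition` until it returns a truthy value or the timeout expires.

    Returns the condition's result, or None on timeout -- analogous to how
    WaitAliasChild returns a stub object whose Exists property is False.
    """
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_ms / 1000.0)
    return None
```

A test script would then check the returned value (or the Exists property, in TestComplete's case) before clicking, rather than aborting the moment the object is not yet on screen.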
I'm trying to run a query that gets the windows service name corresponding to a process ID:
SELECT * FROM Win32_Service where ProcessId = {myID}
This query needs to run for both valid and invalid process IDs, as my component may run inside a Windows service, as part of the main application, or even in tests.
When I use Run > wbemtest and test this query with a non-existent PID, it usually comes back instantly, but there is one machine where it takes 2 minutes.
I don't understand why this runs so much slower on that particular machine. Is there a way to diagnose what's causing this, and how can it be fixed?
For investigating WMI issues, there are different places to look in the Event Viewer:
Windows Logs > Application and System
Application and Services Logs > Microsoft > Windows > WMI-Activity (in the View menu, you may need to enable "Show Analytic and Debug Logs")
A lot is described in this URL.
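Before digging through the logs, it can also help to quantify the slowdown so you can compare the good machines against the bad one. A sketch of a small timing harness, assuming Windows with PowerShell on the PATH for the actual WMI call (the timing wrapper itself is plain Python and platform-neutral):

```python
import subprocess
import time

def timed(fn, *args, **kwargs):
    """Run fn(*args, **kwargs) and return (elapsed_seconds, result)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return time.perf_counter() - start, result

def service_query(pid):
    """Build the WQL query for the service owning a given process ID."""
    return "SELECT * FROM Win32_Service WHERE ProcessId = %d" % pid

def run_wmi_query(query):
    """Execute a WQL query via PowerShell's Get-WmiObject (Windows only)."""
    return subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         'Get-WmiObject -Query "%s"' % query],
        capture_output=True, text=True, timeout=300,
    ).stdout

# On a Windows machine, compare a valid and an invalid PID, e.g.:
# elapsed, out = timed(run_wmi_query, service_query(99999))
```

Running the same invalid-PID query on the fast and the slow machine, with the WMI-Activity log enabled, should show where the two minutes go.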
I've created a VS 2010 load test. The test executes on a dedicated test agent which fires HTTP POSTs at a couple of web servers.
After running the tests I realised that I needed to make a few changes to the counter sets I'd assigned. The problem is, even though I've changed the counter sets and updated the Counter Set Mappings under the active run settings, I'm still getting back results using the counter sets from my first load test run.
Also, the tests run OK, but I get lots of pop-up message boxes saying "Index was outside the bounds of the array".
Any ideas on how to fix this would be greatly appreciated.
After rebooting the server, the counters reset.
We have a requirement to capture the initial page load/paint time while callback requests are loading data on the page. We are using LoadRunner for performance testing. The average response time for a transaction is 9-10 seconds. However, we are more interested in how quickly the page paints than in how the data in each web part is loaded (ignoring JavaScript callback requests). Is there any setting or way to capture such data in LoadRunner?
If LoadRunner cannot capture or distinguish the data as described above, is there another tool we could perhaps run manually in a browser while the load test is going on?
Thanks!
...how quickly the page paints...
This is a GUI-level event, so you will need a virtual user type which samples at the GUI to answer this question. If you combine a GUI virtual user (QTP-based) with a protocol-level virtual user (HTTP) for a common named event (Login vs Login_GUI), then you will be able to measure the time inside the browser.
GUI virtual users have been part of the definition of LoadRunner since version 1.0. They began as XRunner-defined, moved to WinRunner-defined, and are now defined by QuickTest Professional.
I am going to create a typical business application that will be used by a few hundred consultants. Normally, the consultants would be presented with an error message with a standard text. As the application will be a complicated one, with lots of changes being made to it constantly, I would like the following:
When an error message is presented, the user has the option to "send" the error message to the developers. The developers should be able to open the incoming file in, e.g., Eclipse and step through the last 10 minutes of work (one line at a time if they want to). Everything should be transparent, meaning that they should, for example, be able to see the return values of calls to the database.
Are there any solutions that offer such functionality today? My preferred languages are Python and Java. I know that there will be a huge performance hit because of such functionality, but that is acceptable, as this kind of software is not performance-sensitive.
It would be very nice if the database also had a chronology, so that one could query it for the values that existed at the exact time a specific line of code was run in the application, leading up to the bug.
You should try to use logging, e.g. commit logs from the DB, and log the user interactions with the application. If it is a web application, you can start with the log files from the web server. Make sure that the log files include all submitted data, such as the complete GET URL with parameters and the POST entity body. You can configure the web server to generate such logs when necessary.
Then you build a test client that can parse the log files and re-create all the user interactions that caused the problem to appear. If you suspect race conditions, you should log with high precision (ms resolution) and make sure that the test client can run through the same sequences over and over again to stress those critical parts.
Replay (as your title suggests) is the best way to reproduce an error: just collect all the data needed to recreate the input that generated a specific state or situation. Do not focus on internal structures and return values. When it comes to hunting down a bug, you should not work in forensic mode, i.e. trying to analyse the cause of the crash by examining the wreck; you should crash the plane over and over again, adding more logging (or using a debugger) until you know what goes wrong.
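A minimal replay harness along those lines might look like this (the log format here is hypothetical: one request per line as millisecond timestamp, method, path, JSON body):

```python
import json

def parse_log_line(line):
    """Parse one request log line: '<ts_ms> <METHOD> <path> <json-body>'."""
    ts, method, path, body = line.split(" ", 3)
    return {"ts": int(ts), "method": method, "path": path,
            "body": json.loads(body)}

def replay(lines, send):
    """Re-issue every logged request in order via the given send callable.

    `send(method, path, body)` would wrap an HTTP client pointed at a test
    instance of the application; its return values are collected so the
    replayed run can be compared against the logged one.
    """
    responses = []
    for entry in (parse_log_line(line) for line in lines):
        responses.append(send(entry["method"], entry["path"], entry["body"]))
    return responses
```

To chase race conditions, the same sequence can be replayed in a loop (or with the logged timestamps honoured) until the failure reappears under the debugger.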
This is my setting: I have written a .NET application for local client machines which implements a feature that could also be used on a web page. To keep this example simple, assume that the client installs software into which he can enter some data and gets some data back.
The idea is to create a web page that holds a form into which the user enters the same data and gets the same results back as above. Given the company's available web servers, the first idea was to create a Mono web service, but this was dismissed for reasons unknown. The "service" is not to be run as a web service, but should be called by a PHP script. This is currently realized by calling the Mono application via shell_exec from PHP.
So now I am stuck with a Mono port of my application, which works fine but takes way too long to execute. I have already stripped out all unnecessary DLLs, methods, etc., but calling the application via the command line (submitting the desired data via command-line parameters) takes approximately 700 ms. We expect about 10 hits per second, so this could only work by setting up a lot of servers for this task.
I assume the 700 ms are related to the cost of starting the application every time, because it makes little difference in processing time whether I handle the request once or five hundred times (I take the original input, vary it slightly, and do 500 iterations with "new" data every time; from the second iteration on, the processing time drops to approximately 1 ms per iteration).
My next idea was to setup the mono application as a remoting server, so that it only has to be started once and can then handle incoming requests. I therefore wrote another mono application that serves as the client. Calling the client, letting the client pass the data to the server and retrieving the result now takes 344ms. This is better, but still way slower than I would expect and want it to be.
I have then implemented a new project from scratch based on this blog post and get stuck with the same performance issues.
The question is: am I missing something related to the Mono projects that could improve the speed of the client/server? Although the idea of creating a web service for this task was dismissed, would a web service perform better under these circumstances (as I would not need the client application to call the service), even though it is said that remoting is faster than web services?
I could have made that clearer, but implementing a webservice is currently not an option (and please don't ask why, I didn't write the requirements ;))
Meanwhile I have checked that it is indeed the startup of the client that takes most of the time in the remoting scenario.
I could imagine accessing the server via pipes from the command line, which would be perfectly suitable in my scenario. I guess this would be done using sockets?
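That persistent-listener idea can indeed be done with sockets: the server process starts once and answers many requests, so the per-hit startup cost disappears. A minimal sketch in Python of a line-based TCP protocol (all names hypothetical; in the real setup the server would be the Mono application, and the client side could be PHP's fsockopen instead):

```python
import socket
import threading

def serve(handler, host="127.0.0.1", port=0):
    """Start a line-oriented TCP server in a background thread.

    Each request is one line; the reply is handler(request) plus a newline.
    Returns the bound port (port=0 picks a free one).
    """
    srv = socket.create_server((host, port))
    def loop():
        while True:
            conn, _addr = srv.accept()
            with conn, conn.makefile("rw") as f:
                for line in f:
                    f.write(handler(line.strip()) + "\n")
                    f.flush()
    threading.Thread(target=loop, daemon=True).start()
    return srv.getsockname()[1]

def ask(port, request, host="127.0.0.1"):
    """One round trip: send a request line, read back the reply line."""
    with socket.create_connection((host, port)) as conn:
        with conn.makefile("rw") as f:
            f.write(request + "\n")
            f.flush()
            return f.readline().strip()
```

A Unix domain socket or named pipe would work the same way with lower overhead if PHP and the server share the machine; the key point is that the expensive process starts only once.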
You can try to use AOT to reduce the startup time. On .NET you would use ngen for that purpose; on Mono, just run mono --aot on all assemblies used by your application.
AOT'ed code is slower than JIT'ed code, but it has the advantage of reducing startup time.
You can even try to AOT framework assemblies such as mscorlib and System.
I believe that remoting is not an ideal thing to use in this scenario. However, your idea of having Mono running on the server instead of starting it every time is indeed solid.
Did you consider using SOAP web services over HTTP? This would also help with your 'web page' scenario.
Even if it is a little too slow for you, in my experience a custom RESTful service implementation would be easier to work with than remoting.