TestComplete stuck on unknown object

I have a question about TestComplete. My automated tests sometimes jump into a different window of the tested application and become stuck there. This is caused by controls unknown to the given test (it searches for, e.g., a combo box which doesn't exist in that window). I was wondering if there is some way to avoid this situation and just skip to the next test?
The problem is that TC stays in an endless loop searching for the non-existent object.
Thanks in advance for your responses.
Josef

You need to organize your tests with test items. That way, you can set the Stop on error property of your test items to the Test Item value, and TestComplete will start executing the next test item if an error occurs during the current one. You can find more information on this in the Tests and Test Items and Stopping Tests on Errors and Exceptions help topics.

It does not jump there on its own, does it? Make sure the right buttons are pressed. If two windows are similar, and one window contains the ComboBox you want to test while the other does not, I would go for something like this:
if (Aliases.GenericWindow.WaitAliasChild("ComboBoxInQuestion", customTimeoutInMilliseconds).Exists)
{
  Log.Message("Do something with ComboBox");
}
The timeout is passed to the WaitAliasChild method. This waits up to customTimeoutInMilliseconds, and if no ComboBox is found it just skips the steps written for the ComboBox.
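Expanded into a skippable step, that pattern could look like the sketch below (JScript; GenericWindow and ComboBoxInQuestion are assumed aliases from your NameMapping, and the 5000 ms timeout is illustrative):
// Wait up to 5 s for the combo box; skip its steps instead of looping forever.
var combo = Aliases.GenericWindow.WaitAliasChild("ComboBoxInQuestion", 5000);
if (combo.Exists)
{
  combo.ClickItem(0); // interact with the combo box as the test requires
}
else
{
  Log.Warning("ComboBox not found - skipping the ComboBox steps");
}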

Related

Suspiciously long pauses between Click and SendKeys actions in WDS in JMeter

I have an action that fills in a form with 2 fields and clicks the submit button.
Issue: the load time for this sampler grew over several script runs from 2-3 s initially (visually close to manual execution) to 14-16 s, with no visible reason (no changes in the application or the script).
Investigation: there are ~4-5 s pauses between the click into a field and the sendKeys action for both fields in the script, while manually the fields are clickable with no visible delay after the form appears.
Log:
WDS code:
Q: Is there any explanation for such strange behavior, with ~5 s + ~5 s pauses in each field?
What should I do to fix these overly long, unexplainable pauses, which produce irrelevant results?
Should I consider switching to JSR223/Groovy, or is this not a sampler-type or code-related problem?
Your question doesn't tell the full story, so I don't think it's possible to come up with a comprehensive answer; for now, only a few generic recommendations:
Take a thread dump at the moment when JMeter is "waiting" and doing nothing; it will let you detect what's going on there.
Monitor JVM metrics using JVisualVM or an equivalent; it might be the case that the JVM is doing garbage collection to free up heap space, in which case you will have to tune JMeter for maximum performance.
You're "finding" the element twice, the second line is absolutely unnecessary so instead of
wait.until(pkg.ExpectedConditions.visibilityOfElementLocated(pkg.By.xpath("some xpath")))
var element = WDS.browser.findElement(pkg.By.xpath("the same xpath"))
you can do it only once, like this (a complete sketch follows after these recommendations):
var element = wait.until(pkg.ExpectedConditions.visibilityOfElementLocated(pkg.By.xpath("some xpath")))
XPath is the most powerful but the least performant locator strategy; if possible, stick to element ID attributes, and if that is not possible, go for CSS selectors.
Follow JMeter Best Practices, such as not running your test in GUI mode, avoiding storing response data, etc.
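Putting the single-lookup recommendation together, a minimal self-contained WebDriver Sampler sketch could look like this (the element ID, the sample text, the 10-second wait, and the Selenium 3-style WebDriverWait(driver, seconds) constructor are assumptions; adjust to your setup):
var pkg = JavaImporter(org.openqa.selenium, org.openqa.selenium.support.ui)
var wait = new pkg.WebDriverWait(WDS.browser, 10) // timeout in seconds

WDS.sampleResult.sampleStart()
// One lookup: wait for visibility and get the element back in the same call.
var element = wait.until(pkg.ExpectedConditions.visibilityOfElementLocated(pkg.By.id('username')))
element.sendKeys(['testuser'])
WDS.sampleResult.sampleEnd()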

How to stop the UI navigating to the most recent result in the Services view?

So I have maybe 10 different connections active at any one time, running a bunch of statements on different DBs. Every time a single statement/query completes, my results view jumps to the latest completed statement in the console, on any of the open running connections - which is annoying when it's something like quickly dropping a temp table while I'm reading results from another output.
Any idea if you can prevent this from happening?
Unfortunately, it is not possible right now. Please file a feature request here: https://youtrack.jetbrains.com/issues/DBE
A workaround is to use the 'In-editor results' mode, where you'll see the result right under your query and no one will ever grab it from you!

I have random timeouts in Cypress tests

I've been working with Cypress for 3 months now, and I've been trying to fix this problem for 2 months, but I really don't know how to fix it.
When I run all my tests, a lot of them fail, and every time it's a different test (random).
The application I'm testing has a button that is disabled; when the fields are filled with text, the button becomes active.
The problem is that Cypress clicks the button while it is still disabled. The button needs some time to become active, so I have put the following in the code:
cy.wait('#budgetblindsPost')
cy.wait(500)
But this is also not working. I get fewer errors, but I still get errors.
Here is an example of an error I get
Here is also an example of my code
Using cy.wait() all over the place may eventually solve timeout-related issues, but it will make your test suite unnecessarily slow. Instead, you should increase the timeout(s).
One-off
This command will only fail after 30 seconds of not being able to find the object, or, when it finds it, 30 seconds of not being able to click it.
cy.get('#model_save', {timeout: 30000}).click({timeout: 30000});
Please note that your value of 500 means half a second, which may not be enough.
Global
If you find yourself overriding the timeout with the same value in a lot of places, you may wish to increase it once for all in the config.
defaultCommandTimeout: 4000
Time, in milliseconds, to wait until most DOM based commands are considered timed out
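In the classic JSON configuration file (cypress.json; newer Cypress versions use cypress.config.js instead), that would look like the following, where 10000 is just an illustrative value:
{
  "defaultCommandTimeout": 10000
}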

How to run a TIBCO BW activity using a Java Code activity in TIBCO BW

Is it possible to insert Java code to re-run a previous activity in the process definition flow?
For Example: A process definition contains the following items.
Start--> ReadFile-> SoapRequestReply -->end
In the above example, I want to retry the SoapRequestReply activity with the help of Java code if the execution of that activity produces an error.
I want to implement the logic in a generic way... I know this concept can be implemented with the "Repeat On Error Until True" group, but I want to do it with Java code, so the new process definition would look like this:
Start--> ReadFile-> SoapRequestReply --exception-->RetryOnce(Java Code) --> end..
The Java code would execute the previous activity one more time.
Please advise...
This is, indeed, a perfect fit for an error group. But if you really can't afford to use one, you could create a SubProcess that calls your MainProcess back on error and holds the retry count in a Job Shared Variable. Please note that this is a quick-and-dirty workaround.
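If you do end up putting the retry logic into a Java Code activity, the core of it is just a bounded retry loop. Here is a minimal, generic sketch in plain Java - not TIBCO BW API code; Call.invoke() is a hypothetical stand-in for whatever you want to retry:
// A generic bounded retry loop; maxAttempts limits how often we retry.
public final class RetryHelper {
    interface Call { void invoke() throws Exception; }

    static void runWithRetry(Call call, int maxAttempts) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                call.invoke();
                return; // success, stop retrying
            } catch (Exception e) {
                if (attempt >= maxAttempts) {
                    throw e; // out of retries, propagate the error
                }
                System.out.println("Attempt " + attempt + " failed, retrying: " + e.getMessage());
            }
        }
    }
}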
You could do it by simply surrounding the SoapRequestReply with a Group. This could be either a "Repeat on Error until true" group that repeats on a condition up to x times if an error occurs, or a "while true" loop with individual error handling (error transition), e.g. for logging purposes.
No Java coding / activities needed.
With best regards
Seb

What results can I force in a cucumber scenario

Using Ruby / Cucumber, I know you can explicitly call fail("message"), but what are the other options?
The reason I ask is that we have 0... I repeat, absolutely NO control over our test data. We have Cucumber tests covering edge cases that we may or may not have users for in our database. We (for obvious reasons) do not want to throw away the tests, because they are valuable; however, since our data set cannot exercise such an edge case, the test fails because the SQL statement returns an empty data set. Right now we just let those tests fail, but I would like to see something along the lines of "no_data" or similar if the SQL statement returns an empty data set. So the output would look like
Scenarios: 100 total (80 passed, 5 no_data, 15 fail)
I am willing to use the already-implemented "skipped" status if there is a skip("message") function.
What are my options, so we can see that with the current data we just don't have any test data for those tests? Making these manual tests is also not an option. They need to run every week with our automation, but somehow separately from the failures. Failure means a defect; no data found means it's not a testable condition. It's the difference between a warning (we have not tested this edge case) and an alert (broken code).
You can't invoke 'skipped', but you can certainly call pending, with or without a message. I've used this in a situation similar to yours. Unless you're running in strict mode, having pending scenarios won't cause any failures. The problem I encountered was that occasionally a step would get misspelled, causing Cucumber to mark it as pending since it did not match a step definition. That then got lost in the sea of 'legitimate' pending scenarios, and it was weeks before we discovered it.
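In a step definition, that could look like the following sketch (Ruby; find_users_for_edge_case is a hypothetical helper standing in for your SQL lookup):
Given('a user matching the edge case exists') do
  # find_users_for_edge_case is a hypothetical helper for the SQL query.
  users = find_users_for_edge_case
  # Mark the scenario pending instead of failing when the data set is empty.
  pending('no_data: the current data set has no users for this edge case') if users.empty?
end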
