Which tag can be used to break the execution of flow in certain conditions? - ipaf

I am unable to break the execution of a flow while scripting in the PACE automation framework. Please suggest another way, or a tag, to break the execution of a flow under certain conditions.

This tag can be used to finish the flow in certain conditions.
Syntax:
<finishFlow/>
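
For illustration, a hypothetical flow fragment is shown below. Only <finishFlow/> itself is given by this answer; the surrounding conditional element and its attribute syntax are assumptions, so check them against the IPAF schema:

<!-- Hypothetical sketch: finish the flow when a condition holds.
     The <if> element and its condition syntax are assumptions. -->
<if condition="${loginStatus} != 'SUCCESS'">
    <finishFlow/>
</if>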

Related

Not able to achieve synchronization with the help of Synchronizing timer in JMeter

I am building a test plan in JMeter. I have different transactions, and each of them has a number of HTTP samplers and "if conditions".
So each user might not perform the same actions, depending on the "if condition". I want all the users to start performing the same transaction at the same time, and also to wait for the other users (threads) if they have not yet reached the current transaction.
I know that I can achieve this with the help of the Synchronizing Timer, but somehow I am not able to.
How can I achieve it, with any possible method?
PS - I just want the threads to start the transaction at the same time; it does not matter whether they perform the same samplers or not.
I can suggest another approach: you can use a tearDown Thread Group, which executes after the test has finished running its regular Thread Groups.
After all threads are done, the tearDown Thread Group executes and can run anything you want, including a Module Controller, which can reuse a transaction from your main Thread Group.
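As a sketch, the Test Plan structure could look like this (the transaction name is illustrative):

Test Plan
  Thread Group (regular load)
    Transaction Controller: Update Profile
      HTTP Request ...
  tearDown Thread Group (starts only after the regular Thread Groups finish)
    Module Controller -> Transaction Controller: Update Profile  (reuses the main transaction)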
It is quite hard to guess what's going wrong without seeing your Test Plan structure. Just in case, be aware that Timers obey Scoping Rules, so if you want a request to be executed by N threads in parallel, you need to put the Synchronizing Timer as a child of that request.
See the Using the JMeter Synchronizing Timer article for comprehensive information and an example test plan.
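For illustration, a placement that scopes the timer to a single request (names are illustrative):

Thread Group (N threads)
  Transaction Controller: Transaction A
    HTTP Request 1
      Synchronizing Timer (Number of Simulated Users to Group by = N)  <- applies to HTTP Request 1 only
    HTTP Request 2  (not affected by the timer)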

Need to execute a step (each feature may have diff step) only once before a Cucumber feature file

I want to execute a specific step only once before each Cucumber feature file. A Cucumber feature file can have multiple scenarios. I don't want Background steps here, which execute before each scenario. Every feature file can have a step (different in each feature) which should execute only once. So I can't put that step into a Before hook, as I have a specific step for each of my 20 features. A sample Gherkin is shown below:
Scenario: This will execute only once before all scenario in this current feature
When Navigate to the Page URL
Scenario: scenario 1
When Some Action
Then Some Verification
Scenario: scenario 2
When Some Action
Then Some Verification
Scenario: scenario 3
When Some Action
Then Some Verification
I hope you understand my question. I am using Ruby, Capybara, and Cucumber in my framework.
Cucumber doesn't really support what you are asking about. A way to implement this with Cucumber hooks would be to use these two pieces of documentation:
https://github.com/cucumber/cucumber/wiki/Hooks#tagged-hooks
https://github.com/cucumber/cucumber/wiki/Hooks#running-a-before-hook-only-once
You would tag all your feature files appropriately, and then implement tagged Before hooks that execute once per feature tag.
It's not beautiful, but it accomplishes what you want without waiting on a feature request (or using a different tool).
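As a sketch of the idea in cucumber-jvm Java (the question's stack is Ruby; the equivalent Ruby tagged-hook syntax appears in the next answer), with an illustrative @checkout tag and a guard so the hook body runs only once per feature tag:

import cucumber.api.java.Before;
import java.util.HashSet;
import java.util.Set;

public class FeatureSetupHooks {
    // Remembers which feature tags have already run their one-time setup.
    private static final Set<String> alreadyRun = new HashSet<>();

    @Before("@checkout")
    public void checkoutFeatureSetup() {
        // Set.add returns true only the first time, so the body runs once.
        if (alreadyRun.add("@checkout")) {
            // one-time setup for features tagged @checkout,
            // e.g. navigating to the page URL
        }
    }
}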
This can be achieved by associating a Before, After, Around or AfterStep hook with one or more tags. Examples:
Before('@cucumis, @sativus') do
  # This will only run before scenarios tagged
  # with @cucumis OR @sativus.
end
This must be in the top 5 most frequent questions on the Cucumber mailing list. You can do what you want with hooks. However, you almost certainly should not: the execution time you save by taking this approach is totally outweighed by the time and effort it will take to debug the intermittent failures that such an approach generally leads to.
One of the foundations of creating automated tests is starting from a consistent place. When you have code that sets up key things in scenarios, but that is not run for every scenario, you have to do the following:
Ensure your setup code creates a consistent base to start from (this is easy)
Ensure that every scenario that uses this base, does not modify the base in any way at all (this is very very difficult)
In your example you'd have to ensure that every action in every scenario ends up back on your original page URL. If just one scenario fails to do that, you will end up with intermittent failures, and you will have to go through every scenario to find the culprit.
In general it is much easier and more effective to put your effort into making your setup code fast enough that you are not worried about running it before each scenario.
Yes, this can be done by passing the actual value in your feature file and using "(\\d+)" in your Java file. Look at the code shown below for a better understanding.
Scenario: some test scenario
Given whenever a value is 50
In myFile.java, write the step definition as shown below
// requires: import static org.junit.Assert.assertEquals;
@Given("^whenever a value is (\\d+)$")
public void testValueInVariable(int value) throws Throwable {
    assertEquals(50, value);
}
You can also have a look at the link below to get a clearer picture:
https://thomassundberg.wordpress.com/2014/05/29/cucumber-jvm-hello-world/
Some suggestions have already been given, especially the one quoting the official documentation, which uses a global variable to store whether or not the initial setup has been run.
In my case, where multiple features were executed one after another, I had to reset the variable again by checking whether scenario.feature.name had changed:
$feature_name ||= ''
$is_setup ||= false

Before do |scenario|
  current_feature_name = scenario.feature.name rescue nil
  if current_feature_name != $feature_name
    $feature_name = current_feature_name
    $is_setup = false
  end
end
$is_setup can then be checked in your setup steps to determine whether the initial setup still needs to be done, and set to true once it has run.

How to Run Tibco BW Activity using JAVA Code Activity in TIBCO BW

Is it possible to insert Java code to re-run a previous activity in the process definition flow?
For Example: A process definition contains the following items.
Start--> ReadFile-> SoapRequestReply -->end
In the above example I want to retry the SoapRequestReply activity with the help of Java code if the execution of that activity results in an error.
I want to implement the logic in a generic way. I know this can be implemented with the help of a "Repeat On Error Until True" group, but I want to do it with Java code, so the new process definition would look like this:
Start--> ReadFile-> SoapRequestReply --exception-->RetryOnce(Java Code) --> end..
The Java code would execute the previous activity one more time.
Please suggest...
This is, indeed, a perfect fit for an error group. But if you really can't afford to use one, you could create a SubProcess that calls back your MainProcess on error and holds the retry count in a Job Shared Variable. Please note that this is a quick and dirty workaround.
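A rough sketch of that workaround (process and variable names are illustrative):

MainProcess:   Start --> ReadFile --> SoapRequestReply --error--> Call Process: RetryHandler --> End
RetryHandler:  read the retryCount Job Shared Variable --> if retryCount < 1: increment retryCount, Call Process: MainProcess
                                                       --> otherwise: log / rethrow the error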
You could do it by simply surrounding the SoapRequestReply with a Group.
This could be either a "Repeat on Error Until True" group that repeats, based on a condition, up to x times if an error occurs, or a "While True" loop with individual error handling (an error transition), e.g. for logging purposes.
No Java coding / Java Code activities needed.
With best regards
Seb
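For illustration, the grouped flow described above, sketched as a diagram (the repeat condition and the index variable name are assumptions):

Start --> ReadFile --> [ Group: Repeat on Error Until True, condition: $i >= 3 ]  SoapRequestReply  [ /Group ] --> End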

How to run JMeter failed threads after test stops?

I'm using JMeter to run a functional test to update the passwords of a lot of users (22K). I've split the users across 2 scripts and used an Ultimate Thread Group with Start Threads Count = 100, which is the value that gave me the fewest errors. However, I still got 1.5% failed transactions, and I need to rerun only these failed threads, because all users need to end up with the same password.
I've tried to find answers to this specific problem, but I have only found ways to prevent it from happening, like using a While Controller with a timer, or logging the full response on failure. I haven't found whether there is a way to rerun specifically the failed threads.
Does anyone know if this is possible?
You will have to do the following:
Use a JSR223 Sampler to set rescode = 0.
Add a While Controller with the condition (rescode != 200), e.g. ${__jexl3("${rescode}" != "200")}.
Inside it, place the HTTP Sampler.
Add a JSR223 PostProcessor, with JavaScript (or Groovy) as the scripting language.
Store the response code using prev.getResponseCode(),
e.g. vars.put("rescode", prev.getResponseCode());
You might have to add some more intelligence to the script to avoid an infinite loop.
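A minimal sketch of that PostProcessor with a retry cap added, written in Java-style syntax that Groovy also accepts; the attempts variable name and the cap of 3 are assumptions:

// JSR223 PostProcessor attached to the HTTP Sampler inside the While Controller
String prevAttempts = vars.get("attempts");
int attempts = (prevAttempts == null) ? 0 : Integer.parseInt(prevAttempts);
attempts++;
vars.put("attempts", String.valueOf(attempts));

// Store the response code for the While Controller condition (rescode != 200)
vars.put("rescode", prev.getResponseCode());

// Give up after 3 attempts so the loop cannot run forever
if (attempts >= 3) {
    vars.put("rescode", "200");
}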
Another approach to solving the problem would be to anticipate errors on some of the password update calls and build a data file upon failure with the information you need.
For Example:
Add a Regular Expression Extractor (post-processor) with a default value of false and a template value of true. Make the expression match the expected successful response, so the variable stays false when the sample fails.
Then, after that sampler, add an If Controller based on the new true/false variable. If it is false, you know the previous password update failed. Inside the If Controller, add a Dummy Sampler whose response data contains all the information you need to know which accounts you must retry.
Then, add a Simple Data Writer to this Dummy Sampler and log the Dummy Sampler's response data to a file. At the end of a test run, this data file will contain all the information you need to retry the failed accounts.
Sadly this is a slightly manual process, but with a little creativity you could automate recursive test runs until the retry file is empty. Beanshell file I/O might let you handle it all inside a single test run.
-Addled
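A sketch of that test plan fragment (variable and file names are illustrative):

HTTP Request: update password
  Regular Expression Extractor (variable: updateOk, template: true, default value: false)
If Controller: "${updateOk}" == "false"
  Dummy Sampler (response data: ${userName},${newPassword})
    Simple Data Writer -> failed-accounts.csv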

Specify test end condition in Visual Studio Load Test

I'm testing a large BizTalk system using Visual Studio Load Test. The load test pushes messages into MQ; these are picked up by BizTalk and then processed.
Rather than having the test finish (and all performance counters stop) as soon as Visual Studio has finished injecting messages into MQ, I want the test to end if and only if some condition is met (in my case, when (SELECT COUNT(*) FROM BizTalkMsgBoxDb.dbo.Spool) == 4).
I can see a bunch of ways to run stuff after the test is complete, but no obvious way to extend the test and continue monitoring until some user-defined exit condition is met.
Is this possible, or if not, does anyone have an idea for a good work-around/hack to achieve this?
You'll want to write a custom load test plugin. Details begin at this URL: http://msdn.microsoft.com/en-us/library/ms243153.aspx
The plugin can manipulate the scenario, extending the duration of the test until your condition is met.
I imagine you want to keep the load test running after queueing up a bunch of requests in order to continue monitoring performance while the requests are processed. Although we can't control the load test duration directly, there is a way to achieve this, sketched at the end of this answer.
Don't limit the test duration: Set the load test duration (or number of iterations) to a very large value -- larger than you anticipate (or know) it will take for the end condition to be satisfied.
Limit the scenario that queues up requests: In the load test scenario properties, in the Options section, set the Maximum Test Iterations so that the user load will drop to zero after sending the desired number of requests. If setting an iteration limit is not possible for some reason, you can instead write a load test plugin that sets the user load to zero in a specified scenario after a certain amount of test time has elapsed.
Check for end condition: Write a web test plugin that checks the database for your end condition. Attach this plugin to a new webtest in a new scenario and set Think Time Between Test Iterations on the scenario so that the test runs only as often as needed (60 seconds?). When the condition is reached, the plugin should write a predetermined value into the user context (the user context is accessible in the web test context as $LoadTestUserContext, and is only available in a load test, not when running a webtest standalone).
Abort the test: Write a load test plugin that looks for the flag value in the user context in the TestFinished event. When the value is found, the plugin calls LoadTest.Abort().
There is one minor disadvantage to this method: the test state is marked as Aborted in the results database.
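Sketched as a load test structure (scenario and web test names are illustrative; the mechanism is exactly the steps above):

Load Test (duration: very large)
  Scenario "Load" (Maximum Test Iterations = number of requests to queue)
    Web test: QueueMessage
  Scenario "Watcher" (Think Time Between Test Iterations = 60 s)
    Web test: CheckEndCondition + web test plugin -> writes flag into $LoadTestUserContext
  Load test plugin: in the TestFinished event, if the flag is present -> LoadTest.Abort()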
At the time of writing there is (still) no way to extend the duration of the test using a custom load test plugin, nor by having a virtual user type that refuses to exit, nor by locking the close-down period of the test and preventing it from exiting that way.
The only way we managed to do something like this was to directly manipulate the LoadTest database and inject performance counter data afterwards from log files, but this is neither smart nor recommended.
Oh well..
