Using Ruby/Cucumber, I know you can explicitly call fail("message"), but what are your other options?
The reason I ask is that we have 0... I repeat, absolutely NO control over our test data. We have Cucumber tests that cover edge cases that we may or may not have users for in our database. We (for obvious reasons) do not want to throw away the tests, because they are valuable; however, since our data set cannot exercise that edge case, the test fails because the SQL statement returns an empty data set. Right now we just have those tests failing, but I would like to see something along the lines of "no_data" when the SQL statement returns an empty data set. So the output would look like:
Scenarios: 100 total (80 passed, 5 no_data, 15 fail)
I am willing to use the already-implemented "skipped" status if there is a skip("message") function.
What are my options for showing that, with the current data, we just don't have any test data for those tests? Making these manual tests is also not an option; they need to run every week with our automation, but somehow separate from the failures. A failure means a defect; no_data means it's not a testable condition. It's the difference between a warning ("we have not tested this edge case") and an alert ("broken code").
You can't invoke 'skipped', but you can certainly call pending with or without an error message. I've used this in a similar situation to yours. Unless you're running in strict mode, having pending scenarios won't cause any failures. The problem I encountered was that occasionally a step would get misspelled, causing Cucumber to mark it as pending because it didn't match a step definition. That then got lost in the sea of 'legitimate' pending scenarios, and it was weeks before we discovered it.
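For example, a step definition could mark the scenario pending when the query comes back empty. A minimal sketch, where find_users_matching_edge_case is a hypothetical stand-in for your SQL lookup:

Given('a user matching the edge case exists') do
  @users = find_users_matching_edge_case
  pending('no_data: the current data set has no users for this edge case') if @users.empty?
end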
I have multiple test cases covering different components and in different specs. Each of them runs successfully on its own, but when I run them together, some of them randomly fail, some for weird reasons like a CSS selector not being found:
let routerElement = contentComponentElement.querySelector("router-outlet");
expect(routerElement).toBeTruthy(); //fails sometimes
Could it be that, because I am running them together, a test case is picking up residue or leftover state from the previous test case? Is it possible to clear all the previous data/HTML etc. before running a new test case?
The issue was that some of the test cases used Observables and I wasn't waiting for the Observables to finish before moving on to the next test case. I started calling done for such test cases and now things are in order.
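In Jasmine that looks roughly like this (a sketch; service.getData() is a hypothetical call returning an Observable):

it('waits for the observable before the next spec runs', (done: DoneFn) => {
  service.getData().subscribe({
    next: value => expect(value).toBeTruthy(),
    complete: () => done() // tell Jasmine this spec is finished
  });
});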
I have a huge flow to test using APIs. There are 3 endpoints: one starts a process (a DB migration) that can last ~2-3 days, one returns the status of the currently running process (in progress, success, fail), and the last one returns all the failed processes (as a list).
The whole flow should be:
Start the first process
Call the second endpoint until the first process ends (should get Fail or Success)
If the process failed, call the first endpoint again, if not, go to the next process.
The problem is that one process can last around 2-3 days and we have around 20k processes to check, so this will take a lot of time. I do have a dedicated VM just for this.
My question: is it worth trying to implement a solution for this using JMeter?
It is not worth implementing in JMeter unless you want to use the tool as a workload automation engine that replaces the functionality provided by UC4 AppWorx or Control-M. Based on what you describe, it does not appear to be a load test, except perhaps the second part, which continuously queries the services for success/failure. I do not know the architecture behind that implementation, so I am unable to say whether even that would be a load test or not.
I want to execute a specific step only once before each Cucumber feature file. A Cucumber feature file can have multiple scenarios. I don't want Background steps here, which execute before each scenario. Every feature file has a step (different in each feature) that should execute only once, so I can't put that step into a Before hook, as I have a specific step for each of my 20 features. A sample Gherkin file is shown below:
Scenario: This will execute only once before all scenarios in this current feature
  When Navigate to the Page URL

Scenario: scenario 1
  When Some Action
  Then Some Verification

Scenario: scenario 2
  When Some Action
  Then Some Verification

Scenario: scenario 3
  When Some Action
  Then Some Verification
I hope you understand my question. I am using Ruby, Capybara, and Cucumber in my framework.
Cucumber doesn't really support what you are asking about. A way to implement this with Cucumber hooks would be to use these two pieces of documentation:
https://github.com/cucumber/cucumber/wiki/Hooks#tagged-hooks
https://github.com/cucumber/cucumber/wiki/Hooks#running-a-before-hook-only-once
You would tag all your feature files appropriately, and you can then implement tagged Before hooks that execute once per feature tag.
It's not beautiful but it accomplishes what you want without waiting on a feature request (or using a different tool).
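Combining the two patterns, a tagged hook guarded by a flag might look something like this (a sketch; @checkout is a hypothetical feature tag, and visit/checkout_page_url stand in for your one-time setup):

Before('@checkout') do
  next if $checkout_setup_done
  visit(checkout_page_url) # one-time navigation for this feature
  $checkout_setup_done = true
end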
This can be achieved by associating a Before, After, Around or AfterStep hook with one or more tags. Examples:
Before('@cucumis, @sativus') do
  # This will only run before scenarios tagged
  # with @cucumis OR @sativus.
end
This must be in the top 5 most frequent questions on the Cucumber mailing list. You can do what you want with hooks; however, you almost certainly should not. The execution time you save by taking this approach is totally outweighed by the amount of time and effort it will take to debug the intermittent failures that such an approach generally leads to.
One of the foundations of creating automated tests is to start each test from a consistent place. When you have code that sets up key things for scenarios, but that is not run for every scenario, you have to do the following:
Ensure your setup code creates a consistent base to start from (this is easy)
Ensure that every scenario that uses this base, does not modify the base in any way at all (this is very very difficult)
In your example you'd have to ensure that every action in every scenario ends up back on your original page URL. If just one scenario fails to do that, you will end up with intermittent failures, and you will have to go through every scenario to find the culprit.
In general it is much easier and more effective to put your effort into making your setup code FAST enough so that you are not worried about running it before each scenario.
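In other words, prefer a plain hook that runs the (cheap) setup every time. A sketch, where visit and page_url are hypothetical stand-ins for your setup:

Before do
  visit(page_url) # fast, repeated before every scenario, so each one starts from a known state
end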
Yes, this can be done by passing the actual value in your feature file and using "(\\d+)" in your Java file. Look at the code shown below for a better understanding.
Scenario: some test scenario
  Given whenever a value is 50
In myFile.java, write the step definition as shown below
@Given("whenever a value is (\\d+)$")
public void testValueInVariable(int value) throws Throwable {
    assertEquals(50, value);
}
You can also have a look at the link below to get a clearer picture:
https://thomassundberg.wordpress.com/2014/05/29/cucumber-jvm-hello-world/
Some suggestions have already been given, especially the one quoting the official documentation, which uses a global variable to store whether or not the initial setup has been run.
In my case, where multiple features were executed one after another, I had to reset the variable by checking whether scenario.feature.name had changed:
$feature_name ||= ''
$is_setup ||= false

Before do |scenario|
  current_feature_name = scenario.feature.name rescue nil
  if current_feature_name != $feature_name
    # We have moved on to a new feature, so the per-feature setup needs to run again.
    $feature_name = current_feature_name
    $is_setup = false
  end
end
$is_setup can then be used in steps to determine whether any initial setup needs to be done.
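For example, a step could then look something like this (a sketch; visit and page_url are hypothetical stand-ins for the feature's one-time setup):

When('Navigate to the Page URL') do
  unless $is_setup
    visit(page_url) # runs only once per feature
    $is_setup = true
  end
end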
I am using VS 2010 unit tests to create a load test.
BeginTimer/EndTimer are used to measure the response time.
On success, the timer works as expected.
If an error occurs in the application, I don't want the timer to record the response time, as this will throw off the analysis reports. As an example, an OpenOrder method may take 30 seconds on success, but on failure (e.g. order not found), OpenOrder might return in 2 seconds. I want the response time metrics to represent only the actions that were successful.
Alternatively, is there a way to filter out the timers/transactions that were not successful?
Is there another way to do this? Something else besides BeginTimer/EndTimer?
Code Snippet:
testContextInstance.BeginTimer("trans_01");
OpenOrder("123"); // Execute some code
// If the open fails, I want to disregard/kill the timer.
// If I do a 'return' here, never calling EndTimer, the timer
// is still ended and its response time recorded.
testContextInstance.EndTimer("trans_01");
This is a deficiency in the unit testing API. There is a similar lack in the web testing API. I too have wanted a way to disregard timers around failures, but I think we are out of luck here. I believe an enhancement request to Microsoft is needed.
There is (possibly) a (lengthy) workaround: add your own timer (using the Stopwatch class) that you can ignore or terminate at will, and also add the code to insert a result row directly into the transactions table of the load test results database (asynchronously, for best performance).
But that's awful. It would be much easier if the API simply offered a 'KillTimer' method.
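The Stopwatch half of that workaround might look something like this (a sketch; OpenOrder and RecordSuccessfulTiming are hypothetical stand-ins, and writing the row into the results database is not shown):

using System;
using System.Diagnostics;

public partial class OrderLoadTest
{
    public void TimedOpenOrder()
    {
        var stopwatch = Stopwatch.StartNew();
        try
        {
            OpenOrder("123");                // the operation under test
            stopwatch.Stop();
            RecordSuccessfulTiming("trans_01", stopwatch.Elapsed);
        }
        catch (Exception)
        {
            // On failure, stop the watch and simply discard the measurement.
            stopwatch.Stop();
            throw;
        }
    }

    private void OpenOrder(string orderId) { /* hypothetical call into the application */ }
    private void RecordSuccessfulTiming(string name, TimeSpan elapsed) { /* hypothetical: write to the results DB */ }
}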
I have unit tests defined for my Visual Studio 2008 solution. These tests are defined in multiple methods and in multiple classes across several files.
I've read in a blog article that when using MSTest, it is a mistake to think that you can depend on the order of execution of your tests:
Execution Interleaving: Since each instance of the test class is instantiated separately on a different thread, there are no guarantees regarding the order of execution of unit tests in a single class, or across classes. The execution of tests may be interleaved across classes, and potentially even assemblies, depending on how you chose to execute your tests. The key thing here is: all tests could be executed in any order, it is totally undefined.
That said, I need a pre-execution step to run before any of these tests. That is, I actually want to define an order of execution somehow. For example: 1) first create the database; 2) test that it was created; then 3) run the remaining 50 tests in arbitrary order.
Any ideas on how I can do that?
I wouldn't test that the database is successfully created; I would assume that all subsequent tests will fail if it is not, and it feels as though you would be testing the test code.
Regarding a pre-test step to set up the database, you can do that by creating a method and decorating it with the ClassInitialize attribute. That makes the test framework execute the method before any other method within the test class:
[ClassInitialize()]
public static void InitializeClass(TestContext testContext)
{
    // your init code here
}
Unit tests should all work standalone, and should not have dependencies on each other, otherwise you can't run a single test in isolation.
Every test that needs the database should then just create it on demand (if it has not already been created; you can use a singleton/static class to ensure that if multiple tests are executed in a batch, the database is only actually created once).
Then it won't matter which test executes first; it'll just be created the first time a test needs a database to use.
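A minimal sketch of that on-demand approach, assuming a hypothetical CreateTestDatabase helper in place of your actual setup code:

using System;

public static class TestDatabase
{
    // Lazy<T> is thread-safe by default, so the factory runs at most once per test run,
    // no matter which test gets here first or how the tests are interleaved.
    private static readonly Lazy<bool> Created = new Lazy<bool>(() =>
    {
        CreateTestDatabase();
        return true;
    });

    // Call this at the start of any test that needs the database.
    public static void EnsureCreated()
    {
        var _ = Created.Value;
    }

    private static void CreateTestDatabase()
    {
        // hypothetical: create the schema, seed baseline data, etc.
    }
}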
In theory it is correct that tests should be independent of each other and able to run standalone. But in practice there is a difference between theory and practice, and VS2010 gives me a hard time with its fixed order of execution (a random order that is always the same).
Here are some examples:
I have a unit test that cross-checks the dates between some tables and verifies that everything is in agreement. Obviously it is of no use to run this test on an empty database, so I want it to run SOME TIME AFTER the unit test that inserts data. Sorry, VS2010 doesn't let you do this.
OK, cool, then I will add it to the insert unit test as an epilogue. But then I want to cross-check another 10 things, and instead of having a unit test ("Make sure that entities with various parameters can be inserted without crashes") I end up with a mega-test.
Then another case.
My unit test inserts entities, just inserts, to make sure that this part of the logic works OK. Then I have a multi-threaded version of the test, to make sure that there are no deadlocks and such. Clearly I need the multi-threaded test to run SOME TIME AFTER the single-threaded test, and ONLY if the single-threaded test succeeds. Sorry, VS2010 can't do this.
Another case. I have a unit test that deletes ALL entities of a given kind in the database. This should result in a bunch of empty tables and lots of zeros in other tables. Clearly it is useless to run it on an empty database, so the test inserts 10,000 entities if it finds the DB empty. However, if it runs AFTER the multi-threaded test, it will find 250,000 entities, and deleting ALL of them takes TIME. Sorry, VS2010 won't let me do anything about it.
The funny thing is that because of this situation my unit tests slowly started turning into mega-tests that took more than 30 minutes to complete (each), and then VS2010 would time them out, because the default test timeout is 30 minutes. OMG, please help! :-)