How to write a test case to check if an email string is valid in Mocha / Chai in JavaScript?

I am new to JavaScript testing. I want to know how to write test cases for scenarios like "the name field should not contain any number", or the email example I asked about above.
Are there any good practices to follow?

You can follow the link below to get some examples; I normally refer to it while writing test scripts.
https://www.chaijs.com/guide/styles/#expect
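To make this concrete, here is a minimal sketch of such a test using Mocha with Chai's expect style. The validateEmail helper and its regex are illustrative assumptions, not a recommended validator; substitute your real validation code. The same pattern covers the "name must not contain a number" case (assert that /\d/.test(name) is false).

const { expect } = require('chai');

// Illustrative validator; replace with the function you actually want to test.
function validateEmail(str) {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(str);
}

describe('validateEmail', () => {
  it('accepts a well-formed email address', () => {
    expect(validateEmail('jane.doe@example.com')).to.be.true;
  });

  it('rejects a string without an @ sign', () => {
    expect(validateEmail('jane.doe.example.com')).to.be.false;
  });

  it('rejects a string containing whitespace', () => {
    expect(validateEmail('jane doe@example.com')).to.be.false;
  });
});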

Related

JMeter JSON comparison

Currently I am working on moving some API DDT (data-driven, from CSV) tests from Robot Framework to JMeter, and what troubles me is the lack of a proper JSON assertion that can ignore some keys during comparison. I am totally new to JMeter, so I am not sure whether such an option exists.
I am pretty sure we are using the wrong tool for this job, especially because the functional testers would take over writing new tests. However, my approach (to make it as easy as possible for them) is to create a JMeter plugin which takes the response and compares it to a baseline (excluding ignored keys defined in its GUI). What do you think? Is there any built-in I can use instead? Or do you know of an existing plugin?
Thanks in advance
The "proper" assertion is JSON Assertion available since JMeter 4.0
You can use arbitrary JSON Path queries to filter response according to your expected result
If that is not enough, you can always go for the JSR223 Assertion; the Groovy language has built-in JSON support, so it will be far more flexible than any existing or future plugin.
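To illustrate the JSR223 route, here is a rough sketch. The answer recommends Groovy; the sketch below instead uses the JSR223 Assertion's JavaScript engine (available when the JVM provides one, e.g. Nashorn) to stay consistent with the JavaScript examples elsewhere on this page. The baseline object and ignoredKeys list are assumptions, and the comparison is deliberately naive:

// JSR223 Assertion script -- 'prev' and 'AssertionResult' are standard JSR223 bindings.
var ignoredKeys = ['timestamp', 'requestId'];          // keys to skip during comparison (assumption)
var baseline = { id: 1, name: 'John', status: 'OK' };  // expected payload (assumption)

var actual = JSON.parse(prev.getResponseDataAsString());

ignoredKeys.forEach(function (key) {
  delete actual[key];
  delete baseline[key];
});

// Naive, order-sensitive comparison; a real implementation would compare keys recursively.
if (JSON.stringify(actual) !== JSON.stringify(baseline)) {
  AssertionResult.setFailure(true);
  AssertionResult.setFailureMessage('Response does not match baseline (ignoring: ' + ignoredKeys.join(', ') + ')');
}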
Please find below the approach that I can think of:
Take the response/HTML/JSON source dump for the baseline using "Save Responses to a file".
Take the response dump for the AUT that needs to be compared, or simply the second-run dump.
Use two FTP Samplers to fetch the locally saved response dumps.
Use the Compare Assertion to compare the two FTP call responses. In the Compare Assertion, you can use "Regex String and Substitution" to mask the timestamps or user IDs with something common to both so that they are ignored in the comparison.
You need to take care with how you save and fetch the responses.
Hope this helps.

Is there a way to create a custom cucumber formatter that prints the Given, When and Then steps

I am trying to create a custom Cucumber formatter and found this: http://www.relishapp.com/cucumber/cucumber/docs/extending-cucumber/custom-formatter
I noticed the documentation provided an example that only showed how to print the "Feature" name and "Scenario" name.
I am trying to also print the Given, When, and Then steps. Can someone provide me an example of that?
A good approach is to use Cucumber's own included formatters as examples.
https://github.com/cucumber/cucumber/tree/master/lib/cucumber/formatter
Take a look at the "pretty" formatter. It likely has most of what you need.
Of particular interest to your question are the methods:
before_step
before_step_result
step_name

How to handle a multi-language website in a JMeter script

I have a website which supports English and French. I have already created a script for the English website, but now they want me to test against the French website. So how can I extend my script so that the assertions do not fail when I test either of those languages?
You can easily add flexibility to your Assertions so that they check for the presence of either the English or the French word in the response.
For instance, if you want to use a single assertion to check whether the word Welcome OR Bienvenue is present in the response, you can combine them using a pipe as follows:
Welcome|Bienvenue
As per the How to Use JMeter Assertions in 3 Easy Steps guide, the Response Assertion in "Contains" and "Matches" modes accepts Perl5-style regular expressions, so you should have enough flexibility to check both the English and French website versions.
In short
Your tests should be language-agnostic, especially performance/load tests.
Explanation
UI tests should use generic selectors such as tags (<p>, <div>, <table>), element IDs (<div id="basket">) or CSS classes (<p class="message">) for looking up elements. As you're using JMeter, I assume you're running some sort of performance/load tests. If so, then you most likely want to look for action elements to progress your tests.
If you cannot omit some language dependency (for example localized URL-paths), I would suggest using JMeter variables that are set according to the language you're testing with. See here for details
In contrast to performance tests, acceptance or general web UI tests would incorporate testing of some labels. Selenium or other HTML capturing tests are usually backed by some test code written by you or your team. That code can rely on resource bundles, translations, etc. so you can test for the correct labels.
HTH, Mark

How to make one feature run before another

I have two Cucumber features (DeleteAccountingYear.feature and AddAccountingYear.feature).
How can I make the second feature (AddAccountingYear.feature) run before the first one (DeleteAccountingYear.feature)?
I concur with @alannichols about tests being independent of each other. That's a fundamental aspect of an automation suite; otherwise we will end up with an unmaintainable, flaky test suite.
Needing to run a certain feature file before another feature looks to me like a test design issue.
Cucumber provides a few options to solve issues like this:
a) Is DeleteAccountingYear.feature really a feature of its own? If not, you can use the Cucumber Background: option. The steps provided in the Background will be run before each scenario in that feature file. So your AddAccountingYear.feature will look like this:
Feature: AddingAccountingYear

  Background:
    Given I have deleted accounting year

  Scenario: add new accounting year
    Then I add new account year
b) If DeleteAccountingYear.feature is indeed a feature of its own and needs to be in its own feature file, then you can use setup and teardown functions. In Cucumber this can be achieved using hooks. You can tag AddAccountingYear.feature with a certain tag, say @doAfterDeleteAccountYear. Now from a Before hook you can do the required setup for this specific tag. The Before hook (for Ruby) will look like:
Before('@doAfterDeleteAccountYear') do
  # Call the function to delete the accounting year
end
If "delete accounting year" is written as a function, then the only thing required is to call that method in the Before hook. This way the code stays DRY as well.
If these options don't work for you, another way of forcing the order of execution is to use a batch/shell script. You can add individual cucumber commands for each feature in the order you would like them to execute and then just run the script. The downside is that a separate report will be generated for each feature file, so this is something I wouldn't recommend, for the reasons mentioned above.
From Justin Ko's website (https://jkotests.wordpress.com/2013/08/22/specify-execution-order-of-cucumber-features/) the run order is determined in the following way:
Alphabetically by feature file directory
Alphabetically by feature file name
Order of scenarios within the feature file
So to run one feature before the other you could change the name of the feature file, or put it in a separate feature folder with a name that sorts alphabetically first.
However, it is good practice to make all of your tests independent of one another. One of the easiest ways to do this is to use mocks to create your data (i.e. the data you want to delete), but that isn't always an option. Another way would be to create the data you want to delete in the setup of the delete tests. The downside to doing this is that it duplicates effort, but it won't matter what order the tests run in. This may not be an issue now, but with a larger test suite and/or multiple coders using the test repo it may be difficult to maintain a test ordering based solely on alphabetical sorting.
Another option would be to combine the add and delete tests. This goes against the general rule that one test should test one thing but is often a pragmatic approach if your tests take a long time to run and adding the add data step to the set up for delete would add a lot of time to your test suite.
Edit: after reading that link to Justin Ko's site: you can specify the features to run when you invoke cucumber, and it will run them in the order that you give. For any whose order you don't care about, you can just put the whole feature folder at the end and cucumber will run through them, skipping any that have already been run. Copy-paste example from the link above:
cucumber features\folder2\another.feature features\folder1\some.feature features

External Data File for Unit Tests

I'm a newbie to Unit Testing and I'm after some best practice advice. I'm coding in Cocoa using Xcode.
I've got a method that validates a URL that a user enters. I want it to accept only the http:// protocol and only URLs that contain valid characters.
Is it acceptable to have one test for this and use a test data file? The data file provides example valid/invalid URLs and whether or not the URL should validate. I'm also using this to check the description and domain of the error message.
Why I'm doing this
I've read Pragmatic Unit Testing in Java with JUnit and this gives an example with an external data file, which makes me think this is OK. Plus it means I don't need to write lots of unit tests with very similar code just to test different data.
But on the other hand...
If I'm testing for:
invalid characters
and an invalid protocol
and valid URLs
all in the same test data file (and therefore in the same test), will this cause me problems later on? I've read that one test should only fail for one reason.
Is what I'm doing OK?
How do other people use test data in their unit tests, if at all?
In general, use a test data file only when it's necessary. There are a number of disadvantages to using a test data file:
The code for your test is split between the test code and the test data file. This makes the test more difficult to understand and maintain.
You want to keep your unit tests as fast as possible. Having tests that unnecessarily read data files can slow down your tests.
There are a few cases where I do use data files:
The input is large (for example, an XML document). While you could use String concatenation to create a large input, it can make the test code hard to read.
The test is actually testing code that reads a file. Even in this case, you might want to have the test write a sample file in a temporary directory so that all of the code for the test is in one place.
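As an illustration of that second case, here is a sketch in the Mocha/Chai style used earlier on this page rather than in Cocoa; readConfig is a hypothetical stand-in for whatever file-reading code you are testing. The test writes its own sample file into a temporary directory, so the test and its data stay together:

const { expect } = require('chai');
const fs = require('fs');
const os = require('os');
const path = require('path');

// Hypothetical code under test: reads a file from disk and parses it as JSON.
function readConfig(filePath) {
  return JSON.parse(fs.readFileSync(filePath, 'utf8'));
}

describe('readConfig', () => {
  it('parses a config file from disk', () => {
    // The test creates the file itself, so all of its data lives in one place.
    const dir = fs.mkdtempSync(path.join(os.tmpdir(), 'cfg-'));
    const file = path.join(dir, 'config.json');
    fs.writeFileSync(file, JSON.stringify({ retries: 3 }));

    expect(readConfig(file).retries).to.equal(3);
  });
});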
Instead of encoding the valid and invalid URLs in the file, I suggest writing the tests in code. I suggest creating a test for invalid characters, a test for invalid protocol(s), a test for invalid domain(s), and a test for a valid URL. If you don't think that has enough coverage, you can create a mini integration test to test multiple valid and invalid URLs. Here's an example in Java and JUnit:
public void testManyValidUrls() {
    UrlValidator validator = new UrlValidator();
    assertValidUrl(validator, "http://foo.com");
    assertValidUrl(validator, "http://foo.com/home");
    // more asserts here
}

private static void assertValidUrl(UrlValidator validator, String url) {
    assertTrue(url + " should be considered valid", validator.isValid(url));
}
While I think this is a perfectly reasonable question to ask, I don't think you should be overly concerned about this. Strictly speaking, you are correct that each test should only test for one thing, but that doesn't preclude your use of a data file.
If your System Under Test (SUT) is a simple URL parser/validator, I assume that it takes a single URL as a parameter. As such, there's a limit to how much simultaneously invalid data you can feed into it. Even if you feed in a URL that contains both invalid characters and an invalid protocol, it would only cause a single result (that the URL was invalid).
What you are describing is a Data-Driven Test (also called a Parameterized Test). If you keep the test itself simple, feeding it with different data is not problematic in itself.
What you do need to be concerned about is being able to quickly locate why a test fails when/if that happens some months from now. If your test output points to a specific row in your test data file, you should be able to quickly figure out what went wrong. On the other hand, if the only message you get is that the test failed and any of the rows in the file could be at fault, you will begin to see the contours of a test maintainability nightmare.
Personally, I lean slightly towards having the test data as closely associated with the tests as possible. That's because I view the concept of Tests as Executable Specifications as very important. When the test data is hard-coded within each test, it can very clearly specify the relationship between input and expected output. The more you remove the data from the test itself, the harder it becomes to read this 'specification'.
This means that I tend to define the values of input data within each test. If I have to write a lot of very similar tests where the only variation is input and/or expected output, I write a Parameterized Test, but still invoke that Parameterized Test from hard-coded tests (that each is only a single line of code). I don't think I've ever used an external data file.
But then again, these days, I don't even know what my input is, since I use Constrained Non-Determinism. Instead, I work with Equivalence Classes and Derived Values.
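To illustrate the parameterized-test-behind-hard-coded-tests idea from two paragraphs up, here is a sketch in the Mocha/Chai style of the question at the top of this page; isValidUrl and the helper name are illustrative assumptions, not anyone's actual API:

const { expect } = require('chai');

// Illustrative validator standing in for the real code under test.
function isValidUrl(url) {
  return /^http:\/\/[A-Za-z0-9.-]+(\/[A-Za-z0-9._~\/-]*)?$/.test(url);
}

// Parameterized helper: each hard-coded call below is a one-line test
// that states its input and expected result explicitly.
function itValidatesUrl(url, expected) {
  it('treats "' + url + '" as ' + (expected ? 'valid' : 'invalid'), () => {
    expect(isValidUrl(url)).to.equal(expected);
  });
}

describe('URL validation', () => {
  itValidatesUrl('http://foo.com', true);
  itValidatesUrl('http://foo.com/home', true);
  itValidatesUrl('ftp://foo.com', false);        // wrong protocol
  itValidatesUrl('http://foo bar.com', false);   // invalid character
});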
Take a look at: http://xunitpatterns.com/Data-Driven%20Test.html
