Optimising Cypress tests for multiple UI behaviours

In our project we serve multiple clients from the same codebase, and each client has its own configuration, stored in the database, which drives how the UI behaves. We don't want to write a separate test suite per client: when something is added for all clients, we want to update the tests once, in one place, and have them work everywhere.
Can we get guidance on how to structure our tests so that they read each client's configuration from the database and behave accordingly?
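One possible shape for this, as a minimal sketch rather than an established pattern: expose the per-client configuration through a `cy.task` registered in `setupNodeEvents`, load it once, and branch assertions on it. Every name below (`getClientConfig`, `CLIENT_ID`, `showExportButton`) is hypothetical.

```typescript
// cypress/e2e/dashboard.cy.ts
describe('dashboard', () => {
  let config: { showExportButton: boolean };

  before(() => {
    // getClientConfig would be registered in setupNodeEvents and query
    // the configuration table for the client under test; the client id
    // comes from the environment, e.g. CYPRESS_CLIENT_ID=acme.
    cy.task('getClientConfig', Cypress.env('CLIENT_ID')).then((cfg) => {
      config = cfg as { showExportButton: boolean };
    });
  });

  it('shows the export button only for clients that enable it', () => {
    cy.visit('/dashboard');
    if (config.showExportButton) {
      cy.get('[data-test=export]').should('be.visible');
    } else {
      cy.get('[data-test=export]').should('not.exist');
    }
  });
});
```

The same spec could then be run once per client by switching the environment variable, so the suite stays in one place and only the configuration differs.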

Related

Mocking 3rd party integrations outside of the context of a test

In a lot of the apps I work on, we rely heavily on first- and third-party APIs; so much so that in some of our apps it is useless to even try to log in without those APIs in place. Either critical pieces of data are missing, or the entire app is like a server-side-rendered SPA that holds no data of its own and pulls it all from an API at request time (we cache it when we can).
This raises a huge problem when trying to develop the app locally, since we do not have a sandbox environment. Our current solution is a service layer between our business logic and the actual HTTP calls; in our local environments we then swap out the HTTP implementation for a class that just returns fake data. This works pretty well most of the time, except for a couple of issues:
This only really gives us one state of the application at a time. Unlike data in the database, we cannot easily run different seeders to replicate different scenarios.
If we run into a bug in production, we have no way of replicating the API response without actually diving into the code and adding a conditional to return that specific response. With data stored in the database, it is easy to log in to TablePlus and manually set up some condition, or even pull down select tables from production.
In our mocks, the functions can get quite large and nasty if we try to make them respond dynamically with a different response based on, for example, the resource id being requested.
This makes the overhead of creating each test for each scenario quite high, in my opinion. If we could use something similar to a database factory to generate a bunch of different request-response pairs, we could test a lot more cases, and it would also help if we could somehow set up certain scenarios dynamically when trying to replicate bugs we run into in production.
Since our applications are built with Laravel and PHP, mocks, unlike the database, don't persist from one request to another. We cannot simply throw open a tinker session and start seeding our API integrations with data like we can with the database.
I was trying to think of a way to do it with a cache that stores request-response pairs. This could also be moved to the database, but I would prefer not to have an extra table that is only used locally.
Any ideas?
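Not a full answer, but one way to picture the factory idea: keep the service-layer interface, and back it with a fake that is seeded with request/response pairs per scenario. The app in question is Laravel/PHP; the sketch below is TypeScript purely for illustration, and every name in it is made up.

```typescript
type CannedResponse = { status: number; body: unknown };

// A fake HTTP layer seeded with request/response pairs, so each test
// (or bug reproduction) sets up exactly the scenario it needs.
class FakeHttpClient {
  private pairs = new Map<string, CannedResponse>();

  // The "seeder": register a canned response for a method + URL.
  seed(method: string, url: string, response: CannedResponse): void {
    this.pairs.set(`${method} ${url}`, response);
  }

  // Same interface the real client exposes; the service layer never
  // knows whether it is talking to the network or to this map.
  async request(method: string, url: string): Promise<CannedResponse> {
    const canned = this.pairs.get(`${method} ${url}`);
    if (!canned) {
      throw new Error(`No seeded response for ${method} ${url}`);
    }
    return canned;
  }
}

// Reproducing a production bug becomes one seed call instead of a
// conditional buried inside a mock:
const http = new FakeHttpClient();
http.seed('GET', '/api/users/42', {
  status: 500,
  body: { error: 'upstream timeout' },
});
```

Persisting the seeded pairs in the cache, as suggested, would let them survive across requests the way database rows do, without the extra local-only table.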

How can I use one data source for multiple test cases in a load test?

I have created performance test cases in Visual Studio 2017. The issue I am facing is that each test case has to get through a login data source, and in a load test they run in parallel. The question is: how can I use one data source for all the test cases, instead of adding a data source to every test case?
Thank you in advance
Data sources are added to Web Performance Tests; they are not added to load tests. If all the web tests need a login, then each test needs a suitable data source added. If the login actions can be written in a called web test (see the "Insert call to web test" command in the web test editor), then that login (i.e. called) web test could hold the data source for the username, password, etc.
One data source can be used by several web tests within the same load test. Each web test will have its own data access method (i.e. Unique or Sequential or Random etc). That means that the same login data may be used by two or more test executions at the same time. If one data source file is needed but each test must use different logins then see this answer for some ideas.

How to create a performance testing framework in JMeter?

For functional automation we usually create a framework that can be reused for automating applications. Is there any way to create a performance testing framework in JMeter, so that we can use the same framework for performance testing of different applications?
Please help if anyone knows, and provide more information about it.
You can consider JMeter itself as a "framework": it already comes with test elements to build requests via different protocols/transports, apply assertions, generate reports, etc.
It is highly unlikely you will be able to re-use an existing script for another application, as JMeter acts at the protocol level and the requests will therefore differ from application to application.
There is a mechanism in JMeter for re-using pieces of a test plan as modules, so you won't have to duplicate your code: check out Test Fragments and the Module Controller. However, this is more applicable within a single application.
The only "framework-like" approach I can think of is adding your JMeter tests to a continuous integration process: you get a build step which executes the performance tests and publishes the reports, so you can re-use the same test setup and reporting routine, and the only thing that changes from application to application is the .jmx test script(s). See the JMeter Maven Plugin and/or the JMeter Ant Task for more details.
You must first ask yourself: how dynamic is the conversation I am attempting to replicate? If you have a very stable services API, where the exposed external interface is static but the code handling it on the back end is changing, then you have a good shot at building something with a long life.
But if you are like the majority of web sites in the universe, you are dealing with developers who are always changing something: adding a resource, adding or deleting form values (hidden or not), headers, and so on. In this case you should consider your scripts perishable, with a limited life, and accept that you will need to rebuild them at some point.
Having noted the limited lifetime of code that tests code with a limited lifetime, are there techniques you can use to insulate yourself? Yes. The rule of thumb is that the higher up the stack you build your test scripts, the more insulated you are from changes under the covers (assuming the layer you build to is stable). The trade-off is that with more of the intelligence under the covers of your test interface, the resource cost per virtual user rises, which dictates more hosts for test execution and more skew from client-side code that can distort the view of what is coming from the server. As an example, run a Selenium script instead of a bare JMeter script: a browser is invoked, you get the benefit of all the local JavaScript processing to handle dynamic changes, and your script has a longer life.
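To make that trade-off concrete, here is a minimal browser-level sketch using the selenium-webdriver bindings for TypeScript (the URL and locators are invented). The browser handles hidden fields, generated tokens, and redirects for us, so the script survives many back-end changes that would break a protocol-level script; the price is a full browser per virtual user.

```typescript
import { Builder, By, until } from 'selenium-webdriver';

async function login(): Promise<void> {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://app.example.test/login');
    await driver.findElement(By.name('username')).sendKeys('alice');
    await driver.findElement(By.name('password')).sendKeys('secret');
    await driver.findElement(By.css('button[type=submit]')).click();
    // Wait on a user-visible outcome, not on any particular request,
    // so back-end request changes do not break the script.
    await driver.wait(until.titleContains('Dashboard'), 10000);
  } finally {
    await driver.quit();
  }
}

login().catch(console.error);
```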

Automated API blackbox testing

I have a (slightly complex) Spring web service which communicates with multiple frontends via a RESTful API (JSON), and additionally with other devices via SOAP or REST. I'd like to set up an automated test environment which is capable of the following things:
create preconditions via fixtures (Postgres DB)
send REST or SOAP messages against the API
run certain tasks (requests against the API) at a specific time/date
assert and validate the produced results (the return value of the API call, or a check of the DB)
run all tests independent of any frontend/UI
integrate the testing environment into my infrastructure (i.e. create a Docker container which runs all the tests, deployed by Jenkins)
Preferably I'd like to build reusable components (i.e. for creating a user that is needed in multiple different tests, and so on). I know there are a lot of tools and frameworks (SoapUI, JMeter, ...), but before trying them all and getting lost, I'd like to get an experience report from someone who has a similar setup.
We are using JMeter for API testing. We tried SoapUI but it had some memory issues, so we are pushing forward with JMeter, and so far so good.
For your questions:
We are using MySQL, but this post seems to show how to set up a postgres connection in JMeter: https://hiromia.blogspot.com/2015/03/how-to-perform-load-testing-on.html
JMeter can send REST API requests
I'm not sure if this is possible, but you could probably schedule your Jenkins job to run when you need the API to perform the specific task at the specific time.
There are quite a few assertions in JMeter. I use the Response and the BeanShell Assertions a lot.
JMeter is independent of any front-end UI, which helps pinpoint bugs.
I have not run it in Docker, but I am running it via Jenkins. This Jenkins plugin has been helpful: https://wiki.jenkins.io/display/JENKINS/Log+Parser+Plugin
A few more Tips:
Use the HTTP Request Defaults element. It will save you from having to update all your HTTP requests.
Use the User Defined Variables to define variables you need.
You can combine user defined variables like ${namePrefix}${myTime}, but it has to be in a second User Defined Variables element (you can't combine them in the same element).
If you have multiple test environments, set up a user defined variable with a value like this: ${__P(testenv,staging)}. This way, you can change it from the CLI like this: -Jtestenv=HOTFIX
We are using ExtentReports for pretty HTML results reports, with a custom JSR223 Listener (find my old post on this site).
If your site uses cookies, use the HTTP Cookie Manager.
If you need things to happen in order, check the "Run Thread Groups consecutively" option on the Test Plan element. If you don't, JMeter runs them in a random order.
Hope this is helpful. Happy Testing!

How to be successful in web user interface testing?

We are setting up a Selenium test campaign on a big web application.
The first thing we did was build a framework which initializes SQL data in the database before each test, launches the test, archives the results, and then clears the data.
We've integrated that into a Maven 2 process, run every day by TeamCity against a dedicated database.
We've set up several Selenium tests now, but they are not used as much as planned.
The reason is that tests sometimes break for reasons other than regressions (data may have changed, a stored procedure may have been recompiled, and so on).
I would like to know if there have been big successes in user interface testing and, if so, the reasons for them. Common errors would also interest me.
Testability helps a lot. The biggest win for testability in web apps is when all of the HTML elements you need to interact with on the page have unique and consistent attributes. If the attributes you use to identify the HTML elements (Selenium uses XPath, among other locator strategies) are not consistent and reliable from build to build, or session to session, your test scripts will fail. These attributes must also be unique, so that the automation tool (in this case Selenium) can reliably find the object on the web page.
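As a sketch of the difference (Selenium locator syntax; the data-test-id attribute name is just a convention we would choose, not something Selenium requires):

```typescript
import { By } from 'selenium-webdriver';

// Fragile: a positional XPath breaks as soon as the layout changes.
const fragile = By.xpath('/html/body/div[2]/form/table/tr[3]/td[2]/input');

// Robust: a dedicated, unique test attribute survives restyling and
// reordering, because it exists only for the tests.
const robust = By.css('[data-test-id="login-submit"]');
```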
If you want reliable unit tests, you need the same input every time, and the starting state of the database is part of that input. So you need to start from the same database each time. Of course, if you wish to test with different input, you need to create another unit test (as the results will obviously not be the same).
When I do stuff like this, I always use the same database as a starting point. Of course, a test might fail without modifying the database in the correct way, so some subsequent tests might fail as well even though they otherwise wouldn't. If your unit-test tool allows it, you should define dependencies between tests to make sure that dependent tests are not run at all when the 'parent' one fails.
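A minimal sketch of that discipline, using Node's built-in test runner; resetDatabase and loadFixture are hypothetical helpers that would wrap whatever SQL initialization scripts the framework already runs:

```typescript
import { beforeEach, it } from 'node:test';

// Hypothetical helpers; bodies reduced to stubs for the sketch.
async function resetDatabase(): Promise<void> { /* restore the baseline dump */ }
async function loadFixture(name: string): Promise<void> { /* run the named SQL file */ }

// Every test starts from an identical database, so the input is constant.
beforeEach(async () => {
  await resetDatabase();
  await loadFixture('baseline.sql');
});

it('sees exactly the data the baseline fixture created', async () => {
  // assertions here can rely on the known starting state
});
```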
I use HttpUnit, which has the added benefit of working before any styling has been added to the page.
http://httpunit.sourceforge.net/
You can attach the tests to run in the integration-test phase of Maven 2.
From the site
Written in Java, HttpUnit emulates the relevant portions of browser behavior, including form submission, JavaScript, basic http authentication, cookies and automatic page redirection, and allows Java test code to examine returned pages either as text, an XML DOM, or containers of forms, tables, and links.
