Is Cypress a good choice for API automation testing compared to Rest Assured/Karate? [closed] - cypress

I have been googling about using Cypress for API automation testing and found the links below:
Example: Cypress Real World App - API Testing
Docs: API Testing
Blog: Add GUI to your e2e API tests
Plugin: cy-api
API Testing with Cypress
Most of the cases I can think of, like OAuth, all REST methods (including form-data), GraphQL, assertions, and mocking, can be achieved using Cypress.
However, I was wondering why it's not widely used for this, and whether it's a good choice over other tools?
Please suggest!
Thanks a lot in advance.

Ask these questions and you will get the answer:
How easy is it to chain API requests and feed the response of one request into the next?
How easy is it to create data-driven tests, i.e. to loop over a tabular set of data and make requests to the same end-point?
Can you run your tests in parallel? (Without having to pay extra)
And when you run tests in parallel, do you get a single aggregate HTML report?
Can you mix async operations (e.g. message queues, API hooks / call-backs) into your test suites?
Can you mix database calls into your test suites and assertions?
Can you assert "deeply" on response payloads, and easily ignore (or just check formats for) any data-values that are random, such as time-stamps and generated id-s?
Can you re-use your functional API tests as performance tests?
Can you step-through and debug your API tests using an IDE?
Can you write mocks in the same framework, that you can then use to replace services you depend on in your end-to-end tests?
I've found very few frameworks that can actually do all the above. You can do the research :)
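To make the "deep assertion" question above concrete: it asks for a comparison that ignores random values such as timestamps and generated ids. Here is a framework-agnostic sketch in Python; the helper name, payload shape, and field names are invented for illustration, not taken from any particular tool:

```python
def deep_match(actual, expected, ignore=frozenset()):
    """Recursively check that `expected` is contained in `actual`,
    skipping keys whose values are random (timestamps, generated ids)."""
    if isinstance(expected, dict):
        if not isinstance(actual, dict):
            return False
        return all(
            key in ignore
            or (key in actual and deep_match(actual[key], value, ignore))
            for key, value in expected.items()
        )
    if isinstance(expected, list):
        return (isinstance(actual, list)
                and len(actual) == len(expected)
                and all(deep_match(a, e, ignore)
                        for a, e in zip(actual, expected)))
    return actual == expected

# The payload below is made up for illustration.
response = {"id": "a1b2c3", "created": "2024-01-01T00:00:00Z",
            "user": {"name": "alice", "role": "admin"}}
assert deep_match(response,
                  {"id": "IGNORED", "user": {"name": "alice"}},
                  ignore={"id", "created"})
```

A framework that supports this pattern natively (rather than forcing you to write such helpers yourself) scores well on that question.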

I currently work with Cypress for API testing only (a decision made by the company). There's no right or wrong answer: if it's the tool you want to use and you are comfortable with it, then go for it.
That being said, take into account that Cypress is a framework designed for end-to-end or even component testing. Therefore, even when you do not have a UI, it will always use the browser to perform actions (in the long term, and depending on the number of tests, this can have an impact on run time). Lots of things are designed to interact with UI components, and you need to patch/wrap your way through to perform more complex stuff.
I personally had some issues with nested requests, or when having to perform multiple actions with responses, and with asynchrony in general (which, in my opinion, Cypress handles in a weird way).
If I could choose, I would go with a framework designed for API testing, just because it will be more flexible and will probably have more tools for you to use when dealing with some of the things I mentioned above.
Hope this helps and good luck!

Related

Can I use karate for testing microservices? [duplicate]

Let me give some background. We're trying to do e2e testing between a bunch of Spring Boot services that write to Kafka, move files, and talk to other services to do the file moving. I think we're pretty good on integration testing with mocks and whatnot, but is there a way to leverage Karate or any other testing framework to achieve full e2e coverage for the few major scenarios? For example, can I have a test on one service that takes actual data and sends it to the next service without mocking?
I hope that made sense. Thank you.
It really does not matter whether the "system under test" is a single service or a collection of services; we just need to think in terms of inputs to the system and outputs from the system. From the description, it looks like the "system under test":
(a) interacts with some downstream services: you should be able to mock them using Karate
(b) produces to Kafka: see if https://github.com/Sdaas/karate-kafka can help here
(c) does some file I/O
For the last one, you may need to write some custom Java code that can be called from within Karate. See https://github.com/Sdaas/karate-kafka/tree/master/src/test/java/karate/java for some examples.
As you said, Karate is for e2e testing. The answer is quite simple: if you can assess changes in your overall state across all microservices based on the result of some action(s), then yes.
Of course, all these changes, actions, and assessments have to happen over HTTP, since that is Karate's main protocol.
The answer is yes. Even within the same Scenario you can make calls to two different URLs. So, for example, you can GET from service A and use the response to POST to service B.
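As a sketch of that chaining pattern, here it is with the two services replaced by stub functions. The endpoints, payload fields, and service behavior are invented for illustration; a real test would issue actual HTTP calls (e.g. via Karate, or requests in Python):

```python
# Stand-ins for two microservices; a real test would issue HTTP calls
# (a GET to service A and a POST to service B) instead of function calls.
def service_a_get_order():
    """Pretend GET /orders/latest on service A."""
    return {"orderId": "o-123", "status": "NEW"}

def service_b_process(payload):
    """Pretend POST /process on service B; reports the order it handled."""
    return {"processed": payload["orderId"], "status": "DONE"}

def test_chain_a_to_b():
    order = service_a_get_order()                              # step 1: read from A
    result = service_b_process({"orderId": order["orderId"]})  # step 2: feed B
    assert result == {"processed": "o-123", "status": "DONE"}

test_chain_a_to_b()
```

The essential point is only the data flow: the response of the first call parameterizes the second, with no mocking in between.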

Http proxy server tests [closed]

I have implemented an HTTP proxy client/server. Currently I intend to test this proxy client/server's performance. Can anybody tell me what approaches exist for making these tests?
If you are looking for some tools the following will be helpful for you:
RoboHydra is a web server designed precisely to help you write and test software that uses HTTP as a communication protocol. There are many ways to use RoboHydra, but the most common use cases are as follows:
RoboHydra allows you to combine a locally stored website front end with a back end sat on a remote server, allowing you to test your own local front-end installation with a fully functional back end, without having to install the back end on your local machine.
If you write a program designed to talk to a server using HTTP, you can use RoboHydra to imitate that server and pass custom responses to the program. This can help you reproduce different bugs and situations that might be otherwise hard, if not impossible, to test.
https://dev.opera.com/articles/robohydra-testing-client-server-interactions/
Webserver Stress Tool simulates large numbers of users accessing a website via HTTP/HTTPS. The software can simulate up to 10,000 users that independently click their way through a set of URLs. Simple URL patterns are supported as well as complex URL patterns (via a script file).
Webserver Stress Tool supports a number of different testing types. For example
✓ Performance Tests—this test queries single URLs of a webserver or web application to identify and discover elements that may be responsible for slower than expected performance. This test provides a unique opportunity to optimize server settings or application configurations by testing various implementations of single web pages/script to identify the fastest code or settings.
✓ Load Tests—this tests your entire website at the normal (expected) load. For load testing you simply enter the URLs, the number of users, and the time between clicks of your website traffic. This is a “real world” test.
✓ Stress Tests—these are simulations of “brute force” attacks that apply excessive load to your webserver. This type of “brute force” situation can be caused by a massive spike in user activity (i.e., a new advertising campaign). This is a great test to find the traffic threshold for your webserver.
✓ Ramp Tests—this test uses escalating numbers of users over a given time frame to determine the maximum number of users the webserver can accommodate before producing error messages.
✓ Various other tests—working with Webserver Stress Tool simply gives you more insight about your website, e.g. to determine that web pages can be requested simultaneously without problems like database deadlocks, semaphores, etc.
http://www.paessler.com/tools/webstress/features
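The ramp test described above boils down to a user-count schedule that escalates over time until the server starts producing errors. A tool-agnostic sketch of such a schedule (the function and all the numbers are illustrative, not from any specific tool):

```python
def ramp_schedule(start_users, end_users, duration_s, step_s):
    """Return (time_offset_s, active_users) pairs for a linear ramp-up.
    A load tool would start or stop simulated users to follow this curve,
    recording at which user count errors first appear."""
    schedule = []
    for t in range(0, duration_s + 1, step_s):
        users = start_users + (end_users - start_users) * t // duration_s
        schedule.append((t, users))
    return schedule

# Ramp from 10 to 100 simulated users over 60 seconds, in 15 s steps.
print(ramp_schedule(10, 100, 60, 15))
# [(0, 10), (15, 32), (30, 55), (45, 77), (60, 100)]
```

The maximum user count the server sustains without errors on such a ramp is the threshold the ramp test is designed to find.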
To better understand what client-server and web-based testing are, and how to test such applications, you may read this post: http://www.softwaretestinghelp.com/what-is-client-server-and-web-based-testing-and-how-to-test-these-applications/

Integration testing with Web API - non-InMemory tests or InMemory tests

I would like to do integration testing on my Web API Controllers.
When the integration test starts, the whole request/response pipeline of the Web API should be processed, so it's a real integration test.
I have read some blogs about non-InMemory tests and InMemory tests. I need to know what the difference is and which of those approaches matches my criteria above.
I would really be glad about some explanations from people who have actually dealt with integration testing on Web API, whether self-hosted or IIS-hosted (if there is a difference in testing...).
Not sure what you mean by non-in-memory testing, but with integration testing involving an in-memory hosted Web API, the requests are sent directly to the HttpServer, which is basically the first component to run in the ASP.NET Web API pipeline. This means the requests do not hit the network stack, so you don't need to worry about running on specific ports, and if you write a good number of tests, the time it takes to run them all will not be too long, since you deal with memory and not the network. You should get running times comparable to a typical unit test. Look at this excellent post from Kiran for more details on in-memory testing.
In-memory testing will exercise all the components you set up to run in the pipeline, but one thing to watch out for is formatters. If you send ObjectContent in the request, there is no need to run media formatters, since the request is already in deserialized format, and hence media formatting does not happen.
If you want to get closer and are willing to take a hit on running time, you can write your tests using a self-host. Is that what you mean by non-in-memory testing? As an example, you can use OWIN self-hosting: use the Katana hosting APIs to host your Web API and hit it with your requests. Of course, this will use the real HttpListener, and the requests do traverse the network stack, although it all happens on the same machine. The tests will be comparatively slower, but you get much closer to your prod runs.
I personally have not seen anyone using web hosting and doing lots of integration testing. It is technically possible to fire off your requests using HttpClient, inspect the response, and assert stuff, but you will not have a lot of control over arranging your tests programmatically.
My choice is to mix and match, that is, use in-memory tests as much as possible and use Katana-based host only for those specific cases I need to really hit the network.
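The in-memory style of testing described in this answer generalizes beyond ASP.NET: the test invokes the application's pipeline entry point directly as a function call, with no sockets involved. A minimal sketch of the same idea in Python using the WSGI convention (the toy app and helper are hypothetical, standing in for a real system under test):

```python
from io import BytesIO

def app(environ, start_response):
    """A trivial WSGI application standing in for the system under test."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello, " + environ["PATH_INFO"].encode()]

def in_memory_request(app, path):
    """Invoke the app directly: no ports, no network stack, just a call."""
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
        captured["headers"] = headers
    environ = {"REQUEST_METHOD": "GET", "PATH_INFO": path,
               "wsgi.input": BytesIO(b"")}
    body = b"".join(app(environ, start_response))
    return captured["status"], body

status, body = in_memory_request(app, "/api")
assert status == "200 OK" and body == b"hello, /api"
```

This is why in-memory tests run at unit-test speed: the "request" is an ordinary function call into the pipeline, exactly as with HttpServer in the Web API case.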

Debugging and Testing a Web Application Efficiently

I have written a web application which I am trying to test, but I am finding that some of the things I am doing are really repetitive and inefficient. For example, I might want to test just the reporting component of the application, but in order to access the reporting section, you are required to log in. I find myself logging in all the time just to test a completely unrelated component. What are some strategies I can use to bypass these kinds of constraints?
Maybe you should unit test these functionalities. This way you can automate repetitive tasks.
It also helps to improve your code quality ;)
A link to get you started: http://msdn.microsoft.com/en-us/magazine/dd942838.aspx
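One common way to make such components unit-testable without logging in is to inject the auth check as a dependency, so tests can substitute a permissive stub. A sketch (the class, check, and report data are all hypothetical):

```python
class ReportService:
    """Reporting component with the auth check injected as a dependency,
    so tests can substitute a stub instead of going through real login."""
    def __init__(self, auth_check):
        self._auth_check = auth_check

    def monthly_report(self, user):
        if not self._auth_check(user):
            raise PermissionError("login required")
        return {"user": user, "rows": 3}  # stand-in for real report data

# Production wiring would pass the real session-based check; in a test we
# bypass login entirely with a stub that always grants access.
service = ReportService(auth_check=lambda user: True)
assert service.monthly_report("alice") == {"user": "alice", "rows": 3}
```

The same seam also lets you test the denied path, by injecting a check that refuses access.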

JMeter versus The Grinder? [closed]

I'm looking at stress testing our website and having trouble picking the right tool.
It looks to me like two of the most popular are JMeter and The Grinder. Can anyone help with reasons in favor of either?
Thanks!
I have worked a bunch with The Grinder and there are three main reasons I like it.
It's free. I assume from your question that you are only looking at free solutions. There are some excellent commercial products as well, but most of the time I cannot justify the cost.
It is easy to start up processes on other machines. When really trying to crank up load on a cluster, I need to easily distribute the load out to remote machines. Grinder is great for that.
The scripts are all Jython. That allows me to easily tweak my scripts programmatically (e.g. randomize certain paths).
I haven't used JMeter in a long time, so I cannot say authoritatively how it compares on points 2 & 3.
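Point 3 above is worth illustrating: since Grinder scripts are Jython, programmatic tweaks like randomized paths are plain Python. A sketch of weighted path selection (the paths and weights are made up; a real Grinder script would pass the chosen path to its HTTP request object):

```python
import random

# Candidate request paths with rough traffic weights (illustrative values).
PATHS = [("/home", 6), ("/search", 3), ("/checkout", 1)]

def pick_path(rng):
    """Pick a request path, weighted so hot pages are hit more often."""
    expanded = [path for path, weight in PATHS for _ in range(weight)]
    return rng.choice(expanded)

rng = random.Random(42)  # seeded for reproducible test runs
sample = [pick_path(rng) for _ in range(5)]
print(sample)  # five weighted-random paths
```

Seeding the generator keeps load runs reproducible while still spreading requests realistically across pages.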
As for JMeter:
It's free.
It's easy to start with, with lots of documentation on its website and on the internet.
It has a proxy feature to easily create a test plan from browser navigation.
It is easy to start up processes on other machines; remote testing can be done from the GUI or the console.
The scripts can be written in BeanShell, Java, or any JSR 223 language (Groovy, JavaScript, Scala, JEXL, ...).
It has a lot of built-in samplers, and thanks to its plugin architecture it's very easy to add new ones or use any scripting engine to do what's missing.
It has a great user mailing list.
It has very responsive support.
It's now a top-level Apache project.
...
Use Gatling.
http://gatling-tool.org/
It's lovely, with a great DSL, and you can just edit the .scala files and rerun; it will compile for you on demand. It also emits very nice graphs:
http://gatling-tool.org/sample/index.html
If you're not familiar with Jython (like I wasn't), there is also a great little tool that comes with Grinder (the TCP Proxy, or some such name) that allows you to click around in a webapp and save your actions as a ready-made Jython script for Grinder, which you can then analyse/edit/adapt as necessary.
There is an excellent blog post that compares the following load test tools:
Grinder 3.11
Gatling 2.0.0.M3a
Tsung 1.51
JMeter 2.11
The Grinder
The Grinder consists of two main parts:
The Grinder Console - This is a GUI application which controls the various Grinder agents and monitors results in real time. The console can be used as a basic IDE for editing or developing test suites.
Grinder Agents - These are headless load generators; each can have a number of workers to create the load
Key Features of the Grinder:
TCP proxy - records network activity into the Grinder test script
Distributed testing - can scale with the increasing number of agent instances
Power of Python or Clojure combined with any Java API for test script creation or modification
Flexible parameterization which includes creating test data on-the-fly and the capability to use external data sources like files, databases, etc.
Post processing and assertion - full access to test results for correlation and content verification
Support of multiple protocols
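The "flexible parameterization" feature above usually means feeding request data from an external source such as a CSV file. A small sketch of that pattern in Python (the column names and values are invented; in a real Grinder/Jython script each row would parameterize one request):

```python
import csv
import io

# Illustrative test data, as it might arrive from an external CSV file.
CSV_DATA = """username,item_id
alice,101
bob,202
"""

def load_test_rows(fp):
    """Yield one dict per CSV row, ready to parameterize a request."""
    return list(csv.DictReader(fp))

rows = load_test_rows(io.StringIO(CSV_DATA))
for row in rows:
    # A load script would substitute these values into the request here,
    # e.g. GET /items/{item_id} as {username}.
    print(row["username"], row["item_id"])
```

The same loader works unchanged whether the data comes from a file, a database export, or data generated on the fly.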
Apache JMeter
Key Features of the JMeter:
Cross-platform. JMeter can be run on any operating system with Java
Scalable. When you need to create a higher load than a single machine can create, JMeter can be executed in a distributed mode - meaning one master JMeter machine will control a number of remote hosts.
Multi-protocol support. The following protocols are all supported ‘out-of-the-box’: HTTP, SMTP, POP3, LDAP, JDBC, FTP, JMS, SOAP, TCP
Multiple implementations of pre- and post-processors around samplers. This provides advanced setup, teardown, parameterization, and correlation capabilities
Various assertions to define criteria
Multiple built-in and external listeners to visualize and analyze performance test results
Integration with major build and continuous integration systems - making JMeter performance tests part of the full software development life cycle
I just went through the process of trying out both, and I would totally agree with Rob here. Grinder also seemed faster, and I really like how simple and lightweight it is compared to JMeter. The grinder.properties file is totally easy to use, especially if you're more of a console guy than a UI guy.
