Automated API black-box testing - Spring

I have a (slightly complex) Spring web service which communicates with multiple frontends via a RESTful API (JSON) and additionally with other devices via SOAP or REST. I'd like to set up an automated test environment which is capable of the following things:
create preconditions via fixtures (Postgres DB)
send REST or SOAP messages against the API
run certain tasks (requests against the API) at a specific time/date
assert and validate the produced results (return value of the API call or state of the DB)
run all tests independent of any frontend/UI
integrate the testing environment into my infrastructure (e.g. create a Docker container which runs all tests, deployed by Jenkins)
Preferably I'd like to build reusable components (e.g. for creating a user that is needed in multiple different tests, and so on...). I know there are a lot of tools and frameworks (SoapUI, JMeter, ...). But before trying them all and getting lost, I'd like to get an experience report from someone who has a similar setup.

We are using JMeter for API testing. We tried SoapUI but it had some memory issues, so we are pushing forward with JMeter, and so far so good.
For your questions:
We are using MySQL, but this post seems to show how to set up a Postgres connection in JMeter (see the JDBC sketch after this list): https://hiromia.blogspot.com/2015/03/how-to-perform-load-testing-on.html
JMeter can send REST API requests
I'm not sure if JMeter can do this on its own, but you could probably schedule your Jenkins job so that it runs the specific task against the API at the specific time.
There are quite a few assertions in JMeter. I use the Response and the BeanShell Assertions a lot (a scripted example follows this list).
JMeter is independent of any frontend UI, which helps pinpoint backend bugs.
I have not run Docker, but I am running via Jenkins. This Jenkins plugin has been helpful: https://wiki.jenkins.io/display/JENKINS/Log+Parser+Plugin
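A note on the Postgres point above: for creating fixtures you don't even need a JMeter-specific element. A JSR223 Sampler (JMeter ships Groovy, which accepts Java syntax) in a setUp Thread Group can prepare the database over plain JDBC, provided the Postgres driver jar is in JMeter's lib directory. A minimal sketch; the database name, table and credentials are assumptions:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Hypothetical connection details -- adjust host, database and credentials.
Connection conn = DriverManager.getConnection(
        "jdbc:postgresql://localhost:5432/testdb", "test_user", "secret");
try {
    // Insert the user fixture that later test steps rely on.
    PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO users (name, email) VALUES (?, ?)");
    ps.setString(1, "fixture-user");
    ps.setString(2, "fixture@example.com");
    ps.executeUpdate();
    ps.close();
} finally {
    conn.close();
}
```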
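And on the assertion point: the Response Assertion covers plain string/regex checks, while a scripted assertion handles structural checks. BeanShell works, but JMeter now recommends JSR223 with Groovy. A sketch of a JSR223 Assertion, assuming a hypothetical "status" field in a JSON response:

```java
import groovy.json.JsonSlurper;
import java.util.Map;

// 'prev' and 'AssertionResult' are standard bindings of a JSR223 Assertion.
Map json = (Map) new JsonSlurper().parseText(prev.getResponseDataAsString());

// Hypothetical check: the response is expected to carry a "status" field.
if (!"OK".equals(json.get("status"))) {
    AssertionResult.setFailure(true);
    AssertionResult.setFailureMessage("Expected status OK, got: " + json.get("status"));
}
```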
A few more tips:
Use the HTTP Request Defaults element. It will save you from having to update every HTTP request individually.
Use the User Defined Variables element to define the variables you need.
You can combine user defined variables like ${namePrefix}${myTime}, but the combination has to live in a second User Defined Variables element (you can't combine them in the same element).
If you have multiple test environments, set up a user defined variable with a value like this: ${__P(testenv,staging)}. This way, you can override it from the command line like this: -Jtestenv=HOTFIX
We are using ExtentReports for pretty HTML results reports with a custom JSR223 Listener (find my old post on this site; a sketch follows these tips).
If your site uses cookies, use the HTTP Cookie Manager.
If you need things to happen in order, check this option on the Test Plan element: Run Thread Groups consecutively. If you don't, JMeter starts them all in parallel.
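Regarding the ExtentReports listener mentioned above: I can't reproduce the old post here, but the idea boils down to mapping each sample result onto a report node from a JSR223 Listener. A minimal sketch, assuming the ExtentReports jars are in JMeter's lib directory (the property key and report file name are arbitrary):

```java
import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.ExtentTest;
import com.aventstack.extentreports.reporter.ExtentSparkReporter;

// Keep a single ExtentReports instance for the whole run via JMeter properties.
ExtentReports extent = (ExtentReports) props.get("extent.instance");
if (extent == null) {
    extent = new ExtentReports();
    extent.attachReporter(new ExtentSparkReporter("extent-report.html"));
    props.put("extent.instance", extent);
}

// 'prev' is the just-finished SampleResult binding of a JSR223 Listener.
ExtentTest node = extent.createTest(prev.getSampleLabel());
if (prev.isSuccessful()) {
    node.pass("HTTP " + prev.getResponseCode());
} else {
    node.fail(prev.getResponseDataAsString());
}
extent.flush(); // rewrites the HTML report after each sample
```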
Hope this is helpful. Happy Testing!

Related

Can we run a JMeter test from different geo locations simultaneously?

I want to run a load test for a gaming application, so users have to be generated from different client locations (UK, US) to hit the server simultaneously. Players from different geolocations should hit the server at the same time. How can I do that using JMeter?
I would not recommend using JMeter distributed testing for that, as it is designed for the JMeter client and servers being in the same LAN.
So you have at least 2 options:
Orchestrate the running of autonomous JMeter instances using Ansible or similar tools
Use a PaaS like RedLine13 that will allow you to easily do that using AWS, with the benefit of having all the JMeter setup done for you, a live aggregated report, and a final aggregated CSV plus the JMeter HTML dashboard.
If you're load testing a server, it doesn't make any difference to the server where the requests are coming from. In case you need to check how the server behaves under load and what response times for different geo-locations look like, you can do it manually using e.g. Amazon micro instances, which are free-tier eligible.
In case you don't want the manual steps, you can run JMeter in distributed mode using a helper script like jmeter-ec2, which automates deployment of JMeter slaves onto AWS machines, but be prepared for higher connect time and latency metrics.

How to create a performance testing framework in JMeter?

For functional automation we usually create a framework which is reusable for automating applications. Is there any way to create a performance testing framework in JMeter, so that we can use the same framework for performance testing of different applications?
Please help if anyone knows, and provide more information regarding it.
You can consider JMeter itself as a "framework": it already comes with test elements to build requests via different protocols/transports, apply assertions, generate reports, etc.
It is highly unlikely you will be able to re-use an existing script for another application, as JMeter acts on the protocol level and therefore the requests will differ from application to application.
There is a mechanism in JMeter that allows re-using pieces of a test plan as modules so you won't have to duplicate your code; check out Test Fragments and the Module Controller. However, it is more applicable within a single application.
The only "framework-like" approach I can think of is adding your JMeter tests into continuous integration process so you will have a build step which will execute performance tests and publish reports, basically you will be able to re-use the same test setup and reporting routine and the only thing which will change from application to application will be .jmx test script(s). See JMeter Maven Plugin and/or JMeter Ant Task for more details.
You must first ask yourself: how dynamic is the conversation I am attempting to replicate? If you have a very stable services API where the exposed external interface is static, but the code that handles it on the back end is changing, then you have a good shot at building something which has a long life.
But if you are like the majority of web sites in the universe, then you are dealing with developers who are always changing something: adding a resource, adding or deleting form values (hidden or not), headers, etc. In this case you should consider that your scripts are perishable, with a limited life, and you will need to rebuild them at some point.
Having noted the limited lifetime of a piece of code that tests a piece of code with a limited lifetime, are there some techniques you can use to insulate yourself? Yes. The rule of thumb is: the higher up the stack you go to build your test scripts, the more insulated you are from changes under the covers (assuming the layer you build to is stable). The trade-off is that with more of the intelligence under the covers of your test interface comes a higher resource cost for each individual virtual user, which in turn dictates more hosts for test execution and more skew from client-side code that can distort the view of what is coming from the server. An example: run a Selenium script instead of a bare JMeter script. A browser is invoked, you get the benefit of all the local JavaScript processing to handle the dynamic changes, and your script has a longer life.
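To make that trade-off concrete, here is what the top-of-the-stack variant might look like: a hypothetical Selenium WebDriver login flow in Java. The URL and locators are assumptions about the page under test; the point is that the browser resolves hidden fields, cookies and JavaScript for you, at the cost of a full browser per virtual user:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginFlow {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver(); // requires chromedriver on the PATH
        try {
            driver.get("https://example.com/login"); // hypothetical URL
            // The locators below are assumptions about the page under test.
            driver.findElement(By.name("username")).sendKeys("testuser");
            driver.findElement(By.name("password")).sendKeys("secret");
            driver.findElement(By.cssSelector("button[type=submit]")).click();
            System.out.println("Landed on: " + driver.getCurrentUrl());
        } finally {
            driver.quit();
        }
    }
}
```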

Looking for an Object Oriented JMeter example

I'm looking to abstract the sequences of REST calls for complicated behaviors in my company's app into a series of classes that are instantiated as needed, whose methods would effectively create the sequence of HTTP request calls. It's my hope that doing this would make the tests more compact and readable (as well as providing more reusable code). I would need to utilize the StandardJMeterEngine and export the test to JMX format after the HashTree test plan is created.
To cut down on development time, I'm hoping to find a nice example of this; I'm sure someone's done it, but I've yet to stumble onto it.
If you are looking into programmatic creation of JMeter tests, take a look at the following sources:
JMeter API
How to Write a plugin for JMeter
Five Ways To Launch a JMeter Test without Using the JMeter GUI
If you are looking for an example project, you can check out the jmeter-from-code solution which demonstrates creating a JMeter Test Plan programmatically, storing it into a .jmx script file, running it, and getting the .jtl results file.
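For a self-contained starting point along those lines, a plan with one Thread Group and one HTTP sampler, built through the JMeter API, saved to .jmx and run in-process, might look roughly like this (the JMeter home path and target host are placeholders):

```java
import java.io.FileOutputStream;

import org.apache.jmeter.control.LoopController;
import org.apache.jmeter.engine.StandardJMeterEngine;
import org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy;
import org.apache.jmeter.save.SaveService;
import org.apache.jmeter.testelement.TestPlan;
import org.apache.jmeter.threads.ThreadGroup;
import org.apache.jmeter.util.JMeterUtils;
import org.apache.jorphan.collections.HashTree;

public class BuildPlan {
    public static void main(String[] args) throws Exception {
        // Point these at a local JMeter installation (placeholder path).
        JMeterUtils.setJMeterHome("/opt/apache-jmeter-5.6");
        JMeterUtils.loadJMeterProperties("/opt/apache-jmeter-5.6/bin/jmeter.properties");
        JMeterUtils.initLocale();

        // One GET request against a placeholder host.
        HTTPSamplerProxy sampler = new HTTPSamplerProxy();
        sampler.setDomain("example.com");
        sampler.setPort(443);
        sampler.setProtocol("https");
        sampler.setPath("/api/users");
        sampler.setMethod("GET");

        // Single thread, single iteration.
        LoopController loop = new LoopController();
        loop.setLoops(1);
        loop.setFirst(true);
        loop.initialize();

        ThreadGroup threadGroup = new ThreadGroup();
        threadGroup.setNumThreads(1);
        threadGroup.setRampUp(1);
        threadGroup.setSamplerController(loop);

        // Assemble the HashTree: TestPlan -> ThreadGroup -> Sampler.
        TestPlan testPlan = new TestPlan("Programmatic plan");
        HashTree tree = new HashTree();
        tree.add(testPlan);
        tree.add(testPlan, threadGroup).add(sampler);

        // Export to .jmx; to open the file in the JMeter GUI you may also need
        // to set the TEST_CLASS/GUI_CLASS properties on each element.
        try (FileOutputStream out = new FileOutputStream("generated.jmx")) {
            SaveService.saveTree(tree, out);
        }

        // ...and run it in-process.
        StandardJMeterEngine engine = new StandardJMeterEngine();
        engine.configure(tree);
        engine.run();
    }
}
```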

Configure a proxy to make a request or run a test on Runscope

I have not been able to find out whether there is a way to configure a proxy on Runscope (or using runscope-radar). This is exactly my problem:
I want to make a test with this flow:
make a request to our API and save some data
make a request to an external API and save some data
make another request to our API
To be able to make a request to the external API I am using a proxy (I can execute it with Postman and see the response).
It is important to mention that this test is going to run in our CI pipeline too, which uses TravisCI, so the solution has to be generic enough to be executed on other machines.
Thanks!
The agent will use the proxy settings set in the HTTPS_PROXY and HTTP_PROXY environment variables. It's not possible to configure a proxy otherwise.
Edit: I was absolutely wrong, but I feel like someone googling might happen upon this thread and this could be helpful.
All credit goes to John for his useful comment. Stringing together multiple requests does exist in Runscope:
https://www.runscope.com/docs/api-testing

Integration testing with Web API - non-InMemory tests or InMemory tests

I would like to do integration testing on my Web API Controllers.
When the integration test starts, the whole request/response pipeline of the Web API should be processed, so it's a real integration test.
I have read some blogs about non-InMemory tests and InMemory tests. I need to know what the difference is and which of those approaches matches my above criteria.
I would really be glad about some explanations from people who have actually dealt with integration testing on Web API, for self-hosting or IIS hosting (if there is a difference in testing...).
Not sure what you mean by non-in-memory testing, but with integration testing involving an in-memory hosted Web API, the requests are sent directly to the HttpServer, which is basically the first component to run in the ASP.NET Web API pipeline. This means the requests do not hit the network stack, so you don't need to worry about running on specific ports, etc., and if you write a good number of tests, the time it takes to run them all will not be too big, since you deal with memory rather than the network. You should get running times comparable to typical unit tests. Look at this excellent post from Kiran for more details on in-memory testing. In-memory testing will exercise all the components you set up to run in the pipeline, but one thing to watch out for is formatters: if you send ObjectContent in the request, there is no need to run the media formatters, since the request is already in deserialized form, and hence media formatting does not happen.
If you want to get closer to production and are willing to take a hit on running time, you can write your tests against a self-host. Is that what you mean by non-in-memory testing? As an example, you can use OWIN self-hosting: use the Katana hosting APIs, host your Web API, and hit it with your requests. Of course, this will use the real HttpListener and the requests do traverse the network stack, although it all happens on the same machine. The tests will be comparatively slower, but you get much closer to your prod runs.
I personally have not seen anyone using web hosting (IIS) and doing lots of integration testing. It is technically possible to fire off your requests using HttpClient and inspect the response and assert stuff, but you will not have a lot of control over arranging your tests programmatically.
My choice is to mix and match: use in-memory tests as much as possible, and use a Katana-based host only for those specific cases where I really need to hit the network.
