Integration testing with Web API - non-InMemory tests or InMemory tests (asp.net-web-api)

I would like to do integration testing on my Web API Controllers.
When the integration test runs, the whole request/response pipeline of the Web API should be processed, so it's a real integration test.
I have read some blogs about non-InMemory tests and InMemory tests. I need to know what the difference is and which of those approaches matches my criteria above.
I would be glad to get some explanations from people who have actually dealt with integration testing on Web API, whether self-hosted or IIS-hosted (if there is a difference when testing...).

Not sure what you mean by non-in-memory testing, but with integration testing involving an in-memory hosted Web API, the requests are sent directly to the HttpServer, which is basically the first component to run in the ASP.NET Web API pipeline. This means the requests do not hit the network stack, so you don't need to worry about running on specific ports, and even if you write a good number of tests, the total running time stays small, since everything happens in memory rather than over the network. You should get running times comparable to a typical unit test. Look at this excellent post from Kiran for more details on in-memory testing. In-memory testing will exercise all the components you set up to run in the pipeline, but one thing to watch out for is formatters. If you send ObjectContent in the request, there is no need to run media formatters, since the request is already in deserialized form, and hence media formatting does not happen.
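As a rough sketch of what an in-memory test can look like (assuming a hypothetical ValuesController and the default route; the names are illustrative, not from the original post), the HttpClient is wired straight to the HttpServer, so no port or network is involved:

    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Web.Http;
    using Xunit;

    public class ValuesInMemoryTests
    {
        [Fact]
        public async Task Get_Values_Returns_Success()
        {
            // Build the same configuration the real host would use.
            var config = new HttpConfiguration();
            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional });

            // HttpServer is the entry point of the Web API pipeline;
            // handing it to HttpClient keeps the whole request in memory.
            using (var server = new HttpServer(config))
            using (var client = new HttpClient(server))
            {
                var response = await client.GetAsync("http://localhost/api/values");
                Assert.True(response.IsSuccessStatusCode);
            }
        }
    }

The host name in the URL is arbitrary here, since the request never leaves the process.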
If you want to get closer to production and are willing to take a hit on running time, you can write your tests against a self-host. Is that what you mean by non-in-memory testing? As an example, you can use OWIN self-hosting: use the Katana hosting APIs to host your Web API and hit it with your requests. Of course, this will use the real HttpListener, and the requests do traverse the network stack, although it all happens on the same machine. The tests will be comparatively slower, but you probably get much closer to your production runs.
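A comparable sketch for the Katana self-hosted variant (assuming the Microsoft.Owin.Hosting and Microsoft.AspNet.WebApi.Owin packages, plus an illustrative Startup class and port of my choosing); the request now really goes through HttpListener on localhost:

    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Web.Http;
    using Microsoft.Owin.Hosting;
    using Owin;
    using Xunit;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            var config = new HttpConfiguration();
            config.Routes.MapHttpRoute(
                "DefaultApi", "api/{controller}/{id}",
                new { id = RouteParameter.Optional });
            app.UseWebApi(config);
        }
    }

    public class ValuesSelfHostTests
    {
        [Fact]
        public async Task Get_Values_Over_HttpListener_Returns_Success()
        {
            const string baseAddress = "http://localhost:9443/";

            // WebApp.Start spins up Katana's HttpListener-based host.
            using (WebApp.Start<Startup>(baseAddress))
            using (var client = new HttpClient())
            {
                var response = await client.GetAsync(baseAddress + "api/values");
                Assert.True(response.IsSuccessStatusCode);
            }
        }
    }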
I personally have not seen anyone doing lots of integration testing against web hosting (IIS). It is technically possible to fire off your requests using HttpClient, inspect the response and assert things, but you will not have a lot of control over arranging your tests programmatically.
My choice is to mix and match, that is, use in-memory tests as much as possible and use a Katana-based host only for those specific cases where I really need to hit the network.
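One possible way to organize that mix, sketched with made-up names: a small factory that hands the test an HttpClient wired either to an in-memory HttpServer or to a Katana host, so the same test body can run in either mode:

    using System;
    using System.Net.Http;
    using System.Web.Http;
    using Microsoft.Owin.Hosting;

    public static class TestClientFactory
    {
        // In-memory: the client talks straight to the Web API pipeline.
        public static HttpClient InMemory(HttpConfiguration config)
        {
            return new HttpClient(new HttpServer(config));
        }

        // Self-hosted: the client goes through the network stack on localhost.
        // The caller must keep the returned host alive for the test's duration.
        public static HttpClient SelfHosted<TStartup>(string baseAddress, out IDisposable host)
        {
            host = WebApp.Start<TStartup>(baseAddress);
            return new HttpClient { BaseAddress = new Uri(baseAddress) };
        }
    }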

Related

Can I use react testing library with jest to do integration tests?

My app has a React front-end and ASP.NET core Web API back-end. I've built some unit tests (i.e. which stub out fetch()) with react-testing-library.
Now I want to do an integration test calling the real back-end API over HTTP. There are lots of standalone API testing tools I could use for this, but then I'm still not testing the interface between the React components and the server.
Is there any reason I shouldn't simply write Jest tests that don't stub out fetch(), and achieve a real end-to-end test? This seems quite obvious to me, but I haven't seen any articles discussing it.
FWIW, after getting the first few tests going, it seems to be a pretty successful approach. I have been able to "almost" remote-control the UI (by which I mean it's not actually running in a browser, but it is using my React components) without having to learn or set up Selenium.
The biggest headache was JSDOM's implementation of fetch (which, in its defence, probably wasn't meant to do this kind of thing). I ran into lots of problems with cookies and CORS, so I used node-fetch instead, which gave me much more control over the HTTP requests I was generating.

How do I Performance test a 3rd party API which has limitations on number of concurrent users?

I am new to Performance testing.
I want to performance test my application, which calls a 3rd-party API (Transunion), for 2000 concurrent users using JMeter. The 3rd-party Transunion API has a limit of 5 concurrent users at a time.
How should I do Performance testing?
Thanks.
You should not be testing the 3rd-party API, as it is not something you can control; even if you discover that it doesn't support 5 users but only 1, there is not much you will be able to do about it.
Your load test should focus solely on your application and your domain; all external stuff like banners, images, maps, videos and 3rd-party integrations should be excluded.
If the 3rd-party API is an integral part of your application, you can use the Mock Object pattern to avoid making the real call to the 3rd-party API and return a "dummy" response instead. If you cannot implement mocking in your application code, you can use an external program like WireMock or HTTP API Mock or similar.
But be aware that the whole integrated system acts at the speed of its slowest component, so if this 3rd-party integration is an essential part of your application, I don't see much sense in performing the load test apart from concurrency testing along the lines of "what happens if 2 to 5 users do X at the same moment".
This is a classic use case for API simulations / remote mocks. They act as stand-ins for real 3rd-party API dependencies that may be unavailable, impose rate limits or throttle calls, charge per transaction, etc., and they can also simulate slowness in such APIs, inject faults, and more, to help with end-to-end integration or performance testing.
There is a choice of tools out there with different capabilities; you can write your own, too.
Disclaimer: I am involved with one such tool so I'll refrain from providing tool recommendations :-)

How to create Performance testing framework in jmeter?

For functional automation we usually create a framework that is reusable for automating applications. Is there any way to create a performance testing framework in JMeter, so that we can use the same framework for performance testing of different applications?
Please help if anyone knows, and provide more information about it.
You can consider JMeter itself as a "framework" which already comes with test elements to build requests over different protocols/transports, apply assertions, generate reports, etc.
It is highly unlikely you will be able to re-use an existing script for another application, as JMeter acts at the protocol level, so the requests will differ from application to application.
There is a mechanism in JMeter for re-using pieces of a test plan as modules so you won't have to duplicate your code (check out Test Fragments and the Module Controller), however it is more applicable to a single application.
The only "framework-like" approach I can think of is adding your JMeter tests to your continuous integration process, so you have a build step which executes the performance tests and publishes the reports. Basically, you will be able to re-use the same test setup and reporting routine, and the only thing which changes from application to application will be the .jmx test script(s). See the JMeter Maven Plugin and/or the JMeter Ant Task for more details.
You must first ask yourself how dynamic the conversation you are attempting to replicate is. If you have a very stable services API where the exposed external interface is static, but the code that handles it on the back end is changing, then you have a good shot at building something with a long life.
But if you are like the majority of web sites in the universe, then you are dealing with developers who are always changing something: adding a resource, adding or deleting form values (hidden or not), headers, etc. In this case you should consider your scripts perishable, with a limited life, and you will need to rebuild them at some point.
Having noted the limited lifetime of a piece of code that tests a piece of code with a limited lifetime, are there techniques you can use to insulate yourself? Yes. The rule of thumb is that the higher up the stack you go to build your test scripts, the more insulated you are from changes under the covers (assuming the layer you build to is stable). The trade-off is that with more of the intelligence under the covers of your test interface, the resource cost per virtual user is higher, which dictates more hosts for test execution and more skew from client-side code that can distort the view of what is coming from the server. For example, run a Selenium script instead of a base JMeter script: a browser is invoked, you get the benefit of all the local JavaScript processing to handle the dynamic changes, and your script has a longer life.

How does one unit test network-dependent operations?

Some of my apps grab data from the internet. I'm getting into the habit of writing unit tests, and I suspect that if I write a test for values returned from the internet, I will run into latency and/or reliability problems.
What's a valid way to test data that "lies on the other end of a web request"?
I'm using the stock unit testing toolkit that comes with Xcode, but the question theoretically applies to any testing framework.
A unit test is focused specifically on the business logic of your class. There would be no latency, reliability issues, etc., as you would use mock objects to simulate whatever you actually interact with.
What you are describing is some form of integration testing, which does not seem to be what the OP intends.
You should "obscure" the other end by mocking it and not really access the network, a remote database, etc.
Among others:
introduce artificial latency in requests
use another machine on the same network or at least another VM
test connection failures (either by connecting to a non-existent server or by physically cutting the connection)
test for incomplete data (the connection could be cut half way)
test for duplicate data (the app could try to submit the request more than once if it thinks it was not successful, which in some scenarios may lead to lost data)
All of these should fail gracefully (either on the server side or on the client side)
I posed this question to the venerable folks on #macdev IRC channel on freenode.net, and I got a few really good answers.
Mike Ash (of mikeash.com) suggests implementing a local web server inside my app. For complex cases, I'd probably do this. However, I'm just using the built-in initWithContentsOfURL:(NSURL *)url method of NSData.
For simpler cases, Mike says an alternative is to pass base64-encoded dummy data directly to the NSData initializer. (Use data://dummyDataEncodedAsBase64GoesAfterTheDataProtocolThingy.)
Similarly, "alistra" suggests using local file URLs that point to files containing mock data.

Performance testing application for bottle necks using production data

I have been tasked with looking for a performance testing solution for one of our Java applications running on a Weblogic server. The requirement is to record production requests (both GET and POST including POST data) and then run these requests in a performance test environment with a copy of the production database.
The reasons for using production requests instead of a test script are:
It is a large application with no existing test scripts, so it would be a large amount of work to write scripts to cover the entire application.
Some performance issues only appear when users do a number of actions in a particular order.
To test using actual user interaction with the system, not an estimate of how the users may interact with the system. We all know that users will do things we have not thought of.
I want to be able to fix performance issues and rerun the requests against the fixed code before releasing to production.
I have looked at using JMeter's Access Log Sampler with the server access logs; however, the access logs do not contain POST data, and the Access Log Sampler only looks at the request URL, so it cannot simulate users submitting form data.
I have also looked at using the JMeter HTTP Proxy Server; however, this can record the actions of only one user and requires the user to configure their browser to use the proxy. The same limitation exists with Tsung and The Grinder.
I have looked at using Wireshark and TCReplay, but recording at the packet level is excessive and will not give any useful reports at the request level.
Is there a better way to analyze production performance considering I need to be able to test fixes before releasing to production?
That is going to be a hard ask. I work with Visual Studio Test Edition to load test my applications, and we are only able to "estimate" the users' activity on the site.
It is possible to look at the logs and gather information on the likelihood of certain paths through your app. You can then look at the production database to find the likely values entered in any POST requests. From that you will have to build load tests that approximate the usage patterns of your production site.
With any current tools, I don't think it is possible to record and play back actual user interaction.
It is possible to alter your web app so that it records and logs every request and POST against session and datetime. This custom logging could then be used to generate load test requests against a test website. This would be a serious code change to your existing site and would likely have performance impacts.
That said, I have worked with web apps that do this level of logging, and the ability to analyse the exact series of page posts/requests that caused an error is quite valuable to a developer.
So in summary: it is possible, but I have not heard of any off-the-shelf tools that do it.
Please check out this whitepaper by Impetus Technologies: http://www.impetus.com/plabs/sandstorm.html
Honestly, I'm not sure the task you're being asked to do is even possible, let alone a good idea. Depending on how complex the application's back end is, and how perfectly you can recreate its state (i.e. all the way down to external SOA services or the time/clock), it may not be possible to make those GET and POST requests reproduce the same behavior.
That said, performance testing against production data is always great, but it usually requires application-specific knowledge that will stress said data. Simply repeating HTTP GETs and POSTs will almost certainly not yield useful results.
Good luck!
I would suggest the following to capture the production requests and simulate an accurate workload:
1) Use Coremetrics: Coremetrics provides solutions with which you can learn the application's usage patterns. This helps in coming up with an accurate workload model, which can then be converted into test scripts and executed against a masked copy of the production database. This will give you accurate results about the application's performance in real time.
2) Another option would be creating a small utility using AOP (aspect-oriented programming) so that it can trace all the requests and the corresponding method calls. This would help in identifying the production usage pattern and, in turn, accurately simulating the workload. AOP frameworks such as AspectJ can be used. This would not require any changes in code; the instrumentation can be done on the fly. The other benefit is that this can be enabled only for a specific time window and then turned off.
Regards,
batterywalam
