How does one unit test network-dependent operations? - xcode

Some of my apps grab data from the internet. I'm getting into the habit of writing unit tests, and I suspect that if I write a test for values returned from the internet, I will run into latency and/or reliability problems.
What's a valid way to test data that "lies on the other end of a web request"?
I'm using the stock unit testing toolkit that comes with Xcode, but the question theoretically applies to any testing framework.

A unit test focuses specifically on the business logic of your class. There would be no latency or reliability problems, because you would use mock objects to simulate whatever your class actually interacts with.
What you are describing is some form of integration testing, which does not seem to be what the OP intends.
You should "obscure" the other end by mocking it, rather than really accessing the network, a remote database, etc.
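A minimal sketch of that kind of mocking with XCTest; every type name here is hypothetical, invented purely for illustration:

    import XCTest

    // The network dependency is hidden behind a protocol so tests can substitute a mock.
    protocol SongFetching {
        func fetchSongTitles() throws -> [String]
    }

    struct MockSongFetcher: SongFetching {
        var titles: [String]
        func fetchSongTitles() throws -> [String] { titles }
    }

    // The class under test only knows about the protocol, not the real network code.
    final class SongListViewModel {
        private let fetcher: SongFetching
        init(fetcher: SongFetching) { self.fetcher = fetcher }

        func sortedTitles() throws -> [String] {
            try fetcher.fetchSongTitles().sorted()
        }
    }

    final class SongListViewModelTests: XCTestCase {
        func testTitlesAreSorted() throws {
            let viewModel = SongListViewModel(fetcher: MockSongFetcher(titles: ["b", "a"]))
            XCTAssertEqual(try viewModel.sortedTitles(), ["a", "b"])
        }
    }

The test exercises the business logic (sorting) with no network, no latency, and no flakiness.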

Among others:
introduce artificial latency in requests
use another machine on the same network or at least another VM
test connection failures, either by connecting to a non-existent server or by physically cutting the connection (see the sketch after this list)
test for incomplete data (the connection could be cut halfway through)
test for duplicate data (the app could try to submit the request more than once if it thinks the first attempt was not successful, which in some scenarios may lead to lost data)
All of these should fail gracefully (on either the server side or the client side).
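For the connection-failure case, one way to simulate it without touching real cables is a URLProtocol stub that fails every request. A minimal sketch; the specific error code and session wiring are my own choices, not requirements:

    import XCTest

    // A URLProtocol subclass that fails every request, simulating a dead connection.
    final class FailingURLProtocol: URLProtocol {
        override class func canInit(with request: URLRequest) -> Bool { true }
        override class func canonicalRequest(for request: URLRequest) -> URLRequest { request }
        override func startLoading() {
            client?.urlProtocol(self, didFailWithError: URLError(.notConnectedToInternet))
        }
        override func stopLoading() {}
    }

    final class ConnectionFailureTests: XCTestCase {
        func testRequestFailsGracefully() {
            let config = URLSessionConfiguration.ephemeral
            config.protocolClasses = [FailingURLProtocol.self]   // route all traffic through the stub
            let session = URLSession(configuration: config)

            let exp = expectation(description: "request completes with an error")
            session.dataTask(with: URL(string: "https://example.com/api")!) { _, _, error in
                XCTAssertNotNil(error)   // the code under test should surface this, not crash
                exp.fulfill()
            }.resume()
            waitForExpectations(timeout: 2)
        }
    }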

I posed this question to the venerable folks on the #macdev IRC channel on freenode.net, and I got a few really good answers.
Mike Ash (of mikeash.com) suggests implementing a local web server inside my app. For complex cases, I'd probably do this. However, I'm just using NSData's built-in initWithContentsOfURL:(NSURL *)url method.
For simpler cases, Mike says an alternative is to pass base64-encoded dummy data directly to the NSData initializer. (Use data://dummyDataEncodedAsBase64GoesAfterTheDataProtocolThingy.)
Similarly, "alistra" suggests using local file URLs that point to files containing mock data.
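Both of those simpler approaches, sketched in Swift (the fixture file name and the JSON payload are made up for illustration):

    import XCTest

    final class MockDataTests: XCTestCase {
        // Base64 route: decode canned bytes directly, no URL loading at all.
        func testParsingFromBase64DummyData() throws {
            // "eyJzb25nIjoidGVzdCJ9" is {"song":"test"} encoded as base64
            let data = try XCTUnwrap(Data(base64Encoded: "eyJzb25nIjoidGVzdCJ9"))
            let parsed = try JSONSerialization.jsonObject(with: data) as? [String: String]
            XCTAssertEqual(parsed?["song"], "test")
        }

        // File URL route: the production code takes a URL, so the test hands it a
        // file:// URL pointing at a fixture bundled with the test target.
        func testParsingFromLocalFileURL() throws {
            let fixtureURL = try XCTUnwrap(
                Bundle(for: MockDataTests.self).url(forResource: "mock_response", withExtension: "json")
            )
            let data = try Data(contentsOf: fixtureURL)   // same API shape as the network case
            XCTAssertFalse(data.isEmpty)
        }
    }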

Related

Mocking 3rd party integrations outside of the context of a test

In a lot of the apps I work on, we have this problem where we heavily rely on 1st- and 3rd-party APIs. So much so that in some of our apps it is useless to try to log in without those APIs being in place. Either critical pieces of data are not there, or the entire app is like a server-side-rendered SPA that houses no data of its own but pulls it from an API at the time of a request (we cache it when we can).
This raises a huge problem when trying to develop the app locally since we do not have a sandbox environment. Our current solution is to create a service layer in between our business logic and the actual HTTP calls. We then, in our local environments, swap out the HTTP implementation for a class that just returns fake data. This works pretty well most of the time except for a couple of issues:
This only really gives us one state of the application at a time. Unlike data in the database, we are not able to easily run different seeders to replicate different scenarios.
If we run into a bug in production, we have no way of replicating the API response without actually diving into the code and adding some conditional to return that specific response. With data that is stored in the database, it is easy to log in to TablePlus and manually set up some condition, or even pull down select tables from production.
In our mocks, the functions can get quite large and nasty if we try to have them respond dynamically with a different response based on, for example, the resource ID being requested.
This makes the overhead of creating each test for each scenario quite high, in my opinion. If we could use something similar to a database factory to generate a bunch of different request-response pairs, we could test a lot more cases, and if we could somehow set up certain scenarios dynamically, it would help when we are trying to replicate bugs we run into in production.
Since our applications are built with Laravel and PHP, mocks, unlike the database, don't persist from one request to another. We cannot simply open a tinker session and start seeding our API integrations with data like we can with the database.
I was trying to think of a way to do it with a cache and set request-response pairs. This could also be moved to the database, but I would prefer not to have an extra table there that is only used locally.
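To make the idea concrete, here is a minimal sketch of the kind of scenario-seeded fake I have in mind (sketched in Swift rather than PHP, with made-up names, just to show the shape):

    import Foundation

    // A canned response keyed by method + path; a real project would also key on query/body.
    struct StubResponse {
        var status: Int
        var body: Data
    }

    // Scenario-seedable fake: tests (or a local dev build) register request-response
    // pairs up front, much like seeding a database with a factory.
    final class FakeAPIClient {
        private var stubs: [String: StubResponse] = [:]

        func stub(_ method: String, _ path: String, with response: StubResponse) {
            stubs["\(method) \(path)"] = response
        }

        func send(_ method: String, _ path: String) -> StubResponse {
            stubs["\(method) \(path)"] ?? StubResponse(status: 404, body: Data())
        }
    }

    // Example "scenario seeder": replicate a production bug by pinning one endpoint's payload.
    func seedMissingProfileScenario(into client: FakeAPIClient) {
        client.stub("GET", "/api/user/42",
                    with: StubResponse(status: 200, body: Data(#"{"id":42,"profile":null}"#.utf8)))
    }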
Any ideas?

Integration testing with Web API - non-InMemory tests or InMemory tests

I would like to do integration testing on my Web API Controllers.
When the integration test runs, the whole request/response pipeline of the Web API should be processed, so it's a real integration test.
I have read some blog posts about non-in-memory tests and in-memory tests. I need to know what the difference is and which of those approaches matches my criteria above.
I would really be glad to get some explanation from people who have dealt with integration testing of Web API under self-hosting or IIS hosting (if there is a difference in testing...).
Not sure what you mean by non-in-memory testing, but with integration testing involving an in-memory hosted Web API, the requests are sent directly to the HttpServer, which is basically the first component to run in the ASP.NET Web API pipeline. This means the requests do not hit the network stack, so you don't need to worry about running on specific ports, etc., and even if you write a good number of tests, the time it takes to run them all will not be too long, since everything stays in memory rather than going over the network. You should get running times comparable to a typical unit test. Look at this excellent post from Kiran for more details on in-memory testing. In-memory testing will exercise all the components you set up to run in the pipeline, but one thing to watch out for is formatters. If you send ObjectContent in the request, there is no need to run media formatters, since the request is already in deserialized form, and hence media formatting does not happen.
If you want to get closer to production and are willing to take a hit on running time, you can write your tests using a self-host. Is that what you mean by non-in-memory testing? As an example, you can use OWIN self-hosting: use the Katana hosting APIs to host your Web API and hit it with your requests. Of course, this will use the real HttpListener, and the requests do traverse the network stack, although it all happens on the same machine. The tests will be comparatively slower, but you probably get much closer to your production runs.
I personally have not seen anyone using web hosting and doing lots of integration testing. It is technically possible to fire off your requests using HttpClient, inspect the response, and assert on it, but you will not have a lot of control over arranging your tests programmatically.
My choice is to mix and match: use in-memory tests as much as possible, and use a Katana-based host only for those specific cases where I really need to hit the network.
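Framework and language aside, the in-memory idea is simply: call the pipeline's entry point directly instead of going through a socket. A toy sketch of that contrast (the Request/Response types and handler are invented for illustration, not a real Web API host):

    import XCTest

    // Toy stand-ins for a web framework's request/response types.
    struct Request { var method: String; var path: String }
    struct Response { var status: Int; var body: String }

    // The "pipeline": in a real app this is whatever your framework dispatches into.
    func handle(_ request: Request) -> Response {
        switch (request.method, request.path) {
        case ("GET", "/health"): return Response(status: 200, body: "ok")
        default:                 return Response(status: 404, body: "not found")
        }
    }

    final class InMemoryPipelineTests: XCTestCase {
        // In-memory style: call the entry point directly, no ports, no network stack.
        func testHealthEndpoint() {
            let response = handle(Request(method: "GET", path: "/health"))
            XCTAssertEqual(response.status, 200)
        }
        // A self-hosted variant would instead start a real listener and hit
        // http://localhost:<port>/health with an HTTP client, trading speed for fidelity.
    }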

When testing an API - Should I test API methods validation?

I've inherited a project which will be connecting to a CRM system via a SOAP service, written by another dev. My question is: to what level should I be testing the interface with the SOAP services?
I set up a test case and wrote some methods to test a SOAP update method, and confirmed that it failed with a suitable error code for invalid customers or order numbers.
I next tested an invalid order status value (not within the set of expected parameters), and the service returned a success code, which wasn't expected.
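The kind of check I mean, sketched as a test with hypothetical names (OrderClient, its method, and the result codes are invented; they are not the real generated SOAP proxy):

    import XCTest

    // Hypothetical wrapper around the SOAP update call, for illustration only.
    struct OrderUpdateResult { var code: String }   // e.g. "OK", "ERR_INVALID_STATUS"

    protocol OrderClient {
        func updateOrder(number: String, status: String) throws -> OrderUpdateResult
    }

    final class OrderStatusValidationTests: XCTestCase {
        var client: OrderClient!   // injected with a SOAP client pointed at a test environment

        func testInvalidStatusIsRejected() throws {
            try XCTSkipIf(client == nil, "requires a configured SOAP test client")
            let result = try client.updateOrder(number: "12345", status: "NOT_A_REAL_STATUS")
            // Expected: an error code, not success. This is the case that currently fails.
            XCTAssertNotEqual(result.code, "OK")
        }
    }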
I believe I should report this to the developer, but should I now remove this test from my test suite? Or leave it showing as a fail?
If the SOAP service chooses not to validate its input parameters, I think that's poor design, but it's not a fault in MY code. I just need to ensure I clean the input before passing values to the other system, and that validation should be covered by another set of tests anyway.
Should I even be talking to the SOAP service through the Unit Tests in the first place?
You should write at least one test per atomic functional requirement. Now, if you are following minimal-interface guidelines, then you should have at most one interface per atomic functional requirement. But you can write more than one test for each requirement, since there may be a variety of invariants that can be tested.
In general, you should think of invariants and functional requirements when writing tests, not interface methods. Tests >= Atomic Functional Requirements.
Think of the service "contract": what are its pre-conditions (i.e. legal inputs), its post-conditions (i.e. legal outputs), and its invariants (legal service state)? If these have not been made clear by the developer, or there's a chance another developer is misusing the service, this should be reported and handled.
One exception, though: this is all nice and good in theory, but if there are no other consumers of the service (besides maybe the original developer), sometimes the extra checking is superfluous. It is quite reasonable to assume that in such cases invalid inputs are checked and eliminated by the consumer code.
If the library (or 3rd party code) has a good reputation (for example, Apache Commons, Guava...), I don't retest the API.
However, when I am not sure of the quality of the code, I tend to write a couple of tests verifying my assumptions about the library/API (but I don't retest the whole library).
If those simple assumptions fail, it is a very bad sign for me. In this case, I tend to write more tests to check more aspects of the library. In your case, I would write more tests and signal errors to the developer.
These tests are not lost, because if a new version of the library is delivered, you can still use them to check for regressions.
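Such an assumption check can be very small; a sketch against a standard-library API, purely to show the shape of a "learning test":

    import XCTest

    // A "learning test": it pins down an assumption about an API we depend on rather
    // than our own code. If a new version breaks the assumption, we find out here.
    final class LibraryAssumptionTests: XCTestCase {
        func testBase64DecodingRejectsInvalidInput() {
            // Assumption: Data(base64Encoded:) returns nil for non-base64 text.
            XCTAssertNil(Data(base64Encoded: "not base64!!!"))
        }

        func testBase64RoundTripPreservesBytes() {
            let original = Data([0x00, 0xFF, 0x10, 0x20])
            let decoded = Data(base64Encoded: original.base64EncodedString())
            XCTAssertEqual(decoded, original)
        }
    }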

Performance testing an application for bottlenecks using production data

I have been tasked with looking for a performance testing solution for one of our Java applications running on a Weblogic server. The requirement is to record production requests (both GET and POST including POST data) and then run these requests in a performance test environment with a copy of the production database.
The reasons for using production requests instead of a test script are:
It is a large application with no existing test scripts, so it would be a large amount of work to write scripts to cover the entire application.
Some performance issues only appear when users do a number of actions in a particular order.
To test using actual user interaction with the system, not an estimate of how users might interact with the system. We all know that users will do things we have not thought of.
I want to be able to fix performance issues and rerun the requests against the fixed code before releasing to production.
I have looked at using JMeter's Access Log Sampler with the server access logs; however, the access logs do not contain POST data, and the Access Log Sampler only looks at the request URL, so it cannot simulate users submitting form data.
I have also looked at using the JMeter HTTP Proxy Server; however, this can record the actions of only one user and requires the user to configure their browser to use the proxy. The same limitation exists with Tsung and The Grinder.
I have looked at using Wireshark and tcpreplay, but recording at the packet level is excessive and will not give any useful reports at the request level.
Is there a better way to analyze production performance considering I need to be able to test fixes before releasing to production?
That is going to be a hard ask. I work with Visual Studio Test Edition to load test my applications, and we are only able to "estimate" the users' activity on the site.
It is possible to look at the logs and gather information on the likelihood of certain paths through your app. You can then look at the production database to see the likely values entered in any POST requests. From that you will have to build load tests that approximate the usage patterns of your production site.
With any current tools, I don't think it is possible to record and play back actual user interaction.
It is possible to alter your web app so that it records and logs every request and POST against session and datetime. This custom logging could then be used to generate load test requests against a test website. This would be a serious code change to your existing site and would likely have performance impacts.
That said, I have worked with web apps that do this level of logging and the ability to analyse the exact series of page posts/requests that caused an error is quite valuable to a developer.
So in summary: It is possible, but I have not heard of any off the shelf tools that do it.
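A rough sketch of the kind of custom request logging described above (language, field names, and file handling are illustrative only; the real app would be Java on WebLogic): each request is appended as one JSON line that a replay tool could later read back.

    import Foundation

    // One production request, captured with enough context to replay it later.
    struct LoggedRequest: Codable {
        var timestamp: Date
        var sessionID: String
        var method: String
        var path: String
        var body: String?
    }

    // Append-only JSON-lines log; a replay tool can read it back line by line.
    final class RequestLogger {
        private let fileURL: URL
        init(fileURL: URL) { self.fileURL = fileURL }

        func log(_ request: LoggedRequest) throws {
            let encoder = JSONEncoder()
            encoder.dateEncodingStrategy = .iso8601
            var line = try encoder.encode(request)
            line.append(0x0A)   // newline separator between records

            if !FileManager.default.fileExists(atPath: fileURL.path) {
                try line.write(to: fileURL)
            } else {
                let handle = try FileHandle(forWritingTo: fileURL)
                defer { handle.closeFile() }
                _ = handle.seekToEndOfFile()
                handle.write(line)
            }
        }
    }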
Please check out this whitepaper by Impetus Technologies on this page: http://www.impetus.com/plabs/sandstorm.html
Honestly, I'm not sure the task you're being asked to do is even possible, let alone a good idea. Depending on how complex the application's backend is, and how perfectly you can recreate the state (i.e. all the way down to external SOA services or the time/clock), it may not be possible to make those GET and POST requests reproduce the same behavior.
That said, performance testing against production data is always great, but it usually requires application-specific knowledge that will stress said data. Simply repeating HTTP GETs and POSTs will almost certainly not yield useful results.
Good luck!
I would suggest the following to capture the production requests and simulate an accurate workload:
1) Use Coremetrics: Coremetrics provides solutions with which you can learn the application's usage patterns. This would help in coming up with an accurate workload model. That model can then be converted into test scripts and executed against a masked copy of the production database. This will give you accurate results about the application's performance in real time.
2) Another option would be creating a small utility using AOP (an aspect-oriented approach) so that it can trace all the requests and the corresponding method traces. This would help in identifying the production usage pattern and, in turn, in simulating the workload accurately. AOP frameworks such as AspectJ can be used. This would not require any changes in code; the instrumentation can be done on the fly. The other benefit is that this can be enabled only for a specific time window and then turned off.

Starting out, any suggestions?

I started working in C# a few months ago and I am looking for something more challenging and interesting. I use a media player called MediaMonkey that supports custom VB scripts; I made one that writes a file to a directory with the currently playing song in it, and the file is updated every time a new song plays by rewriting what was there before.
Now I want to add this information to a database and keep a record of this and possibly add the information on my home page. I know I can hack a way for it to work, but I want to know what would be the "professional way" of doing things.
I came up with the following and got stuck. I would need an ODBC driver to connect to a database, which seems messy; would a web service work? How would that work? Can a VBScript call a DLL file to call a web service that modifies data on a separate server? Is that safe to do?
Many professional C# apps are n-tier. In your case, you would probably layer it like this:
On the server:
-Database Store
-Database Access/Business layer(sometimes two distinct components, depending on how complex the app is)
-Web Service
On the client:
-Web Service Client
-Any other layers to support client functionality.
So the Database Store would be something like some tables in an Oracle or Microsoft SQL Server database, and would live on your server.
The Database Access/Business layer would be your code that retrieves data from and stores data to your database. It might also contain business objects, which are basically classes that have properties representing your data from your database. The benefit of the data access layer is that reading from and writing to a database can sometimes require specialized code, and you don't want that code sprinkled throughout your application. So instead you can call functions in your data access layer that load the needed data into objects, so the rest of your application is just interacting with a regular old .NET object/class. These are called POCOs, which stands for Plain Old CLR Object. There are lots of variations on this, of course, as people have taken different approaches to the problem of isolating database access. It also serves the purpose of minimizing breaking changes whenever the database changes: since the database access logic is not sprinkled throughout the app, there are fewer places that need to be updated if the database changes (such as adding new columns to a table or changing a name).
Sometimes the business layer will be its own layer, containing most of the "logic" of the application; it would sit between the data access and web service layers. Using concepts from Service-Oriented Architecture (SOA), you might have an authentication service and a web request handling service. These services are a lot like a class that is always instantiated, waiting there to process requests. Your web request handling service would take a request and might first call into the authentication service to verify credentials before honoring the request. SOA is one of those things I think should be used only when appropriate. In some cases just using object-oriented techniques will give you the same benefits. Not always, though. SOA, when done right, is more scalable, so it really depends on whether SOA offers you additional benefits that you need.
The Webservice would be responsible for receiving requests from the web, parsing/interpreting them, and acting on those requests by making calls into your business layer to update or retrieve data.
So the concept here would be that you could have many users of your service who publish their song updates through your service.
Your client would have a "web service client" layer which would be responsible for formatting requests into messages, sending them to the web service, and retrieving messages from the web service. You would put very little application "logic" in your web service layer.
Now all this is probably overkill and inefficient for what you are wanting to do, since you just want something for yourself, but it's the basic anatomy of a lot of web service applications and would be a good learning exercise. The whole purpose of the layers is decoupling and simplicity. While more layers/components make the application as a whole more complex, it means each component is simpler. This means it's easier to wrap your head around problems when you are only dealing with one component which interacts with only a couple of other components (the surrounding layers). So there is a careful balance between few components and many components. Too few, and they become monolithic and difficult to manage. Too many, and they become intertwined in complex ways. I have heard it said something along the lines of "If a class is getting too big and too complex, then split it up into a few more classes." In essence, don't start subdividing stuff just because it sounds like the right thing to do. Evaluate how complex your component is going to be before deciding whether you want to split it up. Sometimes, for simple cases, you have a layer serving more than one purpose, for the sake of getting it done faster and making the overall design simpler. The point is, apply these concepts where appropriate. You will learn what is appropriate with experience, and you obviously understand that you can learn the most by "doing".
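To make the layering concrete, here is a small sketch of the shape (hypothetical Swift types standing in for the C#/.NET pieces; protocols mark the data access and web service boundaries):

    import Foundation

    // "POCO"-style plain object carried between layers.
    struct SongPlay: Codable {
        var title: String
        var playedAt: Date
    }

    // Data access layer: the only code that knows how storage works.
    protocol SongPlayStore {
        func save(_ play: SongPlay) throws
        func recentPlays(limit: Int) throws -> [SongPlay]
    }

    // Business layer: logic only, no storage or transport details.
    final class NowPlayingService {
        private let store: SongPlayStore
        init(store: SongPlayStore) { self.store = store }

        func recordPlay(title: String) throws {
            try store.save(SongPlay(title: title, playedAt: Date()))
        }
    }

    // Web service layer: parses the request, delegates to the business layer.
    final class NowPlayingEndpoint {
        private let service: NowPlayingService
        init(service: NowPlayingService) { self.service = service }

        // e.g. handles a POST with a JSON body like {"title":"..."}
        func handlePost(body: Data) -> Int {
            guard let payload = try? JSONDecoder().decode([String: String].self, from: body),
                  let title = payload["title"] else { return 400 }
            do { try service.recordPlay(title: title); return 200 }
            catch { return 500 }
        }
    }

Each layer only talks to the one next to it, which is what keeps the components individually simple.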
"Can vbscript call a COM component?" You can compile .NET DLLs with COM support. Many older things can call COM dlls.
I googled: vbscript dll
and got this: VB Script and DLLs
"Is that safe to do?" Your webservice will be where you would be most concerned with security. It's safe only if you design with security in mind and don't screw up. We all screw up sometimes though, which means there is no guarantee of it being perfectly secure.
