How to test external services? - tdd

We have an application which, while processing some data, has to query a database that our application does not own, i.e. an external database. We have put this querying logic in a separate external-service project, and the main application calls this external service via HTTP. For integration testing of the main application we have mocked out the reply of this HTTP service.
I wanted to know if someone can suggest an optimal way to test this external service itself, which entails testing things like the mapping logic and the logic of the query. This service basically just queries an external database and sends back the result. We have written raw SQL statements to get the data; there is no use of Entity Framework.
Any suggestions would be much appreciated.

When testing an external service it helps if that service has repeatable behaviour.
As an example, if you can say that a specific query to the external service will always return the same predictable result then you can write a test for that.
This isn't always possible as you do not always have control over the data that the external database contains.
I would say that the best testing approaches, in descending order of preference, are:
Create Test Data and Validate
The ideal approach is one where your test can create some new test data and then run the service to confirm that the data is returned as expected. At the end it will tidy up by removing the created test data.
The benefit of this approach is that you can be more confident that an external change will not break the test. You can also test specific features of the service by inserting appropriate data.
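As a rough sketch of this create-validate-tidy cycle (shown in Python, with sqlite standing in for the external database; the table, columns, and fetch_customer function are all invented for illustration):

```python
import sqlite3
import unittest

def fetch_customer(conn, customer_id):
    # The "service" logic under test: a raw SQL query plus row-to-dict mapping.
    row = conn.execute(
        "SELECT id, name FROM customers WHERE id = ?", (customer_id,)
    ).fetchone()
    return None if row is None else {"id": row[0], "name": row[1]}

class CreateAndValidateTest(unittest.TestCase):
    def setUp(self):
        # Create the test data the test will look for.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
        self.conn.execute("INSERT INTO customers VALUES (1, 'test-user')")

    def tearDown(self):
        # Tidy up by removing the created test data.
        self.conn.execute("DELETE FROM customers WHERE id = 1")
        self.conn.close()

    def test_mapping(self):
        self.assertEqual(fetch_customer(self.conn, 1),
                         {"id": 1, "name": "test-user"})
        self.assertIsNone(fetch_customer(self.conn, 99))
```

A test shaped like this exercises both the SQL and the row-mapping logic the question asks about.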
Validate Known Data
If the external service does not allow the creation of data the next best approach is to test for a result that you know will always be true. For example if your service was doing authentication then you would check using an account that you know exists on the service and does not change.
This approach relies on the particular data not changing. In the authentication example there is always the risk that the password is changed for the account you use to test and hence your test will fail.
Validate Correct Operation
The least good solution is to simply call the service with valid arguments and ensure an error code is not returned.
You can confirm that the service is up and working, but you do not gain confidence in the validity of the data being returned.
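A minimal sketch of such a smoke check (Python; the query callable is a placeholder for whatever invokes the real service):

```python
def is_service_up(query):
    # Least-good check: call with valid arguments and only verify that
    # no error is raised. Says nothing about the validity of the data.
    try:
        query("SELECT 1")
        return True
    except Exception:
        return False

# A stand-in query function for illustration:
assert is_service_up(lambda sql: [(1,)]) is True
```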
Non-Functional Testing
In addition to testing the working service, you may wish to test the failure cases, such as the service being down or uncontactable. How well does your application deal with a failure in the external database? Does it recover gracefully when the database comes back online?
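A failure-case test might wrap the query and assert on graceful degradation (Python sketch; the class and fallback behaviour are hypothetical):

```python
class ExternalDbClient:
    def __init__(self, query_fn, fallback=None):
        self.query_fn = query_fn
        self.fallback = fallback

    def fetch(self, sql):
        # Degrade gracefully: return the fallback instead of crashing
        # when the external database is unreachable.
        try:
            return self.query_fn(sql)
        except ConnectionError:
            return self.fallback

def flaky(sql):
    # Simulates the external database being down.
    raise ConnectionError("database down")

client = ExternalDbClient(flaky, fallback=[])
assert client.fetch("SELECT * FROM t") == []
```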

Related

Mocking 3rd party integrations outside of the context of a test

In a lot of the apps I work on, we rely heavily on 1st- and 3rd-party APIs. So much so that in some of our apps it is useless to even try to log in without those APIs being in place: either critical pieces of data are missing, or the entire app is like a server-side-rendered SPA that houses no data of its own but pulls it from an API at request time (we cache it when we can).
This raises a huge problem when trying to develop the app locally since we do not have a sandbox environment. Our current solution is to create a service layer in between our business logic and the actual HTTP calls. We then, in our local environments, swap out the HTTP implementation for a class that just returns fake data. This works pretty well most of the time except for a couple of issues:
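That service-layer swap might look something like this (a Python sketch of the idea only; our real apps are PHP/Laravel, and all class names here are invented):

```python
class HttpPartnerApi:
    """Real implementation: makes the actual HTTP call (omitted here)."""
    def get_user(self, user_id):
        raise NotImplementedError("network call goes here")

class FakePartnerApi:
    """Local-dev implementation: returns canned data instead of calling out."""
    def __init__(self, users):
        self.users = users

    def get_user(self, user_id):
        return self.users.get(user_id)

def make_api(env, fixtures=None):
    # Swap the implementation per environment behind the same interface.
    return FakePartnerApi(fixtures or {}) if env == "local" else HttpPartnerApi()

api = make_api("local", {42: {"id": 42, "name": "dev-user"}})
assert api.get_user(42)["name"] == "dev-user"
```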
This only really gives us one state of the application at a time. Unlike data in the database, we are not able to easily run different seeders to replicate different scenarios.
If we run into a bug in production, we have no way of replicating the API response without actually diving into the code and adding some conditional to return that specific response. With data that is stored in the database, it is easy to log in to TablePlus and manually set up some condition, or even pull down select tables from production.
In our mocks, the functions can get quite large and nasty if we do try to have them respond dynamically with a different response based on, for example, the resource id being requested.
This makes the overhead of creating each test for each scenario quite high, in my opinion. If we could use something similar to a database factory to generate a bunch of different request-response pairs, we could test a lot more cases; and if we could somehow set up certain scenarios dynamically, we could more easily replicate the bugs we run into in production.
Since our applications are built with Laravel and PHP, mocks, unlike the database, don't persist from one request to another. We cannot simply open a tinker session and start seeding our API integrations with data like we can with the database.
I was trying to think of a way to do it with a cache that stores request-response pairs. This could also be moved to the database, but I would prefer not to have an extra table there that is only used locally.
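Roughly what I have in mind (a Python sketch of the idea; in Laravel the store would presumably live in the cache driver, and all names here are made up):

```python
class ResponseCache:
    """Fake API responses keyed by (method, url), seeded like a database."""
    def __init__(self):
        self.pairs = {}

    def seed(self, method, url, response):
        self.pairs[(method, url)] = response

    def lookup(self, method, url):
        return self.pairs.get((method, url))

def user_response_factory(user_id, **overrides):
    # Factory-style generation of request-response pairs, so many
    # scenarios can be produced without hand-writing each mock.
    base = {"id": user_id, "name": f"user-{user_id}", "active": True}
    base.update(overrides)
    return base

cache = ResponseCache()
cache.seed("GET", "/users/7", user_response_factory(7, active=False))
assert cache.lookup("GET", "/users/7")["active"] is False
```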
Any ideas?

How do I access data that my microservice does not own?

I have a microservice that needs some data it does not own. It needs a read-only cache of data that is owned by another service. I am looking for guidance on how to implement this.
I don't want my microservice to call another microservice. Too much of the data is used in a join for that to be successful. In addition, I don't want my service to be dependent on another service (which may be dependent on another ...).
Currently, I am publishing an event to a queue, and my service subscribes and maintains a copy of the data. I am having problems staying in sync with the source system. Plus, our DBAs are complaining about data duplication. I don't see a lot of information on this topic.
Is there a pattern for this? What is it called?
First of all, there are a couple of ways to share data, and you mention two of them.
One service can call another service to get the data when required. This is good because you always get up-to-date data, and no extra management is required in the consuming service. The problem is that if you call the other service too often, its performance may suffer.
Another solution is to maintain a local copy of that data in the consuming service using a pub/sub mechanism.
Depending on your requirements and architecture, you can keep this copy in the consuming service's actual DB or in some kind of (persisted) cache.
The downside here is consistency: in a distributed architecture you will not get strong consistency; you have to rely on eventual consistency.
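A minimal sketch of such a consumer-side copy (Python; the event shape and field names are assumptions), using a version number so that replayed or out-of-order events cannot corrupt the copy:

```python
class LocalReadModel:
    """Consumer-side copy of data owned by another service,
    kept up to date by applying published events in order."""
    def __init__(self):
        self.rows = {}
        self.last_version = 0

    def apply(self, event):
        # Ignore stale or duplicate events so replays are safe (idempotent).
        if event["version"] <= self.last_version:
            return
        if event["type"] == "upserted":
            self.rows[event["id"]] = event["data"]
        elif event["type"] == "deleted":
            self.rows.pop(event["id"], None)
        self.last_version = event["version"]

model = LocalReadModel()
model.apply({"version": 1, "type": "upserted", "id": 1, "data": {"name": "a"}})
# A redelivered duplicate of version 1 is ignored:
model.apply({"version": 1, "type": "upserted", "id": 1, "data": {"name": "zzz"}})
assert model.rows[1]["name"] == "a"
```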
Another option, depending on your requirements, is to split the tables that need to be joined out into a separate service of their own; whether that makes sense depends on your use case.
If you still want stronger consistency, then instead of having the first service update the data and then publish, create a mediator component that calls the two services synchronously. Things get complicated here, because you are now trying to implement a transaction over a distributed system.
One more point: building a product around a microservice architecture is not only a technical move. As an organization and as a team, you need to understand that what works in a monolith does not work the same way in microservices. Your DBAs need to understand this too: in microservices, duplication of data across schemas (and of other things, like code) is preferred over reuse.
Last but not least, if you always have to call another service to get data, it is worth re-examining the service boundary as well. It may be that two services need to merge because the business functionality is required to stay together.

How should I design my Spring Microservice?

I am trying to create a microservice architecture for a hobby project, and I am confused about some decisions. Can you please help me, as I have never worked with microservices before?
One of my requirements is that my AngularJS GUI will need to show some drop-downs or lists of values (for example, a list of countries). These can be fetched with a microservice REST call, but where should the values come from? Can I fetch them from my Config Server, or should they come from a database? If the latter, should each microservice have its own database for lookup values, or can it be a common one?
How would server-side validation work in this case? There will certainly be a microservice call the GUI makes for validation, but should the validation service be a common microservice for all use cases/screens, should there be one per GUI page, or should the CRUD microservice be reused for validation as well?
How do I deal with a use case where the back end is not a database but a web-service call? Will I still need some local DB to maintain state between these calls (especially to handle the scenario where the web-service call fails) and finally pass the status on to the GUI?
First of all, there is no single way to design a microservice; one has to choose according to the use case and project requirements.
Can I keep these in a Config Server? or should it come from Database?
Again, it depends upon the use case and requirements. That said, every microservice should have its own DB, so you can use a DB even if the countries are just names; and if they have relationships with cities/states, then a DB is definitely where they belong.
If DB, should each of the Microservice have their own DB for lookup value or can it be a common one?
No, IMO multiple microservices should not depend on a single DB, because if that DB fails, all of the microservices fail, which should not happen. Each microservice should work on its own, without depending on another DB or microservice.
should the validation service be a common microservice for all UseCases/Screens
Same as point 2
How do I deal with a use-case where the backend is not a Database call but another Web-service call? Will I need some local DB still to maintain some state in between these calls and finally pass on the status to GUI?
If you are using HTTP then you should not save the state of any request. If you want to forward the request to another microservice, you can use a Feign client, which provides a very convenient way to call REST APIs and offers other important features such as load balancing.
Microservice architecture is simple: each task is divided into a separate service (for example, a Spring Boot application). For instance, every application has a login function, a registration function, and so on; each of these becomes a separate service in a microservice architecture.
1. You can store those values in a database, since it will be easy to add more values in the future. You can maintain a separate DB per microservice or a single one: a single DB with separate collections or tables for each microservice.
2. By validation, are you asking about who can use which microservice (role-based access)?
3. I think you will have to use a local DB.
Microservices are a collection of loosely coupled services. For example, if you are creating an ecommerce application, user management can be one service, order management another, and refund & chargeback management a third. Each of these services can be further divided into smaller units; let's call them API endpoints. For example, user management can have login as one endpoint and signup as another.
If you want to leverage the power of microservice architecture in its true sense, here is what I would suggest. For the above example, create three Spring Boot applications, one per service. The first thing you should do after that is establish trust between those applications; I would prefer JWTs for trust establishment. After that, everything is a piece of cake. Here are the answers you are looking for:
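To illustrate the JWT idea only, here is a stdlib-only HS256 sketch (in Python rather than Java, and with an invented shared secret; in production you would use a vetted JWT library rather than rolling your own):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"shared-secret"  # assumption: both services hold this key

def b64url(data: bytes) -> bytes:
    # JWTs use unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_jwt(payload: dict) -> str:
    # header.payload.signature, signed with HMAC-SHA256 over header.payload
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(SECRET, header + b"." + body, hashlib.sha256).digest())
    return (header + b"." + body + b"." + sig).decode()

def verify_jwt(token: str) -> bool:
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(SECRET, f"{header}.{body}".encode(),
                               hashlib.sha256).digest()).decode()
    return hmac.compare_digest(sig, expected)

token = sign_jwt({"iss": "user-service", "sub": "order-service"})
assert verify_jwt(token)
```

The receiving service verifies the signature before trusting the caller's claims; any tampering with the payload invalidates the token.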
You should ideally use a database, as opposed to keeping the values in the config server, for fetching the list of countries, so that you need not recompile your code every time a new country is added.
You can easily restrict access using @PreAuthorize if role-based access is what you are referring to.
You can use OkHttp or any other HTTP client in this use case, and you certainly need not maintain any local DB. However, you can cache the output of the web-service call if that is a requirement.
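Caching the web-service output can be as simple as a TTL memoizer in front of the call (Python sketch; the decorator and the get_countries call are invented for illustration):

```python
import functools
import time

def cached(ttl_seconds):
    """Memoize a web-service call for ttl_seconds per argument tuple."""
    def deco(fn):
        store = {}
        @functools.wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and now - hit[1] < ttl_seconds:
                return hit[0]          # fresh cached value: skip the call
            value = fn(*args)
            store[args] = (value, now)
            return value
        return wrapper
    return deco

calls = []

@cached(ttl_seconds=60)
def get_countries(region):
    calls.append(region)               # stands in for the remote call
    return ["NO", "SE"]

get_countries("eu")
get_countries("eu")
assert calls == ["eu"]                 # the remote service was hit only once
```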
P.S.: Establishing trust between microservices can be a complex task if you don't understand all the intricacies. If that is the case, I would recommend going ahead with a single Spring Boot application, i.e. a monolithic architecture. I would still recommend JWTs, though.

Replacing REST calls with GraphQL

I've recently read about the advantages (and disadvantages) of GraphQL over REST APIs.
I am developing a web page that consumes several different REST APIs and SOAP services. Some of those services are dependent, meaning that a result from Rest1 will be passed as a parameter to Rest2, whose result will in turn be passed to a SOAP service for the final return value.
From what I understand, GraphQL deals with multiple data sources and query nesting, but I have not yet worked out whether it will handle those nested, dependent queries.
Can anyone who has worked with several dependent data sources in GraphQL tell me whether it can be done? My project should be up in two weeks, and investing time in learning and setting up GraphQL only to end up not using it because it does not support my case would be a big failure for me.
Note: the APIs and services are not mine, I am consuming them from an outside source
I'm assuming you haven't yet set up a GraphQL server. Once you do, you will see that this isn't too difficult, so I'd recommend setting up your own server first. The Egghead course "Build a GraphQL Server" got me started, but it's not free.
In essence, you'll set up your schema and then define how each field resolves to data. In a resolver you can have your Express server query a database, hit a REST interface, or call your SOAP interface. How you retrieve the data is up to you, as long as you return it in compliance with your defined schema.
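To show that dependent calls are not a problem, here is the shape of a resolver that chains them (plain Python functions stand in for the real GraphQL resolver and for the Rest1/Rest2/SOAP transports; every name here is invented):

```python
# Stub transports standing in for the real Rest1, Rest2 and SOAP calls.
def call_rest1(order_id):
    return {"customer_id": "c-%s" % order_id}

def call_rest2(customer_id):
    return {"account_ref": customer_id.upper()}

def call_soap(account_ref):
    return "status-for-" + account_ref

def resolve_final_status(order_id):
    # The resolver chains the dependent calls: each result feeds the next.
    r1 = call_rest1(order_id)
    r2 = call_rest2(r1["customer_id"])
    return call_soap(r2["account_ref"])

assert resolve_final_status("42") == "status-for-C-42"
```

Because a resolver is just a function, sequencing dependent back-end calls inside it is entirely up to you; GraphQL only cares about the shape of what you return.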
Hope that makes sense. Mocking up a mini app to demonstrate is possible, but since I don't have one handy, this is the best advice I can offer.

How does one unit test network-dependent operations?

Some of my apps grab data from the internet. I'm getting into the habit of writing unit tests, and I suspect that if I write a test for values returned from the internet, I will run into latency and/or reliability problems.
What's a valid way to test data that "lies on the other end of a web request"?
I'm using the stock unit testing toolkit that comes with Xcode, but the question theoretically applies to any testing framework.
A unit test focuses specifically on the business logic of your class. There would be no latency, reliability issues, etc., because you would use mock objects to simulate whatever you actually interact with.
What you are describing is a form of integration testing, which does not seem to be what the OP intends.
You should "obscure" the other end by mocking it, and not really access the network, a remote database, etc.
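For example (shown with Python's unittest.mock since the idea is framework-agnostic; the URL and weather service are made up), the network boundary is patched out so the test exercises only the parsing logic:

```python
import json
import unittest
from unittest import mock
from urllib import request

def fetch_temperature(city):
    # Code under test: hits a (hypothetical) web service and parses JSON.
    with request.urlopen(f"https://api.example.com/weather/{city}") as resp:
        return json.loads(resp.read())["temp"]

class FetchTemperatureTest(unittest.TestCase):
    def test_parses_response_without_network(self):
        # A fake response object replaces the real network call.
        fake = mock.MagicMock()
        fake.read.return_value = b'{"temp": 21}'
        fake.__enter__.return_value = fake
        with mock.patch.object(request, "urlopen", return_value=fake):
            self.assertEqual(fetch_temperature("oslo"), 21)
```

The test is fast and deterministic because no request ever leaves the process; only the JSON-handling logic is verified.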
Among others:
introduce artificial latency in requests
use another machine on the same network or at least another VM
test connection failures (either by connecting to a non-existent server or by physically cutting the connection)
test for incomplete data (connection could be cut half way)
test for duplicate data (the app could try to submit the request more than once if it thinks the first attempt was not successful - in some scenarios this may lead to lost data)
All of these should fail gracefully (either on the server side or on the client side)
I posed this question to the venerable folks in the #macdev IRC channel on freenode.net, and I got a few really good answers.
Mike Ash (of mikeash.com) suggests implementing a local web server inside my app. For complex cases, I'd probably do this. However, I'm just using the built-in initWithContentsOfURL: method of NSData.
For simpler cases, Mike says an alternative is to pass base64-encoded dummy data directly to the NSData initializer. (Use data://dummyDataEncodedAsBase64GoesAfterTheDataProtocolThingy.)
Similarly, "alistra" suggests using local file URLs that point to files containing mock data.
