My project is based on an event-driven microservice architecture, and we are trying to do end-to-end testing with Cucumber so that the feature under test is available in a business-readable format.
Details are below.
Service Architecture:
There are 4 microservices involved. We send a request to Service A; the request gets processed and stored in the DB, and Service A publishes an event, which is consumed by Service B. Service B in turn processes the event, stores the result in its DB, and publishes an event to be consumed by Service C, and so on through Service D.
User (POST request) -> Service A (process, store in DB, publish event to Service B) -> Service B (consume event from A, process, store result in DB, publish event to C) -> ...
Testing Strategy:
As part of the end-to-end testing, we will send a POST request to Service A, and Service A will return only a 200 response with no response body.
We need to verify the data in each service's DB and assert that it is as expected.
The feature file would look something like this:
Given the system is in the expected state
When I send a request to Service A
Then the service returns a 200 response
And the processed data is present in Service A's DB
And the processed data is present in Service B's DB
And the processed data is present in Service C's DB
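For illustration, a DB-verification step like the ones above might be bound in C# with SpecFlow (the .NET flavour of Cucumber). This is a hypothetical sketch only: the connection string, table, column, and the way the request id is shared between steps are all assumptions, not part of the actual setup.

using System.Data.SqlClient;
using TechTalk.SpecFlow;
using Xunit;

[Binding]
public class ServiceADbSteps
{
    // Hypothetical connection string; in practice this comes from test configuration.
    private const string ServiceADb = "Server=localhost;Database=ServiceA;Trusted_Connection=True;";

    // Assumed to be set by an earlier step when the request is posted to Service A.
    public static string CurrentRequestId;

    [Then(@"the processed data is present in Service A's DB")]
    public void ThenProcessedDataIsPresentInServiceADb()
    {
        using var connection = new SqlConnection(ServiceADb);
        connection.Open();

        // Assumed table and column names, for illustration only.
        using var command = new SqlCommand(
            "SELECT COUNT(*) FROM ProcessedRequests WHERE RequestId = @id", connection);
        command.Parameters.AddWithValue("@id", CurrentRequestId);

        var count = (int)command.ExecuteScalar();
        Assert.True(count > 0, "Expected the processed request in Service A's DB.");
    }
}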
I want to understand:
1. What is the right way to do this kind of testing?
2. Is this the correct approach to end-to-end testing and verification in the DB, or should some other approach be used?
Here is your problem:
We need to verify the data in each service's DB and assert that it is as expected.
This is done in unit tests and application tests.
If you need to verify that the data is correct in each database, then you are trying to do a unit test, but your unit is a bunch of services combined.
You are doing a gigantic unit test.
Unit tests
Unit tests verify that the logic in each service is correct.
Application tests in isolation
These test that the API responds with the correct status codes for the corresponding errors, and that the application reads from and writes to its database correctly. Here you test the API of the application.
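As a rough illustration in .NET, such a test might host a single service in-process and exercise its API. This sketch assumes ASP.NET Core with the Microsoft.AspNetCore.Mvc.Testing package and a hypothetical /orders endpoint:

using System.Net;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

// Program is the service's entry point (ASP.NET Core 6+ convention).
public class OrdersApiTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly HttpClient _client;

    public OrdersApiTests(WebApplicationFactory<Program> factory)
    {
        // Hosts this one service in-process, against its own test database.
        _client = factory.CreateClient();
    }

    [Fact]
    public async Task Post_WithEmptyBody_ReturnsBadRequest()
    {
        var response = await _client.PostAsJsonAsync("/orders", new { });

        // The application test checks the API surface of the single service.
        Assert.Equal(HttpStatusCode.BadRequest, response.StatusCode);
    }
}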
End to End
You stick a bunch of services together, post some data, and verify that the data you get back is as expected. You don't go into detail about what each service has done; that has already been verified by the earlier tests.
This is a final check that the services can communicate and return what you expect. You have no interest in how they do it.
Your case
You post something to your service and you get a 200 back. Then you should be happy and the test passes, because the service did what you expected: you posted something and it returned 200.
The earlier tests (unit tests, application tests) have passed, and they tell you the story that each service follows the specification it was given. So when you are ready for an end-to-end test, you have already tested everything up to that point.
You only do end-to-end testing once everything has been tested in isolation.
In an end-to-end test you have no interest at all in how it executes; you are only interested in what it returns.
Don't unit test in an end-to-end test.
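Following this advice, the end-to-end test for the case above collapses to little more than the sketch below (the base URL, endpoint, and payload are assumptions):

using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Xunit;

public class EndToEndTests
{
    [Fact]
    public async Task Post_ToServiceA_Returns200()
    {
        // Assumed address of Service A in the test environment.
        using var client = new HttpClient { BaseAddress = new Uri("http://service-a.test.local") };

        var response = await client.PostAsJsonAsync("/requests", new { /* request payload */ });

        // The whole assertion: the deployed chain accepted the request as specified.
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}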
Related
Consider an architecture like this:
API Gateway - responsible for aggregating services
Users microservice - CRUD operations on the user (users, addresses, consents, etc)
Notification microservice - sending email and SMS notifications
Security microservice - a service responsible for granting / revoking permissions to users and clients. For example, by connecting to Keycloak, it creates a user account with basic permissions
Client - any application that connects to API Gateway in order to perform a given operation, e.g. user registration
Now, we would like to use Camunda for the entire process.
For example:
Client-> ApiGateway-> UsersMicroservice.Register-> SecurityMicroservice.AddDefaultPermition-> NotificationMicroservice.SendEmail
We would like to implement this simplified flow using, for example, Camunda.
Should the process start in UsersMicroservice.RegisterUser after receiving "POST api/users/"? That is, UsersMicroservice.RegisterUser starts the process in Camunda; but then how does this endpoint know which specific process to run in Camunda?
What if the BPMN process in Camunda is designed in such a way that immediately after entering the process there is a Business Rule Task that validates the input and, if there is no "Name", for example, interrupts the registration process? How will UsersMicroservice find out that the process has been interrupted and that it should not perform any further standard operation like return this.usersService.Create(userInput);?
Should the call to Camunda be in the Controller or rather in the Service layer?
In the architecture above, how do we change the default Client -> UsersMicroservice -> UsersService -> Database flow to use Camunda, adding e.g. input validation before calling return this.usersService.Create(someInput);?
If your intention is to let the process engine orchestrate the business process, then why not start the business process first? Either expose the start-process API or a facade, which gets called by the API gateway when the desired business request should be served. Now let the process model decide which steps need to be taken to serve the request and deliver the desired result/business value. The process may start with a service task that creates a user. However, as you wrote, the process may evolve and perform additional checks before the user is created. Maybe a DMN decision validates the data. Maybe it is followed by a gateway which leads to a rejection path, a path that calls an additional blacklist service, a path with a manual review, and the "happy path" with automated creation of the user. Whatever needs to happen, this is business logic, which you can keep flexible by giving control to the process engine first.
The process should be started by the controller via a start-process endpoint, before (not from) UsersMicroservice.RegisterUser. You use a fixed process definition key to start it. From there, everything can be changed in the process model. You could potentially have an initial routing process ("serviceRequest") first, which determines, based on a process variable ("request type"), what kind of request it is ("createUser", "disableUser", ...) and dispatches to the specific process for that request ("createUser" -> "userCreationProcess").
The UsersMicroservice should be stateless (request state is managed in the process engine) and should not need to know. If the process is started first, the request may never reach UsersMicroservice. this.usersService.Create will only be called if the business logic in the process has determined that it is required; the same goes for any subsequent service calls. If a subsequent step fails, error handling can include retries, handling of a business error (e.g. "email address already exists") via an exceptional error path in the model (BPMNError), or eventually triggering a 'rollback' of operations already performed (compensation).
Controller - see above. The process will call the service if needed.
Call the process first, then let it decide what needs to happen.
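To make "call the process first" concrete, the controller or facade could start the routing process through Camunda 7's REST API using the fixed definition key. This is a sketch: the host, the "serviceRequest" key, and the variable name follow the example above and are otherwise assumptions.

using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public class CamundaProcessStarter
{
    private readonly HttpClient _http = new HttpClient
    {
        // Assumed location of the Camunda engine's REST API.
        BaseAddress = new Uri("http://camunda:8080/engine-rest/")
    };

    public async Task StartServiceRequestAsync(string requestType)
    {
        // POST /process-definition/key/{key}/start launches the latest
        // deployed version of the process with that definition key.
        var response = await _http.PostAsJsonAsync(
            "process-definition/key/serviceRequest/start",
            new
            {
                variables = new
                {
                    requestType = new { value = requestType, type = "String" }
                }
            });

        response.EnsureSuccessStatusCode();
    }
}

The routing process then dispatches to the specific process (e.g. "userCreationProcess") or rejects the request; the controller never calls usersService.Create directly.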
I have a service that receives a request, generates an email, saves the email to a message queue (to be sent by another microservice), and returns httpStatus.Ok.
I want to test that for different requests a relevant email will be generated.
According to Contract Tests vs Functional Tests, my tests are functional, not contract tests.
(If my service returned the email content as an API response, using Pact for contract tests would certainly be appropriate.)
I have an idea to use the Pact infrastructure for such functional tests, in particular:
1. Save the request and the expected generated email into the Pact Broker.
2. In the provider verification tests, submit the request and compare the generated email with the expected one.
Does it make sense to use Pact for such functional tests?
Does anyone know of examples of similar usage?
Are there any alternative technologies (preferably in .NET Core) for similar testing?
I am also considering https://github.com/approvals/ApprovalTests.Net, but the Pact infrastructure attracts me more.
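For the ApprovalTests.Net alternative, the idea would be to approve the generated email once and let subsequent runs diff against the approved file. A sketch, where EmailGenerator is a hypothetical stand-in for the service's email-generation logic:

using ApprovalTests;
using ApprovalTests.Reporters;
using Xunit;

public class EmailApprovalTests
{
    // Hypothetical stand-in for the email-generation logic under test.
    private static class EmailGenerator
    {
        public static string GenerateEmail(string name) =>
            $"Hello {name}, welcome aboard!";
    }

    [Fact]
    [UseReporter(typeof(DiffReporter))]
    public void GeneratedEmail_MatchesApprovedVersion()
    {
        var emailBody = EmailGenerator.GenerateEmail("Jane");

        // Compares against EmailApprovalTests.GeneratedEmail_MatchesApprovedVersion.approved.txt;
        // the first run fails and produces a .received.txt to approve.
        Approvals.Verify(emailBody);
    }
}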
Related note: Pact normally works with HTTP requests/responses, but Pact V3 (not yet implemented by PactNet) introduces messages for services that communicate via event streams and message queues. An example describing message pact contract tests is
https://dius.com.au/2017/09/22/contract-testing-serverless-and-asynchronous-applications/ referenced by
Pact for MessageQueue's : Sample Provider test in case of MessageQueues
I have a service that receives a request, generates an email, saves the email to a message queue (to be sent by another microservice), and returns httpStatus.Ok.
So, as you say: whilst Pact's intent is not to be a functional testing tool in the traditional sense, it is almost impossible to avoid testing functionality with it. The objective is really about testing the contract between the systems (this creates a small grey area in which you'll need to decide what is most appropriate for your test strategy).
What you don't want to do with Pact is run the verification tests and then check that the email was actually sent, that it was written to the queue, and that the downstream microservice could process it; that would be going beyond the bounds of a "contract test".
As an aside: what you can definitely do is create a separate contract test between this component that publishes to the queue and the downstream component that receives from it (see this WIP branch for the .NET library: https://github.com/pact-foundation/pact-net/pull/175)
I want to test that for different requests a relevant email will be generated.
If, for those different requests, the responses from the API are predictably shaped, then yes, you could definitely do that with Pact.
So rewording the above to "I have a service that receives a request, returns httpStatus.Ok and the email body sent" is acceptable contract testing, IMO.
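Under that rewording, checking that "different requests generate relevant emails" becomes a plain assertion on the response. A hypothetical sketch (not PactNet's API; the endpoint, payload, and response shape are assumptions):

using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Xunit;

public class EmailGenerationTests
{
    // Assumed shape of the reworded response.
    private record GeneratedEmail(string Subject, string Body);

    [Fact]
    public async Task Register_NewUser_ReturnsOkAndRelevantEmailBody()
    {
        // Assumed test-environment address of the service under test.
        using var client = new HttpClient { BaseAddress = new Uri("http://email-service.test.local") };

        var response = await client.PostAsJsonAsync("/register",
            new { Email = "jane@example.com", Name = "Jane" });

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);

        // Because the reworded contract returns the generated email body,
        // no peeking into the message queue is needed.
        var email = await response.Content.ReadFromJsonAsync<GeneratedEmail>();
        Assert.Contains("Jane", email!.Body);
    }
}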
You could then expand upon this with various scenarios:
When the user was created successfully ...
When the user already exists...
etc.
Hope that helps!
P.S. It might be worth jumping into https://slack.pact.io and chatting with the community further there.
I am using OpenCensus in Go to push tracing data to Stackdriver for calls involving a chain of 2 or more microservices, and I noticed that I get many traces which contain spans only for certain services, not for the entire end-to-end call.
At the moment I attribute this to the fact that not all calls are traced (only a certain sample), and each service decides independently whether to trace its current span or not.
Is this the way it is intended to work? Is there any way to make sure that when a trace is sampled, it is sampled by all services in the call chain?
Architecturally, I will say: when you are developing your microservices, make sure your API gateway creates a unique id, like a GUID, which gets propagated through all the microservices, and similarly make sure that you have a log aggregator collecting logs from all the services; then you get nice traceability of a request.
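For instance, in ASP.NET Core (staying with the C# used elsewhere on this page), the gateway-issued id could be accepted or minted and then propagated by a small middleware. A sketch, where the X-Correlation-Id header name is a common convention rather than a standard:

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class CorrelationIdMiddleware
{
    private const string Header = "X-Correlation-Id";
    private readonly RequestDelegate _next;

    public CorrelationIdMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        // Reuse the id issued upstream (e.g. by the API gateway), or mint a new one.
        var correlationId = context.Request.Headers.TryGetValue(Header, out var value)
            ? value.ToString()
            : Guid.NewGuid().ToString();

        // Echo the id back and stash it for outgoing calls and log scopes,
        // so the log aggregator can stitch the request across services.
        context.Response.Headers[Header] = correlationId;
        context.Items[Header] = correlationId;

        await _next(context);
    }
}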
I'm using MassTransit 3.4 / RabbitMQ 3.6.5 and I love it, but I have come across a problem. I have 2 services, A and B. When service A starts, it immediately calls service B via the request/response pattern. If B isn't running at the time, the message will time out, which is OK; service A is still set as running. Later, when service B comes online and service A makes the same request/response call, I'm expecting it to work, but it doesn't. The request is made, service B picks it up and does a RespondAsync with the proper message, but the response ends up in service A's bus Skipped queue. Why? Is this scenario not supported? How can I make sure that I can start my services in any order and that they'll still work?
It works fine if I start service B first and then start service A, but I don't want to have to rely on this kind of hacky start sequence.
Each time I do the request/response, I'm getting the IRequestClient as the docs describe, like so:
// Create a request client bound to service B's endpoint URI, with a response timeout.
IRequestClient<TRequest, TResponse> client =
    this.busControl.CreateRequestClient<TRequest, TResponse>(uri, TimeSpan.FromSeconds(timeout));

// Send the request and await the matching response.
return await client.Request(message);
Thanks,
Andy
My workflow needs to wait for either an email approval via a Bookmark or a WCF approval via a Receive activity, so I used a Parallel activity. The email approval works just fine, but I am trying to test the WCF approval and can't figure out what URL to use in the WCF Test Client to access the workflow.
I would be grateful for any leads, because I am very new to WCF and not very sure how to go about solving this problem.
Since you are using a workflow service, your second Receive activity must be correlated with the first one, with its CanCreateInstance checkbox set to false and its ServiceContractName the same as the first one's.
When you generate a proxy for the workflow service, the operation method becomes available to call from the client.
You can refer to this article:
http://www.codeproject.com/Articles/50820/Establishing-Correlation-Between-Multiple-RECEIVE