I want to write test cases for ignored exceptions that should not be sent to Sentry. How can this be done, with or without connecting to a Sentry instance?
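One way to do this without connecting to a real Sentry instance is to install a beforeSend hook that records events in memory and drops them, so nothing ever leaves the process. Below is a minimal sketch assuming the Java SDK (sentry-java); addIgnoredExceptionForType is that SDK's ignore option, so check your own SDK and version for the equivalent:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import io.sentry.Sentry;
import io.sentry.SentryEvent;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import org.junit.jupiter.api.Test;

class IgnoredExceptionsTest {

    @Test
    void ignoredExceptionsAreNeverSent() {
        List<SentryEvent> captured = new CopyOnWriteArrayList<>();

        Sentry.init(options -> {
            options.setDsn("http://key@localhost:9999/1");  // dummy DSN, never contacted
            options.addIgnoredExceptionForType(IllegalStateException.class);
            options.setBeforeSend((event, hint) -> {
                captured.add(event);  // record what would have been sent
                return null;          // drop it so nothing leaves the process
            });
        });
        try {
            Sentry.captureException(new IllegalStateException("ignored"));
            Sentry.captureException(new RuntimeException("not ignored"));
        } finally {
            Sentry.close();
        }

        // Only the non-ignored exception reaches the beforeSend stage.
        assertEquals(1, captured.size());
        assertEquals("not ignored", captured.get(0).getThrowable().getMessage());
    }
}
```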
I have a simple integration flow, written in the DSL style, that polls data from a database based on a cron job, publishes it on a DirectChannel, then does a split and some transformations, publishes on another executor-service channel, performs some operations, and finally publishes to an output channel.
I also have an endpoint where I might receive an HTTP request to trigger this flow; at that point I send messages to one of the mentioned channels to start it.
I want to make sure that the manual trigger doesn't happen if the flow is already running, whether due to the cron job or another request.
I have used the isRunning method of StandardIntegrationFlow, but it doesn't seem to be thread safe.
I also tried .wireTap(myService) and .handle(myService), where this service holds an AtomicBoolean flag, but the flag got set for every message, which is not a solution.
I want to know whether the flow is running without much intervention on my side, and if this is not supported, how I can apply the atomic-boolean logic to the overall flow rather than to every message.
How can I simulate the race condition in a test to make sure my implementation prevents it?
The IntegrationFlow is just a logical container for the configuration phase. It does have those lifecycle methods, but only for internal framework logic. Even though they are there, they don't help here, because the endpoints are always running, waiting to react to an event or an input message.
It is hard to control all of that, since it is asynchronous, as you explain. Even if we stop the SourcePollingChannelAdapter at the beginning of the flow to let your manual call do something, that doesn't mean messages on other threads are no longer in process. The AtomicBoolean cannot help here for the same reason: even if you set it to true in MessageSourceMutator.beforeReceive() and reset it to false in afterReceive() when the message is null, that still doesn't mean the messages you pushed downstream on other threads have already been processed.
You might consider using an aggregator to reset the AtomicBoolean at the end of a batch: since you mention that you pull data from a DB, there is probably a number of records per poll you can track downstream. That way your manual call would be skipped until the aggregator has collected the results for the whole batch.
You also need to think about stopping the SourcePollingChannelAdapter at the moment a manual action is permitted, so there won't be any further race conditions with the cron.
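For illustration, here is a minimal sketch of the aggregator idea, assuming the Spring Integration 6 Java DSL; fetchBatch(), the cron expression, and the channel details are placeholders for your actual flow:

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicBoolean;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.core.MessageSource;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.support.MessageBuilder;

@Configuration
public class BatchGuardConfig {

    // True while a polled batch is still being processed downstream.
    private final AtomicBoolean batchInFlight = new AtomicBoolean(false);

    @Bean
    public IntegrationFlow pollingFlow() {
        MessageSource<List<String>> source = () -> {
            List<String> batch = fetchBatch();   // your DB query
            if (batch.isEmpty()) {
                return null;                     // nothing to emit this poll
            }
            batchInFlight.set(true);             // a batch is now running
            return MessageBuilder.withPayload(batch).build();
        };
        return IntegrationFlow
                .from(source, e -> e.poller(p -> p.cron("0 */5 * * * *")))
                .split()                                       // one message per record
                .channel(c -> c.executor(Executors.newFixedThreadPool(4)))
                .transform(String.class, String::toUpperCase)  // placeholder work
                .aggregate()                                   // re-collects the whole batch
                .handle(m -> batchInFlight.set(false))         // only now is the flow idle
                .get();
    }

    // Manual HTTP trigger entry point: refuse while a batch is in flight.
    public boolean tryManualTrigger() {
        if (!batchInFlight.compareAndSet(false, true)) {
            return false;   // cron batch (or another manual request) still running
        }
        // ... fetch a batch and send it into the flow; the aggregate/handle
        // step above resets the flag once that batch completes.
        return true;
    }

    private List<String> fetchBatch() {
        return List.of(); // stand-in for the real JDBC query
    }
}
```

The key point is that the flag is set once per polled batch, not per message, and is cleared only after the aggregator has seen the entire batch downstream.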
You have a command/operation that requires you both to save something in a database and to send an event/message to another system. For example, you have an OrderService, and when a new order is created you want to publish an "OrderCreated" event for other systems to react to (either as a direct message or via a message broker).
The easiest (and naive) implementation is to save to the DB and, if that succeeds, send the message. But of course this is not bulletproof: the other service or the message broker may be down, or your service may crash before sending the message.
One common solution is to implement the "outbox pattern": instead of publishing messages directly, you save the message to an outbox table in your local database as part of your database transaction (in this example, writing to the outbox table as well as the order table), and have a separate process (polling the DB or using change data capture) read the outbox table and publish the messages.
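For concreteness, a minimal sketch of that pattern, assuming Spring with a JdbcTemplate and hypothetical orders/outbox tables (the relay needs @EnableScheduling, and since this gives at-least-once delivery, consumers must tolerate duplicates):

```java
import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    private final JdbcTemplate jdbc;
    private final MessagePublisher publisher;  // thin wrapper over your broker client

    public OrderService(JdbcTemplate jdbc, MessagePublisher publisher) {
        this.jdbc = jdbc;
        this.publisher = publisher;
    }

    // The order row and its outbox row commit (or roll back) together.
    @Transactional
    public void createOrder(String orderId, String payload) {
        jdbc.update("INSERT INTO orders (id, payload) VALUES (?, ?)", orderId, payload);
        jdbc.update("INSERT INTO outbox (event_type, payload) VALUES (?, ?)",
                "OrderCreated", payload);
    }

    // A separate relay publishes and then deletes each row. If publishing fails,
    // the row stays in the table and is retried on the next run.
    @Scheduled(fixedDelay = 1000)
    public void relayOutbox() {
        List<Map<String, Object>> rows =
                jdbc.queryForList("SELECT id, event_type, payload FROM outbox ORDER BY id");
        for (Map<String, Object> row : rows) {
            publisher.publish((String) row.get("event_type"), (String) row.get("payload"));
            jdbc.update("DELETE FROM outbox WHERE id = ?", row.get("id"));
        }
    }

    public interface MessagePublisher {
        void publish(String eventType, String payload);
    }
}
```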
What is your solution to this dilemma, i.e. "update the database and send the message, or do neither"? Note: I am not talking about using sagas (this could be part of a saga, but that is the next level).
In the past I have used different approaches:
"Do nothing", i.e. just try to send the message and hope it is sent. This might be fine in some cases, especially with a stable message broker running on the same machine.
Using DTC (in my case MSDTC). Besides all the problems with DTC, it might not work with your current setup.
The outbox pattern.
Using an orchestrator that retries the process if it has not received a "completed" event.
In my current project this is not handled well, IMO, and I want to change it to be more resilient and self-correcting. Sometimes when a service calls another service and the call fails, the user might retry and it might work; but some failures require our support team to fix them (if they are even discovered).
At the moment it is not a microservice solution but rather two large (legacy) monoliths communicating on the same server; we are moving to a microservice architecture in the near future, though, and might run on multiple machines.
I have a service that receives a request, generates an email, saves the email to a message queue (to be sent by another microservice), and returns HTTP 200 OK.
I want to test that, for different requests, the relevant email is generated.
According to Contract Tests vs Functional Tests, my tests are functional, not contract tests.
(If my service returned the email content as an API response, using Pact for contract tests would certainly be appropriate.)
I have an idea to use the Pact infrastructure for such functional tests, in particular:
1. Save the request and the expected generated email into the Pact Broker.
2. In the provider verification tests, submit the request and compare the generated email against the expected one.
Does it make sense to use Pact for such functional tests?
Does anyone know of examples of similar usage?
Are there alternative technologies (preferably in .NET Core) for similar testing?
I am also considering https://github.com/approvals/ApprovalTests.Net, but the Pact infrastructure attracts me more.
Related note: Pact normally works with HTTP requests/responses, but Pact V3 (not yet implemented by PactNet) introduces messages for services that communicate via event streams and message queues. An example describing message pact contract tests is
https://dius.com.au/2017/09/22/contract-testing-serverless-and-asynchronous-applications/, referenced by
Pact for MessageQueues: Sample Provider test in case of MessageQueues
I have a service that receives a request, generates an email, saves the email to a message queue (to be sent by another microservice), and returns HTTP 200 OK.
As you say: whilst Pact's intent is not to be a functional testing tool in the traditional sense, it is almost impossible to avoid testing some functionality. The objective is really to test the contract between the systems (this creates a small grey area in which you'll need to decide what is most appropriate for your test strategy).
What you don't want to do with Pact is run the verification tests and then check that the email was actually sent, that it was written to the queue, and that the downstream microservice could process it - that would go beyond the bounds of a "contract test".
As an aside: what you can definitely do is create a separate contract test between the component that publishes to the queue and the downstream component that receives from it (see this WIP branch for the .NET library: https://github.com/pact-foundation/pact-net/pull/175).
I want to test that, for different requests, the relevant email is generated.
If, for those different requests, the responses from the API are predictably shaped, then yes, you could definitely do that with Pact.
So rewording the above to "I have a service that receives a request, returns HTTP 200 OK and the body of the email that was sent" gives an acceptable contract test, IMO.
You could then expand on this with various scenarios (see the sketch after this list):
When the user was created successfully ...
When the user already exists...
etc.
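To make that concrete, here is a minimal consumer-test sketch. Since PactNet hasn't implemented V3 yet, it uses pact-jvm (Java) purely for illustration; the provider name, path, fields, and matchers are all hypothetical, and the same shape carries over to the PactNet API:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import au.com.dius.pact.consumer.MockServer;
import au.com.dius.pact.consumer.dsl.PactDslJsonBody;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.consumer.junit5.PactConsumerTestExt;
import au.com.dius.pact.consumer.junit5.PactTestFor;
import au.com.dius.pact.core.model.RequestResponsePact;
import au.com.dius.pact.core.model.annotations.Pact;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

@ExtendWith(PactConsumerTestExt.class)
@PactTestFor(providerName = "EmailService")
class EmailContractTest {

    @Pact(consumer = "WebApp")
    RequestResponsePact userCreatedEmail(PactDslWithProvider builder) {
        return builder
                .given("the user does not exist yet")
                .uponReceiving("a create-user request")
                    .path("/users")
                    .method("POST")
                    .body(new PactDslJsonBody().stringType("name", "Alice"))
                .willRespondWith()
                    .status(200)
                    // The email body is part of the response, so its shape is in the contract.
                    .body(new PactDslJsonBody()
                            .stringMatcher("emailBody", "Welcome.*", "Welcome, Alice!"))
                .toPact();
    }

    @Test
    @PactTestFor(pactMethod = "userCreatedEmail")
    void generatesTheWelcomeEmail(MockServer mockServer) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(mockServer.getUrl() + "/users"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"Alice\"}"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
    }
}
```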
Hope that helps!
P.S. It might be worth jumping into https://slack.pact.io and chatting with the community there.
My project is based on an event-driven microservice architecture, and we are trying to do end-to-end testing with Cucumber so that the feature under test is described in a business-readable format.
Details are below.
Service architecture:
There are four microservices involved. We send a request to Service A; the request gets processed and stored in the DB, and Service A publishes an event, which is consumed by Service B. Service B in turn processes the event, stores the result in the DB, and publishes an event to be consumed by Service C, and likewise on to Service D.
User (POST request to Service A) -> Service A (process, store in DB, publish event to Service B) -> Service B (consume event from A, process, store result in DB, publish event to C) -> ...
Testing Strategy:
As part of the end-to-end testing, we send a POST request to Service A, and Service A returns only a 200 response with no response body.
We then need to verify the data in each service's DB and assert that it is as expected.
The feature file looks something like this:
Given the system is in the expected state
When I send a request to Service A
Then Service A returns a 200 response
And the processed data is present in Service A's DB
And the processed data is present in Service B's DB
And the processed data is present in Service C's DB
I want to understand:
1. What is the right way to do this kind of testing?
2. Is end-to-end testing with verification in each DB the correct approach, or should some other approach be used?
Here is your problem:
We then need to verify the data in each service's DB and assert that it is as expected.
That belongs in unit tests and application tests.
If you need to verify that the data is correct in each database, then you are trying to write a unit test whose unit is a bunch of services combined.
You are doing a gigantic unit test.
Unit tests
Unit tests verify that the logic in each service is correct.
Application tests in isolation
These verify that the API responds with the correct status codes, including for error cases, and that it reads from and writes to its database correctly. Here you test the API of the application.
End to End
You stick a bunch of services together, post some data, and verify that the data you get back is as expected. You don't go into detail about what each service has done; that has already been verified by the earlier tests.
This is a final check that the services can communicate and return what you expect. You have no interest in how they do it.
Your case
You post something to your service and you get a 200 back. Then you should be happy and the test passes, because the service did what you expected: you posted something and it returned 200.
The earlier tests (unit tests, application tests) have passed, and they tell you that each service follows its specification. So by the time you run an end-to-end test, you have already tested everything up to that point.
You only do end-to-end testing when everything has been tested in isolation up to that point.
In an end-to-end test you have no interest at all in how the system executes; you are only interested in what it returns.
Don't unit test in an end-to-end test.
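For illustration, a minimal end-to-end check in that spirit, written as a Java test against a hypothetical Service A endpoint; note that the only assertion is on what the system returns:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;

class EndToEndTest {

    @Test
    void postReturns200() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/service-a/orders")) // hypothetical endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"item\":\"abc\"}"))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The whole end-to-end assertion: the system accepted the request.
        assertEquals(200, response.statusCode());
    }
}
```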
I'm doing a concurrent-user load test of the SignalR protocol with the WebSocket transport. I can run the script successfully for a single user with more than one iteration, but when I run it for concurrent users I don't get the expected behavior: I get the second user's response in the first user's request.
Please guide me here.
Most probably your parameterization or correlation fails somewhere, i.e. you're sending the same session argument for both users.
Use a Debug Sampler together with a View Results Tree listener to inspect the request and response details and the values of the associated JMeter variables.