Spring Cloud Contract with Spring AMQP

So I've been trying to use Spring Cloud Contract to test a RabbitListener.
So far I have found that by defining "triggeredBy" in the contract, the generated test will call the method named there, so we need to provide the actual implementation of that method in the test base class.
Another thing is "outputMessage", where we can verify that the preceding method call correctly resulted in some message body being sent to a certain exchange.
Source: documentation and sample
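For illustration, a minimal sketch of what such a test base class might look like, assuming a hypothetical trigger method and producer bean (the names below are mine, not from the sample):

import org.springframework.beans.factory.annotation.Autowired;

// Hypothetical base class that the generated contract tests would extend.
public abstract class MessagingBase {

    @Autowired
    private BookMessageProducer producer; // illustrative producer component

    // Referenced from a contract via triggeredBy('bookReturnedTriggered()');
    // the generated test simply invokes this method before checking outputMessage.
    public void bookReturnedTriggered() {
        producer.bookReturned();
    }
}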
My question is: is there any way to produce the input message from the contract itself, instead of triggering a custom method?
Perhaps something similar to the Spring Integration or Spring Cloud Stream examples in the documentation:
Contract.make {
    name("Book Success")
    label("book_success")
    input {
        messageFrom 'input.exchange.and.maybe.route'
        messageHeaders {
            header('contentType': 'application/json')
            header('otherMessageHeader': '1')
        }
        messageBody([
            bookData: someData
        ])
    }
    outputMessage {
        sentTo 'output.exchange.and.maybe.route'
        headers {
            header('contentType': 'application/json')
            header('otherMessageHeader': '2')
        }
        body([
            bookResult: true
        ])
    }
}
I couldn't find any examples in their sample project that show how to do this.
Having used Spring Cloud Contract to document and test REST API services, I would like, if possible, to stay consistent by defining both the input and the expected output in contract files for event-based services as well.

Never mind, it's actually already supported.
For some unknown reason the "Stub Runner Spring AMQP" documentation does not walk through this scenario the way the previous samples do.
Here is how I made it work:
Contract.make {
    name("Amqp Contract")
    label("amqp_contract")
    input {
        messageFrom 'my.exchange'
        messageHeaders {
            header('contentType': 'text/plain')
            header('amqp_receivedRoutingKey': 'my.routing.key')
        }
        messageBody(file('request.json'))
    }
    outputMessage {
        sentTo 'your.exchange'
        headers {
            header('contentType': 'text/plain')
            header('amqp_receivedRoutingKey': 'your.routing.key')
        }
        body(file('response.json'))
    }
}
This will generate a test that delivers the contract's input message to your listener bound to "my.exchange" and "my.routing.key", triggering the handler method.
It will then capture the message and routing key from your RabbitTemplate call to "your.exchange":
verify(this.rabbitTemplate, atLeastOnce()).send(eq(destination), routingKeyCaptor.capture(),
        messageCaptor.capture(), any(CorrelationData.class));
Both the message and the routing key will then be asserted.
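For reference, the kind of listener such a contract exercises might look roughly like this (a sketch under my own assumptions; the queue name, payload handling, and correlation id are illustrative, not part of the original answer):

import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageBuilder;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.connection.CorrelationData;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Component;

@Component
public class MyListener {

    private final RabbitTemplate rabbitTemplate;

    public MyListener(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // Handler for messages arriving via "my.exchange" / "my.routing.key";
    // the generated test feeds it the contract's request.json body.
    @RabbitListener(queues = "my.queue")
    public void handle(Message request) {
        Message response = MessageBuilder.withBody(request.getBody()).build(); // placeholder transformation
        // The generated test verifies this send to "your.exchange" with "your.routing.key".
        rabbitTemplate.send("your.exchange", "your.routing.key", response, new CorrelationData("correlation-id"));
    }
}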

Related

MassTransit (non-DI) configuration to autogenerate an Azure Service Bus Topic with Duplicate Detection enabled

I've discovered no MassTransit configuration that allows a Service Bus Topic to be created with Duplicate Detection enabled.
You can do it with Queues simply enough. But for Topics it seems a bit of a mystery.
Does anybody have a working sample?
Perhaps it is not possible.
I've been trying to use the IServiceBusBusFactoryConfigurator provided by the Bus.Factory.CreateUsingAzureServiceBus method.
I'd thought that some use of the IServiceBusBusFactoryConfigurator.Publish and IServiceBusBusFactoryConfigurator.SubscriptionEndpoint methods would accomplish the task, but after a myriad of trials I've come up with no solution.
To configure your message type topic with duplicate detection, you must configure the publish topology in both the producer and the consumer (it only needs to be configured once per bus instance, but if your producer is a separate bus instance, it would also need the configuration). The topic must also not already exist, since an existing topic will not be updated once it has been created in Azure.
To configure the publish topology:
namespace DupeDetection
{
    public interface DupeCommand
    {
        string Value { get; }
    }
}
var busControl = Bus.Factory.CreateUsingAzureServiceBus(cfg =>
{
    cfg.Publish<DupeCommand>(x => x.EnableDuplicateDetection(TimeSpan.FromMinutes(10)));

    cfg.ReceiveEndpoint("dupe", e =>
    {
        e.Consumer<DupeConsumer>();
    });
});
The consumer is normal (no special settings required).
class DupeConsumer :
    IConsumer<DupeCommand>
{
    public Task Consume(ConsumeContext<DupeCommand> context)
    {
        return Task.CompletedTask;
    }
}
I've added a unit test to verify this behavior, and can confirm that when two messages with the same MessageId are published back-to-back, only a single message is delivered to the consumer.
Test log output:
10:53:15.641-D Create send transport: sb://masstransit-build.servicebus.windows.net/MassTransit.Azure.ServiceBus.Core.Tests.DupeDetection/DupeCommand
10:53:15.784-D Topic: MassTransit.Azure.ServiceBus.Core.Tests.DupeDetection/DupeCommand (dupe detect)
10:53:16.375-D SEND sb://masstransit-build.servicebus.windows.net/MassTransit.Azure.ServiceBus.Core.Tests.DupeDetection/DupeCommand dc3a0000-ebb8-e450-949c-08d8e8939c7f MassTransit.Azure.ServiceBus.Core.Tests.DupeDetection.DupeCommand
10:53:16.435-D SEND sb://masstransit-build.servicebus.windows.net/MassTransit.Azure.ServiceBus.Core.Tests.DupeDetection/DupeCommand dc3a0000-ebb8-e450-949c-08d8e8939c7f MassTransit.Azure.ServiceBus.Core.Tests.DupeDetection.DupeCommand
10:53:16.469-D RECEIVE sb://masstransit-build.servicebus.windows.net/MassTransit.Azure.ServiceBus.Core.Tests/input_queue dc3a0000-ebb8-e450-949c-08d8e8939c7f MassTransit.Azure.ServiceBus.Core.Tests.DupeDetection.DupeCommand MassTransit.IConsumer<MassTransit.Azure.ServiceBus.Core.Tests.DupeDetection.DupeCommand>(00:00:00.0017972)
You can see the (dupe detect) attribute shown on the topic declaration.
Here is the solution I finally found. It does not rely on trying any of the ReceiveEndpoint or SubscriptionEndpoint configuration methods which never seemed to give me what I wanted.
IBusControl bus = Bus.Factory.CreateUsingAzureServiceBus(cfg =>
{
    cfg.Publish<MembershipNotifications.MembershipSignupMessage>(configure =>
    {
        configure.EnableDuplicateDetection(_DuplicateDetectionWindow);
        configure.AutoDeleteOnIdle = _AutoDeleteOnIdle;
        configure.DefaultMessageTimeToLive = _MessageTimeToLive;
    });
});
await bus.Publish(new MessageTest());

Spring WebFlux + Kotlin Response Handling

I'm having some trouble wrapping my head around a supposedly simple RESTful WS response handling scenario when using Spring WebFlux in combination with Kotlin coroutines. Suppose we have a simple WS method in our REST controller that is supposed to return a possibly huge number (millions) of response "things":
@GetMapping
suspend fun findAllThings(): Flow<Thing> {
    // Reactive DB query, return a flow of things
}
This works as one would expect: the result is streamed to the client as long as a streaming media type (e.g. "application/x-ndjson") is used. In more complex service calls that also account for the possibility of errors/warnings, I would like to return a response object of the following form:
class Response<T> {
    val errors: Flow<String>
    val things: Flow<T>
}
The idea here being that a response is either successful (returning an empty errors Flow and a Flow of things) or failed (errors contained in the corresponding Flow while the things Flow is empty). In blocking programming this is a quite common response idiom. My question now is: how can I adapt this idiom to the reactive approach in Kotlin/Spring WebFlux?
I know it's possible to just return the Response as described (or Mono<Response> for Java users), but this somewhat defeats the purpose of being reactive, as the entire response has to exist in memory at serialization time. Is there any way to solve this? The only possible solution I can think of right now is a custom Spring Encoder that is smart enough to stream both errors and things (whatever is present).
How about returning Success/Error per Thing?
class Result<T> private constructor(val result: T?, val error: String?) {
    constructor(data: T) : this(data, null)
    constructor(error: String) : this(null, error)
    val isError = error != null
}

@GetMapping
suspend fun findAllThings(): Flow<Result<Thing>> {
    // Reactive DB query, return a flow of things
}

Spring Cloud Stream access to raw Stream

Here is my use case: users subscribe to my stream using a websocket (GraphQL with subscription), and I need to return an instance of org.reactivestreams.Publisher (which should be my Kafka topic subscription), filtering messages by user id.
To illustrate, something like this:
/**
 * I don't know how to get an instance of Publisher<Balance>.
 * It should be a consumer from a kafka topic.
 */
fun balance(myStream: Publisher<Balance>, userId: String): Publisher<Balance> {
    return myStream.filter { it.userId == userId }
}
Maybe you need to write a Spring Cloud Stream consumer and then publish to the WebSocket programmatically. Something along the lines of:
public Consumer<Flux<Balance>> myStream() {
    // filter here and then publish to websocket.
}
Here is an example of a WebSocket sink implementation that you can potentially use as a guide, but this is not reactive.
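A minimal sketch of that idea, bridging the Spring Cloud Stream consumer to the GraphQL subscription through a Reactor sink (all names are illustrative and Balance is assumed to expose getUserId(); this is my assumption, not code from the answer):

import java.util.function.Consumer;
import org.reactivestreams.Publisher;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Sinks;

@Configuration
public class BalanceStreamConfig {

    // Re-publishes everything consumed from the Kafka topic to in-process subscribers.
    private final Sinks.Many<Balance> sink = Sinks.many().multicast().onBackpressureBuffer();

    // Functional Spring Cloud Stream consumer bound to the Kafka topic.
    @Bean
    public Consumer<Flux<Balance>> myStream() {
        return flux -> flux.subscribe(sink::tryEmitNext);
    }

    // Called by the GraphQL subscription resolver; filters the shared stream per user.
    public Publisher<Balance> balance(String userId) {
        return sink.asFlux().filter(balance -> userId.equals(balance.getUserId()));
    }
}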

Spring Cloud Function - Separate routing-expression for different Consumer

I have a service which receives differently structured messages from different message queues. With @StreamListener conditions we can choose for every message type how that message should be handled. As an example:
We receive two different types of messages, which have different header fields and values e.g.
Incoming from "order" queue:
Order1: { Header: {catalog:groceries} }
Order2: { Header: {catalog:tools} }
Incoming from "shipment" queue:
Shipment1: { Header: {region:Europe} }
Shipment2: { Header: {region:America} }
There is a binding for each queue, and with the corresponding @StreamListener I can process the messages by catalog and region differently,
e.g.
@StreamListener(target = OrderSink.ORDER_CHANNEL, condition = "headers['catalog'] == 'groceries'")
public void onGroceriesOrder(GroceryOrder order) {
    ...
}
So the question is, how to achieve this with the new Spring Cloud Function approach?
At the documentation https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/3.0.2.RELEASE/reference/html/spring-cloud-stream.html#_event_routing it is mentioned:
Also, for SpEL, the root object of the evaluation context is Message so you can do evaluation on individual headers (or message) as well: ….routing-expression=headers['type']
Is it possible to add the routing-expression to the binding like this (in application.yml)?
onGroceriesOrder-in-0:
  destination: order
  routing-expression: "headers['catalog']==groceries"
EDIT after first answer
If the above expression at this location is not possible, which is what the first answer implies, then my question is as follows:
As far as I understand, an expression like routing-expression: headers['catalog'] must be set globally, because the result maps to certain (consumer) functions.
How can I control that the two different messages on each queue will be forwarded to their own consumer function, e.g.
Order1 --> MyOrderService.onGroceriesOrder()
Order2 --> MyOrderService.onToolsOrder()
Shipment1 --> MyShipmentService.onEuropeShipment()
Shipment2 --> MyShipmentService.onAmericaShipment()
That was easy with @StreamListener, because each method gets its own @StreamListener annotation with different conditions. How can this be achieved with the new routing-expression setting?
Aside from the fact that the above is not a valid expression, I think you meant headers['catalog']==groceries. If so, what would you expect to happen from evaluating it, since the only two possible outcomes are true/false? Anyway, these are rhetorical questions, but they help to understand the problem and how to fix it.
The expression must result in the name of a function to route to. So:
routing-expression: headers['catalog'] - assumes that the actual value of the catalog header is the name of the function to invoke
routing-expression: headers['catalog']==groceries ? 'processGroceries' : 'processOther' - maps the value 'groceries' to the 'processGroceries' function.
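To make that concrete, a minimal sketch (in Java; the function and payload types are illustrative, not from the answer) of the consumer beans those routed-to names would resolve to:

import java.util.function.Consumer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;

@Configuration
public class OrderRoutingConfig {

    // Invoked when the routing expression evaluates to "processGroceries".
    @Bean
    public Consumer<Message<String>> processGroceries() {
        return message -> {
            // handle grocery orders
        };
    }

    // Invoked when the routing expression evaluates to "processOther".
    @Bean
    public Consumer<Message<String>> processOther() {
        return message -> {
            // handle all other orders
        };
    }
}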
For a specific routing, you can use MessageRoutingCallback strategy:
MessageRoutingCallback
The MessageRoutingCallback is a strategy to assist with determining
the name of the route-to function definition.
public interface MessageRoutingCallback {
    FunctionRoutingResult routingResult(Message<?> message);
    . . .
}
All you need to do is implement and register it as a bean to be picked
up by the RoutingFunction. For example:
@Bean
public MessageRoutingCallback customRouter() {
    return new MessageRoutingCallback() {
        @Override
        public FunctionRoutingResult routingResult(Message<?> message) {
            return new FunctionRoutingResult((String) message.getHeaders().get("func_name"));
        }
    };
}
Spring Cloud Function

SQS Listener @Headers getting body content instead of Message Attributes

I am using Spring Cloud SQS messaging for listening to a specified queue, hence using the @SqsListener annotation as below:
@SqsListener(value = "${QUEUE}", deletionPolicy = SqsMessageDeletionPolicy.ALWAYS)
public void receive(@Headers Map<String, String> header, @Payload String message) {
    try {
        logger.logInfo("Message payload is: " + message);
        logger.logInfo("Header from SQS is: " + header);
        if (<Some condition>) {
            // Dequeue the message once message is processed successfully
            awsSQSAsync.deleteMessage(header.get(LOOKUP_DESTINATION), header.get(RECEIPT_HANDLE));
        } else {
            logger.logInfo("Message with header: " + header + " FAILED to process");
            logger.logError(FLEX_TH_SQS001);
        }
    } catch (Exception e) {
        logger.logError(FLEX_TH_SQS001, e);
    }
}
I am able to connect to the specified queue successfully and read the message as well. I am setting a message attribute "Key1" = "Value1" along with the message in the AWS console before sending it. The following is the message body:
{
"service": "ecsservice"
}
I am expecting "header" to receive a Map of all the message attributes along with the one i.e. Key1 and Value1. But what I am receiving is:
{service=ecsservice} as the populated map.
That means payload/body of message is coming as part of header, although body is coming correctly.
I wonder what mistake I am doing due to which #Header header is not getting correct message attributes.
Seeking expert advice.
-PC
I faced the same issue in one of my Spring projects.
The issue for me was the SQS configuration of QueueMessageHandlerFactory, specifically setting setArgumentResolvers.
By default, the first argument resolver in Spring is PayloadArgumentResolver, with the following behavior:
@Override
public boolean supportsParameter(MethodParameter parameter) {
    return (parameter.hasParameterAnnotation(Payload.class) || this.useDefaultResolution);
}
Here, this.useDefaultResolution is set to true by default, which means any parameter can be resolved as the payload.
Spring tries to match your method's actual parameters against the resolvers in order (the first being PayloadArgumentResolver), so it will indeed try to resolve all the parameters as the payload.
Source code from Spring:
@Nullable
private HandlerMethodArgumentResolver getArgumentResolver(MethodParameter parameter) {
    HandlerMethodArgumentResolver result = this.argumentResolverCache.get(parameter);
    if (result == null) {
        for (HandlerMethodArgumentResolver resolver : this.argumentResolvers) {
            if (resolver.supportsParameter(parameter)) {
                result = resolver;
                this.argumentResolverCache.put(parameter, result);
                break;
            }
        }
    }
    return result;
}
How I solved this: by overriding the default behavior of the Spring resolvers:
factory.setArgumentResolvers(
    listOf(
        PayloadArgumentResolver(converter, null, false),
        HeaderMethodArgumentResolver(null, null)
    )
)
Here I set the default-resolution flag to false, so Spring will resolve a parameter as the payload only if the parameter carries the @Payload annotation.
Hope this will help.
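For completeness, here is roughly where that call would live, sketched in Java against Spring Cloud AWS's QueueMessageHandlerFactory (the converter setup is my assumption, not part of the original answer):

import java.util.Arrays;
import org.springframework.cloud.aws.messaging.config.QueueMessageHandlerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.converter.MappingJackson2MessageConverter;
import org.springframework.messaging.handler.annotation.support.HeaderMethodArgumentResolver;
import org.springframework.messaging.handler.annotation.support.PayloadArgumentResolver;

@Configuration
public class SqsListenerConfig {

    @Bean
    public QueueMessageHandlerFactory queueMessageHandlerFactory() {
        MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
        converter.setStrictContentTypeMatch(false);

        QueueMessageHandlerFactory factory = new QueueMessageHandlerFactory();
        // useDefaultResolution = false: only @Payload-annotated parameters are treated as
        // the payload, mirroring the fix described above.
        factory.setArgumentResolvers(Arrays.asList(
                new PayloadArgumentResolver(converter, null, false),
                new HeaderMethodArgumentResolver(null, null)));
        return factory;
    }
}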
Apart from @SqsListener, you need to add @MessageMapping to the method. This annotation will help to resolve the method arguments.
I had this issue working out of a rather large codebase. It turned out that a HandlerMethodArgumentResolver was being added to the list of resolvers that are used to basically parse the message into the parameters. In my case it was the PayloadArgumentResolver, which essentially always resolves an argument to be the payload regardless of the annotation. It seems by default it's supposed to come last in the list, but because of code I didn't know about, it ended up being added to the front.
Anyway, if you're not sure, take a look around your code and see if you're doing anything regarding Spring's QueueMessageHandler or HandlerMethodArgumentResolver.
It helped me to use a debugger and look at HandlerMethodArgumentResolver.resolveArgument method to start tracing what happens.
P.S. I think your @SqsListener code looks fine, except that I think @Headers is technically supposed to resolve to a Map<String, Object>, but I'm not sure that would cause the issue you're seeing.
