Gateway not setting the replyChannel header - spring

I'm currently working on a project built with Spring Integration 4.3.14, and we decided to try the Java DSL, but I'm having trouble integrating different subflows.
I have the following IntegrationFlow defined:
@Bean
public IntegrationFlow mainFlow() {
    return IntegrationFlows
            .from(
                    databaseSource(),
                    c -> c.poller(Pollers.fixedDelay(5000).transactional().get()))
            .split()
            .log()
            .gateway(f -> f
                            .transform(Transformer::transform)
                            .transform(AnotherTransformer::transform),
                    e -> e
                            .errorChannel("transformErrorChannel"))
            .gateway(f -> f
                            .<MyEntity>handle((p, h) -> this.doSomething(p))
                            .<MyEntity>handle((p, h) -> this.doOtherThing(p)),
                    e -> e
                            .errorChannel("doErrorChannel"))
            .channel("nullChannel")
            .get();
}
All the invoked transform and handle methods are non-void and return non-null values. The main reason we went for this approach is to have two different channels to handle errors depending on the part of the flow where they happened, so we can act accordingly.
Yet, when I run this code, insert a record into the DB, and the poller picks it up, the message never goes beyond the first gateway. I just get these log lines:
2018-06-06 11:43:58.848 INFO 6492 --- [ask-scheduler-1] o.s.i.gateway.GatewayProxyFactoryBean : stopped org.springframework.integration.gateway.GatewayProxyFactoryBean#55d1f065
2018-06-06 11:43:58.848 INFO 6492 --- [ask-scheduler-1] ProxyFactoryBean$MethodInvocationGateway : started org.springframework.integration.gateway.GatewayProxyFactoryBean$MethodInvocationGateway#1863292e
2018-06-06 11:43:58.864 INFO 6492 --- [ask-scheduler-1] c.e.transformation.Transformer : Performing transformation.
2018-06-06 11:43:58.864 INFO 6492 --- [ask-scheduler-1] c.e.transformation.AnotherTransformer : Performing another transformation.
2018-06-06 11:43:58.848 INFO 6492 --- [ask-scheduler-1] o.s.i.gateway.GatewayProxyFactoryBean : started org.springframework.integration.gateway.GatewayProxyFactoryBean#55d1f065
2018-06-06 11:43:58.944 INFO 6492 --- [ask-scheduler-1] o.s.i.gateway.GatewayProxyFactoryBean : stopped org.springframework.integration.gateway.GatewayProxyFactoryBean#f9a5e3f
2018-06-06 11:43:58.944 INFO 6492 --- [ask-scheduler-1] ProxyFactoryBean$MethodInvocationGateway : started org.springframework.integration.gateway.GatewayProxyFactoryBean$MethodInvocationGateway#433a796
2018-06-06 11:43:58.944 INFO 6492 --- [ask-scheduler-1] o.s.i.gateway.GatewayProxyFactoryBean : started org.springframework.integration.gateway.GatewayProxyFactoryBean#f9a5e3f
It seems clear that the message does arrive at the first gateway, but apparently it is never passed on to the second one.
During startup, I see that SI creates two subFlows (#0 and #1) and two channels for each one (one for each operation, I guess) with 1 subscriber each.
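For illustration, a minimal pair of consumers for those two error channels could look roughly like this (a sketch; only the channel names are taken from the flow above, and the logging is just a stand-in for the real handling):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;

@Configuration
public class ErrorFlows {

    // Receives the ErrorMessages published to the first gateway's errorChannel.
    @Bean
    public IntegrationFlow transformErrorFlow() {
        return IntegrationFlows
                .from("transformErrorChannel")
                .handle(message -> System.err.println("Transformation failed: " + message))
                .get();
    }

    // Receives the ErrorMessages published to the second gateway's errorChannel.
    @Bean
    public IntegrationFlow doErrorFlow() {
        return IntegrationFlows
                .from("doErrorChannel")
                .handle(message -> System.err.println("Handler failed: " + message))
                .get();
    }
}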
I'd also tried changing the definition to the following:
@Bean
public IntegrationFlow getRecords() {
    return IntegrationFlows
            .from(
                    databaseSource(),
                    c -> c.poller(Pollers.fixedDelay(5000).transactional().get()))
            .split()
            .log()
            .gateway(f -> f
                            .transform(Transformer::transform)
                            .transform(AnotherTransformer::transform),
                    e -> e
                            .errorChannel("transformErrorChannel")
                            .replyChannel("doThingsChannel"))
            .get();
}

@Bean
public IntegrationFlow doThings() {
    return IntegrationFlows
            .from("doThingsChannel")
            .gateway(f -> f
                            .<MyEntity>handle((p, h) -> this.doSomething(p))
                            .<MyEntity>handle((p, h) -> this.doOtherThing(p)),
                    e -> e
                            .errorChannel("doErrorChannel"))
            .get();
}
But I eventually got the same problem, whether setting the replyChannel on the GatewayEndpointSpec or adding an explicit .channel to the getRecords flow after the gateway.
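For completeness, the explicit-channel variant mentioned above is the same getRecords flow with a .channel() after the gateway instead of the replyChannel option:

@Bean
public IntegrationFlow getRecords() {
    return IntegrationFlows
            .from(
                    databaseSource(),
                    c -> c.poller(Pollers.fixedDelay(5000).transactional().get()))
            .split()
            .log()
            .gateway(f -> f
                            .transform(Transformer::transform)
                            .transform(AnotherTransformer::transform),
                    e -> e
                            .errorChannel("transformErrorChannel"))
            .channel("doThingsChannel")
            .get();
}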

I've just done this test-case in the Spring Integration Java DSL project:
@Test
public void testGateways() {
    IntegrationFlow flow = f -> f
            .gateway(sf -> sf
                    .transform(p -> "foo#" + p)
                    .transform(p -> "bar#" + p))
            .gateway(sf -> sf
                    .handle((p, h) -> "handle1:" + p)
                    .handle((p, h) -> "handle2:" + p))
            .handle(System.out::println);

    IntegrationFlowRegistration flowRegistration =
            this.integrationFlowContext.registration(flow).register();

    flowRegistration.getInputChannel()
            .send(new GenericMessage<>("test"));

    flowRegistration.destroy();
}
My output is like this:
GenericMessage [payload=handle2:handle1:bar#foo#test, headers={id=ae09df5c-f63e-4b68-d73c-29b85f3689a8, timestamp=1528314852110}]
So, both gateways work as expected and all the transformers and handlers are applied. Plus, the result of the last gateway is passed back to the main flow for the final System.out step.
I'm not sure what's going on in your case; my only idea is that your .transform(AnotherTransformer::transform) doesn't return a value, or something else happens there.
Regarding the replyChannel option: it is not where the result of the gateway is sent. It is where the gateway waits for the reply to arrive:
/**
 * Specify the channel from which reply messages will be received; overrides the
 * encompassing gateway's default reply channel.
 * @return the channel name.
 */
String replyChannel() default "";
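If the missing-reply theory is right, one way to make it visible (a sketch only; it assumes the requestTimeout/replyTimeout options on the GatewayEndpointSpec, and the values are arbitrary) is to bound how long the mid-flow gateway waits, so the poller thread is not blocked indefinitely when a subflow produces no reply:

.gateway(f -> f
                .transform(Transformer::transform)
                .transform(AnotherTransformer::transform),
        e -> e
                .errorChannel("transformErrorChannel")
                // if no reply arrives within 5 s, give up instead of waiting forever
                .requestTimeout(5000L)
                .replyTimeout(5000L))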

Related

Spring WebFlux websocket closed when receiving messages too fast

public Mono<Void> handle(@Nonnull WebSocketSession session) {
    final WebSocketContext webSocketContext = new WebSocketContext(session);
    Mono<Void> output = session.send(Flux.create(webSocketContext::setSink));
    Mono<Void> input = session.receive()
            .timeout(Duration.ofSeconds(adapterProperties.getSessionTimeout()))
            .doOnSubscribe(subscription -> subscription.request(64))
            .doOnNext(WebSocketMessage::retain)
            .publishOn(Schedulers.boundedElastic())
            .concatMap(msg -> {
                // ....blocking operation
                return Flux.empty();
            })
            .then();
    return Mono.zip(input, output).then();
}
When I use a WebSocket client to send messages very fast, the connection is dropped after about 2000 messages have been received, and there is no exception message. After I slow down the sending rate on the client side, there is no problem. How can I solve this?
Below is the Flux log information:
2022-07-14 17:19:40.295 adapter-iat [boundedElastic-5] INFO reactor.Flux.PublishOn.5 - | onNext(WebSocket TEXT message (13765 bytes))
2022-07-14 17:19:40.296 adapter-iat [boundedElastic-5] INFO reactor.Flux.PublishOn.5 - | onNext(WebSocket TEXT message (13765 bytes))
2022-07-14 17:19:40.300 adapter-iat [boundedElastic-5] INFO reactor.Flux.PublishOn.5 - | onComplete()   <-- why?
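One way to at least see which side terminates the stream (a debugging sketch rather than a fix; session, adapterProperties and log are the same objects as in the handler above) is to log the terminal signal of the receive pipeline:

Mono<Void> input = session.receive()
        .timeout(Duration.ofSeconds(adapterProperties.getSessionTimeout()))
        .doOnSubscribe(subscription -> subscription.request(64))
        .doOnNext(WebSocketMessage::retain)
        .publishOn(Schedulers.boundedElastic())
        .concatMap(msg -> {
            // ....blocking operation
            return Flux.empty();
        })
        // Logs whether the pipeline ended with ON_COMPLETE, ON_ERROR or CANCEL,
        // which narrows down which side closed the connection.
        .doFinally(signalType -> log.info("receive() terminated with {}", signalType))
        .then();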

Flux.subscribe finishes before the last element is processed

Strange behavior of Spring + Flux. I have Python server code (using Flask, but that's not important; treat it as pseudo-code) which streams a response:
def generate():
    for row in range(0, 10):
        time.sleep(1)
        yield json.dumps({"count": row}) + '\n'

return Response(generate(), mimetype='application/json')
With that, I simulate processing tasks from a list and sending results as soon as they are ready, instead of waiting for everything to finish, mostly to avoid keeping everything in memory, first on the server and then on the client. Now I want to consume that with Spring WebClient:
Flux<Count> alerts = webClient
        .post()
        .uri("/testStream")
        .accept(MediaType.APPLICATION_JSON)
        .retrieve()
        .bodyToFlux(Count.class)
        .log();

alerts.subscribe(a -> log.debug("Received count: " + a.count));

Mono<Void> mono = Mono.when(alerts);
mono.block();

log.debug("All done in method");
Here is what I'm getting in log:
2019-07-03 18:45:08.330 DEBUG 16256 --- [ctor-http-nio-4] c.k.c.restapi.rest.Controller : Received count: 8
2019-07-03 18:45:09.323 INFO 16256 --- [ctor-http-nio-2] reactor.Flux.MonoFlatMapMany.4 : onNext(com.ksftech.chainfacts.restapi.rest.Controller$Count#55d09f83)
2019-07-03 18:45:09.324 INFO 16256 --- [ctor-http-nio-2] reactor.Flux.MonoFlatMapMany.4 : onComplete()
2019-07-03 18:45:09.325 DEBUG 16256 --- [io-28088-exec-4] c.k.c.restapi.rest.Controller : All done in method
2019-07-03 18:45:09.331 INFO 16256 --- [ctor-http-nio-4] reactor.Flux.MonoFlatMapMany.4 : onNext(com.ksftech.chainfacts.restapi.rest.Controller$Count#da447dd)
2019-07-03 18:45:09.332 DEBUG 16256 --- [ctor-http-nio-4] c.k.c.restapi.rest.Controller : Received count: 9
2019-07-03 18:45:09.333 INFO 16256 --- [ctor-http-nio-4] reactor.Flux.MonoFlatMapMany.4 : onComplete()
Notice how the last object is processed by subscribe after mono.block returns. I understand that Reactor is asynchronous and, once it sees no more objects, it releases the Mono and calls my code in subscribe in parallel. Then it is at the mercy of the scheduler which runs first.
I came up with a rather ugly kludge: a subscribe with a completeConsumer, plus good old wait/notify. That works fine. But is there a more elegant way of making sure my method waits until all elements of the Flux are processed?
OK, I have studied this area and realized that Reactor is for asynchronous execution. If I need it to behave synchronously, I have to use synchronization. And to have code which executes after everything has been fed to subscribe, I need to use doOnComplete:
public class FluxResult {
    public boolean success = true;
    public Exception ex = null;

    public void error() { success = false; }
    public void error(Exception e) { success = false; ex = e; }

    public synchronized void waitForFluxCompletion() throws InterruptedException {
        wait();
    }

    public synchronized void notifyAboutFluxCompletion() {
        notify();
    }
}
.... // do something which returns Flux

myflux
    .doFirst(() -> {
        // initialization
    })
    .doOnError(e -> {
        log.error("Exception", e);
    })
    .doOnComplete(() -> {
        try {
            // finalization. If we were accumulating objects, now flush them
        }
        catch (Exception e) {
            log.error("Exception", e);
            flux_res.error(e);
        }
        finally {
            flux_res.notifyAboutFluxCompletion();
        }
    })
    .subscribe(str -> {
        // something which must be executed for each item
    });
And then wait for the object to be signaled:
flux_res.waitForFluxCompletion();

if (!flux_res.success) {
    if (flux_res.ex != null) {
        // rethrow (or otherwise handle) the exception captured in doOnComplete
        throw new RuntimeException(flux_res.ex);
    }
}
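An equivalent, slightly less manual approach (a sketch, not from the original answer) is to use a java.util.concurrent.CountDownLatch and count down in doFinally, which fires for completion, error and cancellation alike:

CountDownLatch done = new CountDownLatch(1);

myflux
        .doOnError(e -> log.error("Exception", e))
        .doFinally(signalType -> done.countDown())   // complete, error or cancel
        .subscribe(str -> {
            // something which must be executed for each item
        });

done.await();   // blocks the calling thread until the Flux terminates
                // (await() throws InterruptedException, so declare or handle it)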

spring reactive retry with exponential backoff conditionally

Using the Spring reactive WebClient, I consume an API, and in the case of a response with a 500 status I need to retry with exponential backoff. But in the Mono class, I don't see any retryBackoff that takes a Predicate as an input parameter.
This is the kind of function I search for:
public final Mono<T> retryBackoff(Predicate<? super Throwable> retryMatcher, long numRetries, Duration firstBackoff)
Right now my implementation is as follows (it retries, but without the backoff mechanism):
client.sendRequest()
        .retry(e -> ((RestClientException) e).getStatus() == 500)
        .subscribe();
You might want to have a look at the reactor-extra module in the reactor-addons project. In Maven you can do:
<dependency>
    <groupId>io.projectreactor.addons</groupId>
    <artifactId>reactor-extra</artifactId>
    <version>3.2.3.RELEASE</version>
</dependency>
And then use it like this:
client.post()
        .syncBody("test")
        .retrieve()
        .bodyToMono(String.class)
        .retryWhen(Retry.onlyIf(ctx -> ctx.exception() instanceof RestClientException)
                .exponentialBackoff(firstBackoff, maxBackoff)
                .retryMax(maxRetries))
Retry.onlyIf is now deprecated/removed.
If anyone is interested in the up-to-date solution:
client.post()
        .syncBody("test")
        .retrieve()
        .bodyToMono(String.class)
        .retryWhen(Retry.backoff(maxRetries, minBackoff)
                .filter(throwable -> throwable instanceof RestClientException
                        && ((RestClientException) throwable).getStatus() == 500))
It's worth mentioning that retryWhen wraps the source exception into the RetryExhaustedException. If you want to 'restore' the source exception you can use the reactor.core.Exceptions util:
.onErrorResume(throwable -> {
    if (Exceptions.isRetryExhausted(throwable)) {
        throwable = throwable.getCause();
    }
    return Mono.error(throwable);
})
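Putting those two pieces together (a sketch that just combines the snippets above; RestClientException#getStatus() is the asker's own exception type, and maxRetries/minBackoff are assumed to be defined elsewhere):

client.post()
        .syncBody("test")
        .retrieve()
        .bodyToMono(String.class)
        // retry 500s with exponential backoff (reactor.util.retry.Retry)
        .retryWhen(Retry.backoff(maxRetries, minBackoff)
                .filter(t -> t instanceof RestClientException
                        && ((RestClientException) t).getStatus() == 500))
        // unwrap the "retries exhausted" wrapper so callers still see the original exception
        .onErrorResume(throwable -> {
            if (Exceptions.isRetryExhausted(throwable)) {
                throwable = throwable.getCause();
            }
            return Mono.error(throwable);
        });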
I'm not sure what Spring version you are using; in 2.1.4 I have this:
client.post()
        .syncBody("test")
        .retrieve()
        .bodyToMono(String.class)
        .retryBackoff(numretries, firstBackoff, maxBackoff, jitterFactor);
... so that's exactly what you want, right?
I'm currently trying it with Kotlin coroutines + Spring WebFlux. It seems the following is not working:
suspend fun ClientResponse.asResponse(): ServerResponse =
    status(statusCode())
        .headers { headerConsumer -> headerConsumer.addAll(headers().asHttpHeaders()) }
        .body(bodyToMono(DataBuffer::class.java), DataBuffer::class.java)
        .retryWhen(
            Retry.onlyIf { ctx: RetryContext<Throwable> -> (ctx.exception() as? WebClientResponseException)?.statusCode in retryableErrorCodes }
                .exponentialBackoff(ofSeconds(1), ofSeconds(5))
                .retryMax(3)
                .doOnRetry { log.error("Retry for {}", it.exception()) }
        )
        .awaitSingle()
AtomicInteger errorCount = new AtomicInteger();
Flux<String> flux =
        Flux.<String>error(new IllegalStateException("boom"))
                .doOnError(e -> {
                    errorCount.incrementAndGet();
                    System.out.println(e + " at " + LocalTime.now());
                })
                .retryWhen(Retry
                        .backoff(3, Duration.ofMillis(100)).jitter(0d)
                        .doAfterRetry(rs -> System.out.println("retried at " + LocalTime.now() + ", attempt " + rs.totalRetries()))
                        .onRetryExhaustedThrow((spec, rs) -> rs.failure())
                );
We will log the time of errors emitted by the source and count them.
We configure an exponential backoff retry with at most 3 attempts and no jitter.
We also log the time at which the retry happens, and the retry attempt number (starting from 0).
By default, an Exceptions.retryExhausted exception would be thrown, with the last failure() as a cause. Here we customize that to directly emit the cause as onError.

How do I enable CORS in Giraffe?

I am unable to successfully perform a POST operation using the Giraffe framework on the server, with an Elm client sending the request.
I receive the following message when attempting to test an HTTP request:
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
      Request starting HTTP/1.1 OPTIONS http://localhost:5000/register 0
Microsoft.AspNetCore.Hosting.Internal.WebHost:Information: Request starting HTTP/1.1 OPTIONS http://localhost:5000/register 0
dbug: Microsoft.AspNetCore.StaticFiles.StaticFileMiddleware[1]
      OPTIONS requests are not supported
The service implementation is the following:
let private registrationHandler =
    fun (context: HttpContext) ->
        async {
            let! data = context.BindJson<RegistrationRequest>()
            match register data with
            | Success profile -> return! json profile context
            | Failure -> return! (setStatusCode 400 >=> json "registration failed") context
        }
I then attempted the following and observed the same result:
let private registrationHandler =
    fun (context: HttpContext) ->
        async {
            return! text "hello world" context
        }
Appendix:
POST >=>
    choose [
        route "/register" >=> registrationHandler
    ]
The source file can be found here.
Elm and CORS
WebAPI enable Cors
Here's a Giraffe sample that shows the code for supporting Cors.
Add package: Microsoft.AspNetCore.Cors
In .fs file add:
open Microsoft.AspNetCore.Cors
Add UseCors e.g.:
let configureApp (app : IApplicationBuilder) =
    app.UseGiraffeErrorHandler errorHandler
    app.UseStaticFiles() |> ignore
    app.UseAuthentication() |> ignore
    app.UseCors(Action<_>(fun (b: Infrastructure.CorsPolicyBuilder) -> b.AllowAnyHeader() |> ignore; b.AllowAnyMethod() |> ignore)) |> ignore
    app.UseGiraffe webApp
In the services configuration, add CORS:
let configureServices (services : IServiceCollection) =
    let sp = services.BuildServiceProvider()
    let env = sp.GetService<IHostingEnvironment>()
    let viewsFolderPath = Path.Combine(env.ContentRootPath, "Views")
    services
        .AddCors()
        .AddAuthentication(authScheme)
        .AddCookie(cookieAuth)
    |> ignore

keeping connection alive to websocket when using ServerWebSocketContainer

I was trying to create a websocket-based application where the server needs to keep the connection with its clients alive using a heartbeat.
I checked the ServerWebSocketContainer.SockJsServiceOptions class for this, but could not work out how to use it. I am using the code from the spring-integration sample:
@Bean
ServerWebSocketContainer serverWebSocketContainer() {
    return new ServerWebSocketContainer("/messages").withSockJs();
}

@Bean
MessageHandler webSocketOutboundAdapter() {
    return new WebSocketOutboundMessageHandler(serverWebSocketContainer());
}

@Bean(name = "webSocketFlow.input")
MessageChannel requestChannel() {
    return new DirectChannel();
}

@Bean
IntegrationFlow webSocketFlow() {
    return f -> {
        Function<Message, Object> splitter = m -> serverWebSocketContainer()
                .getSessions()
                .keySet()
                .stream()
                .map(s -> MessageBuilder.fromMessage(m)
                        .setHeader(SimpMessageHeaderAccessor.SESSION_ID_HEADER, s)
                        .build())
                .collect(Collectors.toList());
        f.split(Message.class, splitter)
                .channel(c -> c.executor(Executors.newCachedThreadPool()))
                .handle(webSocketOutboundAdapter());
    };
}

@RequestMapping("/hi/{name}")
public void send(@PathVariable String name) {
    requestChannel().send(MessageBuilder.withPayload(name).build());
}
Please let me know how I can set the heartbeat options to ensure the connection is kept alive unless the client de-registers itself.
Thanks
Actually you got it right, but missed a bit of convenience :-).
You can configure it like this:
@Bean
ServerWebSocketContainer serverWebSocketContainer() {
    return new ServerWebSocketContainer("/messages")
            .withSockJs(new ServerWebSocketContainer.SockJsServiceOptions()
                    .setHeartbeatTime(60_000));
}
Although it isn't clear to me why you need to configure it at all, because of this:
/**
 * The amount of time in milliseconds when the server has not sent any
 * messages and after which the server should send a heartbeat frame to the
 * client in order to keep the connection from breaking.
 * <p>The default value is 25,000 (25 seconds).
 */
public SockJsServiceRegistration setHeartbeatTime(long heartbeatTime) {
    this.heartbeatTime = heartbeatTime;
    return this;
}
UPDATE
In the Spring Integration Samples we have something like the stomp-chat application.
I have done something like this there in the stomp-server.xml:
<int-websocket:server-container id="serverWebSocketContainer" path="/chat">
    <int-websocket:sockjs heartbeat-time="10000"/>
</int-websocket:server-container>
Added this to the application.properties:
logging.level.org.springframework.web.socket.sockjs.transport.session=trace
And this to the index.html:
sock.onheartbeat = function() {
    console.log('heartbeat');
};
After connecting the client I see this in the server log:
2015-10-13 19:03:06.574 TRACE 7960 --- [ SockJS-3] s.w.s.s.t.s.WebSocketServerSockJsSession : Writing SockJsFrame content='h'
2015-10-13 19:03:06.574 TRACE 7960 --- [ SockJS-3] s.w.s.s.t.s.WebSocketServerSockJsSession : Cancelling heartbeat in session sogfe2dn
2015-10-13 19:03:06.574 TRACE 7960 --- [ SockJS-3] s.w.s.s.t.s.WebSocketServerSockJsSession : Scheduled heartbeat in session sogfe2dn
2015-10-13 19:03:16.576 TRACE 7960 --- [ SockJS-8] s.w.s.s.t.s.WebSocketServerSockJsSession : Preparing to write SockJsFrame content='h'
2015-10-13 19:03:16.576 TRACE 7960 --- [ SockJS-8] s.w.s.s.t.s.WebSocketServerSockJsSession : Writing SockJsFrame content='h'
2015-10-13 19:03:16.576 TRACE 7960 --- [ SockJS-8] s.w.s.s.t.s.WebSocketServerSockJsSession : Cancelling heartbeat in session sogfe2dn
2015-10-13 19:03:16.576 TRACE 7960 --- [ SockJS-8] s.w.s.s.t.s.WebSocketServerSockJsSession : Scheduled heartbeat in session sogfe2dn
In the browser's console I then see the 'heartbeat' messages being logged.
So, it looks like the heartbeat feature works well...
