How to get headers from one route to another route - Camel Java DSL - Spring Boot

I have a Camel REST endpoint with two params. When I send a request it activates the first route ("direct:amq"), where I get a message from ActiveMQ.
The headers are fine there, but this route then calls another route ("direct:post"), and in that route the headers are missing.
I want to get the urlToPost header from the first route into the second.
rest("/getFromActiveMq").produces("application/json")
.get()
.param()
.name("urlToPost")
.type(RestParamType.query)
.dataType("String")
.endParam()
.param()
.name("getactivemq")
.type(RestParamType.query)
.dataType("String")
.endParam()
.to("direct:amq");
from("direct:amq").streamCaching()
.startupOrder(2)
.log("My activemq is " + "${in.header.getactivemq}")
.log("My urlToPost is " + "${in.header.urlToPost}")
.setHeader("myHeader")
.header("${in.header.urlToPost}")
.log("My urlToPost Changed header is " + "${header.myHeader}")
.process(exchange -> {
String header = exchange.getIn().getHeader("urlToPost", String.class);
System.out.println(header);
exchange.getIn().setHeader("myShittyHeader", header);
Map<String, Object> hdr = exchange.getIn()
.getHeaders();
for (Map.Entry<String, Object> entry : hdr.entrySet()) {
System.out.println(entry.getKey() + "/" + entry.getValue());
}
})
.pollEnrich()
.simple("activemq://${in.header.getactivemq}")
.onCompletion()
.log("My body is : " + "${body}")
.to("direct:post");
from("direct:post").tracing()
.process(exchange -> exchange.getIn()
.setBody(exchange.getIn()
.getBody()))
.convertBodyTo(String.class)
.process(exchange -> {
Map<String, Object> hdr = exchange.getIn()
.getHeaders();
for (Map.Entry<String, Object> entry : hdr.entrySet()) {
System.out.println(entry.getKey() + "/" + entry.getValue());
}
})
.log("My urlToPost BEFORE SETTING HEADERS is " + "${in.header.urlToPost}")
.setHeader("Content-Type", constant("application/json"))
.setHeader("Accept", constant("application/json"))
.setHeader(Exchange.HTTP_METHOD, constant("POST"))
.log("My urlToPost AFTER SETTING HEADERS is " + "${in.header.urlToPost}")
// .log("My HTTP_URI is: " + "${in.header.urlToPost}")
// .to("http4://urlToPost")
// .to("direct:nothing");
.enrich()
.simple("http4://urlToPost");
I found that after:
.pollEnrich()
    .simple("activemq://${in.header.getactivemq}")
the headers are gone.

The pollEnrich merges your current Exchange with another message; in effect it acts as an Aggregator.
If you don't provide an aggregation strategy, Camel by default simply uses the polled message as the result, which is why you lose your headers.
You have to configure an existing aggregation strategy, or implement your own, that preserves the headers of one or both messages during aggregation.
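For illustration, here is a minimal sketch of such a strategy (my own example, assuming Camel 2.x where the interface lives in org.apache.camel.processor.aggregate; in Camel 3 it moved to org.apache.camel.AggregationStrategy). It keeps the original exchange and its headers while taking the body of the polled ActiveMQ message:
import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;
// Keeps the original exchange (and therefore headers such as urlToPost),
// copies the body of the polled message, and merges its headers in as well.
public class KeepHeadersAggregationStrategy implements AggregationStrategy {
    @Override
    public Exchange aggregate(Exchange original, Exchange resource) {
        if (resource == null) {
            // nothing was polled, leave the current exchange untouched
            return original;
        }
        original.getIn().setBody(resource.getIn().getBody());
        original.getIn().getHeaders().putAll(resource.getIn().getHeaders());
        return original;
    }
}
It would then be attached to the pollEnrich, for example with .pollEnrich().simple("activemq://${in.header.getactivemq}").aggregationStrategy(new KeepHeadersAggregationStrategy()) (the exact DSL call may differ slightly between Camel versions), so urlToPost is still present when the message reaches "direct:post".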

Related

KTOR Client and Spring Switchuser

I'm trying to implement a client for spring-security's SwitchUserFilter (server-side). As the client I'm using KTOR (with OkHttp inside).
SwitchUserFilter requires me to log in, then drop the Authorization header and use the Cookie instead. If I send the Authorization header together with the Cookie header, Spring's SecurityContext coming from SwitchUserFilter will be overwritten with my admin user again.
Is there something I can configure in KTOR so that the Authorization header is removed once I have switched the user?
KTOR has to be set up with two things:
SwitchUserFilter will send a redirect (HTTP 302) that we need to ignore; for this an HttpResponseValidator needs to be configured.
Auth needs to be removed, similar to the comment from @Delta_George.
HttpClient(OkHttp) {
    HttpResponseValidator {
        // for 302 don't react - so we can switch user successfully. If we follow, this doesn't work anymore.
        validateResponse { response ->
            val statusCode = response.status.value
            val originCall = response.call
            if (statusCode < 300 || originCall.attributes.contains(ValidateMark)) {
                return@validateResponse
            }
            val exceptionCall = originCall.save().apply {
                attributes.put(ValidateMark, Unit)
            }
            val excResp = exceptionCall.response
            val excRespTxt = excResp.readText()
            when (statusCode) {
                302 -> {} // do nothing on "Found" statuscode
                in 300..399 -> throw RedirectResponseException(excResp, excRespTxt)
                in 400..499 -> throw ClientRequestException(excResp, excRespTxt)
                in 500..599 -> throw ServerResponseException(excResp, excRespTxt)
                else -> throw ResponseException(excResp, excRespTxt)
            }
        }
    }
    ... // other configurations
}
and impersonate(...):
suspend fun impersonate(impersonateWithUser: PersonEntity): Impersonation<PersonEntity> {
    return runCatching {
        val toImpersonate = impersonateWithUser.login.replace(Regex("^\\+"), "%2B")
        client.get<HttpResponse>("$BASE_URL/login/impersonate?username=${toImpersonate}") // with baseauth again
    }.map {
        when (it.status) {
            HttpStatusCode.Found -> {
                client.feature(Auth)!!.providers.removeAll { true }
                Impersonation.ok(impersonateWithUser)
            }
            else -> Impersonation.failure(impersonateWithUser, it)
        }
    }.getOrElse {
        Log.e(TAG, "impersonate: ", it)
        Impersonation.communicationError(impersonateWithUser, it)
    }
}
To end the impersonation, you call the respective endpoint configured in SwitchUserFilter on the server side.

SFTP Adapter is skipping alternate files

I have an SFTP adapter that downloads files from a remote location and transforms them. However, it is skipping alternate files, i.e. if the files on the SFTP host are 1.zip, 2.zip, 3.zip, it only processes 1.zip and 3.zip.
@Bean
@Primary
public IntegrationFlow sftpInboundFlow(){
    ...
    ..
    SftpInboundChannelAdapterSpec messageSourceBuilder =
    ...
    ..
    IntegrationFlowBuilder flowBuilder = IntegrationFlows
            .from(messageSourceBuilder, consumerSpec())
            .log(Level.INFO, m -> "INBOUND: " + m.getPayload() + " HEADERS: " + m.getHeaders());
    return flowBuilder.channel(INBOUND_CHANNEL).handle(new MessageHandler());
    // Works fine if changed to
    // flowBuilder.channel(INBOUND_CHANNEL).get();
    //
}
@Bean
public IntegrationFlow uncompressionfileFlow() {
    UnZipTransformer unZipTransformer = new UnZipTransformer();
    IntegrationFlowBuilder flowBuilder = IntegrationFlows.from(INBOUND_CHANNEL)
            .transform(unZipTransformer)
            .split(new UnZipAbstractMessageSplitter(prop1, prop2))
            .log(Level.INFO, m -> "OUTBOUND: " + m.getPayload() + " HEADERS: " + m.getHeaders())
            .enrichHeaders(h -> h.headerExpression(FileHeaders.ORIGINAL_FILE,
                    "payload.headers['" + FileHeaders.FILENAME + "']"));
    return flowBuilder.channel(OUTBOUND_CHANNEL).get();
}
What you describe is fully explained by the round-robin dispatching strategy of the DirectChannel: https://docs.spring.io/spring-integration/docs/current/reference/html/core.html#channel-implementations-directchannel. According to your config, there are indeed two subscribers to the same INBOUND_CHANNEL:
channel(INBOUND_CHANNEL).handle(new MessageHandler())
from(INBOUND_CHANNEL).transform(unZipTransformer)
I'm not sure what your goal is, but the logic in that snippet is more complicated than just polling files from SFTP and processing them.
You should revise what you have so far; this is not an SFTP Inbound Channel Adapter problem, but rather two competing consumers on the same direct channel.
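If both subscribers really are meant to see every file, one option (my own sketch, not part of the original configuration) is to declare INBOUND_CHANNEL as a publish-subscribe channel, so messages are broadcast to all subscribers instead of being dispatched round-robin:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.PublishSubscribeChannel;
import org.springframework.messaging.MessageChannel;
@Configuration
public class ChannelConfig {
    // Same channel name constant the two flows use (assumed here to be a plain String constant).
    static final String INBOUND_CHANNEL = "inboundChannel";
    // PublishSubscribeChannel broadcasts each message to every subscriber,
    // so both the MessageHandler flow and the unzip flow receive every file.
    @Bean(name = INBOUND_CHANNEL)
    public MessageChannel inboundChannel() {
        return new PublishSubscribeChannel();
    }
}
If only the unzip flow is supposed to process the files, the simpler fix is to drop the .handle(new MessageHandler()) subscription from sftpInboundFlow() and end that flow with .channel(INBOUND_CHANNEL).get().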

Spring Data Redis Streams, cannot figure out what is happening to my unacknowledged messages?

I am using the following code to consume a Redis Stream with a Spring Data Redis consumer group. Even though I have commented out the acknowledge command, my messages are not re-read after a server restart.
I would expect that if I don't acknowledge a message, it should be re-read when the server gets killed and restarted. What am I missing here?
@Bean
@Autowired
public StreamMessageListenerContainer eventStreamPersistenceListenerContainerTwo(RedisConnectionFactory streamRedisConnectionFactory, RedisTemplate streamRedisTemplate) {
    StreamMessageListenerContainer.StreamMessageListenerContainerOptions<String, MapRecord<String, String, String>> containerOptions = StreamMessageListenerContainer.StreamMessageListenerContainerOptions
            .builder().pollTimeout(Duration.ofMillis(100)).build();
    StreamMessageListenerContainer<String, MapRecord<String, String, String>> container = StreamMessageListenerContainer.create(streamRedisConnectionFactory,
            containerOptions);
    container.receive(Consumer.from("my-group", "my-consumer"),
            StreamOffset.create("event-stream", ReadOffset.latest()),
            message -> {
                System.out.println("MessageId: " + message.getId());
                System.out.println("Stream: " + message.getStream());
                System.out.println("Body: " + message.getValue());
                //streamRedisTemplate.opsForStream().acknowledge("my-group", message);
            });
    container.start();
    return container;
}
After reading the Redis documentation on how streams work, I came up with the following to automatically process any unacknowledged but previously delivered messages for the consumer:
// Check for any previously unacknowledged messages that were delivered to this consumer.
log.info("STREAM - Checking for previously unacknowledged messages for " + this.getClass().getSimpleName() + " event stream listener.");
String offset = "0";
while ((offset = processUnacknowledgedMessage(offset)) != null) {
    log.info("STREAM - Finished processing one unacknowledged message for " + this.getClass().getSimpleName() + " event stream listener: " + offset);
}
log.info("STREAM - Finished checking for previously unacknowledged messages for " + this.getClass().getSimpleName() + " event stream listener.");
And the method that processes the messages:
/**
 * Processes and acknowledges the next previously delivered message, beginning
 * at the given message id offset.
 *
 * @param offset The last read message id offset.
 * @return The id of the message that was just processed, or null if there are no more messages.
 */
public String processUnacknowledgedMessage(String offset) {
    List<MapRecord> messages = streamRedisTemplate.opsForStream().read(Consumer.from(groupName(), consumerName()),
            StreamReadOptions.empty().noack().count(1),
            StreamOffset.create(streamKey(), ReadOffset.from(offset)));
    String lastMessageId = null;
    for (MapRecord message : messages) {
        if (log.isDebugEnabled()) log.debug(String.format("STREAM - Processing event(%s) from stream(%s) during startup: %s", message.getId(), message.getStream(), message.getValue()));
        processRecord(message);
        if (log.isDebugEnabled()) log.debug(String.format("STREAM - Finished processing event(%s) from stream(%s) during startup.", message.getId(), message.getStream()));
        streamRedisTemplate.opsForStream().acknowledge(groupName(), message);
        lastMessageId = message.getId().getValue();
    }
    return lastMessageId;
}
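As an extra sanity check around that replay loop, the group's pending-entries list can be inspected directly. A small sketch of my own, assuming a recent Spring Data Redis version where StreamOperations exposes pending(...) (PendingMessagesSummary lives in org.springframework.data.redis.connection.stream), reusing the streamKey()/groupName() helpers from above:
// Logs how many delivered-but-unacknowledged messages the consumer group still holds.
PendingMessagesSummary summary = streamRedisTemplate.opsForStream()
        .pending(streamKey(), groupName());
log.info("STREAM - " + summary.getTotalPendingMessages() + " pending message(s) for group " + summary.getGroupName());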

spring reactive retry with exponential backoff conditionally

Using the Spring reactive WebClient, I consume an API, and in case of a response with 500 status I need to retry with exponential backoff. But in the Mono class I don't see any retryBackoff with a Predicate as input parameter.
This is the kind of method I'm looking for:
public final Mono<T> retryBackoff(Predicate<? super Throwable> retryMatcher, long numRetries, Duration firstBackoff)
Right now my implementation is as follows (I don't have a retry-with-backoff mechanism):
client.sendRequest()
.retry(e -> ((RestClientException) e).getStatus() == 500)
.subscribe();
You might want to have a look at the reactor-extra module in the reactor-addons project. In Maven you can do:
<dependency>
    <groupId>io.projectreactor.addons</groupId>
    <artifactId>reactor-extra</artifactId>
    <version>3.2.3.RELEASE</version>
</dependency>
And then use it like this:
client.post()
    .syncBody("test")
    .retrieve()
    .bodyToMono(String.class)
    .retryWhen(Retry.onlyIf(ctx -> ctx.exception() instanceof RestClientException)
        .exponentialBackoff(firstBackoff, maxBackoff)
        .retryMax(maxRetries))
Retry.onlyIf is now deprecated/removed.
If anyone is interested in the up-to-date solution:
client.post()
    .syncBody("test")
    .retrieve()
    .bodyToMono(String.class)
    .retryWhen(Retry.backoff(maxRetries, minBackoff)
        // the filter receives the Throwable itself, not a retry context
        .filter(throwable -> throwable instanceof RestClientException
            && ((RestClientException) throwable).getStatus() == 500))
It's worth mentioning that retryWhen wraps the source exception in a RetryExhaustedException once the retries are exhausted. If you want to 'restore' the source exception, you can use the reactor.core.Exceptions util:
.onErrorResume(throwable -> {
    if (Exceptions.isRetryExhausted(throwable)) {
        throwable = throwable.getCause();
    }
    return Mono.error(throwable);
})
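Putting those pieces together, here is a self-contained sketch (class, URL and parameter values are mine, not from the answers above) that retries only 500 responses with exponential backoff and rethrows the original failure once the retries are exhausted, which makes the unwrapping above unnecessary:
import java.time.Duration;
import org.springframework.web.reactive.function.client.WebClient;
import org.springframework.web.reactive.function.client.WebClientResponseException;
import reactor.core.publisher.Mono;
import reactor.util.retry.Retry;
public class RetryingClient {
    private final WebClient client = WebClient.create("http://localhost:8080");
    public Mono<String> send() {
        return client.post()
                .bodyValue("test")
                .retrieve()
                .bodyToMono(String.class)
                .retryWhen(Retry.backoff(3, Duration.ofMillis(500))
                        // only retry responses that failed with HTTP 500
                        .filter(t -> t instanceof WebClientResponseException
                                && ((WebClientResponseException) t).getStatusCode().value() == 500)
                        // propagate the last real failure instead of a "retries exhausted" wrapper
                        .onRetryExhaustedThrow((spec, signal) -> signal.failure()));
    }
}
With the filter in place, non-500 errors fail immediately, and onRetryExhaustedThrow surfaces the original exception, so the onErrorResume unwrapping is no longer needed.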
I'm not sure what Spring version you are using; in 2.1.4 I have this:
client.post()
.syncBody("test")
.retrieve()
.bodyToMono(String.class)
.retryBackoff(numretries, firstBackoff, maxBackoff, jitterFactor);
... so that's exactly what you want, right?
I'm currently trying it with Kotlin Coroutines + Spring WebFlux:
It seems the following is not working:
suspend fun ClientResponse.asResponse(): ServerResponse =
    status(statusCode())
        .headers { headerConsumer -> headerConsumer.addAll(headers().asHttpHeaders()) }
        .body(bodyToMono(DataBuffer::class.java), DataBuffer::class.java)
        .retryWhen(
            Retry.onlyIf { ctx: RetryContext<Throwable> -> (ctx.exception() as? WebClientResponseException)?.statusCode in retryableErrorCodes }
                .exponentialBackoff(ofSeconds(1), ofSeconds(5))
                .retryMax(3)
                .doOnRetry { log.error("Retry for {}", it.exception()) }
        )
        .awaitSingle()
AtomicInteger errorCount = new AtomicInteger();
Flux<String> flux =
        Flux.<String>error(new IllegalStateException("boom"))
                .doOnError(e -> {
                    errorCount.incrementAndGet();
                    System.out.println(e + " at " + LocalTime.now());
                })
                .retryWhen(Retry
                        .backoff(3, Duration.ofMillis(100)).jitter(0d)
                        .doAfterRetry(rs -> System.out.println("retried at " + LocalTime.now() + ", attempt " + rs.totalRetries()))
                        .onRetryExhaustedThrow((spec, rs) -> rs.failure())
                );
We will log the time of errors emitted by the source and count them.
We configure an exponential backoff retry with at most 3 attempts and no jitter.
We also log the time at which the retry happens, and the retry attempt number (starting from 0).
By default, an Exceptions.retryExhausted exception would be thrown, with the last failure() as a cause. Here we customize that to directly emit the cause as onError.

How do you use WebFlux to parse an event stream that does not conform to Server Sent Events?

I am trying to use WebClient to consume the Docker /events endpoint. However, it does not conform to the text/event-stream contract: messages are not separated by two LFs, it just sends one JSON document followed by another.
It also sets the MIME type to application/json rather than text/event-stream.
What I'm considering, but have not implemented yet, is a Node proxy that inserts the required line feeds in between, but I was hoping to avoid that kind of workaround.
Instead of trying to handle a ServerSentEvent, just receive the body as a String, then attempt to parse it as JSON, ignoring the ones that fail (which I presume may happen, though I haven't hit it myself):
@PostConstruct
public void setUpStreamer() {
    final Map<String, List<String>> filters = new HashMap<>();
    filters.put("type", Collections.singletonList("service"));
    WebClient.create(daemonEndpoint)
            .get()
            .uri("/events?filters={filters}",
                    mapper.writeValueAsString(filters))
            .retrieve()
            .bodyToFlux(String.class)
            .flatMap(Mono::justOrEmpty)
            .flatMap(s -> {
                // map() must not return null in Reactor, so parse failures are dropped via Mono.empty() instead
                try {
                    return Mono.just(mapper.readValue(s, Map.class));
                } catch (IOException e) {
                    log.warn("unable to parse {} as JSON", s);
                    return Mono.empty();
                }
            })
            .subscribe(
                    event -> {
                        log.trace("event={}", event);
                        refreshRoutes();
                    },
                    throwable -> log.error("Error on event stream: {}", throwable.getMessage(), throwable),
                    () -> log.warn("event stream completed")
            );
}
