Spring Cloud Gateway - configuration parameter IO_SELECT_COUNT

I'm using Spring Cloud Gateway and I have a doubt about the config parameter IO_SELECT_COUNT:
/**
* Default selector thread count, fallback to -1 (no selector thread)
*/
public static final String IO_SELECT_COUNT = "reactor.netty.ioSelectCount";
Is this the thread count of the Netty boss event loop group? Why is it -1 by default?
And is there any suggestion for how to configure this IO_SELECT_COUNT parameter?
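For context, a minimal sketch of how the value can be supplied: Reactor Netty reads it as a JVM system property, so it has to be set before the event loops are created (the value 1 and the GatewayApplication class here are purely illustrative):
public static void main(String[] args) {
    // Must run before any Reactor Netty class initializes its loop resources;
    // "1" is an illustrative value, not a recommendation.
    System.setProperty("reactor.netty.ioSelectCount", "1");
    SpringApplication.run(GatewayApplication.class, args);
}
The same can be done on the command line with -Dreactor.netty.ioSelectCount=1.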

Related

@KafkaListener per specific header value

I have a @KafkaListener:
@KafkaListener(topicPattern = "SameTopic")
public void onMessage(Message<String> message, Acknowledgment acknowledgment) {
    String eventType = new String((byte[]) message.getHeaders().get("Event-Type"), StandardCharsets.UTF_8);
    switch (eventType) {
        case "create" -> doCreate(message);
        case "update" -> doUpdate(message);
        case "delete" -> doDelete(message);
    }
}
The producer sets a custom header Event-Type with three possible values: create, update, delete. Currently I'm reading this header value from the Message and then invoking the rest of the logic according to the header value.
Is there any way to create three @KafkaListeners where each of them consumes messages filtered by some criterion - in my case, filtered by the Event-Type header value?
@KafkaListener(topicPattern = "SameTopic", ...)
public void onCreate(Message<String> message, Acknowledgment acknowledgment) {
    doCreate(message);
}

@KafkaListener(topicPattern = "SameTopic", ...)
public void onUpdate(Message<String> message, Acknowledgment acknowledgment) {
    doUpdate(message);
}

@KafkaListener(topicPattern = "SameTopic", ...)
public void onDelete(Message<String> message, Acknowledgment acknowledgment) {
    doDelete(message);
}
I'm aware of RecordFilterStrategy, but couldn't get any help from it.
Consider mapping those event types to partitions of the topic.
That way you definitely can have a different @KafkaListener with a specific partition assigned:
/**
 * The topicPartitions for this listener when using manual topic/partition
 * assignment.
 * <p>
 * Mutually exclusive with {@link #topicPattern()} and {@link #topics()}.
 * @return the topic names or expressions (SpEL) to listen to.
 */
TopicPartition[] topicPartitions() default {};
The doc is here: https://docs.spring.io/spring-kafka/docs/current/reference/html/#manual-assignment
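A minimal sketch of what that could look like, assuming the producer routes create/update/delete records to partitions 0/1/2 (that partition mapping is an assumption you have to enforce on the producer side, not something Kafka does for you; @TopicPartition is org.springframework.kafka.annotation.TopicPartition):
// each listener is manually assigned one partition of the same topic
@KafkaListener(topicPartitions = @TopicPartition(topic = "SameTopic", partitions = "0"))
public void onCreate(Message<String> message, Acknowledgment acknowledgment) {
    doCreate(message);
}

@KafkaListener(topicPartitions = @TopicPartition(topic = "SameTopic", partitions = "1"))
public void onUpdate(Message<String> message, Acknowledgment acknowledgment) {
    doUpdate(message);
}

// ...and likewise partitions = "2" for onDelete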
It's probably not going to work well with several instances of your app, since with manual assignment there is no consumer group involved. You may consider refining the logic to use 3 different topics. Or, if that is not possible from the producer side, use Kafka Streams to split() the original topic into other topics according to the record key.
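A rough sketch of that Kafka Streams option, assuming the record key carries the event type (the target topic names are illustrative):
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Branched;
import org.apache.kafka.streams.kstream.KStream;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> source = builder.stream("SameTopic");
// route each record to a per-event-type topic based on its key
source.split()
    .branch((key, value) -> "create".equals(key), Branched.withConsumer(ks -> ks.to("create-topic")))
    .branch((key, value) -> "update".equals(key), Branched.withConsumer(ks -> ks.to("update-topic")))
    .defaultBranch(Branched.withConsumer(ks -> ks.to("delete-topic")));
Each of the three @KafkaListeners can then consume its own topic with a regular consumer group.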

Use Micrometer timer with manual counter increment

I have a KafkaListener which receives messages containing a list of objects.
@KafkaListener(
    id = "dataConsumer",
    topics = "data.topic",
    groupId = "${spring.kafka.consumer.group-id}",
    containerFactory = "dataKafkaListenerContainerFactory")
public void consumeData(DataContainer message) {
    List<Data> data = message.getList();
    ...
}
The list of objects can vary in size, so per-message metrics may not be useful.
I can get the timer metrics for this method by going to /actuator/metrics/spring.kafka.listener?tag=name:dataConsumer-0, but the count is per message, not per element in the message's list. How can I switch this metric, or create a similar one, to track the time and count of the data elements in the message?
You can register your own Meter with the MeterRegistry - refer to the Micrometer Documentation.
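A minimal sketch, assuming a MeterRegistry is injected into the listener bean (the meter names data.listener.batch and data.listener.elements are illustrative):
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;

@Autowired
private MeterRegistry meterRegistry;

public void consumeData(DataContainer message) {
    List<Data> data = message.getList();
    Timer.Sample sample = Timer.start(meterRegistry); // times the whole batch
    try {
        process(data); // your existing logic
    } finally {
        sample.stop(meterRegistry.timer("data.listener.batch"));
        // count elements rather than messages
        meterRegistry.counter("data.listener.elements").increment(data.size());
    }
}
Both meters then show up under /actuator/metrics/data.listener.batch and /actuator/metrics/data.listener.elements.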

Java 8 parallel stream with ForkJoinPool and ThreadLocal

We are using a Java 8 parallel stream to process a task, and we are submitting the task through ForkJoinPool#submit. We are not using the JVM-wide ForkJoinPool.commonPool; instead we are creating our own custom pool to specify the parallelism, and storing it as a static variable.
We have a validation framework where we subject a list of tables to a list of Validators, and we submit this job through the custom ForkJoinPool as follows:
static ForkJoinPool forkJoinPool = new ForkJoinPool(4);

List<Table> tables = tableDAO.findAll();
ModelValidator<Table, ValidationResult> validator = ValidatorFactory
        .getInstance().getTableValidator();
List<ValidationResult> result = forkJoinPool.submit(
        () -> tables.stream()
                .parallel()
                .map(validator)
                .filter(vr -> vr.getValidationMessages().size() > 0)
                .collect(Collectors.toList())).get();
The problem we are having is that, in the downstream components, the individual validators, which run on separate threads from our static ForkJoinPool, rely on a tenant_id that is different for every request and is stored in an InheritableThreadLocal variable. Since the ForkJoinPool is static, its pooled threads inherit the value of the parent thread only when they are first created; they do not see the new tenant_id of the current request, so subsequent executions run with a stale tenant_id.
I tried creating a custom ForkJoinPool, specifying a ForkJoinWorkerThreadFactory in the constructor, and overriding the onStart method to feed in the new tenant_id. But that doesn't work, since onStart is called only once at thread creation time, not on each execution.
It seems we need something like ThreadPoolExecutor#beforeExecute, which is not available on ForkJoinPool. So what alternative do we have if we want to pass the current thread-local value to the statically pooled threads?
One workaround would be to create a ForkJoinPool per request rather than making it static, but we would like to avoid that because of the cost of thread creation.
What alternatives do we have?
I found the following solution, which works without changing any underlying code. Basically, the map method takes a functional interface, which I represent as a lambda expression. The lambda adds a preExecution hook that sets the new tenantId in the current ThreadLocal, and cleans it up in postExecution:
forkJoinPool.submit(() -> tables.stream()
        .parallel()
        .map(item -> {
            preExecution(tenantId); // set the tenant id for this worker thread
            try {
                return validator.apply(item);
            } finally {
                postExecution(); // clean up the thread local
            }
        })
        .filter(validationResult ->
                validationResult.getValidationMessages().size() > 0)
        .collect(Collectors.toList())).get();
The best option in my view would be to get rid of the thread local and pass the tenant id as an argument instead. I understand that this could be a massive undertaking, though. Another option would be to use a wrapper.
Assuming that your validator has a validate method, you could do something like:
public class WrappingModelValidator implements ModelValidator<Table, ValidationResult> {
    private final ModelValidator<Table, ValidationResult> v;
    private final String tenantId;

    public WrappingModelValidator(ModelValidator<Table, ValidationResult> v, String tenantId) {
        this.v = v;
        this.tenantId = tenantId;
    }

    public ValidationResult validate(Table t) {
        String oldValue = YourThreadLocal.get();
        YourThreadLocal.set(tenantId);
        try {
            return v.validate(t);
        } finally {
            YourThreadLocal.set(oldValue);
        }
    }
}
Then you simply wrap your old validator and it will set the thread local on entry and restore it when done.
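Usage might then look like the following sketch (reusing the stream pipeline from the question; wrapped::validate is a method reference because the wrapper exposes validate rather than apply):
ModelValidator<Table, ValidationResult> base = ValidatorFactory.getInstance().getTableValidator();
WrappingModelValidator wrapped = new WrappingModelValidator(base, tenantId);

List<ValidationResult> result = forkJoinPool.submit(
        () -> tables.stream()
                .parallel()
                .map(wrapped::validate) // each worker sets/restores the tenant id itself
                .filter(vr -> !vr.getValidationMessages().isEmpty())
                .collect(Collectors.toList())).get();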

What is a good way to register adjacent HTTP requests with a Spring Integration flow

What is a good way to register adjacent HTTP requests with a Spring Integration flow?
My application is:
For every customer (each has its own flow, whose start is scheduled by the poller):
GET operation 1 in the source application and the result is JSON_1
POST JSON_1 to the remote system and the result is JSON_1B
POST JSON_1B to the source application and the result is JSON_1C
GET operation 2 in the source application and the result is JSON_2
POST JSON_2 to the remote system and the result is JSON_2B
POST JSON_2B to the source application and the result is JSON_2C
...
GET operation n in the source application and the result is JSON_N
POST JSON_N to the remote system and the result is JSON_NB
POST JSON_NB to the source application and the result is JSON_NC
The operations 1, 2, ..., n must run in this order.
Here is my example program (for simplicity, all the REST calls go to the same class):
@Configuration
public class ApplicationConfiguration {

    @Autowired
    private IntegrationFlowContext flowContext;

    @Bean
    public MethodInvokingMessageSource integerMessageSource() {
        final MethodInvokingMessageSource source = new MethodInvokingMessageSource();
        source.setObject(new AtomicInteger());
        source.setMethodName("getAndIncrement");
        return source;
    }

    @PostConstruct
    public void createAndRegisterFlows() {
        IntegrationFlowBuilder integrationFlowBuilder = createFlowBuilder();
        for (int i = 1; i <= 2; i++) {
            integrationFlowBuilder = flowBuilder(integrationFlowBuilder, i);
        }
        integrationFlowBuilder.handle(CharacterStreamWritingMessageHandler.stdout());
        flowContext.registration(integrationFlowBuilder.get()).register();
    }

    private IntegrationFlowBuilder createFlowBuilder() {
        return IntegrationFlows.from(this.integerMessageSource(), c -> c.poller(Pollers.fixedRate(5000)));
    }

    private IntegrationFlowBuilder flowBuilder(final IntegrationFlowBuilder integrationFlowBuilder, final int number) {
        return integrationFlowBuilder
                .handle(Http.outboundGateway("http://localhost:8055/greeting" + number).httpMethod(HttpMethod.GET)
                        .expectedResponseType(String.class))
                .channel("getReceive" + number)
                .handle(Http.outboundGateway("http://localhost:8055/greeting" + number).httpMethod(HttpMethod.POST)
                        .expectedResponseType(String.class))
                .channel("postReceive" + number)
                .handle(Http.outboundGateway("http://localhost:8055/greeting-final" + number)
                        .httpMethod(HttpMethod.POST).expectedResponseType(String.class))
                .channel("postReceiveFinal" + number);
    }
}
This program runs the integration flow:
GET http://localhost:8055/greeting1
POST http://localhost:8055/greeting1 (previous result as an input)
POST http://localhost:8055/greeting-final1 (previous result as an input)
GET http://localhost:8055/greeting2
POST http://localhost:8055/greeting2 (previous result as an input)
POST http://localhost:8055/greeting-final2 (previous result as an input)
This is working as expected. But I'm wondering whether this is a good way to do it, because the response from the call POST http://localhost:8055/greeting-final1 is not used in the call GET http://localhost:8055/greeting2. I only want these calls to happen in this order.
Actually you have everything you need with that loop creating the sub-flow calls to the REST services. The only thing you were missing is that the payload resulting from greeting-final1 is published with the message to .channel("postReceiveFinal" + number). The second iteration subscribes the greeting2 call to that channel, so the payload is available for processing there. Not sure how to rework your flowBuilder() method, but you just need to use the payload from the message for whatever your requirements are; e.g. you can use:
/**
 * Specify an {@link Expression} to evaluate a value for the uri template variable.
 * @param variable the uri template variable.
 * @param expression the expression to evaluate a value for the uri template variable.
 * @return the current Spec.
 * @see AbstractHttpRequestExecutingMessageHandler#setUriVariableExpressions(Map)
 * @see ValueExpression
 * @see org.springframework.expression.common.LiteralExpression
 */
public S uriVariable(String variable, Expression expression) {
to inject the payload into a request param, since it is just an HttpMethod.GET:
handle(Http.outboundGateway("http://localhost:8055/greeting2?param1={param1}")
        .httpMethod(HttpMethod.GET)
        .uriVariable("param1", "payload")
        .expectedResponseType(String.class))

Can a single Spring KafkaConsumer listener listen to multiple topics?

Does anyone know if a single listener can listen to multiple topics, like below? I know just "topic1" works; what if I want to add additional topics? Can you please show an example for both cases below? Thanks for the help!
@KafkaListener(topics = "topic1,topic2")
public void listen(ConsumerRecord<?, ?> record, Acknowledgment ack) {
    System.out.println(record);
}
or
ContainerProperties containerProps = new ContainerProperties(new TopicPartitionInitialOffset("topic1, topic2", 0));
Yes, just follow the @KafkaListener JavaDocs:
/**
 * The topics for this listener.
 * The entries can be 'topic name', 'property-placeholder keys' or 'expressions'.
 * Expression must be resolved to the topic name.
 * Mutually exclusive with {@link #topicPattern()} and {@link #topicPartitions()}.
 * @return the topic names or expressions (SpEL) to listen to.
 */
String[] topics() default {};

/**
 * The topic pattern for this listener.
 * The entries can be 'topic name', 'property-placeholder keys' or 'expressions'.
 * Expression must be resolved to the topic pattern.
 * Mutually exclusive with {@link #topics()} and {@link #topicPartitions()}.
 * @return the topic pattern or expression (SpEL).
 */
String topicPattern() default "";

/**
 * The topicPartitions for this listener.
 * Mutually exclusive with {@link #topicPattern()} and {@link #topics()}.
 * @return the topic names or expressions (SpEL) to listen to.
 */
TopicPartition[] topicPartitions() default {};
So, your use case should look like:
@KafkaListener(topics = {"topic1", "topic2"})
If you have to fetch multiple topics from the application.properties file:
@KafkaListener(topics = { "${spring.kafka.topic1}", "${spring.kafka.topic2}" })
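If the topics instead arrive as a single comma-separated property, a SpEL expression can split it into a String[]; a small sketch (the property name app.kafka.topics is illustrative):
# application.properties
app.kafka.topics=topic1,topic2

@KafkaListener(topics = "#{'${app.kafka.topics}'.split(',')}")
public void listen(ConsumerRecord<?, ?> record, Acknowledgment ack) {
    System.out.println(record);
}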
