Spring Boot Undertow add RequestLimitingHandler to DeploymentInfo

I am using Spring Boot with Undertow and trying to implement some limits on the number of requests Undertow will accept so as not to become overloaded under stress.
I've seen the answer to the question at Spring Boot Undertow add both blocking handler and NIO handler in the same application, and it appears promising, but I'm not clear what HttpHandler should be passed as the argument to the RequestLimitingHandler constructor.
Is there an easy way to add a RequestLimitingHandler to the UndertowEmbeddedServletContainerFactory bean, perhaps using the addDeploymentInfoCustomizers method?
Alternatively, looking deeper into the XNIO code on which Undertow is based, there appears to be an option to set Options.WORKER_TASK_LIMIT. On further investigation, however, it looks like the XnioWorker class ignores this setting after the 3.0.10.GA release and simply sets taskQueue to an unbounded LinkedBlockingQueue. Am I mistaken, or could this also be an option?

Answering my own question in case it helps others in the future. The solution is to create a new Undertow HandlerWrapper and instantiate the RequestLimitingHandler inside its wrap() method, like so:

@Bean
public UndertowEmbeddedServletContainerFactory embeddedServletContainerFactory(RootHandler rootHandler) {
    UndertowEmbeddedServletContainerFactory factory = new UndertowEmbeddedServletContainerFactory();
    factory.addDeploymentInfoCustomizers(deploymentInfo ->
            deploymentInfo.addInitialHandlerChainWrapper(new HandlerWrapper() {
                @Override
                public HttpHandler wrap(HttpHandler handler) {
                    return new RequestLimitingHandler(maxConcurrentRequests, queueSize, handler);
                }
            }));
    return factory;
}
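For readers unfamiliar with RequestLimitingHandler, its semantics can be sketched in plain Java (a simplified illustration of the concept, not Undertow's actual implementation): up to maxConcurrentRequests run concurrently, up to queueSize wait their turn, and anything beyond that is rejected.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Semaphore;

// Simplified illustration of concurrency limiting with an overflow queue.
// Up to maxConcurrent requests run at once, up to queueSize wait, and the
// rest are rejected instead of piling up without bound.
class SimpleRequestLimiter {
    private final Semaphore permits;
    private final BlockingQueue<Runnable> queue;

    SimpleRequestLimiter(int maxConcurrent, int queueSize) {
        this.permits = new Semaphore(maxConcurrent);
        this.queue = new ArrayBlockingQueue<>(queueSize);
    }

    /** Returns true if the request was accepted (run or queued), false if rejected. */
    boolean submit(Runnable request) {
        if (permits.tryAcquire()) {
            try {
                request.run();
            } finally {
                permits.release();
                Runnable next = queue.poll();
                if (next != null) {
                    submit(next); // drain one queued request now that a permit is free
                }
            }
            return true;
        }
        return queue.offer(request); // queue full -> reject
    }
}
```

The point of the bounded queue is exactly the behavior missing from the unbounded LinkedBlockingQueue mentioned in the question: excess load gets rejected early instead of accumulating.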

Related

traceId is lost in a coroutine?

I am upgrading a Kotlin/WebFlux application to Spring Boot 3.
Now I have noticed that our logs are missing traceIds.
My suspicion is that this is due to coroutines, since the logs of the controller contain a traceId while the logs of the service do not. Both the controller method and the service method have the suspend keyword.
@PostMapping("/upload-file")
@Observed
suspend fun uploadFile(
    serverWebExchange: ServerWebExchange,
): ResponseEntity<MessageResponse> {
    logger.debug("uploadFile")
    importService.handle(getMultipartDataPart("file", serverWebExchange))
    return ResponseEntity(MessageResponse("Success."), OK)
}

@Service
class ImportService() {
    suspend fun handle(fileAsByteArray: ByteArray) {
        log.debug("Import file.")
        ...
    }
}
I followed the documentation and added the dependencies

implementation("org.springframework.boot:spring-boot-starter-actuator")
implementation("io.micrometer:micrometer-tracing-bridge-brave")

and the bean

@Bean
fun observedAspect(observationRegistry: ObservationRegistry?): ObservedAspect? {
    return ObservedAspect(observationRegistry)
}
and the annotation on the controller method

@GetMapping("/keys/{key}")
@Observed
suspend fun isKeyInKeyStore(
Another interesting observation: I don't see the problem on my local machine, only on our dev K8s cluster. Could a different JVM (OpenJDK vs. AWS Corretto) cause this?
I am fairly sure that the context is lost, but I am not sure how best to deal with it, since I am pretty new to coroutines and WebFlux.
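One thing worth checking (an assumption about this setup, not something verifiable from the snippets above): since Spring Boot 3, the tracing context does not automatically survive Reactor/coroutine thread switches. A commonly suggested fix is to enable Micrometer's automatic context propagation, which requires io.micrometer:context-propagation and, for coroutines, org.jetbrains.kotlinx:kotlinx-coroutines-reactor on the classpath. A minimal sketch (the class name TracingConfig is illustrative):

```java
import jakarta.annotation.PostConstruct;
import org.springframework.context.annotation.Configuration;
import reactor.core.publisher.Hooks;

// Hypothetical configuration class: turns on Reactor's automatic context
// propagation so the Micrometer tracing context (and therefore the MDC
// traceId) follows execution across thread boundaries.
@Configuration
public class TracingConfig {

    @PostConstruct
    void enableContextPropagation() {
        Hooks.enableAutomaticContextPropagation();
    }
}
```

This would also be consistent with the symptom that behavior differs between environments, since whether a coroutine actually switches threads depends on scheduling.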

Can I use Apache Camel in an AWS Lambda?

Apache Camel has a number of features that make event processing elegant and easy to code, so it would be useful to be able to exploit this in an AWS Lambda.
Of course not all features are appropriate, especially anything requiring a long-lived process.
Managing persistent state, for example idempotent repositories and throttling, would also need thinking about.
But it would be really useful in simple cases.
It turns out that this is simple using Red Hat's Quarkus framework.
I've made a simple example: https://github.com/jcable/SampleCamelLambda
The Camel Route is trivial:
from("direct:input").to("log:input")
    .process(new Processor() {
        public void process(Exchange exchange) throws Exception {
            InputObject input = exchange.getIn().getBody(InputObject.class);
            String result = input.getGreeting() + " " + input.getName();
            OutputObject out = new OutputObject();
            out.setResult(result);
            out.setRequestId("aws-request-1");
            exchange.getIn().setBody(out);
        }
    });
Adapting the route to the Lambda makes use of a Quarkus RequestHandler.
public class Lambda implements RequestHandler<InputObject, OutputObject> {

    @Inject
    CamelContext camelContext;

    @Override
    public OutputObject handleRequest(InputObject input, Context context) {
        return camelContext.createProducerTemplate().requestBody("direct:input", input, OutputObject.class);
    }
}
CDI is used to inject the CamelContext into the request handler, and the camelContext object is then used to create a ProducerTemplate, which invokes the Camel route.
The Maven project for the example is derived from the Quarkus lambda example with Apache Camel dependencies from the Camel Quarkus examples.

How to correctly instantiate RestTemplate without leaking resources

Please, I have the following bean definitions:
@Bean
public RestTemplate produceRestTemplate(ClientHttpRequestFactory requestFactory) {
    RestTemplate restTemplate = new RestTemplate(requestFactory);
    restTemplate.setErrorHandler(restTemplateErrorHandler);
    return restTemplate;
}

@Bean
public ClientHttpRequestFactory createRequestFactory() {
    // note: connectionManager and config are built here but never wired into httpClient
    PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();
    connectionManager.setMaxTotal(maxTotalConn);
    connectionManager.setDefaultMaxPerRoute(maxPerChannel);
    RequestConfig config = RequestConfig.custom().setConnectTimeout(100000).build();
    CloseableHttpClient httpClient = HttpClients.createDefault();
    return new HttpComponentsClientHttpRequestFactory(httpClient);
}
The code works well, but Fortify flags it as potentially problematic:

"The function createRequestFactory() sometimes fails to release a socket allocated by createDefault() on line 141."

Does anyone have ideas on how to do this correctly without Fortify raising alarms?
Thanks in advance.
I am pretty sure that you don't need to do anything. This looks like a Fortify issue: the tool may simply not be updated for this usage scenario. There is usually a mechanism for recording exceptions when working with code analyzers; these tools are not always correct.
A Bit of Discussion
Imagine you were using CloseableHttpClient in a scenario with no @Bean or HttpComponentsClientHttpRequestFactory. Then I would say Fortify is correct, because that is the very intention of using a java.io.Closeable.
Spring beans, however, are usually singletons, with the intention of instance reuse, so Fortify should recognize that you are not creating multiple instances, and that close() on the AutoCloseable will be called when the factory is destroyed at shutdown.
If you look at the code of org.springframework.http.client.HttpComponentsClientHttpRequestFactory, this is there:
/**
 * Shutdown hook that closes the underlying
 * {@link org.apache.http.conn.HttpClientConnectionManager ClientConnectionManager}'s
 * connection pool, if any.
 */
@Override
public void destroy() throws Exception {
    if (this.httpClient instanceof Closeable) {
        ((Closeable) this.httpClient).close();
    }
}
Fortify is looking at the code in isolation rather than in an integrated way, which is why it flags it.
Check these two points to resolve the problem:
If you never call the httpClient.close() method, you can eventually run out of sockets.
If your code calls this method automatically somewhere, there is no vulnerability and no problem.
Either way, this could be a false positive, depending on the versions of Java and the libraries you use.
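If you want the shutdown path to be explicit enough that a static analyzer can follow it, one option is to make the client itself a Spring-managed bean with an explicit destroy method, and hand it to the factory. This is a sketch under the assumption of the Apache HttpClient 4.x API used above; the pool sizes are illustrative, and note that it also actually wires the pooling connection manager into the client, which the original code did not:

```java
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.springframework.context.annotation.Bean;
import org.springframework.http.client.ClientHttpRequestFactory;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;

// Sketch: the client is its own bean, and destroyMethod = "close" makes the
// release of pooled sockets visible in the bean definition itself, rather
// than hidden inside the factory's destroy() implementation.
@Bean(destroyMethod = "close")
public CloseableHttpClient pooledHttpClient() {
    PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
    cm.setMaxTotal(100);           // illustrative values
    cm.setDefaultMaxPerRoute(20);
    return HttpClients.custom().setConnectionManager(cm).build();
}

@Bean
public ClientHttpRequestFactory requestFactory(CloseableHttpClient httpClient) {
    return new HttpComponentsClientHttpRequestFactory(httpClient);
}
```

Whether this silences Fortify depends on how it models Spring lifecycles, but it removes the "allocated and never released in this function" shape that triggered the finding.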

How to set a Message Handler programmatically in Spring Cloud AWS SQS?

Maybe someone has an idea for my following problem:
I am currently on a project where I want to use AWS SQS with the Spring Cloud integration. For the receiver part I want to provide an API where a user can register a "message handler" on a queue. The handler is an interface that contains the user's business logic, e.g.

MyAwsSqsReceiver receiver = new MyAwsSqsReceiver();
receiver.register("a-queue-name", new MessageHandler() {
    @Override
    public void handle(String message) {
        // ... business logic for the received message
    }
});

I found examples, e.g. https://codemason.me/2016/03/12/amazon-aws-sqs-with-spring-cloud/, and read the documentation: http://cloud.spring.io/spring-cloud-aws/spring-cloud-aws.html#_sqs_support
But the only way I found there to "connect" processing logic to an incoming message is an annotation on a method, e.g. @SqsListener or @MessageMapping.
These annotations are fixed to a certain queue name, though. So now I am at a loss how to dynamically "connect" my provided MessageHandler (from my API) to the incoming message for the specified queue name.
In the example configuration there is a SimpleMessageListenerContainer, which gets a QueueMessageHandler set, but this QueueMessageHandler does not seem to be the right place to set my handler, or to override its methods and provide my own subclass of QueueMessageHandler.
I already did something like this with the Spring AMQP integration and RabbitMQ, and thought it would be similar here with AWS SQS.
Does anyone have an idea how to accomplish this?
thx + bye,
Ximon
EDIT:
I found that Spring JMS could actually do that, e.g. www.javacodegeeks.com/2016/02/aws-sqs-spring-jms-integration.html. Does anybody know what consequences, good or bad, using the JMS protocol has here?
I am facing the same issue.
I am going an unusual way: I set up an AWS client bean at build time, and then, instead of using the @SqsListener annotation to consume from a specific queue, I use the @Scheduled annotation, which lets me programmatically poll (every 10 seconds in my case) whichever queue I want to consume from.
In my example I iterate over the queues defined in properties and consume from each one.
Client Bean:
@Bean
@Primary
public AmazonSQSAsync awsSqsClient() {
    return AmazonSQSAsyncClientBuilder
            .standard()
            .withRegion(Regions.EU_WEST_1.getName())
            .build();
}
Consumer:
// injected in the constructor
private final AmazonSQSAsync awsSqsClient;

@Scheduled(fixedDelay = 10000)
public void poll() {
    properties.getSqsQueues()
            .forEach(queue -> {
                val receiveMessageRequest = new ReceiveMessageRequest(queue)
                        .withWaitTimeSeconds(10)
                        .withMaxNumberOfMessages(10);
                // reading the messages
                val result = awsSqsClient.receiveMessage(receiveMessageRequest);
                val sqsMessages = result.getMessages();
                log.info("Received message on queue {}: message = {}", queue, sqsMessages.toString());
                // deleting the messages
                sqsMessages.forEach(message -> {
                    val deleteMessageRequest = new DeleteMessageRequest(queue, message.getReceiptHandle());
                    awsSqsClient.deleteMessage(deleteMessageRequest);
                });
            });
}
Just to clarify: in my case I need multiple queues, one per tenant, with each queue URL passed in a property file. In your case you could of course get the queue names from another source, maybe a ThreadLocal holding the queues you have created at runtime.
If you wish, you can also try the JMS approach, where you create message consumers and add a listener to each one you wish (see the AWS JMS documentation).
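The register-a-handler API from the question can be layered on top of a polling loop like the one above with a plain map from queue name to handler. A sketch, where MessageHandler is the hypothetical interface from the question:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical handler interface, matching the API sketched in the question.
interface MessageHandler {
    void handle(String message);
}

// Registry that a scheduled poller can consult: for each received message,
// look up the handler registered for that queue and dispatch to it.
class HandlerRegistry {
    private final Map<String, MessageHandler> handlers = new ConcurrentHashMap<>();

    void register(String queueName, MessageHandler handler) {
        handlers.put(queueName, handler);
    }

    /** Returns true if a handler was registered for the queue and invoked. */
    boolean dispatch(String queueName, String messageBody) {
        MessageHandler handler = handlers.get(queueName);
        if (handler == null) {
            return false;
        }
        handler.handle(messageBody);
        return true;
    }
}
```

Inside the @Scheduled poll loop you would then call registry.dispatch(queue, message.getBody()) for each received message, so queues and handlers can be added at runtime without any annotations.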
When we do Spring and SQS, we use spring-cloud-starter-aws-messaging.
Then you just create a listener class:

@Component
public class MyListener {

    @SqsListener(value = "myqueue")
    public void listen(MyMessageType message) {
        // process the message
    }
}

how to modify tomcat8 acceptCount in spring boot

How can I modify the Tomcat default thread count using Spring Boot?
With Spring MVC I can find the Tomcat installation and modify conf/server.xml, changing maxProcessors and acceptCount, but with Spring Boot I can't do that.
In org.apache.catalina.connector I can't find the properties either.
First check everything you can modify via properties: http://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#common-application-properties

server.tomcat.max-threads = 0 # number of threads in protocol handler

Otherwise you will have to get your hands dirty with programmatic configuration (http://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#howto-configure-tomcat) by providing your own TomcatEmbeddedServletContainerFactory.
acceptCount cannot be modified in the properties files; you can use the following code to modify it:

@Bean
public TomcatEmbeddedServletContainerFactory tomcatEmbeddedServletContainerFactory() {
    TomcatEmbeddedServletContainerFactory tomcatFactory = new TomcatEmbeddedServletContainerFactory();
    tomcatFactory.addConnectorCustomizers(new TomcatConnectorCustomizer() {
        @Override
        public void customize(Connector connector) {
            // Tomcat's default NIO connector
            Http11NioProtocol handler = (Http11NioProtocol) connector.getProtocolHandler();
            // acceptCount is the backlog; the default value is 100, change it to whatever you need here
            handler.setBacklog(100);
        }
    });
    return tomcatFactory;
}
In current Spring Boot it should be possible through the server.tomcat.accept-count application property; see: https://docs.spring.io/spring-boot/docs/current-SNAPSHOT/reference/htmlsingle/#server-properties
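For reference, the property-based route in a recent Spring Boot version looks like this (note that server.tomcat.max-threads was renamed to server.tomcat.threads.max in Boot 2.3; the values below are illustrative):

```properties
# application.properties
server.tomcat.accept-count=200
server.tomcat.threads.max=400
```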