Spring Boot not closing HTTP connections, the connections are in CLOSE_WAIT state - spring-boot

Most of the connections to the Spring Boot application are in CLOSE_WAIT state and are not being closed.
Configuration tried:
server:
  servlet:
    session:
      timeout: 60s
Could anyone suggest a workaround?
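Note that server.servlet.session.timeout controls the lifetime of the HTTP session object, not TCP connections, so it will not affect sockets stuck in CLOSE_WAIT (which usually means the application side never closes the socket after the peer does). If the embedded container is Tomcat (the Spring Boot default), connection-level keep-alive behaviour can be tuned separately; a minimal sketch, assuming a recent Spring Boot 2.x where these Tomcat properties exist:

```yaml
# Sketch only: connection-level timeouts for embedded Tomcat.
# These bound how long idle keep-alive connections are held open;
# they do not fix CLOSE_WAIT caused by application code that never
# closes its side of a socket (e.g. leaked streams or HTTP clients).
server:
  tomcat:
    connection-timeout: 20s       # max wait for the request line after connect
    keep-alive-timeout: 60s       # idle time before a keep-alive connection is closed
    max-keep-alive-requests: 100  # requests served per connection before it is closed
```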

Related

Refresh JMS Connection in Spring Project using ActiveMQ

I have a Spring project in which I am using Apache Camel with ActiveMQ as the message broker. When I start my Java application, which consumes from the topic, while the broker is not running, it gives an error (which is valid); but even after the broker is started, the JMS connection is not refreshed.
I tried the same scenario in a Spring Boot app and it automatically refreshed the JMS connection. Something apparently needs to be handled manually in plain Spring, but I am not sure what.
This is the error from my Spring app:
org.apache.camel.component.jms.DefaultJmsMessageListenerContainer: Could not refresh JMS Connection for destination 'test.events.topic' - retrying using FixedBackOff{interval=5000, currentAttempts=1, maxAttempts=unlimited}. Cause: Could not connect to broker URL: tcp://activemq:61616. Reason: java.net.UnknownHostException: activemq
This is what the Spring Boot application reports:
Successfully refreshed JMS Connection
Below is the broker configuration:
broker:
  host: localhost
  port: 61616
  protocol: tcp
  endpoint: ${broker.protocol}://${broker.host}:${broker.port}
  url: failover:(${broker.endpoint}?wireFormat.maxInactivityDurationInitalDelay=30000)?timeout=3000&jms.useCompression=true&startupMaxReconnectAttempts=0&jms.redeliveryPolicy.maximumRedeliveries=${maxConsumers}
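For reference, with the values above, ${broker.endpoint} interpolates to tcp://localhost:61616, so the fully resolved url is:

```
failover:(tcp://localhost:61616?wireFormat.maxInactivityDurationInitalDelay=30000)?timeout=3000&jms.useCompression=true&startupMaxReconnectAttempts=0&jms.redeliveryPolicy.maximumRedeliveries=${maxConsumers}
```

(${maxConsumers} is defined elsewhere and left as-is.) Note that the error message reports tcp://activemq:61616, a different host than this configuration resolves to, which suggests the failing environment is resolving broker.host to activemq rather than localhost.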

Mosquitto refuses WebSocket connection after update

After I updated my Mosquitto broker to 2.0.2, it refuses the WebSocket connection from my web app (port 8083).
The TCP connection from the Java backend still works fine.
I added
port 1883
listener 8083
protocol websockets
to my mosquitto.conf
Since version 2.0.0 of Mosquitto, the broker is configured by default to deny any connection that does not provide a username:
All listeners now default to allow_anonymous false unless explicitly set
to true in the configuration file. This means that when configuring a
listener the user must either configure an authentication and access control
method, or set allow_anonymous true. When the broker is run without a
configured listener, and so binds to the loopback interface, anonymous
connections are allowed.
See the Mosquitto changelog.
Add allow_anonymous true to your mosquitto.conf to see if that's the problem in your case.
The connection from the backend might still work because you are connecting locally, although to my understanding of the changelog this should only apply if there is no listener configured at all. It could, of course, also be a bug.
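Concretely, the configuration from the question would become something like the following sketch (assuming per_listener_settings is not enabled; note that allow_anonymous true disables authentication entirely, so it is only suitable for testing):

```
# mosquitto.conf (sketch)
listener 1883
listener 8083
protocol websockets
allow_anonymous true
```

In Mosquitto 2.x the standalone port option is deprecated in favour of a listener line, and protocol applies to the most recently declared listener, here the WebSocket one on 8083.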

upstream prematurely closed connection while reading response header from upstream - Jetty

I am using a Spring Boot application with embedded Jetty.
Spring boot version - 2.2.1.RELEASE
Jetty Version - 9.4.25.v20191220
The application runs on multiple hosts (say 5) behind a load balancer.
Load balancer settings:
Keep-alive connections - 20
Keep-alive timeout - 60s
Jetty settings:
server.jetty.connection-idle-timeout=62000 (62 seconds)
The load balancer sometimes returns 502 with the logs below.
upstream prematurely closed connection while reading response header from upstream, client: ******, server: xyz.test.com, request: "GET /health/check HTTP/1.1", upstream: "https://XXXXX:8443/health/check", host: "xyz.test.com"
upstream prematurely closed connection while reading response header from upstream, client: ******, server: xyz.test.com, request: "GET /api/v1/weather/2485 HTTP/1.1", upstream: "https://XXXXXX:8443/api/v1/weather/2485", host: "xyz.test.com"
We are using all default settings for Spring Boot's embedded Jetty. Is there any configuration that should be changed?
We have not enabled the access log on the server side, and there are no 502-related entries in our own logs.
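Without an access log it is hard to tell whether the 502s correspond to requests Jetty ever received, or to the load balancer reusing a connection Jetty had just closed. A sketch for enabling Jetty's access log via Spring Boot properties (property names assume Boot 2.2's server.jetty.accesslog.* keys):

```yaml
server:
  jetty:
    connection-idle-timeout: 62000  # keep above the LB's 60s keep-alive timeout
    accesslog:
      enabled: true
      # filename: /var/log/app/jetty-access.log  # hypothetical path; if unset, Boot logs to System.err
```

Comparing the access log timestamps with the load balancer's 502 entries should show whether the requests reached Jetty at all.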

Can I define retry mechanism for RabbitMQ message producer in SpringBoot?

I have the following RabbitMQ setup in the application.yml in my SpringBoot app which can consume (receive) messages:
spring:
  rabbitmq:
    host: localhost
    port: 5672
    username: admin
    password: password
    listener:
      simple:
        retry:
          enabled: true
          initial-interval: 3s
          max-interval: 10s
          multiplier: 2
          max-attempts: 3
I want to create a separate Spring Boot app that only sends messages.
My questions:
Is it possible to define a retry setup for message sending?
If yes, is it the same as my example shows, given that the property path is named listener:
spring.rabbitmq.listener...
Thank you!
See the Spring Boot documentation for the template.retry properties:

spring.rabbitmq.template.retry.enabled (default: false)
  Whether publishing retries are enabled.
spring.rabbitmq.template.retry.initial-interval (default: 1000ms)
  Duration between the first and second attempt to deliver a message.
spring.rabbitmq.template.retry.max-attempts (default: 3)
  Maximum number of attempts to deliver a message.
spring.rabbitmq.template.retry.max-interval (default: 10000ms)
  Maximum duration between attempts.
spring.rabbitmq.template.retry.multiplier (default: 1.0)
  Multiplier to apply to the previous retry interval.
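So yes, publisher-side retries exist, but they live under template rather than listener. Mirroring the intervals from the question, the sending app's application.yml might look like this sketch:

```yaml
spring:
  rabbitmq:
    host: localhost
    port: 5672
    username: admin
    password: password
    template:
      retry:
        enabled: true
        initial-interval: 3s
        max-interval: 10s
        multiplier: 2
        max-attempts: 3
```

Note that these retries only cover exceptions thrown while sending (e.g. the broker being briefly unreachable); they do not confirm delivery, which is the separate publisher-confirms feature.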

spring xd connection pool for module/job/filejdbc

My use case is to process 50 CSV files and load them into a MySQL DB using the Spring XD filejdbc job module.
For this I have configured datasource connection pool in spring-xd-1.0.0.RELEASE\xd\config\servers.yml like:
spring:
  datasource:
    url: jdbc:mysql://localhost:3306/spring_xd
    username: springxd
    password: springxd
    driverClassName: com.mysql.jdbc.Driver
    maxActive: 75
    maxIdle: 10
    minIdle: 10
    initialSize: 10
    maxWait: 30000
    validationQuery: select 1
    validationInterval: 30000
    testOnBorrow: true
    testOnReturn: false
    testWhileIdle: false
    timeBetweenEvictionRunsMillis: 10000
    minEvictableIdleTimeMillis: 60000
    removeAbandoned: true
    removeAbandonedTimeout: 300
    logAbandoned: true
When the singlenode starts, Spring XD initializes this pool.
The problem is that when the filejdbc job launches, it initializes a second connection pool with the same settings, because \spring-xd-1.0.0.RELEASE\xd\config\modules\job\filejdbc\filejdbc.properties reuses the connection pool configuration from servers.yml.
The CSV data is loaded into the DB only through the filejdbc connection pool, not the Spring XD one.
Because of this, my DB connections are running out.
When I reduce maxActive in servers.yml to 20 and a job launches, Spring XD throws a connection-pool-exhausted exception, since I have 50 CSV files to load.
I simply can't set maxActive to 10 in servers.yml for the Spring XD database and maxActive to 60 for filejdbc against my custom database so that the 50 files can be loaded when the filejdbc job launches; this always throws a connection-exhausted exception.
Please advise.
Details as below:
spring xd version - spring-xd-1.0.0.RELEASE in singlenode
java version - jdk 7
environment - windows
There is a bug in the local message bus where all partitions get executed simultaneously; in your case that exhausts the connection pool. We have opened a bug for this: https://jira.spring.io/browse/XD-2868.
Until this is fixed, you could run with another message bus such as Redis or RabbitMQ, and you should see your job complete. You can try it by starting a Redis server and then starting your singlenode with:
xd-singlenode --transport redis