I am using https://www.testcontainers.org/modules/kafka/ and would like to configure the container so that Kafka is up and running before my tests start.
How can I achieve that?
Testcontainers already has built-in waits for readiness:
https://www.testcontainers.org/features/startup_and_waits/
KafkaContainer waits until it is listening on Kafka's port (and on ZooKeeper's port, if you use the built-in one); it is not considered started otherwise.
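For illustration, a minimal JUnit 5 sketch (the image tag and names are just examples, adjust them to your versions):

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

import static org.junit.jupiter.api.Assertions.assertTrue;

@Testcontainers
class KafkaContainerTest {

    // Testcontainers starts the container before the tests run and only reports it
    // as started once Kafka is listening on its mapped port.
    @Container
    private static final KafkaContainer KAFKA =
            new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0"));

    @Test
    void kafkaIsUp() {
        assertTrue(KAFKA.isRunning());
        // getBootstrapServers() returns something like PLAINTEXT://localhost:54321,
        // which you can hand to your producer/consumer configuration.
        String bootstrapServers = KAFKA.getBootstrapServers();
    }
}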
I am trying to build a batch service in an existing application that has the server.port=8080 property configured in its application.properties file. When I run the batch process and Spring Batch tries to bring up remote partitions (separate JVMs), Spring Cloud Deployer Local throws an error saying:
"\r\n\r\n***************************\r\nAPPLICATION FAILED TO START\r\n***************************\r\n\r\nDescription:\r\n\r\nThe Tomcat connector configured to listen on port 8080 failed to start. The port may already be in use or the connector may be misconfigured.\r\n\r\nAction:\r\n\r\nVerify the connector's configuration, identify and stop any process that's listening on port 8080, or configure this application to listen on another port.
Is there a way to make the framework generate random ports for the worker partitions, given that the server.port property is already configured in application.properties?
Thanks.
A Spring Batch remote partitioning setup requires a message broker for the communication between the manager and workers, but it does not require any web capabilities. You seem to be deploying all your apps locally (manager and workers) as web applications, hence the port conflict when multiple workers are deployed.
You have at least two options:
Either set a random server port for each app (see how Spring Boot allows you to do that here; a property sketch follows after this list)
Or, if the number of workers is fixed, set ports to distinct values statically.
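A minimal sketch of the first option, assuming you can override the property for the worker apps (for example through a worker-specific profile or the deployer's deployment properties, depending on your setup):

server.port=0

With server.port=0, Spring Boot asks the servlet container for a random free port at startup, so several workers can run on the same machine without clashing with the manager's port 8080.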
I'm a Consul neophyte. There are certain job servers on which Consul is not configured, and these job servers have to inform the Consul server that they are about to fail, and then fail. Is there a class I can use for this in rickfast's consul-client?
Currently there's a simple listener on the Consul server. This listener uses ConsulCache.Listener: https://github.com/rickfast/consul-client/blob/master/src/main/java/com/orbitz/consul/cache/ConsulCache.java
I haven't figured out a way yet. Any suggestions will be most welcome.
Currently, we have four JMS listener containers that are started manually during application startup. They all connect through Apache ZooKeeper. This becomes problematic when a connection to ZooKeeper cannot be established: the (Wicket) application cannot start, even though the JMS listeners do not need to be active for the application to be usable. They simply need to listen for messages in the background and save them; a cron job then processes them in batches.
Goals:
1. Allow the application to start even if the message containers cannot connect.
2. After the application starts, start the message listeners.
3. If the connection for any of the message listeners goes down, it should automatically attempt to reconnect.
4. On application shutdown (such as Tomcat being shut down), the application should stop the message listeners and the cron job that processes the saved messages.
5. Make all of this testable (as in, be able to write integration tests for this setup).
Current Setup:
Spring Boot 1.5.6
Apache ZooKeeper 3.4.6
Apache ActiveMQ 5.7
Wicket 7.7.0
Work done so far:
Defined a class that implements ApplicationListener<ApplicationReadyEvent>.
Set the autoStartup property of each DefaultMessageListenerContainer to false and start each container in onApplicationEvent in a separate thread.
Questions:
Is it necessary to start each message container in its own thread? This seems like overkill, but the way the "start" process works is that the DefaultMessageListenerContainer is built for a listener and then started. There is a UI component that a user can use to start/stop the message listeners if need be, and if the containers are started sequentially in one thread, the latter three could still be null while the first one is waiting to connect on startup.
How do I accomplish goals 4 and 5?
Of course, any comments on whether I am on the right track would be helpful.
If you do not start them in a custom thread, the whole application cannot fully start. It is not just Wicket: the servlet container won't move the application state from STARTING to STARTED while the request to ZooKeeper blocks.
Another option is to use a non-blocking request to ZooKeeper, but that is handled by the JMS client (ActiveMQ), so you need to check in the docs (both ActiveMQ's and ZooKeeper's) whether it is supported. I haven't used those in several years, so I cannot help you more.
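As a rough sketch of that approach (bean names and wiring are illustrative, not your exact setup), an ApplicationReadyEvent listener can start the containers off the main thread, assuming each container is defined with autoStartup=false:

import java.util.List;

import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.core.task.SimpleAsyncTaskExecutor;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.stereotype.Component;

@Component
public class JmsListenerStarter implements ApplicationListener<ApplicationReadyEvent> {

    private final List<DefaultMessageListenerContainer> containers;
    private final SimpleAsyncTaskExecutor executor = new SimpleAsyncTaskExecutor("jms-start-");

    public JmsListenerStarter(List<DefaultMessageListenerContainer> containers) {
        this.containers = containers; // each container configured with autoStartup = false
    }

    @Override
    public void onApplicationEvent(ApplicationReadyEvent event) {
        // Start every container on its own thread so a blocked ZooKeeper connection
        // cannot keep the servlet container stuck in the STARTING state.
        for (DefaultMessageListenerContainer container : containers) {
            executor.execute(container::start);
        }
    }
}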
Is there any recommended way to gracefully shut down a Spring Boot 2 app in Kubernetes?
Catch the termination signal (SIGTERM).
Tell Tomcat to stop accepting new requests (or Jetty, Undertow or Netty/WebFlux, depending on the embedded web server used). Or tell SCS to stop sending/listening for messages on Kafka.
Tell the Actuator health endpoint to return SERVICE_UNAVAILABLE (503).
And then, after X seconds, shut down the application (or SIGKILL it).
I'm trying to do a graceful shutdown for both REST apps and SCS (Kafka consumer & producer) apps.
If you are on the latest version of Spring Boot, i.e. 2.3.5.RELEASE, then all you need to do is add the properties below to your application.properties file and you are done.
server.shutdown=graceful
spring.lifecycle.timeout-per-shutdown-phase=30s
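Roughly speaking, with these two properties the embedded web server stops accepting new requests when shutdown begins, and Spring waits up to the configured timeout (30 seconds here) for in-flight requests and lifecycle phases to finish before closing the context.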
In the Kubernetes world, you can also use the preStop hook. But use this only when you actually want a hold before SIGTERM is sent.
This hook is called immediately before a container is terminated. No parameters are passed to the handler. This event handler is blocking, and must complete before the call to delete the container is sent to the Docker daemon. The SIGTERM notification sent by Docker is also still sent. A more complete description of termination behavior can be found in Termination of Pods.
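If you do want such a hold, a hypothetical deployment fragment could look like this (names and durations are only examples):

spec:
  terminationGracePeriodSeconds: 45   # must cover the preStop sleep plus Spring's shutdown phase
  containers:
    - name: my-app                    # hypothetical container name
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "sleep 10"]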
I wrote a Spring Boot web service that uses an embedded Tomcat as its container.
In case the system reboots, I want to back up some information to a MySQL database.
In my web service I use @Scheduled() and @PreDestroy to run the backup.
This goes well when I stop the server with ^C.
But when I kill the process with a SysV script (/etc/init.d) and the kill command, the MySQL server is shut down before the backup is finished (resulting in SQL exceptions in my log), even though the daemon has a dependency on MySQL.
The reason, of course, is that kill only sends a signal to stop the process.
How can I (from my SysV script) synchronously stop the running Spring Boot Tomcat server?
If you include spring-boot-starter-actuator then that provides a REST endpoint for management. One of the endpoints provided is /shutdown. By hitting that endpoint, you get a controlled shutdown of all resources, which ensures that @PreDestroy will be called. As this could be dangerous to have enabled by default, you will need to add the following to your application.properties file to use it:
endpoints.shutdown.enabled=true
Of course, once you have exposed that endpoint you need to ensure that there's a teeny bit of security applied to prevent just anybody shutting down your server.
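For reference, the /shutdown endpoint is invoked with an HTTP POST, so an init.d stop routine could trigger it with something like curl -X POST http://localhost:8080/shutdown (the port and path here assume your application's defaults).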
On a related note, you may find my answer to Spring Boot application as a Service useful, where I provided the code for a full init.d script which makes use of this.
As an alternative to the "/shutdown" endpoint, the Actuator also has an ApplicationPidListener (not enabled by default) that you can use to create a pidfile (which is commonly used in "init.d"-style scripts to kill a process when you want to stop it). The JVM should respond to a kill (SIGTERM) and Spring will shut down gracefully.
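A minimal sketch of the pidfile approach, assuming a Spring Boot version where the listener is available as ApplicationPidFileWriter (the newer name for ApplicationPidListener); the file path is just an example:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.ApplicationPidFileWriter;

@SpringBootApplication
public class BackupServiceApplication {

    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(BackupServiceApplication.class);
        // Writes the JVM's pid to a file on startup so an init.d script can later
        // send it SIGTERM; the JVM shutdown hook then lets Spring run @PreDestroy
        // callbacks before the process exits.
        app.addListeners(new ApplicationPidFileWriter("/var/run/backup-service.pid"));
        app.run(args);
    }
}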