How to terminate a Spring Boot jar file from the command line [duplicate] - spring-boot

The Spring Boot documentation says that 'Each SpringApplication will register a shutdown hook with the JVM to ensure that the ApplicationContext is closed gracefully on exit.'
When I press Ctrl+C in the shell, the application shuts down gracefully. If I run the application on a production machine, I have to use the command
java -jar ProApplicaton.jar, but then I can't close the shell terminal, otherwise it will kill the process.
If I run a command like nohup java -jar ProApplicaton.jar &, I can't use Ctrl+C to shut it down gracefully.
What is the correct way to start and stop a Spring Boot application in a production environment?

If you are using the actuator module, you can shut down the application via JMX or HTTP if the endpoint is enabled.
Add to application.properties:
Spring Boot 2.0 and newer:
management.endpoint.shutdown.enabled=true
The following URL will be available:
/actuator/shutdown - Allows the application to be gracefully shut down (not enabled by default).
In Spring Boot 1.x, depending on how an endpoint is exposed, the sensitive parameter may be used as a security hint.
For example, sensitive endpoints will require a username/password when they are accessed over HTTP (or simply be disabled if web security is not enabled).
From the Spring Boot documentation.
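In Spring Boot 2.x the endpoint must also be exposed over the web in addition to being enabled. A minimal properties sketch, assuming the default port 8080:
management.endpoints.web.exposure.include=health,shutdown
Then trigger the shutdown with:
curl -X POST http://localhost:8080/actuator/shutdown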

Here is another option that does not require you to change the code or expose a shutdown endpoint. Create the following scripts and use them to start and stop your app.
start.sh
#!/bin/bash
java -jar myapp.jar & echo $! > ./pid.file &
Starts your app and saves the process id in a file
stop.sh
#!/bin/bash
kill $(cat ./pid.file)
Stops your app using the saved process id
start_silent.sh
#!/bin/bash
nohup ./start.sh > foo.out 2> foo.err < /dev/null &
If you need to start the app via ssh from a remote machine or a CI pipeline, use this script instead to start your app. Using start.sh directly can leave the shell hanging.
After e.g. redeploying your app, you can restart it with:
sshpass -p password ssh -oStrictHostKeyChecking=no userName@www.domain.com 'cd /home/user/pathToApp; ./stop.sh; ./start_silent.sh'
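If you also want a single restart step on the box itself, a hypothetical restart.sh (not part of the original scripts) can simply chain the two:
restart.sh
#!/bin/bash
./stop.sh
# give the JVM a moment to exit before starting again
sleep 5
./start_silent.sh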

Following up on @Jean-Philippe Bond's answer,
here is a quick Maven example for configuring an HTTP endpoint to shut down a Spring Boot web app using spring-boot-starter-actuator, so that you can copy and paste:
1. Maven pom.xml:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
2. application.properties (Spring Boot 1.x property names):
#No auth protection
endpoints.shutdown.sensitive=false
#Enable shutdown endpoint
endpoints.shutdown.enabled=true
All endpoints are listed here:
3. Send a POST request to shut down the app:
curl -X POST localhost:port/shutdown
Security note:
if you need the shutdown endpoint to be auth-protected, you may also need
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-security</artifactId>
</dependency>
Configuration details:
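A minimal sketch of what that protection could look like with Spring Boot 1.x-style properties (the user name and password here are placeholders, not from the original answer): keep the endpoint sensitive and define a basic-auth user:
endpoints.shutdown.sensitive=true
endpoints.shutdown.enabled=true
security.user.name=admin
security.user.password=secret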

You can make the Spring Boot application write its PID into a file and then use that PID file to stop, restart, or check the status from a bash script. To write the PID to a file, register a listener on the SpringApplication using ApplicationPidFileWriter, as shown below:
SpringApplication application = new SpringApplication(Application.class);
application.addListeners(new ApplicationPidFileWriter("./bin/app.pid"));
application.run();
Then write a bash script to run the Spring Boot application (a sketch follows below). Reference.
Now you can use the script to start, stop, or restart.
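A minimal bash sketch of such a script, assuming the PID path ./bin/app.pid from the snippet above and a placeholder jar name myapp.jar:
#!/bin/bash
# usage: ./app.sh {start|stop}
PID_FILE=./bin/app.pid
case "$1" in
  start)
    # the application itself writes $PID_FILE via ApplicationPidFileWriter
    nohup java -jar myapp.jar > app.log 2>&1 &
    echo "start requested"
    ;;
  stop)
    if [ -f "$PID_FILE" ]; then
      # SIGTERM lets the Spring shutdown hook run
      kill "$(cat "$PID_FILE")"
    else
      echo "no PID file found at $PID_FILE"
    fi
    ;;
  *)
    echo "usage: $0 {start|stop}"
    ;;
esac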

All of the answers seem to be missing the fact that you may need to complete some portion of work in a coordinated fashion during a graceful shutdown (for example, in an enterprise application).
@PreDestroy allows you to execute shutdown code in individual beans. Something more sophisticated would look like this:
@Component
public class ApplicationShutdown implements ApplicationListener<ContextClosedEvent> {

    @Autowired ... // various components and services

    @Override
    public void onApplicationEvent(ContextClosedEvent event) {
        service1.changeHeartBeatMessage(); // allows load balancers & clusters to prepare for the impending shutdown
        service2.deregisterQueueListeners();
        service3.finishProcessingTasksAtHand();
        service2.reportFailedTasks();
        service4.gracefullyShutdownNativeSystemProcessesThatMayHaveBeenLaunched();
        service1.eventLogGracefulShutdownComplete();
    }
}

Use the static exit() method of the SpringApplication class to close your Spring Boot application gracefully.
public class SomeClass {

    @Autowired
    private ApplicationContext context;

    public void close() {
        SpringApplication.exit(context);
    }
}
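SpringApplication.exit(...) closes the context and returns an exit code, but it does not terminate the JVM on its own; a common pattern (a sketch, not part of the original answer) is to pass that code to System.exit:
public void closeAndExit() {
    // close the context gracefully, then terminate the JVM with the resulting exit code
    int exitCode = SpringApplication.exit(context, () -> 0);
    System.exit(exitCode);
}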

As of Spring Boot 2.3 and later, there's a built-in graceful shutdown mechanism.
Before Spring Boot 2.3, there was no out-of-the-box graceful shutdown mechanism.
Some spring-boot starters provide this functionality:
https://github.com/jihor/hiatus-spring-boot
https://github.com/gesellix/graceful-shutdown-spring-boot
https://github.com/corentin59/spring-boot-graceful-shutdown
I am the author of nr. 1. The starter is named "Hiatus for Spring Boot". It works at the load balancer level, i.e. it simply marks the service as OUT_OF_SERVICE, without interfering with the application context in any way. This allows a graceful shutdown and means that, if required, the service can be taken out of service for some time and then brought back to life. The downside is that it doesn't stop the JVM; you will have to do that with the kill command. As I run everything in containers, this was no big deal for me, because I have to stop and remove the container anyway.
Nos. 2 and 3 are more or less based on this post by Andy Wilkinson. They work one-way: once triggered, they eventually close the context.

I don't expose any endpoints. I start the app with nohup in the background (without the out files that nohup normally creates) and stop it with a shell script (a graceful kill PID, with a force kill if the app is still running after a few minutes). I just create an executable jar, use the PID file writer to write a PID file, and store the jar and PID in a folder with the same name as the application; the shell scripts also carry that name, with start- and stop- prefixes. I call these stop and start scripts from a Jenkins pipeline as well. No issues so far. This works perfectly for 8 applications (the scripts are very generic and easy to apply to any app).
Main Class
@SpringBootApplication
public class MyApplication {

    public static void main(String[] args) {
        SpringApplicationBuilder app = new SpringApplicationBuilder(MyApplication.class);
        app.build().addListeners(new ApplicationPidFileWriter());
        app.run();
    }
}
YML FILE
spring.pid.fail-on-write-error: true
spring.pid.file: /server-path-with-folder-as-app-name-for-ID/appName/appName.pid
Here is the start script (start-appname.sh):
#!/bin/bash
# Active profile (YAML)
ACTIVE_PROFILE="preprod"
# JVM Parameters and Spring boot initialization parameters
JVM_PARAM="-Xms512m -Xmx1024m -Dspring.profiles.active=${ACTIVE_PROFILE} -Dcom.webmethods.jms.clientIDSharing=true"
# Base Folder Path like "/folder/packages"
CURRENT_DIR=$(readlink -f "$0")
BASE_PACKAGE="${CURRENT_DIR%/bin/*}"
# Shell Script file name after removing path like "start-yaml-validator.sh"
SHELL_SCRIPT_FILE_NAME=$(basename -- "$0")
# Shell Script file name after removing extension like "start-yaml-validator"
SHELL_SCRIPT_FILE_NAME_WITHOUT_EXT="${SHELL_SCRIPT_FILE_NAME%.sh}"
# App name after removing start/stop strings like "yaml-validator"
APP_NAME=${SHELL_SCRIPT_FILE_NAME_WITHOUT_EXT#start-}
PIDS=$(ps aux | grep "[j]ava.*-Dspring.profiles.active=$ACTIVE_PROFILE.*$APP_NAME.*jar" | awk '{print $2}')
if [ -z "$PIDS" ]; then
echo "No instance of $APP_NAME with profile:$ACTIVE_PROFILE is running..." 1>&2
else
for PROCESS_ID in $PIDS; do
echo "Please stop the process($PROCESS_ID) using the shell script: stop-$APP_NAME.sh"
done
exit 1
fi
# Preparing the java home path for execution
JAVA_EXEC='/usr/bin/java'
# Java Executable - Jar Path Obtained from latest file in directory
JAVA_APP=$(ls -t $BASE_PACKAGE/apps/$APP_NAME/$APP_NAME*.jar | head -n1)
# To execute the application.
FINAL_EXEC="$JAVA_EXEC $JVM_PARAM -jar $JAVA_APP"
# Run the application completely detached from the terminal
nohup $FINAL_EXEC </dev/null >/dev/null 2>&1 &
echo "$APP_NAME start script is completed."
Here is the stop script (stop-appname.sh):
#!/bin/bash
# Active profile (YAML)
ACTIVE_PROFILE="preprod"
#Base Folder Path like "/folder/packages"
CURRENT_DIR=$(readlink -f "$0")
BASE_PACKAGE="${CURRENT_DIR%/bin/*}"
# Shell Script file name after removing path like "start-yaml-validator.sh"
SHELL_SCRIPT_FILE_NAME=$(basename -- "$0")
# Shell Script file name after removing extension like "start-yaml-validator"
SHELL_SCRIPT_FILE_NAME_WITHOUT_EXT="${SHELL_SCRIPT_FILE_NAME%.*}"
# App name after removing start/stop strings like "yaml-validator"
APP_NAME=${SHELL_SCRIPT_FILE_NAME_WITHOUT_EXT:5}
# Script to stop the application
PID_PATH="$BASE_PACKAGE/config/$APP_NAME/$APP_NAME.pid"
if [ ! -f "$PID_PATH" ]; then
echo "Process Id FilePath($PID_PATH) Not found"
else
PROCESS_ID=`cat $PID_PATH`
if [ ! -e "/proc/$PROCESS_ID" ]; then
echo "$APP_NAME was not running with PROCESS_ID:$PROCESS_ID.";
else
kill $PROCESS_ID;
echo "Gracefully stopping $APP_NAME with PROCESS_ID:$PROCESS_ID..."
sleep 5s
fi
fi
PIDS=$(/bin/ps aux | /bin/grep "[j]ava.*-Dspring.profiles.active=$ACTIVE_PROFILE.*$APP_NAME.*jar" | /bin/awk '{print $2}')
if [ -z "$PIDS" ]; then
echo "All instances of $APP_NAME with profile:$ACTIVE_PROFILE have been successfully stopped..." 1>&2
else
for PROCESS_ID in $PIDS; do
counter=1
until [ $counter -gt 150 ]
do
if ps -p $PROCESS_ID > /dev/null; then
echo "Waiting for the process($PROCESS_ID) to finish on it's own for $(( 300 - $(( $counter*5)) ))seconds..."
sleep 2s
((counter++))
else
echo "$APP_NAME with PROCESS_ID:$PROCESS_ID is stopped now.."
exit 0;
fi
done
echo "Forcefully Killing $APP_NAME with PROCESS_ID:$PROCESS_ID."
kill -9 $PROCESS_ID
done
fi

Spring Boot publishes several application events while creating the application context; one of them is ApplicationFailedEvent. We can use it to know whether the application context initialized or not.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.context.event.ApplicationFailedEvent;
import org.springframework.context.ApplicationListener;
public class ApplicationErrorListener implements
        ApplicationListener<ApplicationFailedEvent> {

    private static final Logger LOGGER =
            LoggerFactory.getLogger(ApplicationErrorListener.class);

    @Override
    public void onApplicationEvent(ApplicationFailedEvent event) {
        if (event.getException() != null) {
            LOGGER.info("!!!!!! Looks like something is not working as expected, so stopping the application. !!!!!!");
            event.getApplicationContext().close();
            System.exit(-1);
        }
    }
}
Add the above listener class to the SpringApplication:
new SpringApplicationBuilder(Application.class)
.listeners(new ApplicationErrorListener())
.run(args);

SpringApplication implicitly registers a shutdown hook with the JVM to ensure that the ApplicationContext is closed gracefully on exit. That will also call all bean methods annotated with @PreDestroy. That means we don't have to explicitly use the registerShutdownHook() method of a ConfigurableApplicationContext in a Boot application, as we have to do in a Spring core application.
@SpringBootConfiguration
public class ExampleMain {

    @Bean
    MyBean myBean() {
        return new MyBean();
    }

    public static void main(String[] args) {
        ApplicationContext context = SpringApplication.run(ExampleMain.class, args);
        MyBean myBean = context.getBean(MyBean.class);
        myBean.doSomething();
        // no need to call context.registerShutdownHook();
    }

    private static class MyBean {

        @PostConstruct
        public void init() {
            System.out.println("init");
        }

        public void doSomething() {
            System.out.println("in doSomething()");
        }

        @PreDestroy
        public void destroy() {
            System.out.println("destroy");
        }
    }
}

Spring Boot now supports graceful shutdown (at the time of writing, in the pre-release version 2.3.0.BUILD-SNAPSHOT).
When enabled, shutdown of the application will include a grace period
of configurable duration. During this grace period, existing requests
will be allowed to complete but no new requests will be permitted
You can enable it with:
server.shutdown.grace-period=30s
(In the 2.3.0 GA release this became server.shutdown=graceful together with spring.lifecycle.timeout-per-shutdown-phase, as shown in later answers.)
https://docs.spring.io/spring-boot/docs/2.3.0.BUILD-SNAPSHOT/reference/html/spring-boot-features.html#boot-features-graceful-shutdown

There are many ways to shut down a Spring application. One is to call close() on the ApplicationContext:
ConfigurableApplicationContext ctx =
        SpringApplication.run(HelloWorldApplication.class, args);
// ...
ctx.close();
Your question suggests you want to close your application with Ctrl+C, which is frequently used to terminate a command. In this case...
Using endpoints.shutdown.enabled=true is not the best recipe: it means you expose an endpoint that terminates your application, so depending on your use case and your environment, you will have to secure it...
A Spring application context may register a shutdown hook with the JVM runtime; see the ApplicationContext documentation.
Spring Boot registers this shutdown hook automatically, and since version 2.3 it also supports graceful shutdown (see jihor's answer). You may need to register some @PreDestroy methods that will be executed during the graceful shutdown (see Michal's answer).
Ctrl+C should work very well in your case. I assume your issue is caused by the ampersand (&). More explanation:
On Ctrl+C, your shell sends an INT signal to the foreground application. It means "please interrupt your execution". The application can trap this signal and do cleanup before its termination (the hook registered by Spring), or simply ignore it (bad).
nohup is a command that executes the given program while ignoring the HUP signal. HUP is used to terminate a program when you hang up (close your ssh connection, for example). Moreover, it redirects output so that your program does not block on a vanished TTY. nohup does NOT ignore the INT signal, so it does NOT prevent Ctrl+C from working.
I assume your issue is caused by the ampersand (&), not by nohup. Ctrl+C sends a signal to the foreground processes only; the ampersand causes your application to run in the background. One solution: do
kill -INT pid
Using kill -9 or kill -KILL is bad because the application (here the JVM) cannot trap it to terminate gracefully.
Another solution is to bring your application back to the foreground; then Ctrl+C will work. Have a look at Bash job control, more precisely at fg.
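A quick sketch of both options, assuming the application was started in the current shell with nohup java -jar ProApplicaton.jar &:
# option 1: bring the background job to the foreground, then Ctrl+C works again
jobs
fg %1
# option 2: send INT directly to the JVM process (the pgrep pattern is an example)
kill -INT "$(pgrep -f ProApplicaton.jar)"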

I was able to do it on Spring Boot version >= 2.5.3 using these steps.
1. Add the following actuator dependency:
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
</dependencies>
2. Add these properties in application.properties to do a graceful shutdown
management.endpoint.shutdown.enabled=true
management.endpoints.web.exposure.include=shutdown
server.shutdown=GRACEFUL
3. When you start the application, you should see this in the console
(based on number of endpoints you have exposed)
Exposing 1 endpoint(s) beneath base path '/actuator'
4. To shut down the application, do:
POST: http://localhost:8080/<context-path-if-any>/actuator/shutdown

A lot of the actuator answers are mostly correct. Unfortunately, the configuration and endpoint information has changed so they aren't 100% correct. To enable the actuator, add for Maven
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
</dependencies>
or for Gradle
dependencies {
implementation 'org.springframework.boot:spring-boot-starter-actuator'
}
For configuration, add the following to the application.properties. This will expose all endpoints in the actuator:
management.endpoints.web.exposure.include=*
management.endpoint.shutdown.enabled=true
To expose just the shutdown endpoint, change to:
management.endpoints.web.exposure.include=shutdown
management.endpoint.shutdown.enabled=true
Finally the shutdown endpoint is not available using GET - only POST. So you have to use something like:
curl -X POST localhost:8080/actuator/shutdown

If you are using Maven, you could use the Appassembler Maven plugin.
The daemon mojo (which embeds JSW, the Java Service Wrapper) will output a shell script with start/stop arguments. The stop will gracefully shut down or kill your Spring application.
The same script can be used to run your application as a Linux service.
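A rough pom.xml sketch of that setup, assuming the org.codehaus.mojo appassembler-maven-plugin and its generate-daemons goal (the id and main class are placeholders; check the plugin documentation for the exact options):
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>appassembler-maven-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>generate-daemons</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <daemons>
      <daemon>
        <id>myapp</id>
        <mainClass>com.example.MyApplication</mainClass>
        <platforms>
          <platform>jsw</platform>
        </platforms>
      </daemon>
    </daemons>
  </configuration>
</plugin>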

If you are in a Linux environment, all you have to do is create a symlink to your .jar file inside /etc/init.d/:
sudo ln -s /path/to/your/myboot-app.jar /etc/init.d/myboot-app
Then you can start the application like any other service
sudo /etc/init.d/myboot-app start
To close the application
sudo /etc/init.d/myboot-app stop
This way, the application will not terminate when you exit the terminal, and it will shut down gracefully with the stop command.
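Note that this relies on the jar being built as a "fully executable" jar with an embedded launch script. A minimal spring-boot-maven-plugin sketch for pom.xml (version managed by the Spring Boot parent):
<plugin>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-maven-plugin</artifactId>
  <configuration>
    <executable>true</executable>
  </configuration>
</plugin>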

For Spring Boot web apps, Spring Boot provides an out-of-the-box solution for graceful shutdown from version 2.3.0.RELEASE onwards.
An excerpt from the Spring documentation:
Refer to this answer for the code snippet.

If you are using Spring Boot version 2.3 and up, there is a built-in way to shut down the app gracefully. Add the following to application.properties:
server.shutdown=graceful
spring.lifecycle.timeout-per-shutdown-phase=20s
If you are using a lower Spring Boot version, you can write a custom shutdown hook and control how the different beans should shut down, or in which order. Example code below:
@Component
public class AppShutdownHook implements ApplicationListener<ContextClosedEvent> {

    private static final Logger logger = LoggerFactory.getLogger(AppShutdownHook.class);

    @Override
    public void onApplicationEvent(ContextClosedEvent event) {
        logger.info("shutdown requested !!!");
        try {
            // TODO Add logic to shut down the different elements of your application
        } catch (Exception e) {
            logger.error("Exception occurred while shutting down Application:", e);
        }
    }
}

Try the following command in the cmd or bash terminal where the server is running:
kill $(jobs -p)
This recommendation comes from the book Microservices with Spring Boot and Spring Cloud - Build Resilient and Scalable Microservices.

Try this: press Ctrl+C.
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 04:48 min
[INFO] Finished at: 2022-09-07T18:17:35+05:30
[INFO] ------------------------------------------------------------------------
Terminate batch job (Y/N)?
Type Y to terminate

Related

Spring Scheduler not working in google cloud run with cpu throttling off

Hello all, I have a Spring Scheduler job which has to run on Google Cloud Run at a scheduled interval.
It works perfectly fine with a local docker-compose deployment; it gets triggered without any issue.
However, in the Google Cloud Run service with CPU throttling off (which keeps the CPU always on), it is not working after the first run.
I will paste the Dockerfile for reference, but I am pretty sure it is working fine.
FROM maven:3-jdk-11-slim AS build-env
# Set the working directory to /app
WORKDIR /app
COPY pom.xml ./
COPY src ./src
COPY css-common ./css-common
RUN echo $(ls -1 css-common/src/main/resources)
# Build and create the common jar
RUN cd css-common && mvn clean install
# Build the job
RUN mvn package -DskipTests
# It's important to use OpenJDK 8u191 or above that has container support enabled.
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM openjdk:11-jre-slim
# Copy the jar to the production image from the builder stage.
COPY --from=build-env /app/target/css-notification-job-*.jar /app.jar
# Run the web service on container startup
CMD ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
And below is the terraform script used for the deployment
resource "google_cloud_run_service" "job-staging" {
name = var.cloud_run_job_name
project = var.project
location = var.region
template {
spec {
containers {
image = "${var.docker_registry}/${var.project}/${var.cloud_run_job_name}:${var.docker_tag_notification_job}"
env {
name = "DB_HOST"
value = var.host
}
env {
name = "DB_PORT"
value = 3306
}
}
}
metadata {
annotations = {
"autoscaling.knative.dev/maxScale" = "4"
"run.googleapis.com/vpc-access-egress" = "all-traffic"
"run.googleapis.com/cpu-throttling" = false
}
}
}
timeouts {
update = "3m"
}
}
Something I noticed in the logs itself
2022-01-04 00:19:39.177  INFO 1 --- [ionShutdownHook] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
2022-01-04 00:19:39.181  INFO 1 --- [ionShutdownHook] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Shutdown initiated...
2022-01-04 00:19:39.193  INFO 1 --- [ionShutdownHook] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Shutdown completed.
It is shutting down the entity manager. I provided -Xmx1024m heap memory to make sure it has enough memory.
Although the Google documentation mentions it should work, for some reason the scheduler is not getting triggered. Any help would be really appreciated.
TL;DR: Using Spring Scheduler on Cloud Run is a bad idea. Prefer Cloud Scheduler instead.
In fact, you have to understand the lifecycle of a Cloud Run instance. First of all, CPU is allocated to the process ONLY while a request is being processed.
The immediate effect of that is that a background process, like a scheduler, can't work, because no CPU is allocated outside of request processing.
Except if you set CPU throttling to off. You did that? Great, but there are other caveats!
An instance is created when a request comes in, and lives up to 15 minutes without processing any request. Then the instance is offloaded and you scale to 0.
Here again, the scheduler can't work if the instance is shut down. The solution is to set the min instances to 1 AND CPU throttling to false, to keep 1 instance 100% up and let the scheduler do its job.
The final issue with Cloud Run is scalability. You set 4 in your Terraform; that means you can have up to 4 instances in parallel, and therefore 4 schedulers running in parallel, one on each instance. Is that really what you want? If not, you can set the max instances to 1 to limit the number of parallel instances to 1 (see the annotation sketch below).
In the end, you have 1 instance, up full time, that can't scale up and down. Because it can't scale, I don't recommend performing the processing on that instance; instead, call another API running on another Cloud Run service, which will be able to scale up and down according to the scheduler's requirements.
And so, you will have only 1 scheduler that performs API calls to other Cloud Run services to perform the tasks. That's the purpose of Cloud Scheduler.
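A minimal Terraform sketch of those settings (min and max instances pinned to 1, CPU throttling off), using the standard Knative/Cloud Run annotation keys, merged into the template-level metadata block from the question:
metadata {
  annotations = {
    "autoscaling.knative.dev/minScale"  = "1"
    "autoscaling.knative.dev/maxScale"  = "1"
    "run.googleapis.com/cpu-throttling" = false
  }
}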

Task execution is not working after launching the task in Spring Cloud Data Flow

I have created a Spring Boot application with the @EnableTask annotation and tried to print the arguments to the log.
package com.custom.samplejob;
import org.springframework.boot.CommandLineRunner;
import org.springframework.cloud.task.configuration.EnableTask;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
@EnableTask
public class TaskConfiguration {

    @Bean
    public CommandLineRunner commandLineRunner() {
        return args -> {
            System.out.println(args);
        };
    }
}
After that, I ran mvn clean install to put the jar in the local Maven repo:
com.custom:samplejob:0.0.1-SNAPSHOT
Using a custom docker-compose to run Spring Cloud Data Flow locally on Windows with the below parameters:
set HOST_MOUNT_PATH=C:\Users\user\.m2 (Local maven repository mounting)
set DOCKER_MOUNT_PATH=/root/.m2/
set DATAFLOW_VERSION=2.7.1
set SKIPPER_VERSION=2.6.1
docker-compose up
Using the below command to register the app:
app register --type task --name custom-task-trail-1 --uri maven://com.custom:samplejob:0.0.1-SNAPSHOT
Created a task using the UI (below URL) and launched the task. The task was launched successfully.
http://localhost:9393/dashboard/#/tasks-jobs/tasks
These are the logs I can see in the docker-compose up terminal,
dataflow-server | 2021-02-15 13:20:41.673 INFO 1 --- [nio-9393-exec-9] o.s.c.d.spi.local.LocalTaskLauncher : Preparing to run an application from com.custom:samplejob:jar:0.0.1-SNAPSHOT. This may take some time if the artifact must be downloaded from a remote host.
dataflow-server | 2021-02-15 13:20:41.693 INFO 1 --- [nio-9393-exec-9] o.s.c.d.spi.local.LocalTaskLauncher : Command to be executed: /usr/lib/jvm/jre-11.0.8/bin/java -jar /root/.m2/repository/com/custom/samplejob/0.0.1-SNAPSHOT/samplejob-0.0.1-SNAPSHOT.jar --name=dsdsds --spring.cloud.task.executionid=38
dataflow-server | 2021-02-15 13:20:41.702 INFO 1 --- [nio-9393-exec-9] o.s.c.d.spi.local.LocalTaskLauncher : launching task custom-task-trail-1-48794885-9a0a-4c46-a2a1-299bf91763ad
dataflow-server | Logs will be in /tmp/4921907601400/custom-task-trail-1-48794885-9a0a-4c46-a2a1-299bf91763ad
But the task execution list doesn't show the status, start date, and end date of those task executions.
Can someone help me resolve this, or am I missing anything here, either in the local installation or in the task's Spring Boot implementation?
I enabled Kubernetes on Docker Desktop and installed the Spring Cloud Data Flow server on top of that.
I then generated a Docker image using the jib-maven-plugin and registered the app with a docker URI.
Now the sample task application works in my case.
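For reference, such a registration might look like the following; the docker: URI scheme is the one SCDF uses for container images, and the image name here is a hypothetical one produced by jib:
app register --type task --name custom-task-trail-1 --uri docker:custom/samplejob:0.0.1-SNAPSHOT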

Unable to obtain detailed trace messages on Jaeger UI (using Quarkus)

I am playing with Quarkus and Jaeger via the OpenTracing integration. After running the Jaeger server and the https://github.com/quarkusio/quarkus-quickstarts/tree/master/opentracing-quickstart repo, I found the traces at http://localhost:16686/search, but I only see the resource class, arguments, and process name; the "Logs" section is not shown when expanding the trace detail.
The steps are easy:
1. Run the Jaeger server: docker run --rm=true --name erp_jaeger_server -e COLLECTOR_ZIPKIN_HTTP_PORT=9411 -p 5775:5775/udp -p 6831:6831/udp -p 6832:6832/udp -p 5778:5778 -p 16686:16686 -p 14268:14268 -p 9411:9411 jaegertracing/all-in-one:latest
2. Clone the example repo and run it:
https://github.com/quarkusio/quarkus-quickstarts/tree/master/opentracing-quickstart
(no further configuration)
3. Run: mvn quarkus:dev
4. Visit http://localhost:8080/hello/
5. Explore the Jaeger UI at http://localhost:16686/
6. I found the trace tags and process details, but the detailed content of Log.info('hello') is not shown.
I was trying with @Slf4j but I got the same result.
Thanks in advance.
By default, OpenTracing doesn't automatically write your log statements into span logs; only important messages that Jaeger feels need to be logged and are needed for tracing will be there :). The idea is to separate the responsibilities of tracing and log management. Check this GitHub discussion.
An alternative would be to use centralized log management and print the traceId & spanId into your logs for troubleshooting and correlating logs and traces.
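A sketch of what that could look like with the OpenTracing API, assuming OpenTracing 0.33+ (where SpanContext exposes toTraceId()/toSpanId()), an injected Tracer, and an SLF4J logger:
io.opentracing.Span span = tracer.activeSpan();
if (span != null) {
    // include the trace/span ids in the log line so logs can be correlated with Jaeger traces
    LOGGER.info("traceId={} spanId={} hello", span.context().toTraceId(), span.context().toSpanId());
}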
As iabughosh said, the main focus of Jaeger is traceability, monitoring, and performance, not logging.
Anyway, I found that with an injected io.opentracing.Tracer bean you can insert a tag into the current span that will be shown in the Jaeger UI. This example adds the cause and the first few stack trace lines of an exception to the Tags section (I use it in my global ExceptionHandler to add more info about the error):
public abstract class MyExceptionHandler {

    @Inject
    io.opentracing.Tracer tracer;

    /**
     * Will trace the message at the Jaeger metrics service; only for monitoring and profiling use, not for the logger.
     * @param e the exception to report
     */
    protected void monitorTrace(Exception e) {
        if (tracer != null && e != null) {
            StringBuffer sb = new StringBuffer();
            sb.append(e.getCause() + "\n");
            int deep = 3;
            for (int i = 0; i < deep; i++) {
                sb.append(e.getStackTrace()[i] + "\n");
            }
            tracer.activeSpan().setTag("stack ", sb.toString());
        }
    }
}
and you will see the small stack trace in the Jaeger UI.
Hope this helps.

Spring Batch Integration job instance already exists on start up

I am using Spring Batch Integration to poll for a file and process it, and was looking for some guidance on the job parameters aspect of it. I am using the following to turn a file into a job launch request:
@Transformer
public JobLaunchRequest toRequest(Message<File> message) {
    JobParametersBuilder jobParametersBuilder =
            new JobParametersBuilder();
    jobParametersBuilder.addString(fileParameterName,
            message.getPayload().getAbsolutePath());
    jobParametersBuilder.addLong("time", new Date().getTime());
    return new JobLaunchRequest(job, jobParametersBuilder.toJobParameters());
}
On starting the application for the first time there is only one parameter, run.id. If I add a file to the directory that the file poller is watching, it creates 2 parameters in the DB: fileParameterName and time. If I start the application again, it will reuse the previous values of fileParameterName and time and add a new run.id. The message on the initial start-up is:
Job: ... launched with the following parameters: [{run.id=1}]
If I add a file my application handles the file correctly:
Job: ... launched with the following parameters:[{input.file.name=C:\Temp\test.csv, time=1472051531556}]
but if I stop and start the application again I get the following message:
Job: ... launched with the following parameters: [{time=1472051531556, run.id=1, input.file.name=C:\Temp\test.csv}]
My question is: why is it looking at the previous parameters on this start-up? Is there a way to add the current time as a parameter on start-up instead of the previous time, so I don't get "A job instance already exists and is complete for parameters={}"? Or to stop the jobs from running on start-up?
Also, if the application is running and I add a file, it enters the toRequest method, but it does not on start-up.
Any help would be great.
Thanks
We pass a 'run.id' parameter with the current timestamp wherever we kick off a Spring Batch job. This is how we kick off a Spring Batch job from a shell script:
RUN_ID=$(date +"%Y-%m-%d %H:%M:%S")
JOB_PARAMS="filename=XXX"
$JAVA_HOME org.springframework.batch.core.launch.support.CommandLineJobRunner \
    springbatch_XXX.xml SpringBatchJob run.id="$RUN_ID" ${JOB_PARAMS}

Configuring Jetty 9 + Spring 4 to add a PropertySource file

How do I add an external PropertySource file to Jetty 9's jetty.xml?
I use a Spring annotation with an external PropertySource file:
@PropertySources({
    @PropertySource(name = "arm", value = "${propertySource}")
})
public class SecurityConfig extends WebSecurityConfigurerAdapter {
When I run the application through Maven, I pass the propertySource parameter:
mvn -DpropertySource=file:/etc/jetty/arm.properties jetty:stop jetty:run
It works perfectly; Jetty starts with the config params from /etc/jetty/arm.properties.
How do I pass -DpropertySource=/etc/jetty/arm.properties as a parameter when starting Jetty? How do I configure this in jetty.xml?
I read the docs
http://www.eclipse.org/jetty/documentation/current/jetty-xml-usage.html
and added this line to jetty.xml:
<SystemProperty name="propertySource" default="file:/etc/jetty/arm.properties"/>
but this does not work, and Jetty fails.
The use of <SystemProperty> is to evaluate a named system property and use it in an XML file as a value for whatever thing you are attempting to configure.
There are two ways you can accomplish this using the jetty-distribution.
1: Simply use the Java JVM command line to add the property
$ cd /path/to/mybase
$ java -DpropertySource=file:/etc/jetty/arm.properties -jar /path/to/jetty-dist/start.jar
2: Allow the ${jetty.base} to manage the Java JVM properties
In your ${jetty.base} add the following 2 lines to your start.ini
--exec
-DpropertySource=file:/etc/jetty/arm.properties
Then you can just run Jetty normally ...
$ cd /path/to/mybase
$ java -jar /path/to/jetty-dist/start.jar
Bonus: Alternate technique
Since this is Spring, you could probably switch to using a classloader resource instead.
Run this command to enable the resources classpath:
$ cd /path/to/mybase
$ java -jar /path/to/jetty-dist/start.jar --add-to-start=resources
Then put your properties file in the new ${jetty.base}/resources
Lastly, reference your PropertySource via a Spring classloader resource reference instead.
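A hypothetical example of what that could look like, assuming the properties file is placed at ${jetty.base}/resources/arm.properties:
@PropertySource(name = "arm", value = "classpath:arm.properties")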
#service jetty9 start
[FAIL] Starting Jetty 9 Servlet Engine: jetty9 failed!
# sudo service jetty9 check
[ ok ] Checking arguments for Jetty:.
. ok
[ ok ] PIDFILE = /var/run/jetty9.pid.
[ ok ] JAVA_OPTIONS = -Xmx256m -Djava.awt.headless=true -Djava.io.tmpdir=/var/cache/jetty9/data -Djava.library.path=/usr/lib -Djetty.home=/usr/share/jetty9 -Djetty.logs=/var/log/jetty9 -Djetty.state=/var/lib/jetty9/jetty.state.
[ ok ] JAVA = /usr/lib/jvm/java-8-openjdk-amd64/bin/java.
[ ok ] JETTY_USER = jetty.
[ ok ] ARGUMENTS =.
[ ok ] Jetty 9 Servlet Engine is running with pid 23749.
Solution 2:
I added the line --add-to-start=resources to start.ini
# sudo service jetty9 start
[warn] Starting Jetty 9 Servlet Engine: jetty9[....] /var/run/jetty9.pid exists, but jetty was not running. Ignoring /var/run/jetty9.pid ... (warning).
failed!
Where can I see the log ?
