Unable to use command line arguments to start Spring Boot application - Gradle

I have the following things set up on my system:
Ubuntu 16.04
Gradle 3.0
Java 1.8.0_91
springBootVersion : 1.4.0.RELEASE
I am running the Spring Boot application from the command line with the following arguments:
gradle -Dserver.port=8090 -Dspring.profiles.active=dev bootRun
Following are the logs:
Starting a Gradle Daemon, 3 stopped Daemons could not be reused, use --status for details
No active profile set, falling back to default profiles: default
Registering beans for JMX exposure on startup
2016-10-26 18:36:00.463 INFO 27743 --- [ restartedMain] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 0
2016-10-26 18:36:00.584 INFO 27743 --- [ restartedMain] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
When I run gradle --status, the result is:
No Gradle daemons are running.
PID STATUS INFO
26929 STOPPED (client disconnected)
27086 STOPPED (client disconnected)
27202 STOPPED (client disconnected)
27367 STOPPED (client disconnected)
I am not sure what's gone wrong here. I had been able to run this with no issues previously on older versions of Spring Boot and Gradle.
However, when I run
java -jar -Dspring.profiles.active=dev -Dserver.port=8090 build/libs/demo-0.0.1-SNAPSHOT.jar
I am able to run the application with the desired arguments, on port 8090 and with the dev profile.

Try using:
java -Dspring.profiles.active=dev -Dserver.port=8090 -jar build/libs/demo-0.0.1-SNAPSHOT.jar
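If you would rather pass the values as program arguments than as JVM system properties, Spring Boot also resolves --key=value style arguments, provided the main class forwards args to SpringApplication.run. A minimal sketch, assuming a standard Spring Boot main class (the DemoApplication name is hypothetical):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        // Forwarding args lets Spring Boot pick up arguments such as
        // --server.port=8090 --spring.profiles.active=dev
        SpringApplication.run(DemoApplication.class, args);
    }
}

With that in place, java -jar build/libs/demo-0.0.1-SNAPSHOT.jar --server.port=8090 --spring.profiles.active=dev behaves the same as the system-property form shown above.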

Related

Starting a Spring Boot application from IntelliJ Community Edition

How can we start a Spring Boot application in IntelliJ Community Edition? I don't see an embedded Tomcat included here. When I start the application from the @SpringBootApplication annotated class, I get only the messages below in the logger:
2021-06-23 20:02:45.933 INFO 1086 --- [ main] r.e.r.RestapplicationApplication : Starting RestapplicationApplication using Java 11.0.11 on Antonys-MBP.home with PID 1086 (/Users/robin/Documents/work/workspace/restapplication/target/classes started by robin in /Users/robin/Documents/work/workspace/restapplication)
2021-06-23 20:02:45.938 INFO 1086 --- [ main] r.e.r.RestapplicationApplication : No active profile set, falling back to default profiles: default
2021-06-23 20:02:48.342 INFO 1086 --- [ main] r.e.r.RestapplicationApplication : Started RestapplicationApplication in 3.621 seconds (JVM running for 4.888)
Please help on how I can start the application in Tomcat and test it in the Community Edition of IntelliJ.
r.e.r.RestapplicationApplication : Started RestapplicationApplication in 3.621 seconds (JVM running for 4.888)
The line above tells you that the embedded Tomcat has already started on the default port 8080 (unless you have overridden the port in your configuration). You can try hitting the application on that port.
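If the project does not expose any endpoint yet, a minimal controller gives you something to hit at http://localhost:8080/ping (the package and class names here are just placeholders):

package com.example; // hypothetical package

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PingController {

    // Once the application has started, open http://localhost:8080/ping
    // (or whatever port you configured via server.port) and expect "pong" back.
    @GetMapping("/ping")
    public String ping() {
        return "pong";
    }
}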

Why is upgrading to Tomcat 10.0.5 causing Spring Boot to shut down after boot?

I have a Spring Boot project and I am trying to use embedded Tomcat 10 instead of Tomcat 7. I add the following to my POM...
<properties>
<tomcat.version>10.0.5</tomcat.version>
...
</properties>
Then I run the same command I was running before...
mvn clean package -U && java -cp target\my.jar;props -Dloader.main=com.my.Main org.springframework.boot.loader.PropertiesLauncher
But now it just starts and then shuts itself down. The final messages are...
2021-05-13 15:35:42.105 INFO 10084 --- [ main] com.my.Main : Started Main in 42.918 seconds (JVM running for 44.009)
2021-05-13 15:35:42.190 INFO 10084 --- [extShutdownHook] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
Why would this happen and how can I upgrade without this side effect?
Tomcat 10 is a Jakarta EE 9 servlet container. That basically means that all javax.* packages were renamed to jakarta.* for copyright reasons (Oracle didn't allow the Eclipse Foundation to use the javax.* names).
Spring Boot 2 and Spring 5 support only the previous Java EE 8 specification; you need to wait for Spring Boot 3 and Spring 6 for Tomcat 10 support. Alternatively, you can pass the Spring libraries through the Apache Tomcat Migration Tool, which just reached version 1.0, or downgrade to Tomcat 9.0.
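To illustrate the namespace split, here is a schematic, hypothetical servlet (not code from the project in question). Spring 5's servlet integration is compiled against javax.servlet.*, so a jakarta.servlet.* container such as Tomcat 10 does not recognize its components as servlet API types.

// Hypothetical servlet written against the Jakarta EE 9 namespace used by Tomcat 10.
// On Tomcat 9 and earlier (Java EE 8) the imports would be javax.servlet.* instead,
// which is the namespace Spring Boot 2 / Spring 5 are built against.
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

import java.io.IOException;

public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.getWriter().write("hello");
    }
}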
See also
Tomcat 10.0.4 doesn't load servlets (@WebServlet classes) with 404 error

systemctl Spring Boot application: The web application [ROOT] appears to have started a thread named

I developed the Spring Boot microservice on my Windows 10 machine and it runs with no problems. After deploying it to my AWS EC2 instance, I can run it with
java -jar app.jar
However, if I use systemctl to run it, it gives me:
The web application [ROOT] appears to have started a thread named [RxIoScheduler-1 (Evictor)] but has failed to stop
I installed two Java 11 runtimes:
one using yum install java,
the other using sdk install java
The reason I installed two is that systemctl cannot run the app with the Java installed by SDKMAN.
I suspect it is because systemctl always runs under root.
I have also tried putting the command into a .sh file.
My app.service file:
[Unit]
Description=zuul service
[Service]
# The configuration file application.properties should be here:
#change this to your workspace
User=root
WorkingDirectory=/home/ec2-user
#path to executable.
#executable is a bash script which calls jar file
ExecStart=/usr/bin/java -jar /home/ec2-user/zuul/target/zuul.jar
SuccessExitStatus=143
TimeoutStopSec=10
RestartSec=5
[Install]
WantedBy=multi-user.target
This is the error message:
Stopping service [Tomcat]
2019-10-31 03:22:14.811 WARN 6973 --- [ main] o.a.c.loader.WebappClassLoaderBase : The web application [ROOT] appears to have started a thread named [spring.cloud.inetutils] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.base#11.0.5/jdk.internal.misc.Unsafe.park(Native Method)
java.base#11.0.5/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
java.base#11.0.5/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
java.base#11.0.5/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
java.base#11.0.5/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
java.base#11.0.5/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
java.base#11.0.5/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
java.base#11.0.5/java.lang.Thread.run(Thread.java:834)
2019-10-31 03:22:14.811 WARN 6973 --- [ main] o.a.c.loader.WebappClassLoaderBase : The web application [ROOT] appears to have started a thread named [RxIoScheduler-1 (Evictor)] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.base#11.0.5/jdk.internal.misc.Unsafe.park(Native Method)
java.base#11.0.5/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
java.base#11.0.5/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
java.base#11.0.5/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1182)
java.base#11.0.5/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899)
java.base#11.0.5/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
java.base#11.0.5/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
java.base#11.0.5/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
java.base#11.0.5/java.lang.Thread.run(Thread.java:834)
2019-10-31 03:22:14.818 INFO 6973 --- [ main] ConditionEvaluationReportLoggingListener :

Grails 4 - Google Cloud Platform deployment keeps restarting

My Grails 4 application, deployed to GCP, appears to try to start up but never comes up properly. Application requests return a 500 response. There are no errors or clues even with DEBUG log level set at the root logger.
The same application runs fine locally in development mode.
The production configuration is as per the Grails 3 deployment (to GCP) guide except for the adjustments that were necessary to make it work for Grails 4/Java 11.
Most of the bootstrapping appears to proceed as expected:
Spring Security configures successfully
Spring Security REST configures successfully
Spring beans are registered
Connects to Cloud SQL instance
Database schema is created (by Liquibase)
Plugins are loaded successfully
Then it gets to the following familiar lines of log output:
INFO --- [main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
INFO --- [main] o.a.coyote.http11.Http11NioProtocol : Initializing ProtocolHandler ["http-nio-8080"]
INFO --- [main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
INFO --- [main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.17]
and then it restarts,
while normally (as happens locally) the next phase of bootstrapping would be:
[restartedMain] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
[restartedMain] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 42018 ms
Probably a long-shot question, but any clues or suggestions would be much appreciated.
I've run out of doors to open. :-(
You could try updating the Tomcat version, just in case.
But looking at the Grails documentation:
https://docs.grails.org/latest/guide/upgrading.html
There's some configuration needed when you upgrade from Grails 3.3.x to Grails 4 to prevent the server from restarting when views or message bundles are changed.
For example, modifying a GSP in Grails 4 M2 caused the application to restart:
https://github.com/grails/grails-core/issues/11284

Java 10 Spring Boot Infinispan org.jgroups.logging.Slf4jLogImpl not found

I have a Spring Boot application which I'm building and running with Java 10. If I run the app using
java -jar
everything works fine; the app starts just OK.
But if I put my app inside a Docker container with exactly the same Java version, my app throws this exception:
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.jgroups.logging.Slf4jLogImpl
at org.jgroups.logging.LogFactory.getLog(LogFactory.java:101)
at org.jgroups.conf.XmlConfigurator.<clinit>(XmlConfigurator.java:33)
at org.jgroups.conf.ConfiguratorFactory.getStackConfigurator(ConfiguratorFactory.java:62)
at org.jgroups.JChannel.<init>(JChannel.java:122)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.buildChannel(JGroupsTransport.java:591)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.initChannel(JGroupsTransport.java:405)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.start(JGroupsTransport.java:389)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.infinispan.commons.util.SecurityActions.lambda$invokeAccessibly$0(SecurityActions.java:79)
... 104 common frames omitted
I'm using this version of Java:
java version "10.0.2" 2018-07-17
Java(TM) SE Runtime Environment 18.3 (build 10.0.2+13)
Java HotSpot(TM) 64-Bit Server VM 18.3 (build 10.0.2+13, mixed mode)
Docker version is:
Client:
Version: 18.06.1-ce
API version: 1.38
Go version: go1.10.3
Git commit: e68fc7a
Built: Tue Aug 21 17:21:31 2018
OS/Arch: darwin/amd64
Experimental: false
Server:
Engine:
Version: 18.06.1-ce
API version: 1.38 (minimum version 1.12)
Go version: go1.10.3
Git commit: e68fc7a
Built: Tue Aug 21 17:29:02 2018
OS/Arch: linux/amd64
Experimental: true
My Docker image uses the Alpine base image alpine:latest. I'm installing Java in my container from this link:
curl -jksSLH "Cookie: oraclelicense=accept-securebackup-cookie" -o /tmp/java.tar.gz \
http://download.oracle.com/otn-pub/java/jdk/10.0.2+13/19aef61b38124481863b1413dce1855f/jdk-10.0.2_linux-x64_bin.tar.gz
I'm really confused because outside a Docker container my app works fine, but inside a Docker container it doesn't. In either case I'm using the same Java version.
UPDATE
We tried the Oracle JDK and OpenJDK; same behavior.
UPDATE 2
We even tried java -jar from inside the container; no luck.
TL;DR
There are three options to resolve this:
Upgrade to version 4.0.16 of JGroups, currently in SNAPSHOT. Edit: now released here.
Make sure the Java system properties "user.language" and "user.country" are set (see the sketch after this list).
Force the JDKLogImpl with -Djgroups.use.jdk_logger=true (mentioned by Perimosh).
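A minimal sketch of option 2, assuming the properties can be set before anything touches JGroups (the Application class name is hypothetical): set them at the top of main, before the Spring context, and therefore JGroups, starts.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        // In stripped-down container images these properties can be missing,
        // which is what ultimately triggers the failure explained below.
        if (System.getProperty("user.language") == null) {
            System.setProperty("user.language", "en");
        }
        if (System.getProperty("user.country") == null) {
            System.setProperty("user.country", "US");
        }
        SpringApplication.run(Application.class, args);
    }
}

The no-code equivalent is to pass -Duser.language=en -Duser.country=US on the java command line (or -Djgroups.use.jdk_logger=true for option 3).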
Explanation
Ran into this issue in the following scenario.
Apache Camel + JGroups worked fine in a local environment. We deployed it elsewhere in a Docker instance, where we got the following stack trace:
2018-11-19 13:38:03.063 INFO 582 --- [ main] o.a.camel.spring.boot.RoutesCollector : Loading additional Camel XML routes from: classpath:camel/*.xml
2018-11-19 13:38:03.064 INFO 582 --- [ main] o.a.camel.spring.boot.RoutesCollector : Loading additional Camel XML rests from: classpath:camel-rest/*.xml
2018-11-19 13:38:03.107 INFO 582 --- [ main] o.a.camel.spring.SpringCamelContext : Apache Camel 2.22.2 (CamelContext: camel-1) is starting
2018-11-19 13:38:03.111 INFO 582 --- [ main] o.a.c.m.ManagedManagementStrategy : JMX is enabled
2018-11-19 13:38:03.480 INFO 582 --- [ main] o.a.camel.spring.SpringCamelContext : StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
2018-11-19 13:38:03.597 INFO 582 --- [ main] o.a.camel.spring.SpringCamelContext : Apache Camel 2.22.2 (CamelContext: camel-1) is shutting down
2018-11-19 13:38:03.616 WARN 582 --- [ main] o.a.camel.spring.SpringCamelContext : Error occurred while shutting down service: org.apache.camel.component.jgroups.cluster. JGroupsLockClusterService#10fa5af5. This exception will be ignored.
java.lang.NullPointerException: null
at org.apache.camel.component.jgroups.cluster.JGroupsLockClusterView.doStop(JGroupsLockClusterView.java:109)
at org.apache.camel.support.ServiceSupport.stop(ServiceSupport.java:102)
at org.apache.camel.impl.cluster.AbstractCamelClusterService.lambda$doStop$2(AbstractCamelClusterService.java:134)
at org.apache.camel.util.concurrent.LockHelper.doWithReadLockT(LockHelper.java:54)
at org.apache.camel.impl.cluster.AbstractCamelClusterService.doStop(AbstractCamelClusterService.java:130)
at org.apache.camel.support.ServiceSupport.stop(ServiceSupport.java:102)
at org.apache.camel.util.ServiceHelper.stopService(ServiceHelper.java:142)
at org.apache.camel.util.ServiceHelper.stopAndShutdownService(ServiceHelper.java:205)
at org.apache.camel.impl.DefaultCamelContext.shutdownServices(DefaultCamelContext.java:3663)
at org.apache.camel.impl.DefaultCamelContext.shutdownServices(DefaultCamelContext.java:3688)
at org.apache.camel.impl.DefaultCamelContext.shutdownServices(DefaultCamelContext.java:3676)
at org.apache.camel.impl.DefaultCamelContext.doStop(DefaultCamelContext.java:3567)
at org.apache.camel.support.ServiceSupport.stop(ServiceSupport.java:102)
at org.apache.camel.impl.DefaultCamelContext.stop(DefaultCamelContext.java:3220)
at org.apache.camel.spring.SpringCamelContext.stop(SpringCamelContext.java:148)
...
2018-11-19 13:38:03.679 INFO 582 --- [ main] o.a.camel.spring.SpringCamelContext : Apache Camel 2.22.2 (CamelContext: camel-1) uptime 0.570 seconds
2018-11-19 13:38:03.680 INFO 582 --- [ main] o.a.camel.spring.SpringCamelContext : Apache Camel 2.22.2 (CamelContext: camel-1) is shutdown in 0.082 seconds
2018-11-19 13:38:03.716 INFO 582 --- [ main] org.mongodb.driver.connection : Closed connection [connectionId{localValue:2, serverValue:2}] to localhost:43115 because the pool has been closed.
As you can see, Apache Camel attempts to start, but never does and ends up shutting down. Thus, JGroups gets an NPE because it expects Camel to be up.
After debugging the code, it appeared that an exception was being thrown during the Camel start-up process and was getting swallowed.
From there, we discovered that the creation of a Slf4jLogImpl instance in org.jgroups.logging.LogFactory#getLog(java.lang.Class<?>) (new Slf4jLogImpl(clazz)) was the problem; the method threw a java.lang.ExceptionInInitializerError:
java.lang.NullPointerException: null
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.jgroups.logging.LogFactory.getLog(LogFactory.java:101)
at org.jgroups.conf.XmlConfigurator.<clinit>(XmlConfigurator.java:33)
at org.jgroups.conf.ConfiguratorFactory.getXmlConfigurator(ConfiguratorFactory.java:210)
at org.jgroups.conf.ConfiguratorFactory.getStackConfigurator(ConfiguratorFactory.java:91)
at org.jgroups.JChannel.<init>(JChannel.java:130)
...
Running new Slf4jLogImpl(clazz) a second time onward in the debugger results in the following stack trace, which mirrors the originally posted issue:
java.lang.NoClassDefFoundError: Could not initialize class org.jgroups.logging.Slf4jLogImpl
at org.jgroups.logging.LogFactory.getLog(LogFactory.java:101)
at org.jgroups.conf.XmlConfigurator.<clinit>(XmlConfigurator.java:33)
at org.jgroups.conf.ConfiguratorFactory.getXmlConfigurator(ConfiguratorFactory.java:210)
at org.jgroups.conf.ConfiguratorFactory.getStackConfigurator(ConfiguratorFactory.java:91)
at org.jgroups.JChannel.<init>(JChannel.java:130)
This difference in results is due to the class loader caching the result of the earlier Class.forName() call, which determined that the class definition was not found.
Finally, we tracked the earlier NPE down to java.util.Locale#Locale(java.lang.String, java.lang.String, java.lang.String), since country was null. This is because JGroups' org.jgroups.logging.Slf4jLogImpl defines a LOCALE field using the Java system properties "user.language" and "user.country". The country property was not set in our Docker instance, so the Locale constructor threw the NPE. Setting both of these Java properties should fix the issue. Alternatively, you can force the JDKLogImpl so that the Slf4jLogImpl is never instantiated, as mentioned in the previous answer, by passing -Djgroups.use.jdk_logger=true.
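The root cause described above is easy to reproduce in isolation: the three-argument Locale constructor rejects null arguments, and a raw System.getProperty lookup returns null when the property is missing. A small standalone sketch (not JGroups code) that mimics the situation:

import java.util.Locale;

public class LocaleNpeDemo {

    public static void main(String[] args) {
        // Simulate a container where the property is missing.
        System.clearProperty("user.country");

        String language = System.getProperty("user.language", "en");
        String country = System.getProperty("user.country"); // null here

        // Constructing a Locale from raw system properties, as described above for
        // Slf4jLogImpl's LOCALE field, throws NullPointerException when an argument
        // is null. Inside a static initializer that surfaces as an
        // ExceptionInInitializerError on first use and NoClassDefFoundError afterwards.
        Locale locale = new Locale(language, country, "");
        System.out.println(locale);
    }
}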
Edit: Fixed in the latest version released here.
Now, it looks like this will be fixed in the upcoming JGroups release 4.0.16.Final (https://github.com/belaban/JGroups/commit/61578c657138f02178c32a564ac9eae7c3976093#diff-93eb0f6a8a4953312098be459bd7ce76). Until then, you can get the snapshot version with the fix at https://repository.jboss.org/nexus/content/repositories/snapshots/org/jgroups/jgroups/4.0.16-SNAPSHOT/.
This is not a real solution, but since it unblocked us, I will share it. Also, maybe somebody can figure out the real problem by looking at this workaround. We added this JVM argument to bypass SLF4J for JGroups and use the JDKLogImpl:
-Djgroups.use.jdk_logger=true
