Cannot deploy Spring Boot application - Gradle

I'm currently evaluating CloudControl as a platform provider for my Java-based applications.
I created a very simple Spring Boot app (https://github.com/mhmpl/gradle-example-app) with Gradle, but I'm unable to deploy it.
There are no errors in the error log that could give me any information. However, this is the output of the deploy log:
8/3/14 12:53 PM lxc-1272 INFO Container did not come up within 120 seconds.
8/3/14 12:53 PM lxc-1250 INFO Waiting for the container to be reachable...
8/3/14 12:53 PM lxc-1272 INFO Waiting for the container to be reachable...
8/3/14 12:52 PM lxc-1250 INFO Waiting for the container to be reachable...
8/3/14 12:52 PM lxc-1272 INFO Waiting for the container to be reachable...
8/3/14 12:52 PM lxc-1250 INFO Waiting for the container to be reachable...
8/3/14 12:52 PM lxc-1272 INFO Waiting for the container to be reachable...
8/3/14 12:51 PM lxc-1250 INFO Deploying ...
In the end, the app is not deployed and I cannot see what error I might have made. I already tried setting the memory to 1024 MB and adding a second container, but that did not change anything at all.

You need to bind the web server to the correct port, which is provided in the PORT environment variable.
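For an embedded-container Spring Boot app, one way to do that is to map PORT onto Spring Boot's server.port property before the application starts. A minimal sketch, assuming a standard Spring Boot 1.x main class (the class name is illustrative, not taken from the linked repository):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableAutoConfiguration
@ComponentScan
public class Application {

    public static void main(String[] args) {
        // The platform tells the app which port to listen on via PORT;
        // forwarding it as server.port makes the embedded container bind to it.
        String port = System.getenv("PORT");
        if (port != null) {
            System.setProperty("server.port", port);
        }
        SpringApplication.run(Application.class, args);
    }
}

Equivalently, server.port=${PORT:8080} in application.properties picks the variable up from the environment and falls back to 8080 for local runs.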

Related

Failing to deploy war file to tomcat server from manager portal

I am trying to deploy my .war file to the Tomcat server using the manager/html portal. However, when I hit the deploy button, the upload won't start and an error page appears in the browser window. I need clarification about what is actually causing the issue.
I tried changing the port manually and tried multiple .war files, but the problem still persists.
17-Feb-2019 17:18:59.396 INFO [main] org.apache.coyote.AbstractProtocol.pause Pausing ProtocolHandler ["http-nio-8080"]
17-Feb-2019 17:18:59.396 INFO [main] org.apache.coyote.AbstractProtocol.pause Pausing ProtocolHandler ["ajp-nio-8009"]
17-Feb-2019 17:18:59.396 INFO [main] org.apache.catalina.core.StandardService.stopInternal Stopping service [Catalina]
17-Feb-2019 17:18:59.426 INFO [main] org.apache.coyote.AbstractProtocol.stop Stopping ProtocolHandler ["http-nio-8080"]
17-Feb-2019 17:18:59.426 INFO [main] org.apache.coyote.AbstractProtocol.destroy Destroying ProtocolHandler ["http-nio-8080"]
17-Feb-2019 17:18:59.426 INFO [main] org.apache.coyote.AbstractProtocol.stop Stopping ProtocolHandler ["ajp-nio-8009"]
17-Feb-2019 17:18:59.427 INFO [main] org.apache.coyote.AbstractProtocol.destroy Destroying ProtocolHandler ["ajp-nio-8009"]
I am getting the above server trace when the upload fails. Any help will be much appreciated.

Tomcat 9 takes 1 minute to stop

I installed Tomcat 9.0.14 on my systems (Windows 10, Windows Server 2016 R2).
I have no issue starting the Tomcat service (it starts in 2-3 seconds).
However, it takes 1 minute to stop.
I thought one of my projects residing under webapps was taking the time, so I removed all my projects, but the result is the same.
After that I emptied the webapps folder entirely to check further; Tomcat still took 1 minute to stop.
I checked the log files and there are no errors. Tomcat is idle for 1 minute while stopping.
commons-daemon.log:
[2019-01-08 16:30:02] [info] [13948] Stopping service...
[2019-01-08 16:30:03] [info] [13948] Service stop thread completed.
[2019-01-08 16:31:03] [info] [ 1940] Run service finished.
[2019-01-08 16:31:03] [info] [ 1940] Commons Daemon procrun finished
catalina.log:
08-Jan-2019 16:30:02.399 INFO [Thread-6] org.apache.coyote.AbstractProtocol.pause Pausing ProtocolHandler ["http-nio-8080"]
08-Jan-2019 16:30:02.431 INFO [Thread-6] org.apache.coyote.AbstractProtocol.pause Pausing ProtocolHandler ["ajp-nio-8009"]
08-Jan-2019 16:30:02.453 INFO [Thread-6] org.apache.catalina.core.StandardService.stopInternal Stopping service [Catalina]
08-Jan-2019 16:30:02.453 INFO [Thread-6] org.apache.coyote.AbstractProtocol.stop Stopping ProtocolHandler ["http-nio-8080"]
08-Jan-2019 16:30:02.453 INFO [Thread-6] org.apache.coyote.AbstractProtocol.stop Stopping ProtocolHandler ["ajp-nio-8009"]
Is there any way I can reduce the stopping time of Tomcat 9?
In Tomcat 8 the stopping time was 3-5 seconds.
Any help is appreciated.
I was able to reproduce this by:
Downloading and extracting the apache-tomcat-9.0.14-windows-x64.zip
cd to apache-tomcat/bin
service.bat install
Starting the service is quick; stopping it is delayed by exactly 60 seconds.
This seems to be a Tomcat issue, but the current development snapshot (trunk) changelog suggests it has already been fixed for the not-yet-released Tomcat 9.0.15+, without an explicit bug report assigned:
Tomcat 9.0.15 (markt) in development / Catalina:
Correct a bug exposed in 9.0.14 and ensure that the Tomcat terminates in a timely manner when running as a service. (markt)
We had the same problem with Tomcat 9.0.26: it took exactly 60 seconds to finish once you terminated the server. We tried hard to close and shut down everything in our application, and in the end we realized we had an executor created with Executors.newCachedThreadPool(), and that cached pool has a keepAliveTime of 60 seconds.
So after terminating Tomcat, the thread pool waited 60 seconds to see whether its idle threads were still needed for reuse, and only after that did it really shut down. The solution was to shut down the cached thread pool when the application shuts down, as sketched below.
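A minimal sketch of that kind of fix, assuming a plain servlet context listener rather than whatever wiring the original application used (class and attribute names are illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

@WebListener
public class WorkerPoolLifecycle implements ServletContextListener {

    // Cached pool: idle (non-daemon) worker threads linger for the default
    // 60-second keepAliveTime, which matches the shutdown delay observed
    // when the pool is never closed.
    private final ExecutorService pool = Executors.newCachedThreadPool();

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        sce.getServletContext().setAttribute("workerPool", pool);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        pool.shutdown();                      // stop accepting new tasks
        try {
            if (!pool.awaitTermination(5, TimeUnit.SECONDS)) {
                pool.shutdownNow();           // interrupt anything still running
            }
        } catch (InterruptedException e) {
            pool.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }
}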

HDFS/YARN NodeManager not starting completely

I have been facing an issue for the past few days.
If I use Hadoop 2.7.4, I am not able to start the NodeManagers on the slaves with this version, and because of this I can't run any MapReduce jobs.
When I use Hadoop 2.8.2, everything starts well (start-dfs.sh and start-yarn.sh) and I am able to see the content and node activity at http://:50070, but while running my programs that use data from HDFS, it keeps displaying
17/11/15 12:51:46 WARN hdfs.DFSClient: zero
but the job runs fine. I am not aware of what this issue is or how it is caused, so I tried 2.7.3.
When I tried 2.7.3, I am not facing the above error, but I see that YARN is not starting completely: when I start it, it prints more lines on screen than usual and then starts. The main issue in this case is that I am not able to watch what is going on in Hadoop from the web URL, as it doesn't display anything on the web except for the files present in HDFS (so I am only able to see the data from Utilities --> Browse files).
The output is similar to this:
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn--resourcemanager-hadoop-master.out
Nov 20, 2017 8:01:28 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver as a provider class
Nov 20, 2017 8:01:28 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices as a root resource class
Nov 20, 2017 8:01:28 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
Nov 20, 2017 8:01:28 AM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
Nov 20, 2017 8:01:28 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
hadoop-slave1: Warning: Permanently added 'hadoop-slave1,10.40.0.0' (ECDSA) to the list of known hosts.
hadoop-slave1: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-hadoop-slave1.weave.local.out
Any idea how to set it up completely without any issue?

Liferay startup takes way too long

I'm new to Liferay development and I'm having trouble with the startup of my Liferay Tomcat server. It takes almost 3 minutes (169048 ms), which is unacceptable for development. I'd like to get it down to about one minute.
Here are the specs of my machine:
Intel Core Duo T2300 @ 1.66GHz
4GB RAM (3.24GB in use)
Windows 7 Enterprise 32 bit with Service Pack 1
I’m using:
Liferay 6.1.1-ce-ga2 bundled with Tomcat 7
Eclipse IDE Juno Release
In order to speed things up, I’ve:
removed all unnecessary portlets from the tomcat\webapps folder.
put the Tomcat native library 1.1.24 in the tomcat\bin folder
tweaked my portal-ext.properties as shown below
#disable some filters
com.liferay.portal.servlet.filters.sso.cas.CASFilter = false
com.liferay.portal.servlet.filters.sso.ntlm.NtlmFilter = false
com.liferay.portal.servlet.filters.sso.ntlm.NtlmPostFilter = false
com.liferay.portal.servlet.filters.sso.opensso.OpenSSOFilter= false
com.liferay.portal.sharepoint.SharepointFilter = false
com.liferay.portal.servlet.filters.gzip.GZipFilter = false
#disable indexing
index.on.startup=false
Here’s my startup log:
Jan 30, 2013 8:39:49 AM org.apache.catalina.core.AprLifecycleListener init
INFO: Loaded APR based Apache Tomcat Native library 1.1.24.
Jan 30, 2013 8:39:49 AM org.apache.catalina.core.AprLifecycleListener init
INFO: APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
Jan 30, 2013 8:39:51 AM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["http-apr-8080"]
Jan 30, 2013 8:39:51 AM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["ajp-apr-8009"]
Jan 30, 2013 8:39:51 AM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 2620 ms
Jan 30, 2013 8:39:51 AM org.apache.catalina.core.StandardService startInternal
INFO: Starting service Catalina
Jan 30, 2013 8:39:51 AM org.apache.catalina.core.StandardEngine startInternal
INFO: Starting Servlet Engine: Apache Tomcat/7.0.27
Jan 30, 2013 8:39:51 AM org.apache.catalina.startup.HostConfig deployDescriptor
INFO: Deploying configuration descriptor C:\Liferay\portal-6.1.1-ce-ga2\tomcat-7.0.27\conf\Catalina\localhost\Hi-portlet.xml
Jan 30, 2013 8:39:51 AM org.apache.catalina.startup.HostConfig deployDescriptor
WARNING: A docBase C:\Liferay\portal-6.1.1-ce-ga2\tomcat-7.0.27\webapps\Hi-portlet inside the host appBase has been specified, and will be ignored
Jan 30, 2013 8:39:51 AM org.apache.catalina.startup.SetContextPropertiesRule begin
WARNING: [SetContextPropertiesRule]{Context} Setting property 'source' to 'org.eclipse.jst.jee.server:Hi-portlet' did not find a matching property.
Jan 30, 2013 8:39:52 AM org.apache.catalina.startup.HostConfig deployDescriptor
INFO: Deploying configuration descriptor C:\Liferay\portal-6.1.1-ce-ga2\tomcat-7.0.27\conf\Catalina\localhost\ROOT.xml
Loading jar:file:/C:/Liferay/portal-6.1.1-ce-ga2/tomcat-7.0.27/webapps/ROOT/WEB-INF/lib/portal-impl.jar!/system.properties
Loading jar:file:/C:/Liferay/portal-6.1.1-ce-ga2/tomcat-7.0.27/webapps/ROOT/WEB-INF/lib/portal-impl.jar!/portal.properties
Loading file:/C:/Liferay/portal-6.1.1-ce-ga2/portal-ide.properties
Loading file:/C:/Liferay/portal-6.1.1-ce-ga2/tomcat-7.0.27/webapps/ROOT/WEB-INF/classes/portal-developer.properties
Loading file:/C:/Liferay/portal-6.1.1-ce-ga2/portal-ext.properties
Jan 30, 2013 8:39:59 AM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring root WebApplicationContext
08:40:16,321 INFO [pool-2-thread-1][DialectDetector:71] Determine dialect for HSQL Database Engine 2
08:40:16,330 WARN [pool-2-thread-1][DialectDetector:86] Liferay is configured to use Hypersonic as its database. Do NOT use Hypersonic in production. Hypersonic is an embedded database useful for development and demo'ing purposes. The database settings can be changed in portal-ext.properties.
08:40:16,484 INFO [pool-2-thread-1][DialectDetector:136] Found dialect org.hibernate.dialect.HSQLDialect
Starting Liferay Portal Community Edition 6.1.1 CE GA2 (Paton / Build 6101 / July 31, 2012)
08:41:36,974 INFO [pool-2-thread-1][BaseDB:452] Database supports case sensitive queries
08:41:37,828 INFO [pool-2-thread-1][ServerDetector:154] Server supports hot deploy
08:41:37,850 INFO [pool-2-thread-1][PluginPackageUtil:1030] Reading plugin package for the root context
08:42:19,657 INFO [pool-2-thread-1][AutoDeployDir:106] Auto deploy scanner started for C:\Liferay\portal-6.1.1-ce-ga2\deploy
08:42:24,410 INFO [pool-2-thread-1][HotDeployImpl:178] Deploying Hi-portlet from queue
08:42:24,415 INFO [pool-2-thread-1][PluginPackageUtil:1033] Reading plugin package for Hi-portlet
Jan 30, 2013 8:42:24 AM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring root WebApplicationContext
Jan 30, 2013 8:42:30 AM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring FrameworkServlet 'Remoting Servlet'
Jan 30, 2013 8:42:34 AM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory C:\Liferay\portal-6.1.1-ce-ga2\tomcat-7.0.27\webapps\resources-importer-web
08:42:35,522 INFO [pool-2-thread-1][HotDeployImpl:178] Deploying resources-importer-web from queue
08:42:35,523 INFO [pool-2-thread-1][PluginPackageUtil:1033] Reading plugin package for resources-importer-web
Jan 30, 2013 8:42:36 AM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring root WebApplicationContext
Jan 30, 2013 8:42:36 AM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory C:\Liferay\portal-6.1.1-ce-ga2\tomcat-7.0.27\webapps\welcome-theme
08:42:36,609 INFO [pool-2-thread-1][HotDeployEvent:109] Plugin welcome-theme requires resources-importer-web
08:42:37,305 INFO [pool-2-thread-1][HotDeployImpl:178] Deploying welcome-theme from queue
08:42:37,306 INFO [pool-2-thread-1][PluginPackageUtil:1033] Reading plugin package for welcome-theme
Jan 30, 2013 8:42:37 AM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring root WebApplicationContext
08:42:37,787 INFO [pool-2-thread-1][ThemeHotDeployListener:87] Registering themes for welcome-theme
08:42:39,764 INFO [pool-2-thread-1][ThemeHotDeployListener:100] 1 theme for welcome-theme is available for use
Jan 30, 2013 8:42:40 AM org.apache.coyote.AbstractProtocol start
INFO: Starting ProtocolHandler ["http-apr-8080"]
08:42:40,167 INFO [liferay/hot_deploy-1][HotDeployMessageListener:142] Group or layout set prototype already exists for company liferay.com
Jan 30, 2013 8:42:40 AM org.apache.coyote.AbstractProtocol start
INFO: Starting ProtocolHandler ["ajp-apr-8009"]
Jan 30, 2013 8:42:40 AM org.apache.catalina.startup.Catalina start
INFO: Server startup in 169048 ms
Any suggestions?
The comments already gave some hints. I'd say the most important thing is to check whether virtual memory (paging) is being used - as soon as the OS has to page memory to disk, you have lost: there's a potentially huge performance hit.
If you do upgrade your memory (e.g. because you are hitting virtual memory), you might also want to consider upgrading to a 64-bit OS - 32-bit can only address 4 GB, and you might hit limits with app server memory, as each process can only get a limited amount of it.
You could also test whether Liferay starts up faster when you are not running so many other applications - if it does, that is another hint that you're running into a memory issue.
The SSD option will further accelerate your system, but at a much higher price than RAM. Also, putting virtual memory on an SSD is not really recommended - it will wear out the drive more quickly. Rather than putting virtual memory on an SSD, don't use virtual memory at all - that will be both quicker and cheaper.
This problem is solved by upgrading to Liferay 7.
While Liferay 7 does not start faster, developers really never need to restart it, as everything can be overridden by deploying new OSGi components. That is actually the biggest difference between Liferay 6 and Liferay 7.
I have been developing for Liferay 7 for 3 months, including very deep customization (for instance intercepting all file reads for audit), and have never needed to restart the Liferay server.
Server speed depends heavily on a well-configured JVM (memory, garbage collector type, etc.) and Tomcat connector thread pool, depending on the available server resources. Liferay provides a recommended configuration:
`-server -XX:NewSize=1024m -XX:MaxNewSize=1024m -Xms4096m
-Xmx4096m -XX:MetaspaceSize=300m -XX:MaxMetaspaceSize=300m
-XX:SurvivorRatio=12 -XX:TargetSurvivorRatio=90
-XX:MaxTenuringThreshold=15 -XX:+UseLargePages
-XX:LargePageSizeInBytes=256m -XX:+UseParNewGC
-XX:ParallelGCThreads=16 -XX:+UseConcMarkSweepGC
-XX:+CMSParallelRemarkEnabled -XX:+CMSCompactWhenClearAllSoftRefs
-XX:CMSInitiatingOccupancyFraction=85 -XX:+CMSScavengeBeforeRemark
-XX:+UseLargePages -XX:LargePageSizeInBytes=256m
-XX:+UseCompressedOops -XX:+DisableExplicitGC -XX:-UseBiasedLocking
-XX:+BindGCTaskThreadsToCPUs -XX:+UseFastAccessorMethods
-XX:InitialCodeCacheSize=32m -XX:ReservedCodeCacheSize=96m`
The above JVM settings should form a starting point for your performance tuning. Each system's final parameters will vary due to a variety of factors, including the number of current users and transaction speed.
In Tomcat you define this configuration via the CATALINA_OPTS environment variable in the /[tomcat_server]/bin/setenv.[sh or bat] file.

TeamCity Server on EC2 Linux instance dies before it's done starting up

I'm installing TeamCity on EC2, starting with the server and then moving on to the agents. I'm starting with the Amazon Linux AMI, running on a micro instance. Then I did:
sudo yum update
wget http://download.jetbrains.com/teamcity/TeamCity-7.1.1.tar.gz
tar -xvzf TeamCity-7.1.1.tar.gz
cd TeamCity
bin/teamcity-server.sh start
When I start it using bin/teamcity-server.sh start, things happen: I can connect with a web browser, which shows the 'TeamCity is starting' page, and teamcity-server.log shows a bunch of activity, unzipping plugins and so on.
But then suddenly the server process just disappears. The port is no longer being listened on, ps shows no java process running, and the browser can't connect.
There are no error messages in the Catalina or TeamCity logs. After much trial and error, though, I ran bin/teamcity-server.sh run (instead of start) to get console output, and got the following:
Using CATALINA_BASE: /home/ec2-user/TeamCity
Using CATALINA_HOME: /home/ec2-user/TeamCity
Using CATALINA_TMPDIR: /home/ec2-user/TeamCity/temp
Using JRE_HOME: /usr/lib/jvm/jre
Using CLASSPATH: /home/ec2-user/TeamCity/bin/bootstrap.jar:/home/ec2-user/TeamCity/bin/tomcat-juli.jar
Nov 1, 2012 7:22:25 PM org.apache.catalina.core.AprLifecycleListener init
INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib/amd64/server:/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib/amd64:/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
Nov 1, 2012 7:22:26 PM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["http-bio-8111"]
Nov 1, 2012 7:22:26 PM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 2742 ms
Nov 1, 2012 7:22:26 PM org.apache.catalina.core.StandardService startInternal
INFO: Starting service Catalina
Nov 1, 2012 7:22:26 PM org.apache.catalina.core.StandardEngine startInternal
INFO: Starting Servlet Engine: Apache Tomcat/7.0.23
Nov 1, 2012 7:22:26 PM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory /home/ec2-user/TeamCity/webapps/ROOT
Log4J configuration file /home/ec2-user/TeamCity/bin/../conf/teamcity-server-log4j.xml will be monitored with interval 10 seconds.
Nov 1, 2012 7:22:30 PM org.apache.coyote.AbstractProtocol start
INFO: Starting ProtocolHandler ["http-bio-8111"]
Nov 1, 2012 7:22:30 PM org.apache.catalina.startup.Catalina start
INFO: Server startup in 3786 ms
=======================================================================
TeamCity 7.1.1 (build 24074) initialized, OS: Linux, JRE: 1.6.0_24-b24
TeamCity is running in professional mode
bin/teamcity-server.sh: line 18: 4231 Killed ./catalina.sh $1
I promise that I did not kill the process! I can find my way around in Linux well enough, but I'm not at all sure where to go next to find out why or what killed the process. Can anyone help?
After some further scanning of the .sh files to see how TeamCity starts itself up, I noticed that it grabs a fair amount of memory for its java process (either 512m or 750m, depending on which line you look at).
The EC2 micro instance only has 613 MB of RAM in total. Once I realized this, I tried the whole process again with a larger instance, and things worked fine.
I'm still curious whether there's a better way I could have known what was causing Catalina to die, so if anyone wants to answer with that information...
