Avoid New Relic attaching to Leiningen on Heroku

I've enabled New Relic monitoring for my Clojure app running on Heroku. To avoid the overhead of nesting my app inside Leiningen's JVM process, I start up with lein trampoline run.
This apparently adds its own overhead: New Relic first attaches to the initial Leiningen process, which then shuts down and launches my app, at which point New Relic has to attach all over again. The extra delay sometimes pushes startup past the 30-second boot timeout window and results in downtime.
Log output showing both New Relic agents starting up:
heroku/web.1: Starting process with command `lein trampoline run`
app/web.1: [date] NewRelic 1 INFO: Agent is using Log4j
app/web.1: [date] NewRelic 1 INFO: Loading configuration file "/app/newrelic/./newrelic.yml"
app/web.1: [date] NewRelic 1 INFO: Agent Host: 866e2426-7a0f-4293-ae89-b55c0332253e IP: 10.159.0.212
app/web.1: [date] NewRelic 1 INFO: Setting audit_mode to false
app/web.1: [date] NewRelic 1 INFO: Setting protocol to "http"
app/web.1: [date] NewRelic 1 INFO: Configuration file is /app/newrelic/./newrelic.yml
app/web.1: [date] NewRelic 1 INFO: New Relic Agent v2.9.0 has started
app/web.1: [date] NewRelic 1 INFO: Java version: 1.6.0_20
app/web.1: [date] NewRelic 1 INFO: Agent class loader: sun.misc.Launcher$AppClassLoader#7ea2dfe
app/web.1: [date] NewRelic 5 INFO: JVM is shutting down
app/web.1: [date] NewRelic 5 INFO: New Relic Agent has shutdown
app/web.1: [date] NewRelic 1 INFO: Agent is using Log4j
app/web.1: [date] NewRelic 1 INFO: Loading configuration file "/app/newrelic/./newrelic.yml"
app/web.1: [date] NewRelic 1 INFO: Agent Host: 866e2426-7a0f-4293-ae89-b55c0332253e IP: 10.159.0.212
app/web.1: [date] NewRelic 1 INFO: Configured to connect to New Relic at collector.newrelic.com:80
app/web.1: [date] NewRelic 1 INFO: Setting audit_mode to false
app/web.1: [date] NewRelic 1 INFO: Setting protocol to "http"
app/web.1: [date] NewRelic 1 INFO: Configuration file is /app/newrelic/./newrelic.yml
app/web.1: [date] NewRelic 1 INFO: New Relic Agent v2.9.0 has started
app/web.1: [date] NewRelic 1 INFO: Java version: 1.6.0_20
app/web.1: [date] NewRelic 1 INFO: Agent class loader: sun.misc.Launcher$AppClassLoader#7ea2dfe
Is there a way to avoid having New Relic attach to the Leiningen process?

Instead of having -javaagent:newrelic/newrelic.jar set in your Heroku config's JVM_OPTS, could you not set it in your production profile's :jvm-opts in your project.clj?
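For example, a minimal sketch of how that could look (the project name is hypothetical; the agent path is the one from your JVM_OPTS):

(defproject my-app "0.1.0-SNAPSHOT"
  ;; Load the New Relic agent via the production profile's :jvm-opts, so it
  ;; attaches to the app JVM that trampoline launches rather than to
  ;; Leiningen's own JVM.
  :profiles {:production {:jvm-opts ["-javaagent:newrelic/newrelic.jar"]}})

You would then start with something like `lein trampoline with-profile production run` and drop the -javaagent flag from JVM_OPTS.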

Related

How to run a Pentaho job from the command line

I have a job which takes around 1-2 minutes to finish. Running it through the command line, however, just goes on forever and never finishes, and I don't seem to get any errors from it either. So the job does appear to start, and I know it works correctly since it runs fine within Spoon. Any ideas?
C:\Users\a\Downloads\pdi-ce-8.3.0.0-371\data-integration> Kitchen.bat
/file:C:\Users\a\Downloads\pdi-ce-8.3.0.0-371\data-integration\job.kjb
/level:Minimal
DEBUG: Using PENTAHO_JAVA_HOME
DEBUG: _PENTAHO_JAVA_HOME=C:\Program Files\Java\jre1.8.0_231
DEBUG: _PENTAHO_JAVA=C:\Program Files\Java\jre1.8.0_231\bin\java.exe
C:\Users\a\Downloads\pdi-ce-8.3.0.0-371\data-integration>"C:\Program Files\Java\jre1.8.0_231\bin\java.exe" "-Xms1024m" "-Xmx2048m" "-XX:MaxPermSize=256m" "-Dhttps.protocols=TLSv1,TLSv1.1,TLSv1.2" "-Djava.library.path=libswt\win64" "-DKETTLE_HOME=" "-DKETTLE_REPOSITORY=" "-DKETTLE_USER=" "-DKETTLE_PASSWORD=" "-DKETTLE_PLUGIN_PACKAGES=" "-DKETTLE_LOG_SIZE_LIMIT=" "-DKETTLE_JNDI_ROOT=" -jar launcher\launcher.jar -lib ..\libswt\win64 -main org.pentaho.di.kitchen.Kitchen -initialDir "C:\Users\a\Downloads\pdi-ce-8.3.0.0-371\data-integration"\ /file:C:\Users\a\Downloads\pdi-ce-8.3.0.0-371\data-integration\job.kjb /level:Minimal
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
13:58:07,867 INFO [KarafBoot] Checking to see if org.pentaho.clean.karaf.cache is enabled
13:58:12,006 INFO [KarafInstance]
* Karaf Instance Number: 2 at C:\Users\a\Downloads\pdi-ce-8.3.0.0-371\data-integration.\system\karaf\caches\kitchen\data-1 *
* FastBin Provider Port:52902 *
* Karaf Port:8803 *
* OSGI Service Port:9052 *
*******************************************************************************
Dec 19, 2019 1:58:12 PM org.apache.karaf.main.Main$KarafLockCallback lockAquired
INFO: Lock acquired. Setting startlevel to 100
2019/12/19 13:58:12 - Kitchen - Logging is at level : Minimal
2019/12/19 13:58:12 - Kitchen - Start of run.
2019-12-19 13:58:15.902:INFO:oejs.Server:jetty-8.1.15.v20140411
2019-12-19 13:58:15.955:INFO:oejs.AbstractConnector:Started NIOSocketConnectorWrapper#0.0.0.0:9052
Dec 19, 2019 1:58:16 PM org.apache.cxf.bus.osgi.CXFExtensionBundleListener addExtensions
INFO: Adding the extensions from bundle org.apache.cxf.cxf-rt-management (182) [org.apache.cxf.management.InstrumentationManager]
Dec 19, 2019 1:58:16 PM org.apache.cxf.bus.osgi.CXFExtensionBundleListener addExtensions
INFO: Adding the extensions from bundle org.apache.cxf.cxf-rt-transports-http (183) [org.apache.cxf.transport.http.HTTPTransportFactory, org.apache.cxf.transport.http.HTTPWSDLExtensionLoader, org.apache.cxf.transport.http.policy.HTTPClientAssertionBuilder, org.apache.cxf.transport.http.policy.HTTPServerAssertionBuilder, org.apache.cxf.transport.http.policy.NoOpPolicyInterceptorProvider]
Dec 19, 2019 1:58:16 PM org.pentaho.caching.impl.PentahoCacheManagerFactory$RegistrationHandler$1 onSuccess
INFO: New Caching Service registered
2019/12/19 13:58:17 - job - Start of job execution
Dec 19, 2019 1:58:18 PM org.apache.cxf.endpoint.ServerImpl initDestination
INFO: Setting the server's publish address to be /lineage
Dec 19, 2019 1:58:18 PM org.apache.cxf.endpoint.ServerImpl initDestination
INFO: Setting the server's publish address to be /i18n
Dec 19, 2019 1:58:19 PM org.apache.cxf.endpoint.ServerImpl initDestination
INFO: Setting the server's publish address to be /marketplace
Update
I tried deleting the kitchen cache from the Karaf cache directory and running again, but the job still never finished. Now I'm running the job at debug level and getting these results. Still, the job doesn't get any further than this; the job works in Spoon, so it cannot be related to the job itself.
C:\Users\a\Downloads\pdi-ce-8.3.0.0-371\data-integration>kitchen.bat
/file:C:\Users\a\Downloads\pdi-ce-8.3.0.0-371\data-integration\Job.kjb
/level:Debug
DEBUG: Using PENTAHO_JAVA_HOME
DEBUG: _PENTAHO_JAVA_HOME=C:\Program Files\Java\jre1.8.0_231
DEBUG: _PENTAHO_JAVA=C:\Program Files\Java\jre1.8.0_231\bin\java.exe
C:\Users\a\Downloads\pdi-ce-8.3.0.0-371\data-integration>"C:\Program Files\Java\jre1.8.0_231\bin\java.exe" "-Xms1024m" "-Xmx2048m" "-XX:MaxPermSize=256m" "-Dhttps.protocols=TLSv1,TLSv1.1,TLSv1.2" "-Djava.library.path=libswt\win64" "-DKETTLE_HOME=" "-DKETTLE_REPOSITORY=" "-DKETTLE_USER=" "-DKETTLE_PASSWORD=" "-DKETTLE_PLUGIN_PACKAGES=" "-DKETTLE_LOG_SIZE_LIMIT=" "-DKETTLE_JNDI_ROOT=" -jar launcher\launcher.jar -lib ..\libswt\win64 -main org.pentaho.di.kitchen.Kitchen -initialDir "C:\Users\a\Downloads\pdi-ce-8.3.0.0-371\data-integration"\ /file:C:\Users\a\Downloads\pdi-ce-8.3.0.0-371\data-integration\Job.kjb /level:Debug
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
08:07:33,026 INFO [KarafBoot] Checking to see if org.pentaho.clean.karaf.cache is enabled
08:07:37,211 INFO [KarafInstance]
* Karaf Instance Number: 1 at C:\Users\a\Downloads\pdi-ce-8.3.0.0-371\data-integration.\system\karaf\caches\kitchen\data-1 *
* FastBin Provider Port:52901 *
* Karaf Port:8802 *
* OSGI Service Port:9051 *
Dec 23, 2019 8:07:38 AM org.apache.karaf.main.Main$KarafLockCallback lockAquired
INFO: Lock acquired. Setting startlevel to 100
2019/12/23 08:07:38 - Kitchen - Logging is at level : Debug
2019/12/23 08:07:38 - Kitchen - Start of run.
2019/12/23 08:07:38 - Kitchen - Allocate new job.
2019/12/23 08:07:38 - Kitchen - Parsing command line options.
2019-12-23 08:07:43.475:INFO:oejs.Server:jetty-8.1.15.v20140411
2019-12-23 08:07:43.538:INFO:oejs.AbstractConnector:Started NIOSocketConnectorWrapper#0.0.0.0:9051
Dec 23, 2019 8:07:43 AM org.apache.cxf.bus.osgi.CXFExtensionBundleListener addExtensions
INFO: Adding the extensions from bundle org.apache.cxf.cxf-rt-management (182) [org.apache.cxf.management.InstrumentationManager]
Dec 23, 2019 8:07:43 AM org.apache.cxf.bus.osgi.CXFExtensionBundleListener addExtensions
INFO: Adding the extensions from bundle org.apache.cxf.cxf-rt-transports-http (183) [org.apache.cxf.transport.http.HTTPTransportFactory, org.apache.cxf.transport.http.HTTPWSDLExtensionLoader, org.apache.cxf.transport.http.policy.HTTPClientAssertionBuilder, org.apache.cxf.transport.http.policy.HTTPServerAssertionBuilder, org.apache.cxf.transport.http.policy.NoOpPolicyInterceptorProvider]
Dec 23, 2019 8:07:44 AM org.pentaho.caching.impl.PentahoCacheManagerFactory$RegistrationHandler$1 onSuccess
INFO: New Caching Service registered
2019/12/23 08:07:45 - Job - Start of job execution
2019/12/23 08:07:45 - Job - exec(0, 0, START.0)
2019/12/23 08:07:45 - START - Starting job entry
2019/12/23 08:07:45 - Job - Job
Dec 23, 2019 8:07:46 AM org.apache.cxf.endpoint.ServerImpl initDestination
INFO: Setting the server's publish address to be /lineage
Dec 23, 2019 8:07:47 AM org.apache.cxf.endpoint.ServerImpl initDestination
INFO: Setting the server's publish address to be /i18n
Dec 23, 2019 8:07:48 AM org.apache.cxf.endpoint.ServerImpl initDestination
INFO: Setting the server's publish address to be /marketplace
2019/12/23 08:07:55 - Job - Triggering heartbeat signal for Job at every 10 seconds
Something deeper must have been corrupted, as I deleted all files, downloaded the latest version, and it worked.
To run from the command line you have to run the command below:
<path to kitchen.sh>/kitchen.sh -file=".ktr filename" --level=Debug >> "log.txt"

Cannot deploy Spring Boot application

I'm currently evaluating CloudControl as platform provider for my Java based applications.
I created a very simple Spring Boot app (https://github.com/mhmpl/gradle-example-app) with Gradle, but I'm unable to deploy it.
There are no errors in the Error log which could give me more information. However, this is the output of the Deploy log:
8/3/14 12:53 PM lxc-1272 INFO Container did not come up within 120 seconds.
8/3/14 12:53 PM lxc-1250 INFO Waiting for the container to be reachable...
8/3/14 12:53 PM lxc-1272 INFO Waiting for the container to be reachable...
8/3/14 12:52 PM lxc-1250 INFO Waiting for the container to be reachable...
8/3/14 12:52 PM lxc-1272 INFO Waiting for the container to be reachable...
8/3/14 12:52 PM lxc-1250 INFO Waiting for the container to be reachable...
8/3/14 12:52 PM lxc-1272 INFO Waiting for the container to be reachable...
8/3/14 12:51 PM lxc-1250 INFO Deploying ...
In the end the app is not deployed, and I cannot see what error I might have made. I already tried setting the memory to 1024 MB and adding a second container, but that did not change anything at all.
You need to bind the webserver to the correct port, which is defined in the PORT environment variable.
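For a Spring Boot app, one way to do that is through the standard server.port property; a minimal sketch (the 8080 fallback for local runs is my assumption):

# application.properties: bind the embedded server to the platform-assigned
# port from the PORT environment variable, falling back to 8080 locally
server.port=${PORT:8080}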

Play 2.2 application crashes on Heroku

After moving from Play 2.0.4 to Play 2.2.0 I get this error when deploying on Heroku:
Oct 15 13:23:12 heroku/web.1: Starting process with command `target/universal/stage/bin/demagog -Dhttp.port=${PORT} ${JAVA_OPTS} -Dconfig.resource=${DEMAGOG_ENVIRONMENT}.conf`
Oct 15 13:23:13 app/web.1: Picked up JAVA_TOOL_OPTIONS: -Djava.net.preferIPv4Stack=true -Djava.rmi.server.useCodebaseOnly=true
Oct 15 13:23:13 app/web.1: Bad application path: -Xmx384m
Oct 15 13:23:15 heroku/web.1: State changed from starting to crashed
Oct 15 13:23:15 heroku/web.1: Process exited with status 0
Oct 15 13:24:37 heroku/web.1: Starting process with command `target/universal/stage/bin/demagog -Dhttp.port=${PORT} -Dconfig.resource=${DEMAGOG_ENVIRONMENT}.conf`
Oct 15 13:24:37 app/web.1: Picked up JAVA_TOOL_OPTIONS: -Djava.net.preferIPv4Stack=true -Djava.rmi.server.useCodebaseOnly=true
Oct 15 13:24:37 app/web.1: Play server process ID is 2
Oct 15 13:24:37 app/web.1: Oops, cannot start the server.
Oct 15 13:24:37 app/web.1: java.lang.IllegalStateException: System property demagog.defaultUser must be set.
I don't understand this message:
Bad application path: -Xmx384m
The second problem I can see is that my Play application can't find the system property 'demagog.defaultUser', but this property is set in the JAVA_OPTS environment variable, so it should work. Maybe it's just a consequence of the above problem? Any hints?
UPDATED
I have removed ${JAVA_OPTS} from the Procfile as @jan suggested. The first error
Bad application path: -Xmx384m
is not here anymore, but the system property 'demagog.defaultUser' is still not set.
Oct 16 10:50:35 heroku/web.1: Starting process with command `target/universal/stage/bin/demagog -Dhttp.port=${PORT} -Dconfig.resource=${DEMAGOG_ENVIRONMENT}.conf`
Oct 16 10:50:35 app/web.1: Picked up JAVA_TOOL_OPTIONS: -Djava.net.preferIPv4Stack=true -Djava.rmi.server.useCodebaseOnly=true
Oct 16 10:50:35 app/web.1: Play server process ID is 2
Oct 16 10:50:35 app/web.1: Oops, cannot start the server.
Oct 16 10:50:35 app/web.1: java.lang.IllegalStateException: System property demagog.defaultUser must be set.
...
Oct 16 10:50:35 app/web.1: at play.api.Play$.start(Play.scala:87)
Oct 16 10:50:35 app/web.1: at play.core.StaticApplication.<init>(ApplicationProvider.scala:52)
Oct 16 10:50:35 app/web.1: at play.core.server.NettyServer$.createServer(NettyServer.scala:243)
Oct 16 10:50:35 app/web.1: at play.core.server.NettyServer$$anonfun$main$3.apply(NettyServer.scala:279)
Oct 16 10:50:35 app/web.1: at play.core.server.NettyServer$$anonfun$main$3.apply(NettyServer.scala:274)
Oct 16 10:50:35 app/web.1: at scala.Option.map(Option.scala:145)
Oct 16 10:50:35 app/web.1: at play.core.server.NettyServer$.main(NettyServer.scala:274)
Oct 16 10:50:35 app/web.1: at play.core.server.NettyServer.main(NettyServer.scala)
Oct 16 10:50:35 heroku/web.1: Process exited with status 255
When I run the command
heroku config
I can see the system property is included in the JAVA_OPTS environment variable:
JAVA_OPTS: -Xmx384m -Xss512k -XX:+UseCompressedOops -Ddemagog.defaultUser=xxx ...
You probably haven't removed ${JAVA_OPTS} from your Procfile. With Play 2.2 the JAVA_OPTS are included in the generated start script so you don't have to include them in the Procfile anymore.
What happens then is that the start script tries to interpret your JAVA_OPTS as app parameters.
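Based on the start command visible in the logs above, the Procfile change would look something like this (a sketch; the web entry is reconstructed from the logged command):

# Before: the generated start script receives ${JAVA_OPTS} as application arguments
web: target/universal/stage/bin/demagog -Dhttp.port=${PORT} ${JAVA_OPTS} -Dconfig.resource=${DEMAGOG_ENVIRONMENT}.conf
# After: drop ${JAVA_OPTS}; the start script picks the options up itself
web: target/universal/stage/bin/demagog -Dhttp.port=${PORT} -Dconfig.resource=${DEMAGOG_ENVIRONMENT}.conf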
OK, I finally found it. The problem with setting my system property via the JAVA_OPTS environment variable is that environment variables are case sensitive in Unix (while case insensitive in Windows), combined with the fact that the start script generated by sbt-native-packager reads the java_opts environment variable. So you have to set the java_opts (lower case) environment variable within Heroku.
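For example, something like this (a sketch; heroku config:set is the standard way to set a config var, and the value is abbreviated from the JAVA_OPTS shown above):

heroku config:set java_opts="-Xmx384m -Xss512k -Ddemagog.defaultUser=xxx"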
Answer to the update: I do not know exactly what the problem is now. I would suggest setting the default user via application.conf; that'll be more Play-like anyway. In your application.conf it could, for example, look like this:
demagog.defaultUser="SOME_STD_DEFAULT_USER"
demagog.defaultUser=${?DEMAGOG_DEFAULTUSER}
Then you can set the system-specific value via something like:
heroku config:add DEMAGOG_DEFAULTUSER="yourdefaultuser"

Liferay startup takes way too long

I'm new to Liferay development and I'm facing trouble with the startup of my Liferay Tomcat server. It takes almost 3 minutes (169048 ms), which is unacceptable for development. I'd like to get it down to about one minute.
Here are the specs of my machine:
Intel Core Duo T2300 @ 1.66GHz
4GB RAM (3.24GB in use)
Windows 7 Enterprise 32 bit with Service Pack 1
I’m using:
Liferay 6.1.1-ce-ga2 bundled with Tomcat 7
Eclipse IDE Juno Release
In order to speed things up, I’ve:
removed all unnecessary portlets from the tomcat\webapps folder
put the Tomcat native library 1.1.24 in the tomcat\bin folder
tweaked my portal-ext.properties as shown below
#disable some filters
com.liferay.portal.servlet.filters.sso.cas.CASFilter = false
com.liferay.portal.servlet.filters.sso.ntlm.NtlmFilter = false
com.liferay.portal.servlet.filters.sso.ntlm.NtlmPostFilter = false
com.liferay.portal.servlet.filters.sso.opensso.OpenSSOFilter = false
com.liferay.portal.sharepoint.SharepointFilter = false
com.liferay.portal.servlet.filters.gzip.GZipFilter = false
#disable indexing
index.on.startup=false
Here’s my startup log:
Jan 30, 2013 8:39:49 AM org.apache.catalina.core.AprLifecycleListener init
INFO: Loaded APR based Apache Tomcat Native library 1.1.24.
Jan 30, 2013 8:39:49 AM org.apache.catalina.core.AprLifecycleListener init
INFO: APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
Jan 30, 2013 8:39:51 AM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["http-apr-8080"]
Jan 30, 2013 8:39:51 AM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["ajp-apr-8009"]
Jan 30, 2013 8:39:51 AM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 2620 ms
Jan 30, 2013 8:39:51 AM org.apache.catalina.core.StandardService startInternal
INFO: Starting service Catalina
Jan 30, 2013 8:39:51 AM org.apache.catalina.core.StandardEngine startInternal
INFO: Starting Servlet Engine: Apache Tomcat/7.0.27
Jan 30, 2013 8:39:51 AM org.apache.catalina.startup.HostConfig deployDescriptor
INFO: Deploying configuration descriptor C:\Liferay\portal-6.1.1-ce-ga2\tomcat-7.0.27\conf\Catalina\localhost\Hi-portlet.xml
Jan 30, 2013 8:39:51 AM org.apache.catalina.startup.HostConfig deployDescriptor
WARNING: A docBase C:\Liferay\portal-6.1.1-ce-ga2\tomcat-7.0.27\webapps\Hi-portlet inside the host appBase has been specified, and will be ignored
Jan 30, 2013 8:39:51 AM org.apache.catalina.startup.SetContextPropertiesRule begin
WARNING: [SetContextPropertiesRule]{Context} Setting property 'source' to 'org.eclipse.jst.jee.server:Hi-portlet' did not find a matching property.
Jan 30, 2013 8:39:52 AM org.apache.catalina.startup.HostConfig deployDescriptor
INFO: Deploying configuration descriptor C:\Liferay\portal-6.1.1-ce-ga2\tomcat-7.0.27\conf\Catalina\localhost\ROOT.xml
Loading jar:file:/C:/Liferay/portal-6.1.1-ce-ga2/tomcat-7.0.27/webapps/ROOT/WEB-INF/lib/portal-impl.jar!/system.properties
Loading jar:file:/C:/Liferay/portal-6.1.1-ce-ga2/tomcat-7.0.27/webapps/ROOT/WEB-INF/lib/portal-impl.jar!/portal.properties
Loading file:/C:/Liferay/portal-6.1.1-ce-ga2/portal-ide.properties
Loading file:/C:/Liferay/portal-6.1.1-ce-ga2/tomcat-7.0.27/webapps/ROOT/WEB-INF/classes/portal-developer.properties
Loading file:/C:/Liferay/portal-6.1.1-ce-ga2/portal-ext.properties
Jan 30, 2013 8:39:59 AM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring root WebApplicationContext
08:40:16,321 INFO [pool-2-thread-1][DialectDetector:71] Determine dialect for HSQL Database Engine 2
08:40:16,330 WARN [pool-2-thread-1][DialectDetector:86] Liferay is configured to use Hypersonic as its database. Do NOT use Hypersonic in production. Hypersonic is an embedded database useful for development and demo'ing purposes. The database settings can be changed in portal-ext.properties.
08:40:16,484 INFO [pool-2-thread-1][DialectDetector:136] Found dialect org.hibernate.dialect.HSQLDialect
Starting Liferay Portal Community Edition 6.1.1 CE GA2 (Paton / Build 6101 / July 31, 2012)
08:41:36,974 INFO [pool-2-thread-1][BaseDB:452] Database supports case sensitive queries
08:41:37,828 INFO [pool-2-thread-1][ServerDetector:154] Server supports hot deploy
08:41:37,850 INFO [pool-2-thread-1][PluginPackageUtil:1030] Reading plugin package for the root context
08:42:19,657 INFO [pool-2-thread-1][AutoDeployDir:106] Auto deploy scanner started for C:\Liferay\portal-6.1.1-ce-ga2\deploy
08:42:24,410 INFO [pool-2-thread-1][HotDeployImpl:178] Deploying Hi-portlet from queue
08:42:24,415 INFO [pool-2-thread-1][PluginPackageUtil:1033] Reading plugin package for Hi-portlet
Jan 30, 2013 8:42:24 AM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring root WebApplicationContext
Jan 30, 2013 8:42:30 AM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring FrameworkServlet 'Remoting Servlet'
Jan 30, 2013 8:42:34 AM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory C:\Liferay\portal-6.1.1-ce-ga2\tomcat-7.0.27\webapps\resources-importer-web
08:42:35,522 INFO [pool-2-thread-1][HotDeployImpl:178] Deploying resources-importer-web from queue
08:42:35,523 INFO [pool-2-thread-1][PluginPackageUtil:1033] Reading plugin package for resources-importer-web
Jan 30, 2013 8:42:36 AM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring root WebApplicationContext
Jan 30, 2013 8:42:36 AM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory C:\Liferay\portal-6.1.1-ce-ga2\tomcat-7.0.27\webapps\welcome-theme
08:42:36,609 INFO [pool-2-thread-1][HotDeployEvent:109] Plugin welcome-theme requires resources-importer-web
08:42:37,305 INFO [pool-2-thread-1][HotDeployImpl:178] Deploying welcome-theme from queue
08:42:37,306 INFO [pool-2-thread-1][PluginPackageUtil:1033] Reading plugin package for welcome-theme
Jan 30, 2013 8:42:37 AM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring root WebApplicationContext
08:42:37,787 INFO [pool-2-thread-1][ThemeHotDeployListener:87] Registering themes for welcome-theme
08:42:39,764 INFO [pool-2-thread-1][ThemeHotDeployListener:100] 1 theme for welcome-theme is available for use
Jan 30, 2013 8:42:40 AM org.apache.coyote.AbstractProtocol start
INFO: Starting ProtocolHandler ["http-apr-8080"]
08:42:40,167 INFO [liferay/hot_deploy-1][HotDeployMessageListener:142] Group or layout set prototype already exists for company liferay.com
Jan 30, 2013 8:42:40 AM org.apache.coyote.AbstractProtocol start
INFO: Starting ProtocolHandler ["ajp-apr-8009"]
Jan 30, 2013 8:42:40 AM org.apache.catalina.startup.Catalina start
INFO: Server startup in 169048 ms
Any suggestions?
The comments already gave some hints. I'd say the most important thing is to check whether virtual memory (paging) is being used - as soon as the OS has to page memory to disk, you've lost: there's a potentially huge performance hit.
When you upgrade your memory (e.g. if you're hitting virtual memory), you might want to consider upgrading to a 64-bit OS - 32-bit can only address 4GB, and you might hit limits with appserver memory, as each process can only get a limited amount.
You could also test whether Liferay starts up faster before you run so many other applications - if it does, that's another hint that you're running into a memory issue.
The SSD option will further accelerate your system, but at a much higher price than RAM. Also, virtual memory on an SSD is not really recommended - it will wear out the drive quicker. And instead of putting virtual memory on an SSD, rather don't use virtual memory at all - that will be quicker AND cheaper.
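For a quick first check on Windows (a sketch; wmic ships with Windows 7, and the property names come from the Win32_OperatingSystem class):

rem Compare free vs. total physical memory (values in KB); if free memory
rem is near zero while Liferay starts, the OS is likely paging to disk
wmic OS get FreePhysicalMemory,TotalVisibleMemorySize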
This problem is solved by upgrading to Liferay 7.
While Liferay 7 does not start faster, developers really never need to restart it, as everything can be overridden by deploying new OSGi components. That is actually the biggest difference between Liferay 6 and Liferay 7.
I have been developing for Liferay 7 for 3 months, including very deep customization (for instance intercepting all file reads for audit), and have never needed to restart the Liferay server.
Server speed depends greatly on a well-configured JVM (memory, garbage collector type, etc.) and Tomcat connector thread pool, depending on the available server resources. Liferay provides a recommended configuration:
-server -XX:NewSize=1024m -XX:MaxNewSize=1024m -Xms4096m -Xmx4096m
-XX:MetaspaceSize=300m -XX:MaxMetaspaceSize=300m -XX:SurvivorRatio=12
-XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=15
-XX:+UseLargePages -XX:LargePageSizeInBytes=256m
-XX:+UseParNewGC -XX:ParallelGCThreads=16 -XX:+UseConcMarkSweepGC
-XX:+CMSParallelRemarkEnabled -XX:+CMSCompactWhenClearAllSoftRefs
-XX:CMSInitiatingOccupancyFraction=85 -XX:+CMSScavengeBeforeRemark
-XX:+UseCompressedOops -XX:+DisableExplicitGC -XX:-UseBiasedLocking
-XX:+BindGCTaskThreadsToCPUs -XX:+UseFastAccessorMethods
-XX:InitialCodeCacheSize=32m -XX:ReservedCodeCacheSize=96m
The above JVM settings should formulate a starting point for your performance tuning. Each system's final parameters will vary due to a variety of factors, including the number of current users and transaction speed.
In Tomcat servers you define this configuration in the CATALINA_OPTS environment variable, in the /[tomcat_server]/bin/setenv.[sh or bat] file.
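A minimal sketch of what that could look like on Linux (flags abbreviated; take the full list from the block above):

#!/bin/sh
# bin/setenv.sh is read by catalina.sh on startup; putting the tuning flags
# in CATALINA_OPTS applies them to the server JVM only
CATALINA_OPTS="$CATALINA_OPTS -server -Xms4096m -Xmx4096m -XX:NewSize=1024m -XX:MaxNewSize=1024m"
export CATALINA_OPTS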

TeamCity Server on EC2 Linux instance dies before it's done starting up

I'm installing TeamCity in EC2, starting with the server and then moving on to the agents. I'm starting with the Amazon Linux AMI, running on a micro instance. Then I did:
sudo yum update
wget http://download.jetbrains.com/teamcity/TeamCity-7.1.1.tar.gz
tar -xvzf TeamCity-7.1.1.tar.gz
cd TeamCity
bin/teamcity-server.sh start
When I start it using bin/teamcity-server.sh start, things happen. I can connect using a web browser, which shows the 'TeamCity is starting' page. The teamcity-server.log shows a bunch of activity: unzipping plugins, etc.
But then suddenly the server process just disappears. The port is no longer being listened on, ps shows no java process running, and the browser can't connect.
There are no error messages in the catalina or teamcity logs. After much trial and error, though, I ran bin/teamcity-server.sh run (instead of start) to get console output, and got the following:
Using CATALINA_BASE: /home/ec2-user/TeamCity
Using CATALINA_HOME: /home/ec2-user/TeamCity
Using CATALINA_TMPDIR: /home/ec2-user/TeamCity/temp
Using JRE_HOME: /usr/lib/jvm/jre
Using CLASSPATH: /home/ec2-user/TeamCity/bin/bootstrap.jar:/home/ec2-user/TeamCity/bin/tomcat-juli.jar
Nov 1, 2012 7:22:25 PM org.apache.catalina.core.AprLifecycleListener init
INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib/amd64/server:/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib/amd64:/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
Nov 1, 2012 7:22:26 PM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["http-bio-8111"]
Nov 1, 2012 7:22:26 PM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 2742 ms
Nov 1, 2012 7:22:26 PM org.apache.catalina.core.StandardService startInternal
INFO: Starting service Catalina
Nov 1, 2012 7:22:26 PM org.apache.catalina.core.StandardEngine startInternal
INFO: Starting Servlet Engine: Apache Tomcat/7.0.23
Nov 1, 2012 7:22:26 PM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory /home/ec2-user/TeamCity/webapps/ROOT
Log4J configuration file /home/ec2-user/TeamCity/bin/../conf/teamcity-server-log4j.xml will be monitored with interval 10 seconds.
Nov 1, 2012 7:22:30 PM org.apache.coyote.AbstractProtocol start
INFO: Starting ProtocolHandler ["http-bio-8111"]
Nov 1, 2012 7:22:30 PM org.apache.catalina.startup.Catalina start
INFO: Server startup in 3786 ms
=======================================================================
TeamCity 7.1.1 (build 24074) initialized, OS: Linux, JRE: 1.6.0_24-b24
TeamCity is running in professional mode
bin/teamcity-server.sh: line 18: 4231 Killed ./catalina.sh $1
I promise that I did not kill the process! I can find my way around in Linux well enough, but I'm not at all sure where to go next to find out why or what killed the process. Can anyone help?
After some further scanning of the .sh files to see how TeamCity was starting itself up, I noticed that it was grabbing a fair amount of memory for its java process (either 512m or 750m, depending on which line you use).
The EC2 micro instance only has 613m of RAM total. When I realized this, I tried the whole process again with a larger instance, and things worked fine.
I'm still curious if there's a better way I could've known what was causing catalina to die, so if anyone wants to answer with that information...
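For what it's worth, a likely culprit on a memory-starved instance is the Linux OOM killer, and a standard way to check for it (not specific to TeamCity) is to search the kernel log:

# the OOM killer records every kill in the kernel ring buffer
dmesg | grep -i "killed process"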
