I have a project with a few dozen EJBs and a web project that I'm attempting to deploy from NetBeans 7.0.1 on my laptop directly to Glassfish 3.0.1 on a Solaris 10 server. Ignoring the transfer time of copying the ear file, the deployments seem to take a very long time (3 minutes is the fastest I've seen it). The performance of deployments seems to degrade over time, to the point where eventually I have to restart my domain. I've seen a deployment take anywhere from 12-20 minutes after I've redeployed my application a few times.
I deploy by right-clicking my main project in NetBeans and picking "Deploy". What options do I have for making this more usable? What additional information can I provide to help track down the source of the problem?
UPDATE: I let the most recent deployment run through to completion; it ended with the following error message in my log:
[#|2011-08-20T14:05:54.494-0400|SEVERE|glassfish3.1|javax.enterprise.system.tools.admin.org.glassfish.deployment.admin|_ThreadID=2490;_ThreadName=Thread-1;|Exception while loading the app : EJB Container initialization error
java.lang.OutOfMemoryError: Java heap space
|#]
So this does appear to be memory related. The deployment itself ran for over 10 minutes before dying in this manner.
Because of my application's requirements, I had to increase the heap space from the default 512MB allocation to a min/max of 1GB/2GB. This seems to have improved deployment slightly. My typical deployment time is ~1 minute now. It's not stellar, but it's at least tolerable.
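For reference, heap settings for a GlassFish domain live as <jvm-options> entries in the domain's domain.xml (they can also be managed with asadmin). A quick way to confirm the new limits actually reached the running JVM is a plain-Java sanity check like the sketch below; nothing in it is GlassFish-specific.

    // Sanity check that the new -Xms/-Xmx values reached the server's JVM.
    // Run inside the app (or any class deployed to the domain) and check the log.
    public class HeapCheck {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            System.out.printf("max heap = %d MB, committed = %d MB%n",
                    rt.maxMemory() / (1024 * 1024),
                    rt.totalMemory() / (1024 * 1024));
        }
    }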
This is the result of a serious bug in the weld-integration module of GlassFish. Without this bug, deployment is more than 20 (!) times faster than before.
http://java.net/jira/browse/GLASSFISH-18875
Please vote to get this fixed as soon as possible!
I'm trying, but failing, to set up a reliable continuous integration environment using Xcode Server.
I have a git repository on a headless Mac mini server running the Xcode Server service. The server has a separate development user account with administrator privileges that is used by Xcode.
I have set up my schemes with testing included and shared them to the repository.
The bots run, check out code, build, analyze, and archive, but only seem to run tests when they feel like it, which is almost never. I've checked the schemes, and they have not changed between the runs where Xcode ran the tests and those where it didn't.
When I first set the bots up, tests wouldn't run at all until I added administrator privileges to the development account; then the tests ran a couple of times before Xcode Server stopped running them again.
I never get any reason why the tests aren't run. Sometimes the bots fail because of a crash during setup and an error is reported, but mostly the bots run; they just don't execute the tests, and no error is reported.
I've logged in remotely to the server, and the simulator is running, but never seems to do anything.
Here's a screenshot of an example bot: you can see the tests used to run, and it shows where I reduced my warnings and got rid of an analysis issue. You can also see where no tests ran, with no kind of warning or error given as to why.
I've tried restarting the server, nope.
I've tried restarting the client, nope.
It's really frustrating, and I can't find any recent issues that offer a proper solution to this. The server is in constant use running backups and other tasks, so I'd rather not have a solution that involves logging in to the server and restarting something every time there's a problem (which is always); the whole point of bots is lost if I spend more time logging in to my server trying to get them to work than they spend actually running.
Anyone have similar issues and a solution?
Edit: I noticed that memory usage was very high on the server (memory pressure was practically always amber), so I went out and got some memory today and increased the Mac mini's RAM from 4GB to 16GB. Now the tests have started running again, and the whole process is much faster (hardly surprising, I guess).
Could it just be low memory causing problems with the simulator? I've only just installed the memory and restarted, so I'll give it a few test runs before I confirm this solution; it's stopped working before...
It seems this was a memory issue: I upgraded the server's memory from 4GB to 16GB, as Activity Monitor was showing significant memory pressure.
Since doing this, the bots have started running tests again, and the total running time for a bot is a quarter of what it was.
As per my edit, I've been running the bots for a day now, including bots that run on multiple simulators, and everything seems to be fine.
It's not very good that Xcode gives no obvious indication as to why the tests didn't run.
For reference and to see if this might fix your problems, original server specs were :
Mac Mini Server edition (late 2012)
2.3 GHz Intel Core i7
4GB memory
2x1TB drives
Replaced the 2x2GB memory sticks with 2x8GB sticks (The maximum allowed for the model)
EDIT : After a month of running with no problems, increasing the memory has solved the problem permanently.
On a Windows Server 2012 R2 (x64) TEST server we are running a Tomcat 8 installation, and the CPU usage is disconcerting in the regularity with which it hits its peak.
The behavior started after an installation of our application but before anyone was accessing it. I have accessed a few pages and tested some features, but nothing that, to my knowledge, could create this behavior.
There are 2 virtual processors on the server and every ~20 seconds, the CPU usage will spike (on the one processor that is running Tomcat) to 100% for 10 seconds (give or take). See below:
The regularity of the pattern indicates to me that something is incorrect in either the installation or the settings of Tomcat 8.
I have installed the YourKit Java Profiler (on an SO recommendation), which I was hoping could shed some light on what is causing these spikes, but I haven't been able to see the reason the threads are starting -- due at least in part to my newness to YourKit. I did attach it to the Tomcat launch file, and it seems to be tracking the behavior.
The catalina logs are silent during the spiking occurrences (as are my application logs), but when I stopped Tomcat there were some messages about ThreadLocals that had been created but could not be removed, followed by: "...Threads are going to be renewed over time to try and avoid a probable memory leak."
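For context, the pattern that warning refers to looks roughly like the sketch below (hypothetical code, not taken from our application): a value parked in a ThreadLocal on a pooled worker thread and never removed, so it outlives the request that stored it.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Hypothetical sketch of the leak pattern behind Tomcat's
    // "Threads are going to be renewed over time" warning.
    public class ThreadLocalLeakSketch {
        // Per-thread value that is set but never cleaned up.
        private static final ThreadLocal<byte[]> CACHE = new ThreadLocal<>();

        public static void main(String[] args) {
            ExecutorService pool = Executors.newFixedThreadPool(2); // stands in for Tomcat's request threads
            for (int i = 0; i < 100; i++) {
                pool.submit(() -> {
                    CACHE.set(new byte[1024 * 1024]); // each "request" parks 1 MB on the pooled thread
                    // Correct code would call CACHE.remove() in a finally block;
                    // without it, the value lives as long as the pool thread does.
                });
            }
            pool.shutdown();
        }
    }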
I left the server running over the weekend and the pattern has continued until today, so I don't think my symptoms are going away. Whatever is starting up has now consumed all available RAM on the system, just from spinning up these threads (and/or YourKit) every 20 seconds.
What is a possible approach to isolate this aberrant Tomcat activity and hopefully stop or rectify it?
There are many graphs and tabs in YourKit so I hesitate to list everything that might be helpful. Thanks for helping me narrow down the problem with what YourKit (or other tools) could offer me.
Info from catalina log regarding start-up:
Apache Tomcat/8.0.23
Architecture: amd64
Java Home: C:\Program Files\Java\jre1.8.0_65
CATALINA_BASE: C:\Program Files\Apache Software Foundation\Tomcat 8.0
2015-12-08 Update
At Gergely's request: the application is a local installation of DSpace, a Java application with a PostgreSQL database backend. We are customizing an open-source version of it from here: http://www.dspace.org/introducing. I'm not exactly sure what else would be helpful; I think the stack trace is more revealing as to what is (and isn't) running -- see below.
By turning on Stack Telemetry in YourKit, "CPU Estimation" was made available by dragging the cursor across a period of profiler history. To me, it looks like all the CPU is doing is spinning idly. Are the Java files pictured below Tomcat routines? They don't strike me as DSpace related (although I'm not an expert) nor does it look like any work is being done while the CPU is peaking.
Of note: the stack trace is identical during the quiet periods -- the only difference being that CPU Time (ms) is in the hundreds rather than thousands of milliseconds. For a more direct comparison than what is below, the hump represents ~8,000 ms in Thread.run(), while the quiet periods consume ~125 ms of CPU time (although covering approximately the same amount of wall-clock time).
Lastly, when pages of the application are being requested, a subsequent branch of code appears in the Call Tree. If it happened during the time of a spike it may only take 400 ms of CPU time to load a whole page. The code branch that appears is ApplicationFilterChain.java as a whole separate branch alongside PooledExecutor$Worker.run() -- both underneath java.lang.Thread.run() in the hierarchy.
When trying to interpret the stack trace: Is EDU.oswego.cs.dl.util.concurrent.PooledExecutor$Worker.run() responsible?
[Screenshot: processor spikes with no known associated activity]
2015-12-08 Update #2
YourKit comes pre-configured to hide certain Java class name patterns, which obscured drilling down on java.lang.Thread. Clearing the filters enabled the following screenshots, showing that the vast majority of processing time during a spike event is spent in the following three methods:
java.io.WinNTFileSystem.canonicalize0
java.io.WinNTFileSystem.getBooleanAttributes (in File.exists())
StandardRoot.java
My apologies for not yet knowing enough about Tomcat or DSpace to know what is launching these tasks. (In case it matters, the line directly above the first line is java.lang.Thread.run() and then <All threads>.)
Thank you to those who have viewed and responded to this inquiry. As various individuals surmised, the problem was related to our settings and use of Tomcat -- not a problem with Tomcat itself (most likely).
This is an attempt to answer the question without perfect knowledge of installing the DSpace application and Tomcat, but I think I know enough to be dangerous and potentially helpful to follow-up users.
When installing DSpace, there are some context properties in Tomcat's configuration directories that determine whether changes to code files are reflected immediately, without a Tomcat restart. For us these settings lived in [tomcat]/conf/Catalina/localhost/, where each of three files contained a small, innocuous-looking XML context definition like the following (e.g. oai.xml), originally with reloadable="true":
<?xml version='1.0'?>
<Context docBase="E:/dspace/webapps/oai"
         reloadable="true"
         cachingAllowed="false"/>
You can find documentation on these properties at the following link:
https://wiki.duraspace.org/display/DSDOC5x/Installing+DSpace
Within that documentation is a recommendation about the reloadable and cachingAllowed properties. Search for "Tomcat Context Settings in Production". Here is an excerpt (emphasis mine):
These settings are extremely useful to have when you are first getting started with DSpace, as they let you tweak the DSpace XMLUI (XSLTs or CSS) or JSPUI (JSPs) and see your changes get automatically reloaded by Tomcat (without having to restart Tomcat). However, it is worth noting that the Apache Tomcat documentation recommends Production sites leave the default values in place (reloadable="false" cachingAllowed="true"), as allowing Tomcat to automatically reload all changes may result in "significant runtime overhead".
It is entirely up to you whether to keep these Tomcat settings in place. We just recommend beginning with them, so that you can more easily customize your site without having to require a Tomcat restart. Smaller DSpace sites may not notice any performance issues with keeping these settings in place in Production. Larger DSpace sites may wish to ensure that Tomcat performance is more streamlined.
When I switched these boolean flags to reloadable="false" and cachingAllowed="true" the spiked CPU experience stopped immediately. I don't know if the warning about "Larger sites" applies to us or whether "streamlined performance" could refer to the negative activity I observed.
I presume there may be other problems with our installation that allowed this particular manifestation; one ominous clue is that our production server seems to be operating with these flags in the reloadable="true" configuration. Java, Tomcat, Windows, AND DSpace are ALL getting new versions at the same time so it is fairly difficult to pinpoint why similar Tomcat <context> settings produce such different results.
I am at least content for now to have new behavior and that the system has calmed down. I'll post more if I learn more but will be focusing next on other quandaries.
Update
FWIW, the attributes are settings that directly control Tomcat, and they have changed between versions. E.g., cachingAllowed was removed in version 8, which means it can be removed from the Context elements. Compare:
https://tomcat.apache.org/tomcat-8.0-doc/config/context.html#Attributes
https://tomcat.apache.org/tomcat-7.0-doc/config/context.html#Attributes
And for good measure, here is the help text for reloadable in the Tomcat 8 documentation:
Set to true if you want Catalina to monitor classes in /WEB-INF/classes/ and /WEB-INF/lib for changes, and automatically reload the web application if a change is detected. This feature is very useful during application development, but it requires significant runtime overhead and is not recommended for use on deployed production applications. That's why the default setting for this attribute is false. You can use the Manager web application, however, to trigger reloads of deployed applications on demand.
So it would seem that the ultimate answer is that Tomcat 8 on Windows 2012-R2 with the flag reloadable='true' polls for changes to WEB-INF/lib and WEB-INF/classes. The volume of the folders and files to peruse may very well be the cause of these intense, spiked CPU events. For now I will be relying on reloadable='false' which definitely removes the symptom for us.
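To illustrate why that polling can be expensive, here is a minimal sketch (my own approximation, not Tomcat's actual code) of what a periodic modification scan over WEB-INF amounts to: a full recursive walk whose lastModified()/exists() checks are exactly where WinNTFileSystem.getBooleanAttributes shows up on Windows.

    import java.io.File;

    // Approximation of a reload-style modification scan; illustrative only,
    // not Tomcat's implementation.
    public class ReloadScanSketch {
        // Returns the newest lastModified timestamp under the given root.
        static long scan(File dir) {
            long newest = dir.lastModified(); // hits WinNTFileSystem attribute calls on Windows
            File[] entries = dir.listFiles();
            if (entries == null) return newest;
            for (File f : entries) {
                newest = Math.max(newest, f.isDirectory() ? scan(f) : f.lastModified());
            }
            return newest;
        }

        public static void main(String[] args) throws InterruptedException {
            File webInf = new File(args[0]); // e.g. <webapp>/WEB-INF
            while (true) {
                long t0 = System.nanoTime();
                scan(webInf);
                System.out.printf("scan took %d ms%n", (System.nanoTime() - t0) / 1000000);
                Thread.sleep(10000); // Tomcat's background processing runs every few seconds by default
            }
        }
    }

On a webapp the size of DSpace, with thousands of files under WEB-INF/lib, a walk like this on every background-processing cycle lines up with the short, regular CPU humps observed.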
Not an explicit answer, but way too long for a comment
After reviewing the update on this question and reading a bit, I suspect that this recurring issue is caused by a CuratorTask. Reasons:
The stacktrace you acquired clearly shows that a WorkerThread managed by the DSpace library (so Tomcat is not to be blamed) is using the processor at those times.
After reading a bit about DSpace itself, it looks like it has a feature that allows users to define curator tasks that should be executed periodically.
On top of this, there is at least one task that, according to the documentation, is activated by default, so theoretically any number of tasks could be active by default.
Moreover, this conversation reveals at least one curation task that is activated every 10 seconds.
All of these point in the same direction. I would suggest using the DSpace UI (probably in admin mode) to look around, find the active curation tasks, and verify whether their scheduling corresponds to what you have observed.
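For illustration, a periodically scheduled task, as in the generic sketch below (not DSpace's actual code), would produce exactly the observed signature: recurring CPU bursts under a pool worker's run() frame, with quiet periods in between.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Generic sketch of a periodic task running on a worker pool; in a
    // profiler its work shows up under the pool thread's run() frame.
    public class PeriodicTaskSketch {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
            scheduler.scheduleAtFixedRate(new Runnable() {
                public void run() {
                    // stand-in for a curation task scanning files or records
                    System.out.println("periodic task fired at " + System.currentTimeMillis());
                }
            }, 0, 10, TimeUnit.SECONDS);
        }
    }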
I'm using NetBeans IDE 6.9.1. I have a JSP web application using Spring 3.0.2 and Hibernate Tools 3.2.1.GA. It has been growing gradually in size, though it's still not a very big application; I have added many external class libraries as required, such as Hibernate Validator.
Performance has degraded, and building the application takes a considerable amount of time. When changes are saved, the application is often deployed endlessly by NetBeans' auto-deploy feature; it never finishes, and I have to restart the IDE, after which the procedure begins all over again from scratch. Sometimes the application stops automatically and I have to restart the Tomcat server (6.0.26), because an attempt to restart just the application usually doesn't succeed.
Often (every half an hour or so), the application dies with the following exception:
java.lang.OutOfMemoryError: PermGen Space
and I have to restart the system itself!
While working with JPA along with EJB and JSF as a front end (GlassFish Server 3), this usually wasn't the case, even with heavily loaded applications, on the same version of the NetBeans IDE and exactly the same platform, if I remember correctly.
Are there some ways to improve the performance?
Try overriding the JVM options to allow more memory, if you can:
export JAVA_OPTS="-Xms64m -Xmx512m -XX:PermSize=128m -XX:MaxPermSize=756m"
Here you can find a bit more about the JAVA_OPTS parameters: http://www.unidata.ucar.edu/projects/THREDDS/tech/tds4.2/reference/JavaOptsSummary.html
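To see how close the application actually gets to the PermGen limit at runtime, a small check via the standard MemoryPoolMXBean API could look like the sketch below (pool names vary by JVM; "PS Perm Gen" is typical for HotSpot of that era, and getMax() returns -1 when no limit is defined):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryUsage;

    // Prints usage of every memory pool whose name mentions "Perm".
    public class PermGenCheck {
        public static void main(String[] args) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getName().contains("Perm")) {
                    MemoryUsage u = pool.getUsage();
                    long maxMb = u.getMax() < 0 ? -1 : u.getMax() / (1024 * 1024);
                    System.out.printf("%s: used=%d MB, max=%d MB%n",
                            pool.getName(), u.getUsed() / (1024 * 1024), maxMb);
                }
            }
        }
    }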
I understand you are using NetBeans. A simple solution would be to go to Tools -> Servers -> (select the server, in your case Tomcat) -> Platform, then in the VM Options field paste these settings:
-Xmx1024m -XX:NewSize=256m -XX:MaxNewSize=256m -XX:PermSize=256m -XX:MaxPermSize=256m
That should solve your problem. Good luck!
Recently we have been experiencing a strange problem on our JBoss 5. After running our app for a while, the clients who call the EJBs start throwing NoClassDefFoundError on some classes. After a restart, all is fine again for a while until other functions start returning NoClassDefFoundError. It seems totally random, and a restart of JBoss cures the problem. This particular JBoss runs in a VM with 4GB of RAM and 2 CPUs and more than enough disk space (it has never had less than 5GB free at any time). We have increased Xmx and Xms to 2048MB and the PermGen sweeping to 512MB (ridiculous, I know). Interestingly, the same install runs elsewhere on a VM with half the memory and half the Xmx/Xms/PermGen settings with no problems whatsoever. The only difference is that the stable one is not under any major load, although the broken one has a maximum of only 8 clients connecting, which could hardly constitute "load" in my books :-). Has anybody come across this kind of problem, or have any idea what it could be?
Not really an answer, but we had an RPM install for CentOS 5. We removed that and used the zip file from the JBoss site instead. That cured the problem. Looks like we had a dodgy install.
In a nutshell: my JBoss instance runs fine at first, but after some days its performance slowly degrades.
Detailed:
I've got a setup with JBoss 5.1.0-GA and Java 1.6.0_18-b07 (x64) running on a 64-bit RHEL 4 box. The hardware is a virtual machine with an 8-core Xeon X5550 and 20GB of RAM.
The product deployed in JBoss contains a webservice on which an endurance test is performed.
No database is involved in the process.
The tests are performed using soapui with 4 threads and the tests are configured to create 20% cpu usage.
Let's say at first the average response times are 300ms. After 2 days, the response times are 600ms, which I don't understand.
Of course I did some checks:
There are no memory leaks (confirmed with jprofiler)
Heap mem is always around 25-50%, perm space usage is 50%
GC is almost never busy
All threads are idle after inspecting a thread dump (see the sketch after this list for re-checking this over time)
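For repeated checks without keeping a profiler attached, thread states can also be dumped programmatically through the standard ThreadMXBean API, as in the minimal sketch below. Note it has to run inside the JBoss JVM (e.g. from a test servlet or the webservice itself); standalone it only sees its own threads, and jstack is the external alternative.

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    // Prints each live thread's name and state, so "all threads idle"
    // can be re-verified periodically while the test runs.
    public class ThreadStateCheck {
        public static void main(String[] args) {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
                System.out.printf("%-40s %s%n", info.getThreadName(), info.getThreadState());
            }
        }
    }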
While doing some further investigation, I took a CPU profile with JProfiler at the beginning (when it's still fast) and at the (slow) end. What I see is that every single call is simply 100% slower!
Even calls to a simple Map#put() (the number of invocations and the contents of these maps are the same).
When running a profiler, there are no signs of blocked threads, just running threads.
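As a cross-check outside the profiler, a naive timing probe like the sketch below (hypothetical code, not from the product; deliberately kept Java 6-compatible) can be run inside the degraded JVM and again after a restart to confirm that even trivial operations genuinely slow down:

    import java.util.HashMap;
    import java.util.Map;

    // Coarse probe, not a rigorous benchmark (no JMH, no warmup control):
    // compare the reported time on a fresh JVM vs. a degraded one.
    public class PutProbe {
        public static void main(String[] args) {
            Map<Integer, Integer> map = new HashMap<Integer, Integer>();
            long t0 = System.nanoTime();
            for (int i = 0; i < 1000000; i++) {
                map.put(i, i);
            }
            System.out.println("1M puts took " + (System.nanoTime() - t0) / 1000000 + " ms");
        }
    }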
Does anyone have a clue what's causing the performance degradation?
Thanks!
Update: solved the performance degradation by upgrading the Java version to 1.6.0_24!
Being out of options, I scanned through all the release notes of the Java VM and discovered a performance and reliability fix in 1.6.0_23. See also the 1.6.0_23 release notes.
After the jvm upgrade, the performance stays the same and does not degrade over days.