We are using Grails 2.2.4 and WebSphere 8.0.0.5, all running on AIX 6.1.0.0. WebSphere is using the IBM JDK:
Java(TM) SE Runtime Environment (build pap6460_26sr3ifix-20121005_02(SR3+IV27268+IV27928+IV28217+IV25699))
IBM J9 VM (build 2.6, JRE 1.6.0 AIX ppc64-64 20120919_122629 (JIT enabled, AOT enabled)
J9VM - R26_Java626_SR3_iFix_1_20120919_1316_B122629
JIT - r11.b01_20120808_24925ifx1
GC - R26_Java626_SR3_iFix_1_20120919_1316_B122629 J9CL - 20120919_122629)
JCL - 20120713_01
The problem is that using:
grails.gsp.enable.reload = true
grails.gsp.view.dir="/path/to/gsp/views"
is slow, and by that I mean a good 20 seconds to render a small GSP. What's interesting is that in our local development environments it takes 2 seconds.
We've isolated this problem by having a controller that does nothing except call render(..) on a blank GSP with nothing in the model, so I can only assume it's the compilation but I could be wrong.
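For reference, the isolation test is essentially just a controller like this (the names here are made up, but it shows the shape of the test):
class TimingTestController {
    def blank() {
        // Render an essentially empty GSP with no model, so that any delay
        // can only come from GSP compilation/rendering itself.
        render(view: 'blank', model: [:])
    }
}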
Has anyone come across other instances where rendering GSPs is extremely slow, or does anyone have any suggestions? Perhaps it's some sort of weird JDK issue on AIX?
In addition to the bounty, whoever answers this correctly gets free waffles.
EDIT: I just noticed this the other day: there are three environments with the same WAS config and setup, and one of them works fine, so it is definitely some sort of environment issue.
I agree with your suspicion that it's compilation time. Perhaps your grails.gsp.view.dir is slow - a networked file system, for example?
The real answer, unfortunately, is to not enable GSP reload in production.
It's clearly meant as a development convenience, and is not intended to perform well in production.
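If you only need reloading during development, one option is to scope the settings per environment in Config.groovy - a minimal sketch, with a placeholder path:
environments {
    development {
        grails.gsp.enable.reload = true
        grails.gsp.view.dir = "/path/to/gsp/views/"  // placeholder path
    }
    production {
        grails.gsp.enable.reload = false
    }
}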
Make sure sitemesh is being preprocessed
grails.views.gsp.sitemesh.preprocess=true
I also suspect this is a locking issue rather than a compilation issue.
To at least reduce the problem, set the following config options:
grails.gsp.reload.interval= time in milliseconds.
Set this to something high, depending on your comfort level - perhaps every hour?
If your files' last-modified times are changing too quickly, lower the granularity with
grails.gsp.reload.granularity= Time in milliseconds.
Limit the number of classes being reloaded with
grails.reload.excludes
and
grails.reload.includes
Also remember the view path MUST end in a slash - I didn't see that in the example you provided.
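Putting those together, here is a rough Config.groovy sketch - the interval, granularity, and class names below are placeholders to tune for your own environment:
grails.views.gsp.sitemesh.preprocess = true
grails.gsp.enable.reload = true
grails.gsp.view.dir = "/path/to/gsp/views/"           // note the trailing slash
grails.gsp.reload.interval = 3600000                  // re-check GSPs at most once an hour (ms)
grails.gsp.reload.granularity = 1000                  // tolerance for last-modified comparisons (ms)
grails.reload.excludes = ['com.example.SomeTagLib']   // hypothetical class names
grails.reload.includes = ['com.example.OtherTagLib']  // hypothetical class names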
While I can't tell you the cause of your problem, I can point you to a tool that could help you look into the performance issue.
The Grails Miniprofiler plugin is a fantastic tool for examining end to end performance, all the way to the view (which is where you believe your problem is).
On some GSPs on a project of mine, I was surprised to see how heavy some views got with respect to the SQL calls (via lazy loading).
You may have suspicions for where the problems lie, and maybe you're hitting an obscure bug on that specific platform, but having hard numbers to point to bottlenecks is extremely useful.
For what it's worth, I haven't seen the problem you've described in any of my OS/environments. But I do remember experiencing serious pain with trying to deploy to JBoss 6.1 (since resolved) so I am sensitive to the idea that Grails could have issues on some environments.
Best of luck...
We are having the same problem in our application, which tries to render a BIRT report on a GSP page (it sometimes takes 5 minutes on the prod server; we use Tomcat and MySQL). Not an answer, but a suggestion: you might want to use the Grails cache plugin, which can cache GSP output based on your requirements.
grails.org/plugin/cache
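For example, a minimal sketch assuming the cache plugin's @Cacheable annotation on a controller action (the cache name, controller, and action are made up):
import grails.plugin.cache.Cacheable

class ReportController {

    // Cache the rendered output of this action in a cache named "reports"
    // so the GSP is not re-rendered on every request.
    @Cacheable('reports')
    def show() {
        render(view: 'show', model: [:])
    }
}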
Turned out the problem was https://plugins.grails.org/plugin/sergiomichels/grails-melody-plugin. Who knew!
Related
While developing Java EE applications, the NetBeans IDE (currently 8.0.2, but the issue is not limited to this version) issues a low-memory message quite frequently.
For example, while modifying and saving Java classes (especially JPA entity classes), memory is soon exhausted after repeating this modify-and-save cycle for merely 10 to 15 entity classes. The IDE then issues a low-memory message and the whole system is severely disrupted.
This leaves only one alternative, restarting the system, and for a couple of years I have indeed been wasting more time restarting than actually developing Java EE applications :). I have not yet seen anything about this issue on the internet.
The auto-deploy (Deploy on Save) option in the IDE is permanently turned off. I never use it, as it causes the heap/PermGen space to fill up quite quickly.
This may or may not happen in small toy applications. I am only talking about Java EE applications using a Java EE compliant application server.
If there is no way to get around this problem, then this IDE cannot be used for developing real Java EE applications, because dealing with the problem basically takes more time than the actual development. Otherwise, the solution may be to lean towards other IDEs like Eclipse, IntelliJ IDEA, or JDeveloper, if they fare better.
I am currently using:
GlassFish Server 4.1/Java EE 7
JDK 8u45
NetBeans 8.0.2
Is there a solution somewhere in the world?
NetBeans tends to be slower than other IDEs on Windows, but this goes far beyond merely "slow".
The picture is taken from a simple test web application (thus not an enterprise application). In enterprise applications, changing a single character in a single entity class is not affordable; one is likely to run into a big problem if it is attempted. Fixing the errors shown in the link may take hours, or probably the whole day or so.
Not to mention, again, that I am only talking about large-scale applications involving Java EE or other platforms like Spring.
Long story short: I cannot help wondering how it is affordable to use this IDE in the software industry. It is no longer usable in this way (no offence meant - honestly, I kept my patience for almost three years, and I am merely wasting time using this IDE).
I have an application based on DOJO, with some performance issues on some workstations, not all.
We are trying to find the reason, but since the behavior is not consistent, it's really difficult to pinpoint the cause of the performance issues.
There are machines on which the code works fine most of the time, but there are also machines that get stuck.
Note: We do not have access to the client machines that are reporting issues.
So one of the things we are looking at is the JRE installed on the machines.
Please let me know.
Also, if you have any suggestions on where else I could look, please share them.
Thanks
Nick
Unless you are running Dojo under Node.js, you need some kind of browser or other JavaScript container to run a Dojo application. If we assume that this is a browser-based application, then Dojo is running inside the browser's JavaScript engine.
So it seems very unlikely that the JRE would be affecting the browser's performance, unless you are also running Java applets in your application.
You might want to look at what could be slowing down these workstations in general.
We are planning to upgrade from Oracle 9.2.0.7 to 9.2.0.8. The main reason for the proposed upgrade is to address the exception "terminated with error: ORA-00904: "T2"."SYS_DS_ALIAS_4": invalid identifier" that we get when we try to execute DBMS_STATS.GATHER_SCHEMA_STATS.
We are concerned that the proposed upgrade may have a negative impact on our Java application or, in the worst case, may not even be supported by our Java application.
What are the possible approaches or strategies we can take to ensure the upgrade from Oracle 9.2.0.7 to 9.2.0.8 will not have an adverse impact on our Java application or cause it to function incorrectly? Essentially, we just want to confirm that our application will still work with Oracle 9.2.0.8.
Thank you.
Your first step should be to ensure you set up a test system with your exact production layout and current software (9.2.0.7).
Run it for a bit to make sure it's okay, then perform the upgrade on your test system and run it for a bit longer to ensure it hasn't broken anything. I'm not talking about the cowboy-developer "if it runs for five minutes, that's okay" type of test. It should be a thorough test of all functionality and, if possible, performance.
Once you're happy with the level of testing, you can plan to do the same thing to production.
This isn't rocket science: you should always have a production mirror on which you can test upgrades of software, both your own and third-party stuff. And you should have backout strategies on the off-chance that the production upgrade fails anyway despite your testing.
We're pretty paranoid so we actually set up a whole new machine well in advance doing as much as possible. Then, at cutover time, we disable current production, perform whatever transfer is still required to the new machine, then bring that up and test it. If at any point during testing something cannot be fixed in the upgrade window, the old machine is put back online and we try again later, with appropriate kicks in the rear end for those responsible for the failure :-)
I've upvoted Paxdiablo's answer - there are few shortcuts around testing with as much application coverage as possible on a full-size copy of your production system.
I think you're generally looking to answer two questions with an upgrade:
Have new bugs been introduced in the Oracle functionality used by the application?
Have changes in the optimizer changed the execution plans (for the worse!) for any application queries?
I believe that as early as 9.2 the optimizer would include system stats when determining execution plans, so you want to at least bring that information over into the test system, to remove that variable for the optimizer if your test system hardware differs from production.
If you upgrade to Oracle 11g and have the $$$$, you can license and configure Real Application Testing. This will let you essentially record and play back database activity in a test instance to answer these two questions.
In addition to the excellent answers by dpbradley & paxdiablo, before the database is patched it is worth looking on the Oracle support site, support.oracle.com, at the known issues this patch may introduce - which may stop you from losing more than you gain!
You will need a valid support license to log into the Oracle support site, but there is a note for 9.2.0.8 filed under:
9.2.0.8 Patch Set - Availability and Known Issues [ID 388281.1]
Bugs can be difficult enough to resolve when they're your (or a coworker's) fault. However, we all know that the technology we use to implement our programs is written by infallible people such as ourselves. So it stands to reason that some people have been affected by bugs in the implementation of the tools they used.
So, have you found a bug in your program that was caused by a widespread underlying technology, such as a programming language or framework? If so, did it fail with some indication, or did it silently overwrite some data? How difficult was it to debug? Did it cause a potential security vulnerability? Were you able to contact the provider and confirm that it was fixed (or fix it yourself)?
Here are some of the worst (in my opinion) technologies to have a bug in (especially one that fails silently):
Programming language
Concurrency framework
Remote API
Database
I deal with one on a daily basis called Internet Explorer.
To be fair though, there are lots of bugs in all of the browsers. I have filed several bugs for Firefox as well, and just the other day I found a strange case where the border doesn't take padding into consideration.
This is a good argument for writing lots of unit tests. If you upgrade your platform to a newer version that has some new bug, hopefully you have a test that reveals it.
In one case I had, the vendor was working on a brand-new API. They were not ready to release the new API, but they were not very keen to fix the bug in the old one either, as they considered it dead from a $$ perspective.
A colleague once stumbled across a bug in the Jikes Java compiler. He had something like this:
if (condition)
{
}
else
{
    System.out.println("Code that does stuff.");
}
He hadn't intended to leave the top block empty permanently, but just had it that way during development. He discovered that the condition was ignored unless he put a comment in that block so that it was no longer empty.
During my time developing (mostly) with Java I've run into bugs in the following components:
Java compiler
This actually happened several times. Usually we found that ecj (the Eclipse compiler) and javac (the Sun compiler) disagree about the validity of some Java code. Usually I enter bug reports for both systems; one of them gets accepted and the other is closed as invalid.
Database engine
Those are very rare and very, very nasty, because no one expects the DB itself to have a bug. In our case it was an old product (the bug was already fixed in a newer release) that accepted values in a field that were not within the defined range (similar to having a NOT NULL field containing NULL).
JDBC driver
There were several bug fixes to a JDBC driver due to a project I've been working on. The bug fixes ranged from trivial ("why is there debug output in the production release?") to might-not-even-be-a-real-bug ("you can easily save one round trip per statement by doing such-and-such").
JVM implementations
Those are hard to diagnose and often present themselves as effectively random crashes on one JVM and stable operation on another. We temporarily switched JVM vendors several times because of things like this.
Each time it took quite some time (and usually the involvement of the vendor of the component) before I actually believed it was a bug in that component.
And yes: the cases of false positives (i.e. the bug was actually in my/our code) were orders of magnitude more common.
The only place where bugs in the third-party component are kind-of expected seem to be web browsers. Almost no one questions you when you say "that's a bug in <insert buggy browser of the week>, we need to work around it like this ...".
I guess almost anyone who has programmed JavaScript with Internet Explorer has found a bug in their program which was caused by a widespread underlying technology.
The indication of failure is the blue "e" on your Windows desktop.
The first that comes to mind was with version 1 of the .NET Framework; for some reason, the Random.NextDouble() method never produced a value greater than 0.5. I was completely baffled, and having run a test app that called the method thousands and thousands of times, I had to presume it was a bug and work around it.
Never did find out what the cause was...
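For what it's worth, this is the kind of thing a small regression test can pin down. A rough Groovy sketch using java.util.Random on the JVM as an analogue (the original bug was in .NET's Random, so this is purely illustrative):
// Over many samples, nextDouble() should stay in [0, 1) and, with
// overwhelming probability, produce at least one value above 0.5.
def rng = new Random()
def samples = (1..10000).collect { rng.nextDouble() }
assert samples.every { it >= 0.0 && it < 1.0 }
assert samples.max() > 0.5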
I've run into something with gcc 4.4.0, but as the product I'm currently working on is still pre-alpha, it was fairly easy to repair locally. Hopefully they'll fix it soon.
I found a very strange bug in gcc on MIPSel (OpenWrt). We were testing a small app (about 3K SLOC) that gave me a SIGSEGV even though the code was, in theory, correct.
I don't know the details of the bug (and I don't have that code anymore), but changing the gcc version from 4.1 to 3.6 solved the problem.
How can I improve performance when developing locally with WebSphere and RAD? I am using one web application of moderate size (1000? classes) and it is impossible to work with the app locally on a Windows box. The WebSphere 6.1 configuration uses the default configs. RAD7 is configured with a max heap of 1024 MB. I thought about increasing the heap of the server; at present, the min and max are 128/300 MB.
In terms of unresponsiveness, sometimes it may take minutes to load a page, if the page loads at all. Also, I disabled "Build Automatically" and "Publish Automatically". Maybe those should be turned on?
I'm not sure about RAD7, but from my past experience I'd suggest giving MyEclipse Blue a try.
Since that might not be an option, here are some other usual culprits you can check:
How much RAM does your machine have? It's good to give WS 1GB of RAM but if your computer only has 1GB of real RAM, it's going to swap itself to death. If your boss won't pay for it, go get some RAM with your own money. 2GB are less than $80 ATM. I suggest to get at least 4GB. Yes, Windows can only use 3.5GB even when 4 are installed but that half GB costs $20 or less. Even thinking about this for more than five minutes will cost more than simply buying it.
Next, make sure you are using the correct Java GC options. There should be some info about this in the docs. Also make sure that the process uses the "jvm.dll" from the "server" directory, not the "client" one. "Process Explorer" will help.
Since I'm not using RAD, I'm not 100% sure about "Build Automatically" and "Publish Automatically" but since RAD7 is based on Eclipse, these options will compile code in the background as you type. This will greatly reduce the time between you saving your last change and the moment the app server can start to load the new code.
When all else fails, run WebSphere in a profiler and look at where it spends all its time.
Aaron had great advice.
I would also suggest using JConsole to see what is going on, to help you determine if you need more memory, larger heap size, etc. My experience with running Websphere and RAD locally is that it will be slow, but then I was on an old machine that needed more memory. :)
http://java.sun.com/j2se/1.5.0/docs/guide/management/jconsole.html
Berlin,
RAD 7 saps your PC! When I was using it to develop Portlets, I followed this optimization guide and it made the IDE significantly quicker to develop Portlets in. Obviously it is aimed at Portlet development but it might help you.
Following the advice given in the answer to this question will also help.
I definitely agree with Aaron Digulla. You will see a major performance improvement with 4GB RAM installed on your development machine. I developed an Eclipse/RAD plugin with some buddies of mine and we were able to measure how much time we saved by upgrading from 2GB to 4GB.
The plugin is available here: http://lopb.org/
After gathering some hard numbers on how much time we spent waiting for publishing and loading the app on our 2GB development machines, we were able to convince management to upgrade the rest of the developers on the team.
Anyway, you should really consider upgrading to 4GB if you want to run RAD 7 and WebSphere 6 on the same development machine. Each one needs -Xms512m -Xmx1024m as JVM args to run well, and that means you will swap to disk way too much if you only have 2GB of RAM or less. HTH
Make sure you are running WAS in development mode for your development and testing.
The option is under the server settings in the console.
Karl
hehe, we had the same problem with RAD6 and Websphere 6.
The way we sped things up was to move to Eclipse and JBoss.
We developed on Eclipse and JBoss, and the first round of testing was on WebSphere. We had some issues with the differences, but we would never have completed the project were it not for our switch (a lot fewer issues than developing on RAD/WAS).
But to help you in the meantime...
You probably want Build Automatically and Publish Automatically turned off. That way you can make a bunch of changes and then tell RAD to compile and deploy while you go and get a coffee.
There is a "run in dev mode" option in WebSphere (I know there was for 6.0), so track that down and turn it on (it's in the WAS console somewhere).
I found WAS's hot code replacement to work fairly well. At the beginning of the day I'd deploy to WAS and then not have to redeploy at least until lunch time (as I was debugging); I would make changes and they would be fed to the server without my having to redeploy.
Chances are, even after running the profiler, you'll find there's not much you can do.
Turn off all validations (in RAD); they tend to take forever.
Depending on what you're doing with EE, investigate the possibility of developing on another IDE/server combo; maybe you can do the bulk of your work there and then deploy from RAD/WAS for some final testing. If you're using vanilla EJBs or web services, this is feasible.
That max heap does sound a bit small to me. The suggestion to fire up JConsole is a good one because it will tell you how much heap is being used, though I'm not sure whether it will work on the IBM VM (RAD's). You might also try turning on the memory usage monitor in RAD; it tells you how much memory is being used, so you can see if it's hitting the max.
JConsole will not work without specifically enabling it via a JVM command-line switch (on Java 5 era JVMs that typically means adding -Dcom.sun.management.jmxremote to the server's JVM arguments).
The suggestions from Michael Wiles sound reasonable, but please update your RAD to the latest fix pack available first.
You can also contact support.