Performance improvement in JSF

I read that two parameters, javax.faces.FACELETS_REFRESH_PERIOD=-1 and javax.faces.FACELETS_SKIP_COMMENTS=true, can improve performance. I tried them, but I don't see any improvement. Is there anything I am missing, or is there a better option that would achieve an improvement?

You hardly see a performance improvement because the author of this blog greatly exaggerates when he/she states:
we will talk about the most important aspects that can be tuned in order to enhance the
performance of JSF 2.x applications.
And it lists the two you mention as important ones without adding that at least the refresh period is highly related to load, and that the check behind it has been optimized a lot by Java (N)IO and much faster disks.
javax.faces.FACELETS_REFRESH_PERIOD=-1
The OS I/O layer caching, in combination with Java NIO and faster disks, makes it really quick to check whether the file timestamp against which JSF checks for changes has changed. This is so quick that you'll hardly notice any improvement these days; maybe a little when you have thousands of simultaneous users. So yes, it helps, a little, but not as much as you'd expect from the wording in the blog.
javax.faces.FACELETS_SKIP_COMMENTS=true
This would help if you had a comment-to-useful-code ratio of 1:1, but you'll hardly have that much comment in a page, so the gain of sending 100-500 bytes less in a 10k-100k page (all example figures) is negligible.
This setting is (at least for me) more useful in that our internal comments in the page do not end up at the end user.
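For reference, a minimal sketch of how these two settings are typically declared in WEB-INF/web.xml (the param names are the standard JSF 2.x ones; the values are the ones discussed above):

    <!-- Sketch: the two context parameters discussed above, in web.xml. -->
    <context-param>
        <!-- -1 turns off checking Facelets files for changes (production use). -->
        <param-name>javax.faces.FACELETS_REFRESH_PERIOD</param-name>
        <param-value>-1</param-value>
    </context-param>
    <context-param>
        <!-- Strip Facelets comments from the rendered output. -->
        <param-name>javax.faces.FACELETS_SKIP_COMMENTS</param-name>
        <param-value>true</param-value>
    </context-param>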
For other improvements, see JSF Performance improvement.

Related

Java 8 migration - Test planning for low latency applications

We are planning a Java 8 migration for our application, from Java 7. As part of this migration, the most important thing we want to achieve is to recompile our source code with JDK 8 and benefit from the performance improvements made in the JVM, the garbage collection model, etc. Besides this, we also want to set the stage for taking advantage of the new features added in Java 8.
My question to this group is to get advice on how we should plan our testing. What are the key areas we should watch out for? What are some of the challenges others have faced?
Note: Our application is intended for low latency use.
A few things...
Some things that compile under JDK 7 might not under JDK 8. This is because lots of bugs were fixed and the compiler is now potentially much closer to the JLS (this is probably mostly about generics, but it might affect other areas as well).
If you use external libraries, be aware that not all of them are compatible with JDK 8.
HashMap internals have changed. If you rely on some iteration order (I've seen that), it might now fail; otherwise the internal changes will only make your HashMap usage faster.
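A minimal sketch of the ordering point (illustrative only, not from the original answer): if any code depends on iteration order, switching to LinkedHashMap makes that order explicit instead of accidental.

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class IterationOrderDemo {
        public static void main(String[] args) {
            // HashMap makes no ordering guarantee: the order observed under
            // JDK 7 may silently differ under JDK 8's reworked internals.
            Map<String, Integer> unordered = new HashMap<>();
            // LinkedHashMap guarantees insertion order, so code or tests that
            // depended on a fixed order keep working after the migration.
            Map<String, Integer> ordered = new LinkedHashMap<>();
            for (String key : Arrays.asList("c", "a", "b")) {
                unordered.put(key, key.length());
                ordered.put(key, key.length());
            }
            System.out.println("HashMap (unspecified):     " + unordered.keySet());
            System.out.println("LinkedHashMap (insertion): " + ordered.keySet());
        }
    }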
You say that your app is intended for low latency. Be aware that Stream operations are slower and require more resources than simpler constructs. But unless you actually measure an impact (there wasn't one in my case when migrating), there's nothing to worry about.
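As a small hedged sketch of that trade-off (my example, not the answerer's): both methods below compute the same sum, but the plain loop avoids the pipeline allocation and unboxing that the Stream version adds in a hot path.

    import java.util.Arrays;
    import java.util.List;

    public class SumDemo {
        // Plain loop: no extra allocations; typically the safer choice
        // in latency-sensitive hot paths.
        static long sumLoop(List<Integer> values) {
            long sum = 0;
            for (int v : values) {
                sum += v;
            }
            return sum;
        }

        // Stream version: concise, but it builds a pipeline object and
        // unboxes through mapToLong; measure before worrying about it.
        static long sumStream(List<Integer> values) {
            return values.stream().mapToLong(Integer::longValue).sum();
        }

        public static void main(String[] args) {
            List<Integer> values = Arrays.asList(1, 2, 3, 4, 5);
            System.out.println(sumLoop(values) + " == " + sumStream(values));
        }
    }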
This is a great example of where having your test cases in place would help a lot: they would catch all the major problems (if any) at that point.
I'd say that the biggest challenge for me was not the migration itself but the post-migration. Lots of people (including me) made multiple mistakes in basic things, since lambdas and Streams were quite new. My personal advice here: do not be afraid to ask. Better safe than sorry.
P.S. As noted in a comment, you should also check this guide.

Are there any resources for language independent performance tips?

I work with many people who program video games for a living. I have quite a bit of knowledge in C++ and I know a number of general performance strategies to utilize in day-to-day programming, like using prefix ++/-- over postfix.
My problem is that often people come to me for tips on general optimizations they can apply on a regular basis, but these people program in all sorts of languages. Some use C++, C#, Java, ActionScript, etc.
I am wondering if there are any general performance tips that can be applied on a day-by-day programming basis. For example, I would suggest prefix ++/-- over postfix to people programming in other languages, but I am just not sure whether that holds.
My guess is that it is language-specific, and that the best way to approach general optimization is to make sure you are not using majorly bloated algorithms, but maybe someone has better advice.
Without going into language specifics, or even knowing whether this is embedded, web, CAD, game, or iPhone programming, there isn't much that can be said. All we know is that there's multiple languages involved, and for some unknown reason performance is always slower than desirable.
First, check your algorithms. A slow algorithm can cause horrible performance. Read up on algorithms and their complexity.
Second, note if there are any really slow operations, such as hitting a database or transmitting information or moving a robot arm. See if the program is doing more of those than it should.
Third, profile. If there's a section of code that's taking 5% of the time, no optimization will make your program more than 5% faster. If a section of code is taking a lot of the time, it's worth looking at.
Fourth, get somebody who knows what they're doing to make any specific optimizations. Test them when they're done to make sure they actually speed up performance. When performance was an issue, I've improved it with some counterintuitive measures, like rolling up loops.
I don't think you can generalize optimization as such. To optimize execution time, you need to dig deep into the language and understand how things work in detail. Just guessing or making assumptions on experiences with other languages won't work! For example, writing x = x << 1 instead of x = x*2 might be a big benefit in C++. In JavaScript it will slow you down.
With all the differences between languages it's hard to find generic optimization tips. Maybe for some languages which are similar (e.g. C# and Java). But if you add both JavaScript and Python to that list, I'm pretty sure not many common optimization techniques will be left over.
Also keep in mind that premature optimization is often considered bad practice. Developer-hours are much more expensive than buying additional hardware.
However, there is one thing which comes to mind. Over the past decade or so, object-relational mappers have become quite popular, and hence they have emerged in pretty much all popular languages. But you have to be careful with those: if not properly configured, it's easy to load tons of data into memory that your code will never use. Keep that in mind. Lazy loading might be of some help here, but your mileage will vary.
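A hedged sketch in JPA terms (the entity names are invented for the example): explicitly marking a large collection lazy keeps a parent load from dragging every child row into memory.

    import java.util.List;
    import javax.persistence.Entity;
    import javax.persistence.FetchType;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.ManyToOne;
    import javax.persistence.OneToMany;

    // Hypothetical mapping: loading one Customer should not load every order.
    @Entity
    public class Customer {
        @Id
        @GeneratedValue
        private Long id;

        // LAZY (the JPA default for collections, made explicit here) defers
        // fetching the orders until the code actually iterates over them.
        @OneToMany(mappedBy = "customer", fetch = FetchType.LAZY)
        private List<PurchaseOrder> orders;
    }

    @Entity
    class PurchaseOrder {
        @Id
        @GeneratedValue
        private Long id;

        @ManyToOne // to-one relations default to EAGER in JPA
        private Customer customer;
    }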
Optimization depends on so many things that answering such a generic question would make this post explode into a full-fledged paper. In my opinion, optimization should be regarded on a project-by-project basis. Not only Language-by-Language basis.
I think you need to split this into two separate questions:
1) Are there language-agnostic ways to find performance problems? YES. Profile, but avoid the myths around that subject.
2) Are there language-agnostic ways to fix performance problems? IT DEPENDS.
A general language-agnostic principle is: do (1) before you do (2).
In other words, Ready-Aim-Fire, not Ready-Fire-Aim.
Here's an example of performance tuning, in C, but it could be any language.
A few things I have learned since asking this:
I/O operations are usually the most expensive for performance. This holds especially true for disk or network I/O (network is usually the most expensive, because if you have to wait for a response from another host, you also wait for all the processing and I/O that the remote host does). Only do these operations when absolutely necessary, and consider using a cache where possible.
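A minimal caching sketch (illustrative only; the loader function stands in for whatever disk or network fetch is expensive):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    // Remember the result of an expensive lookup so repeated calls skip the I/O.
    public class SimpleCache<K, V> {
        private final Map<K, V> cache = new ConcurrentHashMap<>();
        private final Function<K, V> loader; // e.g. a network call or disk read

        public SimpleCache(Function<K, V> loader) {
            this.loader = loader;
        }

        public V get(K key) {
            // computeIfAbsent performs the expensive load only on a cache miss.
            return cache.computeIfAbsent(key, loader);
        }
    }

A real cache would also bound its size and expire stale entries; the sketch omits eviction for brevity.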
Database operations can be very expensive because of network/disk I/O and the translation time to and from SQL. Using an in-memory DB or a cache can help reduce the I/O issues, and some (not all) NoSQL databases can reduce the SQL translation time.
Only log important information. Logging libraries like log4j help because you can put logging throughout your application to your heart's desire while assigning each message a log level; whatever level the application is set to, it only logs messages at that level or higher. If you need to troubleshoot some functionality, you only have to change a quick config and restart your application to get additional messages, then turn the application back to the default level when you are done so that you do not log too often.
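For instance, a hedged Log4j 2 sketch (class and message names invented): the debug call costs almost nothing when the configured level is INFO or higher.

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    public class OrderService {
        private static final Logger logger = LogManager.getLogger(OrderService.class);

        public void process(String orderId) {
            // Guard an expensive-to-build message; the body is skipped
            // entirely unless the application is configured at DEBUG or lower.
            if (logger.isDebugEnabled()) {
                logger.debug("Processing order {} with full diagnostics", orderId);
            }
            logger.info("Order {} processed", orderId);
        }
    }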
Only include functionality that is needed. Additional functionality may be nice to have but can increase processing time, provide additional locations for the application to fail, and costs your team development time that could be spent on more important tasks.
Use and configure your memory manager correctly. Garbage collection routines can kill performance if they are not configured correctly. If your application freezes for a second or two every minute for garbage collection, your customers probably will not be happy.
Profile only after you have discovered a performance issue. Profilers will make the application's performance look worse than it is, because your application and the profiler run on the same host and consume the same hardware resources.
Do not do performance tuning prematurely. There are general practices you can follow that should be better for performance in each language, but starting performance tuning in the middle of application development can cost you a lot of development time, because there is still functionality to be added.
This will not necessarily help performance, but keep class dependencies to a minimum. When you get into performance tuning, there is a good chance you will have to rewrite whole portions of code, and the more dependencies there are on the section you are tuning, the greater the chance you will break something. It can often be a domino effect: after fixing the performance issue you then have to fix all of its dependencies, and possibly the dependencies of those dependencies. A performance tuning exercise estimated at a few hours can quickly turn into months in an application that has a lot of dependencies.
If performance is a concern do not use interpreted languages (scripting languages).
Only use the hardware you need. Having a system with a 64-core processor may seem cool, but if you only have two or three threads running in your application, then you are getting little benefit from the extra cores. In fact, in rare instances, overly excessive hardware can hurt performance, because your application may spend more time switching between cores or processors than actually doing work.
Make any timing metrics you report as granular as possible. Today you may only care about the number of milliseconds a process takes, but as you make your application faster and faster you may need more granular timings. If version A reports milliseconds and version B reports microseconds, how can you compare them when version B takes about the same number of milliseconds? Version B may be better, but you just can't tell, because version A's metrics were not granular enough.
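A small sketch of the granularity point (doWork is a placeholder for the operation being measured): capturing nanoseconds and reporting microseconds keeps sub-millisecond improvements visible between versions.

    public class TimingDemo {
        public static void main(String[] args) {
            long start = System.nanoTime();
            doWork(); // placeholder for the operation being measured
            long elapsedNanos = System.nanoTime() - start;
            // Report microseconds instead of rounding away detail
            // at the millisecond level.
            System.out.printf("elapsed: %.1f microseconds%n", elapsedNanos / 1000.0);
        }

        private static void doWork() {
            Math.sqrt(42.0); // stand-in workload
        }
    }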

How can I sell a non-functional testing tool to my company?

I need to present a performance testing tool to my company's management team. Some of them think performance testing is not necessary for us, because our customers never request it or give us requirements about performance.
However, one of our current big projects has run into performance problems: response times are very long, and the server goes down when it handles many concurrent users.
I think I need to prepare to present its benefits, both concrete and intangible. Does anyone have experience with performance testing tools? How can they empower your productivity?
Management cares about money. Show them how your tool will save them money and you will get their approval. Everything else is usually trivial to them.
Expanding on what LWoodyiii had to say: when presenting a case for anything, whether that be hiring more people or investing in a performance testing tool (or outsourcing your performance testing, for that matter), it needs to be presented in terms of money saved. By doing a little legwork, you should be able to back into the dollars-saved amount.
If you had never had any performance problems, it would be more difficult to quantify the savings. But in your case it should be a little easier to figure out, as you have already had some significant performance issues. You should be able to put a dollar amount on your existing performance problem: quantify the revenue lost (lost transactions, lost customers, decreased transaction throughput, etc.) due to the degradation of service, and factor in the costs associated with fixing and resolving the issue. Then it is a matter of comparing the cost of having performance problems against the cost of implementing a performance testing program (tool, training, and resource costs).
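To make that concrete with purely invented numbers: suppose the outage lost 200 transactions per hour at $50 each over 8 hours ($80,000), plus 100 developer-hours at $100/hour to diagnose and fix ($10,000), for roughly $90,000 per incident. Against, say, a $30,000 budget for the tool, training, and a test cycle, one prevented incident pays for the program three times over.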
It also probably would not hurt to spice up the presentation with some anecdotal performance horror stories that were well publicized in the news and how much those outages cost those firms.
It sounds like you haven't used profilers yourself; that would be a good start. You didn't mention your environment, but Red Gate makes a wonderful profiler for .NET.
http://www.red-gate.com/products/ants_performance_profiler/index.htm
Whatever environment you're in, you can probably find a decent profiler with a trial period. Use the trial period to profile your app and get to know how profilers work and how they can help make your app better.
One thing to demonstrate about productivity is how they can let you focus on the biggest bottlenecks and have the most impact on improving performance with the least effort. With a good profiler you won't bother optimizing code that is already performant.
Of course, if your company really doesn't care about performance, they won't want you doing any optimization anyway. There are lots of companies like this, and it stinks.
I think performance is one of those cases that is really difficult to present to someone else, especially management. You need a clear, simple way to show the benefit.
I have experience with profilers like JBuilder and YourKit but no other performance tools, and I think the numbers shown in them are not, on their own, sufficient to demonstrate their usefulness.
If you can build a nice practical example, that would be great: show the same case in both scenarios. If you can show that the old version's response time is long and that, after the performance improvement, the same operation takes much less time, that is a good way to prove your claim.

Is Performance Always Important?

Since I am a Lone Developer, I have to think about every aspect of the systems I am working on. Lately I've been thinking about performance of my two websites, and ways to improve it. Sites like StackOverflow proclaim, "performance is a feature." However, "premature optimization is the root of all evil," and none of my customers have complained yet about the sites' performance.
My question is, is performance always important? Should performance always be a feature?
Note: I don't think this question is the same as this one, as that poster is asking when to consider performance and I am asking if the answer to that question is always, and if so, why. I also don't think this question should be CW, as I believe there is an answer and reasoning for that answer.
Adequate performance is always important.
Absolute fastest possible performance is almost never important.
It's always worth keeping an eye on performance and being aware of anything outrageously non-optimal that you're doing (particularly at a design/architecture level) but that's not the same as micro-optimising every line of code.
Performance != Optimization.
Performance is a feature indeed, but premature optimization will cost you time and will not yield the same result as when you optimize the parts that need optimization. And you can't really know which parts need optimization until you can actually profile something.
Performance is the feature your clients will not tell you about if it's missing, unless it's really painfully slow and they're forced to use your product. Existing customers may report it in the end, but new customers will simply not bother if the performance isn't there.
You need to know what performance you need, and formulate it as a requirement. Then, you have to meet your own requirement.
That 'root of all evil' quote is almost always misused and misunderstood.
Designing your application to perform well can mostly be done with just good design. Good design != premature optimization, and it's utterly ridiculous to go off writing crap code and blow off doing a better job on the design as an 'evil' waste. Now, I'm not specifically talking about you here... but I see people do this a lot.
It usually saves you time to do a good job on the design. If you emphasize that, you'll get better at it... and get faster and faster at writing systems that perform well from the start.
Understanding what kinds of structures and access methods work best in certain situations is key here.
Sure, if your app becomes truly massive or has insane speed requirements, you may find yourself doing tricked-out optimizations that make your code uglier or harder to maintain... and it would be wrong to do those things before you need to.
But that is absolutely NOT the same thing as making an effort to understand and use the right algorithms or data patterns or whatever in the first place.
Your users are probably not going to complain about bad performance if it's bearable. They possibly wouldn't even know it could be faster. Reacting to complaints as a primary driver is a bad way to operate. Sure, you need to address complaints you receive... but a lack of them does not mean there isn't a problem. The fact that you are considering improving performance is a bit of an indicator right there. Was it just a whim, or is some part of you telling you it should be better? Why did you consider improving it?
Just don't go crazy doing unnecessary stuff.
Keep performance in mind but given your situation it would be unwise to spend too much time up front on it.
Performance is important but it's often hard to know where your bottleneck will be. Therefore I'd suggest planning to dedicate some time to this feature once you've got something to work with.
Thus you need to set up metrics that are important to your clients and to you, and keep and analyse those measurements. Then estimate how long each improvement would take to implement and how much it would cost. Now you can aim at getting as much bang for your buck/time as possible.
If it's a web app, it would be wise to track your page size and performance using Firebug + YSlow and/or Google Page Speed. Again, know which advice applies to a small site like yours and which only applies to the likes of Yahoo and Google.
Jackson’s Rules of Optimization:
Rule 1. Don’t do it.
Rule 2 (for experts only). Don’t do it yet—that is, not until you have a perfectly clear and unoptimized solution.
—M. A. Jackson
Extracted from Code Complete, 2nd edition.
To give a generalized answer to a general question:
First make it work, then make it right, then make it fast.
http://c2.com/cgi/wiki?MakeItWorkMakeItRightMakeItFast
This puts a more constructive perspective on "premature optimization is the root of all evil".
So to parallel Jon Skeet's answer, adequate performance (as part of making something work, and making it right) is always important. Even then it can often be addressed after other functionality.
Jon Skeet's 'adequate' nails it, with the additional proviso that for a library you don't yet know what's adequate, so it's better to err on the safe side.
It is one of the many stakes you must not get wrong, but the quality of your app is largely determined by the weakest link.
Performance is definitely always important in a certain sense - maybe not the one you mean: namely in all phases of development.
In Big O notation, what's inside the parentheses is largely decided by design, both component isolation and data storage. The choice of algorithm will usually only affect best/worst-case behavior (unless you start with decidedly substandard algorithms). Code optimizations will mostly affect the constant factor, which shouldn't be neglected either.
But that's true for all aspects of code: in any stage, you have a good chance to fail any aspect - stability, maintainability, compatibility etc. Performance needs to be balanced, so that no aspect is left behind.
In most applications, 90% or more of execution time is spent in 10% or less of the code. There is usually little point in optimizing code outside those 10%.
Performance is only important to the extent that developing the performance improvement takes less time than the total amount of time that will be saved for the user(s).
The result is that if you're developing something for millions, yeah, it's important to save them time. If you're coding up a tool for your own use, it might be more trouble than it's worth to save a minute or even an hour or more.
(This is clearly not a rule set in stone... there are times when performance is truly critical no matter how much development time it takes.)
There should be a balance to everything. Cost (or time to develop) vs Performance for instance. More performance = more cost. If a requirement of the system being built is high performance then the cost should not matter, but if cost is a factor then you optimize within reason. After a while, your return on investment suffers in that more performance does not bring in more returns.
The importance of performance is IMHO highly correlated with your problem set. If you are creating a site with an expectation of heavy load and a lot of server-side processing, then you might want to put some more time into performance (otherwise your site might end up being unusable). However, for most applications the time put into optimizing performance on a website is not going to pay off: users won't notice the difference.
So I guess it breaks down to this:
Will users notice the improvements?
How does this improvement compare to competing sites?
If users will notice AND the improvement would be enough to differentiate you from the competition - performance is an important feature - otherwise not so much. (To a point - I don't recommend ignoring it entirely - you don't want your site to turtle along after all).
No. Fast enough is generally good enough.
It's not necessarily true, however, that your client's idea of "fast enough" should trump your own. If you think it's fast enough and your client doesn't, then yes, you need to accommodate your ideas to theirs. But if your client thinks it's fast enough and you don't, you should seriously consider going with your opinion, not theirs (since you may be more knowledgeable about performance standards in the wider world).
How important performance is depends largely and foremost on what you do.
For example, if you write a library that can be used in any environment, this can hardly ever have too much performance. In some environments, a 10% performance advantage can be a major feature for a library.
If you, OTOH, write an application, there's always a point where it is fast enough. Users will neither realize nor care whether a button press reacts within 0.05 or 0.2 seconds, even though that's a factor of 4.
However, it is always easier to get working code faster, than it is to get fast code working.
No. Performance is not important.
Lack of performance is important.
Performance is something to be designed in from the outset, not tacked on at the end. For the past 15 years I have been working in the performance engineering space, and the cause of most of the project failures I work on is a lack of requirements on performance. A couple of posts have noted "fast enough" and asked whether your expectation matches that of your clients, but what about when your client, your architectural team, your platform engineering team, your functional test team, your performance test team, and your operations team all have different expectations of performance, none of which have been committed to writing and measured against? Bad magic, to be certain.
Capture those expectations on the part of your clients. Commit them to specific, objective, measurable requirements that you can evaluate at each stage of production of your software. Expectations may not be uniform: one section of your app/code may need to be faster than others, and not every customer will have the same expectations of what is acceptable. Having this information will force you to confront decisions in design and implementation that you may have overlooked in the past, and it will result in a product that is a better match to your clients' expectations.

What strategies have you employed to improve web application performance? [closed]

Any personal experience in overcoming web application performance hurdles?
Any recommended strategies for improving the performance of a data-driven web application?
My development team works on a web application (JSP reports, HTML, JavaScript) that uses an Oracle database (PL/SQL). The key functionality the application delivers is in reporting, where a user can get PDFs of reports at a high level and drill down to lower levels of supporting details.
As the number of supporting detail records has grown into the millions, the performance of the system has significantly degraded. Based on our current analysis of the metrics, the bottleneck seems to be in the logic hitting the DB and the DB performance. Changing the DB model and re-doing some of the server side logic is currently being explored.
Partitioning, indexing, explain plans, and gathering statistics are things that have been done on the DB side to try to improve performance. While they've helped, they haven't solved the issue satisfactorily. The toughest part of analyzing the performance data is that the database and web servers are remotely administered by a different part of the IT organization, so the developers don't have regular, full access to see what's going on (especially in the production environment, which is not mirrored exactly in any other development/testing environment).
While my answer may not contain any concrete steps to help, this is always where I start.
The first thing I would do is try to throw away all of your assumptions about what the trouble is, and take steps to install metrics everywhere you can. Let the metrics guide you rather than your intuition. I've chased many, many, many white rabbits going on a hunch... they let me down more times than they've been right.
Have you checked this out?
Best practices for making web pages fast from Yahoo!'s Exceptional Performance team
If you really are having trouble at the backend, this won't help. But we used their advice to great effect to make our site faster, and there is still more to do.
Also use the YSlow add-on for Firebug. You may be surprised when you see where the actual time is being taken up.
Have you considered building your data ahead of time? In other words, are there groups of data that are requested again and again? If so, have them ready before the user asks. I'm not exactly talking about caching, but I think that is part of the equation.
It might be worth it to take a step back from the code and examine the usage patterns of the system. For example, if you are showing people monthly inventory or sales information, do they look at it only at the end of the month? If so, just build the data on the last day and store it. If they look at it daily, maybe try building each previous day's results, storing them, and avoiding the recalculation. I guess ultimately I am pushing you toward a dynamic programming solution: if you know an answer, don't solve it again.
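As a rough sketch of that idea (class and method names invented): build each closed day's report once from a scheduled job, so interactive requests never repeat the aggregation.

    import java.time.LocalDate;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class DailyReportStore {
        private final Map<LocalDate, Report> prebuilt = new ConcurrentHashMap<>();

        // Run once per day, e.g. from a scheduled job just after midnight.
        public void buildFor(LocalDate day) {
            prebuilt.put(day, aggregate(day));
        }

        // Interactive path: past days never change, so a stored answer
        // stays valid; compute lazily only if the job hasn't run yet.
        public Report get(LocalDate day) {
            return prebuilt.computeIfAbsent(day, this::aggregate);
        }

        private Report aggregate(LocalDate day) {
            return new Report(day); // placeholder for the expensive detail-record rollup
        }

        static final class Report {
            final LocalDate day;
            Report(LocalDate day) { this.day = day; }
        }
    }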
As Webjedi says, metrics are your friend.
Also look at your stack and see where there are opportunities for caching - then employ mercilessly wherever possible!
As I said in another question:
Use a profiler. Yes they cost money, and using them can occasionally be a bit awkward, but they do provide you with a great deal more real evidence rather than guesswork.
Human beings are universally bad at guessing where performance bottlenecks are. It just seems to be something our brains aren't built to do very well. It may seem obvious, and you may have great ideas about what the problem is, but the real world often turns out to be doing something different. And optimising the wrong part of the code means, at best, lots of work for minimal benefit. More often it makes things slower, and sometimes it breaks things entirely. So before you make any changes for the sake of optimisation, you should always have real evidence from a profiler or other accurate tool.
Not all profilers cost (extra) money. For .NET, I'm successfully using an old build of NProf (currently abandoned, but it still works for me) to profile my ASP.NET applications. For SQL Server, the query profiler is part of the package. There's also the CLR Profiler from MS, but I've never been able to get it to work successfully.
That being said, profilers are definitely the way to go. That way you can see where your program is spending most of its time, and not focus on things that you think are slow. Plus it means you don't have to write anything in your code to actually record the metrics.
As I hinted to at the beginning, there are different types of profilers. The three I find most useful are application profilers, which let you see which functions you actually spend most of your time in. The second is SQL profilers that let you see how long your queries take to run. The third is memory profilers, which help to show you what type of objects your memory is being used up by. All three of these are really useful, and although you won't use them every day, the times you do use them will save you a lot of headache.
