What are the ways to save CPU cycles in a Ruby web application? As my project expands, the average time for rendering, running methods, and db queries keeps increasing, and I consider an average of 60ms high.
Are there any design guidelines for keeping the code tight and compact? For example: being completely OO?
Using more variables and memory to avoid repeated iteration? Consolidating db queries?
Try to find your bottlenecks before you actually start tuning everything.
Are you sure the problem is the CPU? Could it be your storage system? Is it your methods? Is it your DB? Are you using indices in your DB? Are you using more indices than you need?
There is no all-in-one solution to your question.
You should run your application under a profiler, which will show you which parts of your software eat the most resources. Optimize that part, then repeat the operation until you're happy with the results.
There are plenty of tools for measuring performance: ruby-prof, gperftools, and services such as New Relic. Check out these keywords on Google.
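The profile-optimize-repeat loop is the same in any language. As a rough sketch only (shown with Python's built-in cProfile purely for concreteness; ruby-prof gives you the equivalent view for a Ruby app), the idea is: wrap the suspect path, look at the top consumers, fix the worst one, and measure again.

    import cProfile
    import pstats

    def hot_path():
        # Stand-in for the request handler or method you suspect is slow.
        total = 0
        for i in range(100_000):
            total += i * i
        return total

    profiler = cProfile.Profile()
    profiler.enable()
    hot_path()
    profiler.disable()

    # Show the ten functions with the highest cumulative time; optimize the
    # top entry, re-run, and compare.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)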
Have you looked into caching? It's a very important tool for speeding up your web application and making it scale to high traffic, and in many cases it is orders of magnitude more efficient than regular code optimizations.
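As a language-neutral sketch of the idea (the cache-aside pattern; the TTL, keys, and in-process dict below are made up for illustration — a real web app would normally use memcached or the framework's fragment/page cache):

    import time

    _cache = {}        # in-process cache: key -> (expires_at, value)
    CACHE_TTL = 60     # seconds; arbitrary for illustration

    def expensive_query(user_id):
        # Stand-in for a slow DB query or an expensive rendered fragment.
        time.sleep(0.05)
        return {"user_id": user_id, "name": f"user-{user_id}"}

    def cached_query(user_id):
        key = ("user", user_id)
        hit = _cache.get(key)
        if hit and hit[0] > time.time():
            return hit[1]                  # cache hit: skip the expensive work
        value = expensive_query(user_id)
        _cache[key] = (time.time() + CACHE_TTL, value)
        return value

    cached_query(1)   # ~50 ms: cache miss, does the real work
    cached_query(1)   # ~0 ms: served from the cache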
Besides the (correct) answers of profiling your code to find the parts that are actually slow, you could also look into alternate Ruby implementations that could be faster for your use case.
See:
http://jruby.org/
http://rubini.us/
for two alternatives that are reported to be faster than the standard 1.9 interpreter.
Related
I know the title of my question is rather vague, so I'll try to clarify as much as I can. Please feel free to moderate this question to make it more useful for the community.
Given a standard LAMP stack with more or less default settings (a bit of tuning is allowed, client-side and server-side caching turned on), running on modern hardware (16Gb RAM, 8-core CPU, unlimited disk space, etc), deploying a reasonably complicated CMS service (a Drupal or Wordpress project, for argument's sake) - what amounts of traffic, SQL queries, and user requests can I reasonably expect to accommodate before I have to start thinking about performance?
NOTE: I know that specifics will greatly depend on the details of the project, i.e. optimizing MySQL queries, indexing stuff, minimizing filesystem hits - assuming web developers did a professional job - I'm really looking for a very rough figure in terms of visits per day, traffic during peak visiting times, how many records before (transactional) MySQL fumbles, so on.
I know the only way to really answer my question is to run load testing on a real project, and I'm concerned that my question may be treated as partly off-topic.
I would like to get a set of figures from people with first-hand experience, e.g. "we ran such and such set-up and it handled at least this much load [problems started surfacing after such and such]". I'm also greatly interested in any condensed (I'm short on time atm) reading I can do to get a better understanding of the matter.
P.S. I'm meeting a client tomorrow to talk about his project, and I want to be prepared to reason about performance if his project turns out to be akin to Foursquare.
Very tricky to answer without specifics, as you have noted. If I were tasked with what you have to do, I would take each component in turn (network interface, CPU/memory, physical IO load, SMP locking, etc.), get the maximum capacity available, and divide by a rough estimate of use per request.
For example, network IO. You might have one 1Gb card, which might achieve maybe 100Mbytes/sec (I tend to use 80% of the theoretical max). How big will a typical 'hit' be? Perhaps 3kbytes on average, for HTML, images, etc. That means you can achieve about 33k requests per second before you bottleneck at the physical level. These numbers are absolute maximums; depending on tools and skills you might not get anywhere near them, but nobody can exceed them.
Repeat the above for every component, perhaps varying your numbers a little, and you will build a quick picture of what is likely to be a concern. Then consider how you can quickly get more capacity in each component: can you just chuck $$ at it and gain more performance (e.g. use SSD drives instead of HDs)? Or will you hit a limit that cannot be moved without rearchitecting? Also take into account what resources you have available: do you have lots of skilled programmer time, DBAs, or wads of cash? If you have lots of a resource, you can reduce those constraints more easily and quickly as you move along the experience curve.
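A back-of-envelope version of that arithmetic, as a sketch (every number below is an invented placeholder; substitute your own measured or datasheet figures):

    # Rough ceiling per component: usable capacity divided by cost per request.
    # All figures are illustrative guesses, not measurements.
    components = {
        # name: (usable capacity per second, cost of one request)
        "network (1Gb NIC @ 80%)": (100e6, 3e3),       # bytes/s, bytes per hit
        "disk IO":                 (5000, 2),          # IOPS, IOs per hit
        "database":                (2000, 4),          # queries/s, queries per hit
        "CPU":                     (8 * 0.8, 0.002),   # core-seconds/s, seconds per hit
    }

    for name, (capacity, cost_per_request) in components.items():
        ceiling = capacity / cost_per_request
        print(f"{name:26s} ~{ceiling:,.0f} requests/s max")

    # The smallest ceiling is your first bottleneck; spend money or effort
    # there before worrying about the other components.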
Do not forget external components too, firewalls may have limits that are lower than expected for sustained traffic.
Sorry I cannot give you real numbers; our workloads use custom servers, heavy in-memory caching and other tricks, and not all the products you list. However, I would concentrate most on IO/SQL queries and possibly network IO, as these tend to be harder limits than CPU/memory, although I'm sure others will have a different opinion.
Obviously, the question is such that it does not have a "proper" answer, but I'd like to close it and give some feedback. The client meeting has taken place, performance was indeed a biggie, and their hosting platform turned out to be on the Amazon cloud :)
From research I've done independently:
Memcache is a must;
MySQL (or whatever persistent storage instance you're running) is usually the first to go. Solutions include running multiple virtual instances and replicating data between them, distributing the load;
http://highscalability.com/ is a good read :)
All too often I read statements about some new framework and their "benchmarks." My question is a general one but to the specific points of:
What approach should a developer take to effectively instrument code to measure performance?
When reading about benchmarks and performance testing, what are some red-flags to watch out for that might not represent real results?
There are two methods of measuring performance: using code instrumentation and using sampling.
The commercial profilers (Hi-Prof, Rational Quantify, AQTime) I used in the past used code instrumentation (some of them could also use sampling) and, in my experience, this gives the best, most detailed results. Rational Quantify in particular lets you zoom in on results, focus on sub-trees, remove complete call trees to simulate an improvement, and so on.
The downside of these instrumenting profilers is that they:
tend to be slow (your code runs about 10 times slower)
take quite some time to instrument your application
don't always correctly handle exceptions in the application (in C++)
can be hard to set up if you have to disable the instrumentation of DLLs (we had to disable instrumentation for Oracle DLLs)
The instrumentation also sometimes skews the times reported for low-level functions like memory allocations, critical sections, ...
The free profilers (Very Sleepy, Luke Stackwalker) that I use rely on sampling, which means it is much easier to do a quick performance test and see where the problem lies. These free profilers don't have the full functionality of the commercial ones (although I submitted the "focus on subtree" functionality for Very Sleepy myself), but since they are fast, they can be very useful.
At this time, my personal favorite is Very Sleepy, with Luke StackWalker coming second.
In both cases (instrumenting and sampling), my experience is that:
It is very difficult to compare the results of profilers over different releases of your application. If you have a performance problem in your release 2.0, profile your release 2.0 and try to improve it, rather than looking for the exact reason why 2.0 is slower than 1.0.
You must never compare the profiling results with the timing (real time, cpu time) results of an application that is run outside the profiler. If your application consumes 5 seconds CPU time outside the profiler, and when run in the profiler the profiler reports that it consumes 10 seconds, there's nothing wrong. Don't think that your application actually takes 10 seconds.
That's why you must consistently check results in the same environment. Consistently compare results of your application when run outside the profiler, or when run inside the profiler. Don't mix the results.
Also use a consistent environment and system. If you get a faster PC, your application could still run slower, e.g. because the screen is larger and more needs to be updated on screen. If moving to a new PC, retest the last (one or two) releases of your application on the new PC so you get an idea on how times scale to the new PC.
This also means: use fixed data sets and check your improvements on these datasets. It could be that an improvement in your application improves the performance on dataset X but makes it slower on dataset Y. In some cases this may be acceptable.
Discuss with the testing team what results you want to obtain beforehand (see Oded's answer on my own question What's the best way to 'indicate/numerate' performance of an application?).
Realize that a faster application can still use more CPU time than a slower application, if the faster one uses multi-threading and the slower one doesn't. Discuss (as said before) with the testing team what needs to be measured and what doesn't (in the multi-threading case: real time instead of CPU time).
Realize that many small improvements may lead to one big improvement. If you find 10 parts in your application that each take 3% of the time and you can reduce it to 1%, your application will be 20% faster.
It depends what you're trying to do.
1) If you want to maintain general timing information, so you can be alert to regressions, various instrumenting profilers are the way to go. Make sure they measure all kinds of time, not just CPU time.
2) If you want to find ways to make the software faster, that is a distinctly different problem.
You should put the emphasis on the find, not on the measure.
For this, you need something that samples the call stack, not just the program counter (over multiple threads, if necessary). That rules out profilers like gprof.
Importantly, it should sample on wall-clock time, not CPU time, because you are every bit as likely to lose time due to I/O as due to crunching. This rules out some profilers.
It should be able to take samples only when you care, such as not when waiting for user input. This also rules out some profilers.
Finally, and very important, is the summary you get.
It is essential to get per-line percent of time.
The percent of time used by a line is the percent of stack samples containing the line.
Don't settle for function-only timings, even with a call graph.
This rules out still more profilers.
(Forget about "self time", and forget about invocation counts. Those are seldom useful and often misleading.)
Accuracy of finding the problems is what you're after, not accuracy of measuring them. That is a very important point. (You don't need a large number of samples, though it does no harm. The harm is in your head, making you think about measuring rather than about what the program is doing.)
One good tool for this is RotateRight's Zoom profiler. Personally I rely on manual sampling.
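As a toy illustration of the per-line percent rule above: treat each stack sample as the set of source lines on the stack at that instant, and report, for each line, the fraction of samples it appears in (the sample data here is fabricated):

    from collections import Counter

    # Each sample is the set of (file, line) locations on the stack at one instant.
    samples = [
        {("app.c", 10), ("db.c", 40), ("io.c", 7)},
        {("app.c", 10), ("db.c", 40)},
        {("app.c", 10), ("render.c", 99)},
        {("app.c", 10), ("db.c", 40), ("io.c", 7)},
    ]

    counts = Counter(loc for sample in samples for loc in sample)

    for (filename, line), hits in counts.most_common():
        percent = 100.0 * hits / len(samples)
        print(f"{filename}:{line}  appears in {percent:.0f}% of samples")

    # db.c:40 is on the stack in 75% of samples here, so roughly 75% of
    # wall-clock time is spent with that line responsible -- look there first.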
Any idea how to do performance and scalability testing if no clear performance requirements have been defined?
More information about my application.
The application has 3 components. One component can only run on Linux; the other two are Java programs, so they can run on Linux/Windows/Mac. The 3 components can be deployed to one box, or each component can be deployed to its own box; deployment is very flexible. The Linux-only component captures raw TCP/IP packets off the network, then one Java component gets that raw data from it, assembles it into the data end users will need, and writes it to disk as data files. The last Java component uploads data from the data files to my database in batches.
In the absence of 'must be able to perform X iterations within Y seconds...' type requirements, how about these kinds of things:
Does it take twice as long for twice the size of dataset? (yes = good)
Does it take 10x as long for twice the size of dataset? (yes = bad)
Is it CPU bound?
Is it RAM bound (eg lots of swapping to virtual memory)?
Is it IO / Disk bound?
Is there a certain data-set size at which performance suddenly falls off a cliff?
Surprisingly this is how most perf and scalability tests start.
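A minimal sketch of checking the first two questions on that list: time the job at increasing dataset sizes and see whether growth looks linear or worse (process_dataset is a placeholder for whatever your pipeline actually does):

    import time

    def process_dataset(n):
        # Placeholder workload; substitute your real processing step.
        return sorted(range(n, 0, -1))

    previous = None
    for size in (100_000, 200_000, 400_000, 800_000):
        start = time.perf_counter()
        process_dataset(size)
        elapsed = time.perf_counter() - start
        ratio = f"(x{elapsed / previous:.1f} vs previous)" if previous else ""
        print(f"{size:>9,d} items: {elapsed:.3f}s {ratio}")
        previous = elapsed

    # ~2x time per doubling suggests linear scaling; ~4x or worse per step
    # is the cliff you want to find before your users do.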
You can clearly do the testing without criteria; you just define the tests and measure the results. I think your question is more along the lines of "how can I establish test passing criteria without performance requirements?". Actually this is not at all uncommon. Many new projects have no clear criteria established. Informally it would be something like "if it cannot do X per second, we failed". But once you pass X per second (and you'd better!), is X the "pass" criterion? Usually not. What happens is that you establish a new baseline and your performance tests guard against regression: you compare your current numbers with the best you got and decide whether the new build is acceptable as a build-validation pass (usually orgs will settle here on something like 70-80% as acceptable, open perf bugs, and make sure that by ship time you get back to 90-95% or 100%+). So basically the performance tests themselves become their own requirement.
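In code, that regression guard is often nothing more than comparing the current run against the best recorded baseline with an agreed tolerance. A sketch (the scenarios, figures, and 80% bar are only examples of the kind of threshold teams settle on):

    # Best throughput (requests/s) recorded so far, per scenario. Figures invented.
    baseline = {"login": 450.0, "search": 300.0, "checkout": 120.0}
    current  = {"login": 430.0, "search": 210.0, "checkout": 119.0}

    ACCEPTABLE_FRACTION = 0.80   # e.g. build passes at >= 80% of the best run

    for scenario, best in baseline.items():
        now = current[scenario]
        status = "PASS" if now >= best * ACCEPTABLE_FRACTION else "FAIL (regression)"
        print(f"{scenario:10s} {now:7.1f} req/s vs best {best:7.1f} -> {status}")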
Scalability is a bit more complicated, because there is no limit. The scope of your test should be to find out where the product breaks. Throw enough load at anything and eventually it will break. You need to know where that limit is and, very importantly, find out how your product breaks. Does it give a nice error message and revert, or does it spill its guts on the floor?
Define your own. Take the initiative and describe the performance goals yourself.
To answer any better, we'd have to know more about your project.
If there has been 'no performance requirement defined', then why are you even testing this?
If there is a performance requirement defined, but it is 'vague', can you indicate in what way it is vague, so that we can better help you?
Short of that, start from the 'vague' requirement, and pick a reasonable target that at least in your opinion meets or exceeds the vague requirement, then go back to the customer and get them to confirm that your clarification meets their requirements and ideally get formal sign-off on that.
Some definitions / assumptions:
Performance = how quickly the application responds to user input, e.g. web page load times
Scalability = how many peak concurrent users the application can handle.
First, performance. Performance testing can be quite simple, such as measuring and recording page load times in a development environment and using techniques like application profiling to identify and fix bottlenecks.
Load. To execute a load test there are four key factors; you will need to get all of these in place to be successful.
1. Good usage models of how users will use your site and/or application. This can be easy if the application is already in use, but it can be extremely difficult if you are launching something new, e.g. a Facebook application.
If you can't get targets as requirements, do some research and make some educated assumptions, document and circulate them for feedback.
2. Tools. You need performance testing scripts and tools that can execute the scenarios defined in step 1, with the number of expected users from step 1. (This can be quite expensive.)
3. Environment. You will need a production-like environment that is isolated, so your tests can produce reproducible results. (This can also be very expensive.)
4. Technical experts. Once the application and environment start breaking, you will need to be able to identify the faults and re-configure the environment and/or re-code the application.
Generally most projects have a "performance testing" box that they need to tick because of some past failure, yet they never plan or budget to do it properly. I normally recommend either budgeting for and doing scalability testing properly, or saving your money and not doing it at all. Trying to half-do it on the cheap is a waste of time.
However any good developer should be able to do performance testing on their local machine and get some good benefits.
rely on tools (fxcop comes to mind)
rely on common sense
If you want to test performance and scalability with no requirements, then you should create your own requirements/specs that can be met within the timeline/deadline given to you. After defining those requirements, run them by your supervisor and see if he/she agrees.
To test scalability (assuming you're testing a program/website):
Create lots of users and data and check whether your system and database can handle it. The MyISAM table type in MySQL can get the job done.
To test performance:
Optimize the code, check it over a slow internet connection, etc.
Short answer: Don't do it!
In order to get a (better) definition write a performance test concept you can discuss with the experts that should define the requirements.
Make assumptions for everything you don't know and document these assumptions explicitly. Assumptions comprise everything that may be relevant to your system's behaviour under load. Correct assumptions will be approved by the experts, incorrect ones will provoke reactions.
For all of those who have read Tom DeMarco's latest book (Adrenaline Junkies ...): this is the strawman pattern. Most people who are not willing to write a specification from scratch will not hesitate to give feedback on your document. Because you will have to guess several times when writing your version, be prepared to be laughed at when it is reviewed; but at least you will have better information.
The way I usually approach problems like this is just to get a real or simulated realistic workload and make the program go as fast as possible, within reason. Then if it can't handle the load I need to think about faster hardware, doing parts of the job in parallel, etc.
The performance tuning is in two parts.
Part 1 is the synchronous part, where I tune each "thread", under realistic workload, until it really has little room for improvement.
Part 2 is the asynchronous part, and it is hard work, but it needs to be done. For each "thread" I extract a time-stamped log file of when each message is sent, when each message is received, and when each received message is acted upon. I merge these logs into a common timeline of events. Then I go through all of it, or randomly selected parts, and trace the flow of messages between processes. I want to identify, for each message sequence, what its purpose is (i.e. is it truly necessary), and whether there are delays between the time of receipt and the time of processing, and if so, why.
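A sketch of that merge step, assuming each component logs tuples of (timestamp, event, message id) — the format and values here are fabricated: merge everything into one timeline, then report the gap between a message being received and being acted upon.

    # Fabricated per-process logs: (timestamp_seconds, event, message_id).
    log_a = [(0.001, "sent", "m1"), (0.050, "sent", "m2")]
    log_b = [(0.004, "received", "m1"), (0.030, "processed", "m1"),
             (0.055, "received", "m2"), (0.210, "processed", "m2")]

    # Merge into a single timeline ordered by timestamp.
    timeline = sorted(log_a + log_b)

    received = {}
    for ts, event, msg_id in timeline:
        if event == "received":
            received[msg_id] = ts
        elif event == "processed" and msg_id in received:
            delay = ts - received[msg_id]
            print(f"{msg_id}: {delay * 1000:.0f} ms between receipt and processing")

    # Large gaps (m2 above) are the unexplained waits worth chasing: is the
    # message really necessary, and why does it sit in a queue that long?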
I've found in this way I can "cut out the fat", and asynchronous processes can run very quickly.
Then if they don't meet requirements, whatever they are, it's not like the software can do any better. It will either take hardware or a fundamental redesign.
Although no clear performance and scalability goals are defined, we can use the high level description of the three components you mention to drive general performance/scalability goals.
Component 1: This looks like a network-I/O-bound component, so you can use any available network load simulator to generate various workloads and saturate the link. Scalability can be measured by varying the workload (10MB, 100MB, 1000MB over the link) and measuring the response time, or more precisely the delay associated with receiving the raw data. You can also measure the working set of the capture box to get a realistic idea of your server requirements (how much extra memory is needed to receive X more packets, etc.).
Component 2: This component has two parts, an I/O-bound part (receiving data from Component 1) and a CPU-bound part (assembling the packets). You can look at the problem as a whole, but make sure to saturate your link when you want to measure the CPU-bound part. If it is a multi-threaded component, look for ways to improve it if you don't reach 100% CPU utilization. You can measure the time required to assemble X messages and from this calculate the average wait time to process a message; this can later be used to drive the general performance characteristics of your system and to provide an SLA for your users (for example, guaranteeing a response time within X milliseconds).
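A rough sketch of that measurement for Component 2: time the assembly of a batch of N synthetic messages and derive an average per-message figure to feed into an SLA (assemble_message and the message sizes are placeholders, not your real code):

    import time

    def assemble_message(raw_chunks):
        # Placeholder for the real packet-assembly logic.
        return b"".join(raw_chunks)

    N = 10_000
    raw = [[b"x" * 512] * 4 for _ in range(N)]   # synthetic 4-chunk messages

    start = time.perf_counter()
    for chunks in raw:
        assemble_message(chunks)
    elapsed = time.perf_counter() - start

    print(f"assembled {N} messages in {elapsed:.3f}s")
    print(f"average {elapsed / N * 1e6:.1f} microseconds per message")
    # Repeat at higher message rates and sizes; the point where the average
    # starts to climb is the basis for any response-time guarantee you offer.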
Component 3: Completely I/O bound, and it depends on both your hard disk bandwidth and the back-end database server you use. You can measure how much you saturate disk I/O to optimize throughput, how many I/O operations you require to read X MB of data, and then improve around these parameters.
Hope that helps.
Thanks
I am using Spring with Hibernate to create an Enterprise application.
Now, due to the abstractions the framework adds on top of the underlying J2EE architecture, there is obviously going to be a runtime performance hit on my app.
What I need to know is the set of factors I should consider when deciding the minimum specs (processor speed, RAM, etc.) for a single host server running RedHat Linux 3+ and devoted to this application only, such that it would produce an efficiency score of, say, 8 out of 10 given a simultaneous-access userbase growing by 100 per month.
No clustering is to be used.
No offense, but I'd bet that performance issues are more likely to be due to your application code than Spring.
If you look at the way they've written their source code, you'll see that they pay a great deal of attention to quality.
The only way to know is to profile your app, see where the time is being spent, analyze to determine root cause, correct it, rinse, repeat. That's science. Anything else is guessing.
I've used Spring in a production app that's run without a hitch for three years and counting. No memory leaks, no lost connections, no server bounces, no performance issues. It just runs like butter.
I seriously doubt that using Spring will significantly affect your performance.
What particular aspects of Spring are you expecting to cause performance issues?
There are so many variables here that the only answer is to "suck it and see", but, in a scientific manner.
You need to build a server and then benchmark it. Start off with some "commodity" setup, say a 4-core CPU and 2 gigs of RAM, then run a benchmark script to see if it meets your needs (which it most likely will!).
If it doesn't, you should be able to calculate the required server size from the numbers you get out of the benchmark -- or -- fix the performance problem so it runs on the hardware you've got.
The important thing is to identify what is limiting your performance. Is your server using all the cores or are your processes stuck on a single core? Is your JVM getting enough memory? Are you IO bound or database bound?
Once you know the limiting factors it's pretty easy to work out the solution -- either improve the efficiency of your programs or buy more of the right hardware.
Two things to watch out for with J2EE: most JVMs have default heap sizes from the last decade, so make sure your JVM has enough heap and stack (at least 1G each!); and it takes time for all the JIT compiling, object caching, module loading etc. to settle down -- exercise your system for at least an hour before you start benchmarking.
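A very rough load-generation sketch of the kind of "benchmark script" mentioned above (the URL, request count, and concurrency are placeholders; a real test should replay realistic traffic and only start measuring after the JVM has warmed up):

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    TARGET = "http://localhost:8080/"   # placeholder URL of the app under test
    REQUESTS = 1000                     # total requests to send
    CONCURRENCY = 16                    # simulated concurrent users

    def one_request(_):
        start = time.perf_counter()
        with urlopen(TARGET) as resp:
            resp.read()
        return time.perf_counter() - start

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = sorted(pool.map(one_request, range(REQUESTS)))
    elapsed = time.perf_counter() - start

    print(f"throughput:      {REQUESTS / elapsed:.1f} requests/s")
    print(f"median latency:  {latencies[len(latencies) // 2] * 1000:.1f} ms")
    print(f"95th percentile: {latencies[int(len(latencies) * 0.95)] * 1000:.1f} ms")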
As a toolkit, I don't see Spring itself affecting performance after initialization, but I think Hibernate will. How big this effect is depends on a lot of details, like the DB schema, how much the relational layout differs from the OO layer, and of course how DB access is organized and how often it happens. So I doubt there is a rule of thumb for this. Just try it out by developing significant prototypes using alternative application servers, or try your own small no-ORM, plain-JDBC version.
I've never heard that Spring creates any type of runtime performance hit. Since it uses mainly POJOs I'd be surprised if there was something wrong with it. Other than parsing a lot of XML on startup maybe, but that's solved by using annotations.
Just write your app first and then tune accordingly.
Spring is typically used to create long-lived objects shortly after the application starts. There is virtually no performance cost over the life of the process.
Which performance setback? In relation to what?
Did you measure the performance before using the framework?
If the Spring framework causes unacceptable performance issues, the obvious solution is not to use it.
I'm investigating the differences between using log4net and System.Diagnostics.Trace for logging, and I'm curious about the performance differences I've observed.
I created a test application to compare the performance of both logging methods in several scenarios, and I'm finding that log4net is significantly slower than the Trace class. For example, in a scenario where I log 1,000 messages with no string formatting, log4net's mean execution time over 1,000 trials is 9.00ms. Trace executes with a mean of 1.13ms. A lot of my test cases have a relatively large amount of variance in the log4net execution times; the periodic nature of outlier long executions seems to suggest GC interference. Poking around with CLR Profiler confirms there are a large amount of collections for a ton of log4net.Core.LoggingEvent objects that are generated (to be fair, it looks like Trace generates a ton of Char[] objects as well, but it doesn't display the large variance that log4net does.)
One thing I'm keeping in mind here is that even though log4net seems roughly 9 times slower than Trace, the difference is 8ms over 1,000 iterations; this isn't exactly a significant performance drain. Still, some of my expected use cases might be calling methods that log things hundreds of thousands of times, and these numbers are from my fast machine. On a slower machine more typical of our users' configurations the difference is 170ms to 11ms, which is a tiny bit more alarming.
Is this performance typical of log4net, or are there some gotchas that can significantly increase log4net's performance?
(NOTE: I am aware that string formatting can alter the execution time; I am trying to compare apples to apples and I have test cases with no formatting and test cases with formatting; log4net stays as proportionally slow whether string formatting is used or not.)
The story so far:
Robert Gould has the best answer to the question; I was mainly curious if it was typical to see log4net perform much slower than the Trace class.
Alex Shnayder's answer is interesting information but doesn't really fall under the scope of the question. Half of the intent for introducing this logging is to assist in debugging both logical and performance problems on live systems; our customers put our products in many exotic scenarios that are often difficult to reproduce without expensive and large-scale hardware configurations. My main concern is that a large timing difference between "not logging" and "logging" could affect the system in such a way that bugs don't happen. In the end, the scale of the performance decrease is large but the magnitude is small, so I'm hoping it won't be a problem.
Yes, log4xxx is slower than trace, since trace is normally a near-kernel tool, while log4xxx is a much more powerful tool. Personally I prefer log4xxx because of its flexibility, but if you want something that doesn't have as much impact, and you don't really need logs in production (say, in debug builds only), trace should be enough.
Note: I say log4xxx because the exact same thing applies to all languages with a log4 library, not just .NET.
You might be interested in the Common.Logging library. It's a thin abstraction wrapper over existing logging implementations and allows you to plug in any logging framework you like at runtime. Also, it is much faster than System.Diagnostics.Trace, as described in my blog post about performance.
hth,
Erich
From my experience log4net performance isn't an issue in most cases.
The real question is, why would you even need to log things hundreds of thousands of times in a production system?
As I see it, in production you should log only the bare minimum (info and maybe warning level), and only if you need to (e.g. when debugging an issue on site) should you activate logging at debug level.
If you want the best of both worlds, log4net will allow you to log to the aspnet tracer as well. I turn this option on when I want to get performance stats that are tied in to specific events in my logging.
I have just run a test comparing sequential writes to a simple file with using Log4Net for the same task. Log4Net is about 400 times slower compared to a StreamWriter, so I consider Log4Net unusable if you are writing huge logfiles. But I find it very useful for small amounts of log entries and debugging.
Maybe a solution in some cases is to isolate logging in a separate thread.