JMeter versus Clif for TCP server load testing - jmeter

I have a TCP-based application and would like to do load testing for it. I have narrowed the options down to the JMeter TCP Sampler and Clif; for now I am leaning towards JMeter because it is easier to use, while Clif seems more complicated and less widely used.
Any thoughts?
Has anyone used some other tool?
Does anyone have an opinion/experience with getting Clif to work?
We would mostly be looking for open source options so any help in that direction would be extremely useful.

At work I usually use LoadRunner and JMeter. If LoadRunner is able to create the test, then it's the fast way. JMeter is much more flexible, because you can write your own plugins or simply implement the JavaSamplerClient interface. I don't know all of JMeter's possibilities, but with Java you can perform all sorts of tasks. LoadRunner is heavyweight, rarely updated, and terribly priced.
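For reference, here is a rough sketch of what such a custom sampler could look like for a plain TCP service; the class name, default host/port and payload below are made up for illustration, not taken from any real project:

    import java.io.OutputStream;
    import java.net.Socket;
    import java.util.Arrays;

    import org.apache.jmeter.config.Arguments;
    import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
    import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
    import org.apache.jmeter.samplers.SampleResult;

    // Hypothetical custom sampler: opens a TCP connection, sends a payload,
    // reads one response and reports the timing back to JMeter.
    public class TcpEchoSampler extends AbstractJavaSamplerClient {

        @Override
        public Arguments getDefaultParameters() {
            Arguments args = new Arguments();
            args.addArgument("host", "localhost"); // illustrative defaults
            args.addArgument("port", "9000");
            args.addArgument("payload", "PING");
            return args;
        }

        @Override
        public SampleResult runTest(JavaSamplerContext context) {
            SampleResult result = new SampleResult();
            result.sampleStart();
            try (Socket socket = new Socket(context.getParameter("host"),
                                            context.getIntParameter("port"))) {
                OutputStream out = socket.getOutputStream();
                out.write(context.getParameter("payload").getBytes());
                out.flush();
                byte[] buffer = new byte[1024];
                int read = socket.getInputStream().read(buffer);
                result.sampleEnd();
                result.setResponseData(Arrays.copyOf(buffer, Math.max(read, 0)));
                result.setSuccessful(true);
                result.setResponseCodeOK();
            } catch (Exception e) {
                result.sampleEnd();
                result.setSuccessful(false);
                result.setResponseMessage(e.toString());
            }
            return result;
        }
    }

Packaged as a jar and dropped into JMeter's lib/ext directory, a class like this should show up in the Java Request sampler's class name dropdown.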

All of the commercial tools (LoadRunner, SilkPerformer, Rational Performance Tester) support the Windows sockets interface, up to and including the ability to record the actual conversation. Winsock support has been part of these large commercial tools since the mid-to-late 1990s. All of the above tools offer engagement-based licensing as well, for periods as short as a month. I would suggest contacting the sales organizations at HP, Micro Focus and Borland to see what they can offer.
Compared to hand-constructing sockets code and then having to engineer the reporting side yourself, renting one of the big commercial tools may save you time and labor hours, since they already ship a dedicated sockets interface combined with the system monitoring and reporting that you need.

Related

LoadRunner VS CA ApplicationTest(LISA) on Performance testing

What are your thoughts on LoadRunner and CA AppTest (LISA) for performance testing? Why would someone use LoadRunner instead of CA AppTest, or the other way around? Help me, as a customer, decide which one is better.
Write down your requirements. Do not use requirements handed to you by either vendor. Make sure you include requirements on the environment, the types of applications to test, monitoring of the types of hosts and services in your environment, and reporting and analysis.
Then, with your requirements in hand, construct a set of tasks to measure against the requirements.
Next, go to market and hire experts in both tools for one week. Here is where you will want to pay for the best, so you can match each tool against its best use with an optimal user.
Schedule the use of a demo license of the tools for that one week.
None of us know your organization's infrastructure requirements, the skills of your team, your reporting and analysis needs, even whether you have the skills in house to use either tool effectively. You need to come up with specific, measurable, attainable, realistic and time-based (SMART) requirements for your evaluation of both tools.
And, by the way, asking for "which is better..." or "a review of..." tends to not go well in public forums where everyone else's requirements are distinct from your own.
Use Bob's homegrown Get & Put engine. It beats all of the tools in the universe and Bob is super cool
See, it could get really bad......

Gatling installation and use

I am new to load testing.
So please help me learn Gatling and Apache JMeter for stress testing.
Please help with installing both on Windows and Linux.
How do I implement them in my application?
Which one is better for stress testing?
You are asking very generic questions in terms of stress/load testing. I think it would be best if you take a look at their documentation and then formulate a more specific question.
Installation documentation is best served from the creators of the software.
Implementing these load/stress testing tools into your application isn't really a thing. If you are looking for unit testing (tests that validate that your functions/classes/etc. work), then look at your language's go-to libraries - e.g. Java has JUnit/JBoss, Node.js has Karma/Protractor, Python has TestCase/nose, etc. These tools (JMeter/Gatling) are used for stressing your application from outside your build process, so they should be treated as end users (meaning you run the stress testing from remote machines if it is a web service).
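To make the contrast concrete, a unit test lives inside the build and checks the correctness of a small piece of code; here is a minimal JUnit 4 sketch (the method under test is invented and inlined just to keep the example self-contained):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Runs as part of the build and checks that one unit of code is correct;
    // it says nothing about how the application behaves under load.
    public class GrossPriceTest {

        // The code under test would normally live in your application;
        // it is inlined here only to keep the sketch self-contained.
        static double gross(double net, double taxRate) {
            return net * (1 + taxRate);
        }

        @Test
        public void addsTaxToNetPrice() {
            assertEquals(120.0, gross(100.0, 0.20), 0.001);
        }
    }

JMeter or Gatling, by contrast, drive the deployed application from the outside and measure how it holds up under load.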
Either can be the best for the right scenario. I think JMeter clusters more easily (it's built in, whereas Gatling is more manual), but Gatling is more programmatic and can be manipulated more.
These are opinions and shouldn't be taken as fact or as the best answer, so your mileage may vary.
I strongly doubt that you need them both. If you want a piece of advice regarding which one to choose, take a look at the Open Source Load Testing Tools: Which One Should You Use? guide.
Once you have a clear vision of which tool better suits your needs, you can start ramping up on the selected tool and ask questions in its community communication channels.

Performance Testing Tool That Can Produce a Graph

Does anybody know a good testing tool that can produce a graph of CPU and RAM usage?
For example, I will run an application, and while the application is running the testing tool will record CPU and RAM usage and produce a graph as output.
Basically, what I'm trying to test is how much load an application puts on RAM and CPU.
Thanks in advance.
In case this is Windows, the easiest way is probably Performance Monitor (perfmon.exe).
You can configure the counters you are interested in (such as Processor Time, Committed Bytes, etc.) and create a Data Collector Set that measures these counters at the desired interval. There are even templates for a basic System Performance Report, or you can add counters for the particular process you are interested in.
You can schedule when the sampling should run, and you will be able to see the results using PerfMon or export them to a file for further processing.
Video tutorial for the basics: http://www.youtube.com/watch?v=591kfPROYbs
A good example showing how to monitor SQL Server:
http://www.brentozar.com/archive/2006/12/dba-101-using-perfmon-for-sql-performance-tuning/
LoadRunner is the best I can think of, but it's very expensive too! Depending on what you are trying to do, there might be cheaper alternatives.
Any tool which can hook into the standard Windows or 'NIX system utilities can do this. This has been a de facto feature set of just about every commercial tool for the past 15 years (HP, IBM, Micro Focus, etc.). Some of the web-only commercial tools (but not all) and the hosted services offer this as well. For the hosted services you will generally need to punch a hole through your firewall for them to get access to the hosts for monitoring purposes.
On the open source front this is a totally mixed bag. Some have it, some don't. Some support one platform but not others (e.g. they support Windows, but not 'NIX, or vice versa).
What tools are you using? It is unfortunately common for people to have performance tools in use and not be aware of their existing toolset's monitoring capabilities.
All of the major commercial performance testing tools have this capability, as well as a fair number of the open source ones. The ability to integrate monitor data with response time data is key to the identification of bottlenecks in the system.
If you have a commercial tool and your staff is telling you that it cannot be done then what they are really telling you is that they don't know how to do this with the tool that you have.
It can be done using JMeter: once you install the agent on the target machine, you just need to add the PerfMon monitor to your test plan.
It will produce two result files, the perfmon file and the requests log.
You could also build a plot that compares resource consumption to the load and throughput. The throughput stops increasing when some resource's capacity is exceeded. As you can see in the image, CPU time increases as the load increases.
JMeter perfmon plugin: http://jmeter-plugins.org/wiki/PerfMon/
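If you prefer to crunch those result files yourself instead of using the plugin's listeners, a small program along the lines of the sketch below can summarise the perfmon file. It assumes the file is a CSV whose rows carry a timestamp, a numeric value and a metric label in that order (which is how the PerfMon listener commonly writes its samples); check your file's header and adjust the column indexes if yours differs.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.DoubleSummaryStatistics;
    import java.util.Map;
    import java.util.TreeMap;

    // Usage: java PerfMonSummary perfmon.csv
    // Prints average, max and sample count per metric label.
    public class PerfMonSummary {
        public static void main(String[] args) throws IOException {
            Map<String, DoubleSummaryStatistics> byMetric = new TreeMap<>();
            for (String line : Files.readAllLines(Paths.get(args[0]))) {
                String[] cols = line.split(",");
                if (cols.length < 3 || cols[0].startsWith("timeStamp")) {
                    continue; // skip header or malformed rows
                }
                try {
                    double value = Double.parseDouble(cols[1]);
                    byMetric.computeIfAbsent(cols[2], k -> new DoubleSummaryStatistics())
                            .accept(value);
                } catch (NumberFormatException ignored) {
                    // non-numeric row, e.g. a stray header line
                }
            }
            byMetric.forEach((metric, stats) ->
                System.out.printf("%s: avg=%.1f max=%.1f samples=%d%n",
                    metric, stats.getAverage(), stats.getMax(), stats.getCount()));
        }
    }

The per-metric averages and maxima can then be lined up against the throughput from the request log to spot the point where a resource saturates.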
I know this is an old thread, but I was looking for the same thing today, and since I did not find something that was simple to use and produced graphs, I made this helper program for apachebench:
https://github.com/juanluisbaptiste/apachebench-graphs
It will run apachebench and plot the results and percentile files using gnuplot.
I hope it helps someone.

Web site performance tools?

YSlow, dynaTrace, HTTPWatch, Fiddler, ...
All these tools are really good for measuring the performance of a website and getting statistics about it. YSlow is really cool and offers good guidelines as well.
However, I am very confused by the number of options around (though it's good that people have already invested time and made nice guidelines to follow, and I thank them for the great work done).
Following are my questions:
How accurate are the numbers these tools show?
Which one (tool) is best to use (one for all needs)? Or am I missing the name of some tool that is better than all of the above?
I'm surprised that you haven't mentioned JMeter. It is free, quite easy to use, has lots of features, and is great for load testing your website.
As for question one, I'm not sure I can answer that. I'm sure that in general, the numbers these tools show are pretty accurate, but there are some catches. Take JMeter for example:
JMeter itself uses a lot of memory and also some substantial CPU time if you do heavy load testing. That means that if you run the tool on the same machine as your website, some resources are lost, i.e. not available to the website.
Testing on the same machine does not, out of the box, take into account that the data has to be sent over the internet connection, so response times are lower than in reality.
All in all, I think you should never blindly trust the results these tools give you, but they can give you good insight into possible bottlenecks or problems.
YSlow is good for measuring performance for a single user. Try to keep it at grade A and it will be OK. But it doesn't actually measure performance with multiple concurrent users. For that you can use, among others, Apache JMeter. It's a good webserver/web application stress test tool. So I would say, just use both: YSlow (for client performance) and JMeter (for server performance).
I haven't used dynaTrace before, so I'll skip that part. The mentioned HTTP request trackers don't really measure performance; they are more like debuggers.
As far as I am concerned, I find YSlow to be really good (I have tried Fiddler too); it does help me when I need it, and I believe it provides correct figures, so I will keep using it unless something unanimously accepted (which is difficult, because everyone has different choices and requirements) or even better comes along. Oh, and they are right, I forgot JMeter, which definitely deserves a mention.
There is also the Speed Tracer extension for Chrome. It should be usable with any JavaScript-heavy website.
http://code.google.com/webtoolkit/speedtracer/
http://gtmetrix.com is a good tool and it is free. It analyzes your page's speed performance using Page Speed and YSlow.

How do you do performance testing in Ruby webapps?

I've been looking at ways people test their apps in order to decide where to add caching or apply some extra engineering effort, and so far httperf and a simple sesslog have been quite helpful.
What tools and tricks do you apply in your projects?
I use httperf for a high level view of performance.
Rails has a performance script built in that uses the ruby-prof gem to analyse calls deep within the Rails stack. There is an awesome Railscast on request profiling using this technique.
NewRelic has some seriously cool analysis tools that give near real-time data.
They just made a "Lite" version available for free.
I use JMeter for session-based testing - it allows very fine-grained control over the pages you want to hit, the parameters to inject, the loops to go through, etc. It's great for simulating how many real users your site can handle, rather than just performance testing a set of static URLs. You can distribute tests over multiple machines quite easily by loading up jmeter-server on computers with publicly accessible IPs. I have found some limitations in the number of users/threads any one machine can throw at a server at once (it depends on the test), but JMeter has helped my team improve our app's user capacity by 6x.
It doesn't have any fancy graphing -- I actually use my own in-house graphing with gruff that can do performance analysis on request time for certain pages and actions.
I'm evaluating a new open source web page instrumentation and measurement suite called Jiffy. It's not specifically for Ruby; it works for all kinds of webapps.
There's also a Jiffy Firebug Extension for rendering the metrics inside the browser.
I also suggest you look at Browser Mob for load testing.
A colleague of mine has also posted some interesting thoughts on this.
