Report template - JMeter

What information needs to be present to make the performance of a website clear? That is, which listeners, and what kind of information, should be presented to a third party so that they will know the exact performance of the website from a final report - for example transactions per second, response times, crash point, errors, and number of users. If I want to make a report from a JMeter test plan, what should I put into it so that no questions come back from the client? Also, please let me know what the figure in the attached graph is about, and in what cases we are able to tell that an issue is arising due to a server system resource constraint.

Related

JMeter : Individual Request data in Aggregate Report is not adding up correctly to Transaction Controller data

I have executed a test and got the following report, and during the analysis I noticed that the sum of the individual request data does not match up to the transaction controller data (see the attached screenshot).
Please help me identify the cause of this issue.
Note: when running one or a few users I am not facing this issue; it only comes up at higher user counts.
Most probably there is an issue with the way you execute the test.
Given the issue is not reproducible with a few users and only happens under load, my expectation is that JMeter doesn't have enough headroom to operate in terms of CPU/RAM/etc., or it's not properly configured for high loads.
Make sure to follow JMeter Best Practices
Make sure to set up monitoring of resources like CPU, RAM, network, disk IO, swap, etc. If you don't have another/better monitoring toolchain, you can consider using the JMeter PerfMon Plugin (see the sketch after this list)
If after following JMeter Best Practices the error is still there or resource consumption is way too high - consider going for Distributed Testing
If even after switching to distributed testing the issue is still there, check jmeter.log for any suspicious entries
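As a rough illustration of what "enough headroom" means on the load generator side, here is a minimal, hypothetical Java probe built on the JDK's OperatingSystemMXBean; the PerfMon Plugin or a proper APM collects the same numbers with far less effort, and the 5-second interval is just a placeholder:

```java
import java.lang.management.ManagementFactory;

// com.sun.management.OperatingSystemMXBean extends the standard interface
// with system-wide CPU figures (JDK-specific, but widely available).
import com.sun.management.OperatingSystemMXBean;

public class HeadroomProbe {
    public static void main(String[] args) throws InterruptedException {
        OperatingSystemMXBean os = (OperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();
        Runtime rt = Runtime.getRuntime();
        while (true) {
            // Whole-machine CPU load in [0.0, 1.0]; may be negative
            // before the first sample is available.
            double cpu = os.getSystemCpuLoad() * 100;
            long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
            long maxMb  = rt.maxMemory() / (1024 * 1024);
            System.out.printf("cpu=%.1f%% heap=%d/%d MB%n", cpu, usedMb, maxMb);
            Thread.sleep(5_000);  // sample every 5 seconds during the test
        }
    }
}
```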

Hybris back-office using JMeter and this ZK plugin

I am trying to create a performance test script for the Hybris back office using JMeter and this ZK plugin (I assume the back office is built with the ZK AJAX framework). I am able to generate the desktop id (dtid) and component IDs. For some requests, I am getting the same response as the browser.
But for some requests I am getting a blank response ({"rs":[],"rid":126}). The script is sending the same parameters as the browser. In the failed requests, some coordinate-like parameters are being sent (data_1 = {"top":242,"left":0}). Is the test failing because of these coordinates?
Please help me with this issue, or suggest an alternative tool for testing the Hybris back office.
Performance testing a ZK application is generally not easy, and test cases tend to be hard to maintain. It's best to probe the initial page rendering performance without too many interactions (and DON'T forget to send the rmDesktop commands at the end of each test, or your test case will not reflect reality).
I don't have a better/easier alternative to JMeter (similar tools capturing the network requests/responses pose the same challenges).
Besides that, the mouse coordinates don't matter for an onClick event unless the server-side event listener actually uses them to determine the outcome of the event. In 99.99% of cases the server side is interested in the button-click event, not the mouse coordinates. If you're getting unexpected responses, it's most likely the wrong component UUID you're firing events at. In such cases the server simply ignores the event since it can't be dispatched to any matching component. Then, if no event listener fires, the response is most likely empty, indicated by {"rs":[],"rid":126}.
One important thing is to disable UUID recycling, which would otherwise mix up UUIDs at the server side, likely resulting in the non-deterministic problems you encounter.
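To make the rmDesktop advice concrete, here is a minimal, hypothetical Java sketch of the cleanup request. The /zkau endpoint and the dtid/cmd_0=rmDesktop parameter names are assumptions based on how ZK AU requests commonly look; capture a real browser session and verify the exact names for your ZK version:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RmDesktop {
    public static void main(String[] args) throws Exception {
        // In a real test the dtid would be extracted from an earlier
        // response; the value and the host below are placeholders.
        String dtid = "z_abc123";
        String body = "dtid=" + dtid + "&cmd_0=rmDesktop&opt_0=i";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://backoffice.example.com/zkau"))
                .header("Content-Type", "application/x-www-form-urlencoded;charset=UTF-8")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```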

Log processing tool

I have a new requirement. I need a tool which will:
1. Push logs / KPIs from the customer end to a backend server.
2. Process those logs on the server and trigger events on any unusual entries, e.g. exceptions/crashes, null pointer exceptions, usage of a new feature, wrong business inputs (see the sketch after this question).
3. Provide a summary of metrics.
I tried to understand Logstash, but somehow I didn't get it clearly.
Any suggestions for such a tool, please?
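For what it's worth, requirement 2 boils down to pattern-matching a log stream and raising an event. A minimal, hypothetical Java sketch of that idea (the patterns and the alert hook are placeholders, not a tool recommendation):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.regex.Pattern;

public class LogAlerter {
    // Placeholder patterns for the "unusual events" from the question:
    // exceptions/crashes, null pointers, etc. Tune for real log formats.
    private static final Pattern ALERT =
            Pattern.compile("NullPointerException|Exception|FATAL|CRASH");

    public static void main(String[] args) throws Exception {
        // Reads log lines from stdin, e.g. piped from `tail -f app.log`.
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            if (ALERT.matcher(line).find()) {
                trigger(line);
            }
        }
    }

    private static void trigger(String line) {
        // Hypothetical hook: a real system would post to an alerting
        // backend here instead of printing to stderr.
        System.err.println("ALERT: " + line);
    }
}
```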

What's the impact of response codes 400 and 503? Can we ignore these codes if my primary focus is to measure the loading time of a web application?

I am testing a web application's login page loading time with 300 thread users and a ramp-up period of 300 seconds. Most of my samples return response code 200, but a few return response codes 400 and 503.
My goal is just to check the performance of the web application when 300 users start using it.
I am new to JMeter and have basic knowledge of programming.
My questions:
1. Can I ignore these errors and focus just on the timings from the summary report?
2. If I really need to fix these errors, how do I fix them?
There are 2 different problems indicated by these errors:
HTTP Status 400 stands for Bad Request - it means that you're sending malformed requests which cannot be understood by the server. You should inspect the request details and amend your JMeter configuration, as this problem is in your script.
HTTP Status 503 stands for Service Unavailable - it indicates a problem on the server side, i.e. the server is not capable of handling the load you're generating. This is something you can already report as an application issue. You can try to identify the underlying cause by:
looking into your application log files
checking whether your application has enough headroom to operate in terms of CPU, RAM, network, disk, etc. This can be done using an APM tool or the JMeter PerfMon Plugin
re-running your test with profiler tool telemetry to deep dive into what's under the hood of the longest response times
So first of all you should ensure that your test is doing what it is supposed to be doing by running it with 1-2 users/loops and inspecting the request/response details. At this stage you should not be getting any errors.
Going forward you should increase the load gradually and correlate the increasing number of virtual users with the increasing response times/number of errors, as in the sketch below.
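To make that correlation concrete, here is a minimal, hypothetical Java sketch that summarizes a JMeter CSV results file. It assumes the default .jtl CSV layout with a header row containing "elapsed" and "success" columns, and uses a naive comma split (labels containing commas would need a real CSV parser):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

public class JtlSummary {
    public static void main(String[] args) throws IOException {
        // Usage: java JtlSummary results.jtl  (CSV output with a header row)
        List<String> lines = Files.readAllLines(Paths.get(args[0]));
        List<String> header = Arrays.asList(lines.get(0).split(","));
        int elapsedCol = header.indexOf("elapsed");  // response time in ms
        int successCol = header.indexOf("success");  // "true" / "false"

        long total = 0, errors = 0, elapsedSum = 0;
        for (String line : lines.subList(1, lines.size())) {
            String[] f = line.split(",");  // naive split; assumes no quoted commas
            total++;
            elapsedSum += Long.parseLong(f[elapsedCol]);
            if (!Boolean.parseBoolean(f[successCol])) errors++;
        }
        // Run this against the .jtl from each load level and compare.
        System.out.printf("samples=%d avg=%.0f ms errors=%.2f%%%n",
                total, (double) elapsedSum / total, 100.0 * errors / total);
    }
}
```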
Performance testing is different from load testing. What you are doing is load testing.
Performance testing is more about how long an action takes. I typically capture performance for a given action on a system not under load.
This gives a baseline that I can then refer to during load tests.
Hopefully, you've been given some performance figures to test against, e.g. the system must be able to handle 300 requests in two minutes (2.5 requests per second).
When moving on to load testing, I run a series of load tests with an increasing number of users/threads and capture the results from each test.
Armed with this, I can see how load degrades performance to the point where errors start to show up. This gives you an idea of how much typical load the system can handle.
I'd also look to run soak tests too. This is where I'd run JMeter for a long period with typical (not peak) load to make sure the system can handle sustained load.
In terms of the errors you're seeing: no, I would not ignore them. Assuming your test is calling the same endpoint, it seems safe to say the code is fine; it's the infrastructure struggling with the load you're throwing at it.

"Replay" the steps needed to recreate an error

I am going to create a typical business application that will be used by a few hundred consultants. Normally, the consultants would be presented with an error message with a standard text. As the application will be a complicated one with lots of changes being made to it constantly I would like the following:
When an error message is presented, the user has the option to "send" the error report to the developers. The developers should be able to open the incoming file in, e.g., Eclipse and step through the last 10 minutes of work (one line at a time if they want to). Everything should be transparent, meaning that, for example, they should be able to see the return values of calls to the database.
Are there any solutions that offer such functionality today? My preferred languages are Python and Java. I know that there will be a huge performance hit because of such functionality, but that is acceptable as this kind of software is not performance sensitive.
It would be VERY nice if the database also had a chronology, so that one could query it for the values that existed at the exact time a specific line of code was run in the application, leading up to the bug.
You should try to use logging, e.g. commit logs from the DB and logs of the user's interactions with the application; if it is a web application you can start with the log files from the web server. Make sure that the log files include all submitted data, such as the complete GET URL with parameters and the POST entity body. You can configure the web server to generate such logs when necessary.
Then you build a test client that can parse the log files and re-create all the user interaction that caused the problem to appear, as in the sketch below. If you suspect race conditions, you should log with high precision (ms resolution) and make sure that the test client can run through the same sequences over and over again to stress those critical parts.
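A minimal, hypothetical Java sketch of such a replay client, assuming a common-format web server access log whose quoted request line looks like "GET /path HTTP/1.1". The target URL is a placeholder, and POSTs are skipped because standard access logs do not record the entity body:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LogReplayer {
    // Matches the quoted request line of a common/combined-format access
    // log, e.g. "GET /login?user=a HTTP/1.1"; adjust to your log format.
    private static final Pattern REQUEST_LINE =
            Pattern.compile("\"(GET|POST) (\\S+) HTTP/[\\d.]+\"");

    public static void main(String[] args) throws Exception {
        String baseUrl = "http://localhost:8080";  // placeholder target
        HttpClient client = HttpClient.newHttpClient();

        // Re-issue the logged requests in their original order.
        for (String line : Files.readAllLines(Paths.get(args[0]))) {
            Matcher m = REQUEST_LINE.matcher(line);
            if (!m.find()) continue;
            // POST replay would additionally need the logged entity body,
            // which standard access logs do not capture; skip those here.
            if (!"GET".equals(m.group(1))) continue;
            HttpRequest req = HttpRequest.newBuilder()
                    .uri(URI.create(baseUrl + m.group(2)))
                    .build();
            HttpResponse<String> resp =
                    client.send(req, HttpResponse.BodyHandlers.ofString());
            System.out.println(resp.statusCode() + " " + m.group(2));
        }
    }
}
```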
Replay (as your title suggests) is the best way to reproduce an error: just collect all the data needed to recreate the input that generated a specific state/situation. Do not focus on internal structures and return values; when it comes to hunting down an error or a bug you should not work in forensic mode, i.e. trying to analyze the cause of the crash by examining the wreck. You should crash the plane over and over again, adding more logging (or using a debugger) until you know what goes wrong.
