Being new to systematic debugging, I asked myself what these three terms mean:
Debugging
Profiling
Tracing
Could anyone provide definitions?
Well... as I was typing the tags for my question, it turned out that Stack Overflow had already defined the terms in the tag descriptions. Here are their definitions, which I found very good:
Remote debugging is the process of running a debug session in a local development environment attached to a remotely deployed application.
Profiling is the process of measuring an application or system by running an analysis tool called a profiler. Profiling tools can focus on many aspects: function call times and counts, memory usage, CPU load, and resource usage.
Tracing is a specialized use of logging to record information about a program's execution.
In addition to the answer from Samuel:
Debugging is the process of looking for bugs and their causes in applications. A bug can be an error or just some unexpected behaviour (e.g. a user complains that they receive an error when they use an invalid date format). Typically a debugger is used that can pause the execution of an application, examine variables, and manipulate them.
Profiling is a dynamic analysis process that collects information about the execution of an application. The type of information that is collected depends on your use case, e.g. the number of requests. The result of profiling is a profile with the collected information. The source for a profile can be exact events (see tracing below) or a sample of the events that occurred.
Because the data is aggregated in a profile, it is irrelevant when and in which order the events happened.
Tracing "trace is a log of events within to the program"(Whitham). those events can be ordered chronologically. thats why they often contain a timestamp. Tracing is the process of generating and collecting those events. the use case is typically flow analysis.
Example of tracing vs. profiling:
Trace:
[2021-06-12T11:22:09.815479Z] [INFO] [Thread-1] Request started
[2021-06-12T11:22:09.935612Z] [INFO] [Thread-1] Request finished
[2021-06-12T11:22:59.344566Z] [INFO] [Thread-1] Request started
[2021-06-12T11:22:59.425697Z] [INFO] [Thread-1] Request finished
Profile:
2 "Request finished" Events
2 "Request started" Events
So if tracing and profiling measure the same events, you can construct a profile from a trace, but not the other way around.
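As a minimal illustration (assuming the trace above is saved to a file called trace.log, a name chosen just for this example), the aggregation can be reproduced with standard shell tools:

grep -oE 'Request (started|finished)' trace.log | sort | uniq -c

which prints the profile from above:

2 Request finished
2 Request started

Note that the timestamps and the ordering are thrown away in the process, which is exactly why the trace cannot be reconstructed from the profile.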
Sources:
Whitham: https://www.jwhitham.org/2016/02/profiling-versus-tracing.html
IPM: http://ipm-hpc.sourceforge.net/profilingvstracing.html
Related
When I run JMeter from the Windows CLI, after some random time the test stops or gets stuck. I can press Ctrl+C (once) to get the run going again, but some of the requests are lost during the time it was stuck.
Take a look at the jmeter.log file; normally it should be possible to figure out what's wrong from the messages there. If you don't see any suspicious entries, you can increase JMeter's logging verbosity by changing values in the log4j2.xml file or via the -L command-line parameter (example commands follow this list).
Take a thread dump and see what exactly the threads are doing when they're "stuck".
If you're using HTTP Request samplers, be aware that by default JMeter will wait for the response forever; if the application fails to respond at all, your test will never end, so you need to set reasonable connect and response timeouts.
Make sure to follow JMeter Best Practices
Take a look at resource consumption (CPU, RAM, etc.) - if your machine is overloaded and cannot generate the required load, you will need to switch to distributed testing.
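For illustration, assuming a standard JMeter and JDK installation (test.jmx and the pid are placeholders), the first two suggestions look like this on the command line:

jmeter -n -t test.jmx -Ljmeter.engine=DEBUG

runs the test in non-GUI mode with DEBUG logging for the jmeter.engine category, and

jstack <pid> > threaddump.txt

captures a thread dump of the JMeter JVM (you can find the pid with jps).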
There are several approaches to debugging a JMeter test which can be combined into a general systematic approach that is capable of diagnosing most problems.
The first thing that I would suggest is running the test within the JMeter GUI to visualize the test execution. For this you may want to add a View Results Tree listener, which will provide you with real-time results from each request generated.
Another way you can monitor your test execution in real time within the JMeter GUI is with the Log Viewer. If any exceptions are encountered during your test execution, you will see detailed output in this window. It can be found under the Options menu.
Beyond this, JMeter records output files which are often very useful in debugging your load tests. Both the .log file and the .jtl file provide a timestamped history of every action your test performs. From there you can likely track down the offending request or error if your test unexpectedly hangs.
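For example, assuming the default file names and the default CSV .jtl format, you can quickly filter both files from the command line:

grep -n 'ERROR' jmeter.log
grep ',false,' results.jtl

The first lists errors logged by JMeter itself; the second is a rough filter for samples whose success flag is false.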
If you do decide to move your test into the cloud using a service that hosts your test, you may be able to ascertain more information through that platform. Here is a comprehensive example of how to debug JMeter load tests that covers the above approaches as well as more advanced concepts. If the problem is related to a performance bottleneck, a cloud load test provider can give your test additional network and machine resources beyond what your local machine can provide.
I am testing a web application's login page loading time with 300 thread users and a ramp-up period of 300 seconds. Most of my samples return response code 200, but a few of them return response codes 400 and 503.
My goal is to just check the performance of the web application if 300 users start using it.
I am new to JMeter and have basic knowledge of programming.
My questions:
1. Can I ignore these errors and focus just on the timings from the summary report?
2. If I really need to fix these errors, how do I fix them?
There are two different problems indicated by these errors:
HTTP status 400 stands for Bad Request - it means that you're sending malformed requests which cannot be understood by the server. You should inspect the request details and amend the JMeter configuration, as the problem is in your script.
HTTP status 503 stands for Service Unavailable - it indicates a problem on the server side, i.e. the server is not capable of handling the load you're generating. This is something you can already report as an application issue. You can try to identify the underlying cause by:
looking into your application log files
checking whether your application has enough headroom to operate in terms of CPU, RAM, network, disk, etc. This can be done using an APM tool or the JMeter PerfMon Plugin
re-running your test with profiler tool telemetry to deep dive into what's under the hood of the longest response times
So first of all you should ensure that your test is doing what it is supposed to be doing by running it with 1-2 users/loops and inspecting the request/response details. At this stage you should not be getting any errors.
Going forward, you should increase the load gradually and correlate the increasing number of virtual users with the increasing response times and number of errors. One convenient way is to parameterize the Thread Group, as sketched below.
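A sketch of one way to do this (the property names threads and rampup are arbitrary): set Number of Threads to ${__P(threads)} and Ramp-up Period to ${__P(rampup)} in the Thread Group, then run the same plan in increasing steps:

jmeter -n -t test.jmx -Jthreads=50 -Jrampup=50 -l results-50.jtl
jmeter -n -t test.jmx -Jthreads=150 -Jrampup=150 -l results-150.jtl
jmeter -n -t test.jmx -Jthreads=300 -Jrampup=300 -l results-300.jtl

Comparing the .jtl files then shows at which step the response times or error rates start to degrade.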
Performance testing is different from load testing. What you are doing is load testing.
Performance testing is more about how long an action takes. I typically capture performance for a given action on a system not under load.
This gives a baseline that I can then refer to during load tests.
Hopefully, you’ve been given some performance figures to test. E.g. must be able to handle 300 requests in two minutes.
When moving on to load, I run a series of load tests with an increasing number of users/threads and capture the results from each test.
Armed with this, I can see how load degrades performance to the point where errors start to show up. This gives you an idea of how much typical load the system can handle.
I’d also look to run soak tests too. This is where I’d run JMeter for a long period with typical (not peak) load to make sure the system can handle sustained load.
In terms of the errors you’re seeing: no, I would not ignore them. Assuming your test is calling the same endpoint, it seems safe to say the code is fine; it’s the infrastructure struggling with the load you’re throwing at it.
So I was running some tests across some machines and monitoring CPU/Memory usage by process.
To test its accuracy I was also monitoring with VisualVM, and the graphs were slightly off.
Also, in JMeter, when I monitor CPU/memory overall rather than per process name, I get the exact same results - so the per-process filter isn't seeing the process.
If I do it by process ID it works, but the process ID changes, so I don't want to go that route.
Besides the process name there is an "occurrence" field - does anyone know what this is and whether it can be left blank?
I don't think the process is called kafka; my expectation is that both on Windows and on Linux/Unix you should be looking for a java process, not a Kafka one.
So try changing the process name to java and it should start working as expected.
If it doesn't, launch the PerfMon Agent with the --loglevel debug parameter - it should print a lot of useful information to stdout.
Just in case, check out the How to Monitor Your Server Health & Performance During a JMeter Load Test article for more information.
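Regarding the "occurrence" part of the question: going by the PerfMon plugin's metric-parameter convention, a per-process metric addresses the process as name=processname#occurrence, where the number after # picks the N-th process with that name - useful when several java processes run on the same machine. So a parameter for the first java process would look like:

name=java#1

If in doubt about which processes the agent sees, the --loglevel debug output mentioned above should show them.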
How can I know the critical point where the system breaks?
Analyzing the results is the toughest part in JMeter. I have failed to judge it because the results and listeners show different results every time.
Can anyone suggest what I should do so that I can confidently say "this website crashes with 500 users" or "it stops responding after a certain point"?
I also have a problem configuring the threads: what combination should I enter in the Thread Group?
I have to report the results further and be able to explain them.
Reporting is JMeter's Achilles' heel. You can use the JMeter Plugins project, which provides:
Ultimate Thread Group - which simplifies load scenario definition
Active Threads Over Time - which displays the number of active threads as your test goes on
Server Hits Per Second - which shows how many requests per second your threads generated
You can also consider using the Taurus tool, which simplifies configuring and executing JMeter tests and has rich reporting capabilities.
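For instance, a minimal invocation (bzt is the Taurus command-line tool; test.jmx stands in for your test plan) that runs an existing JMeter test and opens an interactive report:

bzt test.jmx -report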
I am using LTTng in an application. I have enabled heavy tracing and I have found that some trace events are lost. Is there any way of knowing whether events were lost, or any information about them? Are there any API calls to find this out?
Thanks & Regards.,
K.V.Ranganadh.
Both viewers for LTTng traces should be able to report whether a trace contains lost events.
Babeltrace, the command-line trace reading tool, prints lost events on stderr. So a quick way to locate these is to reroute stdout elsewhere, so you only see the lost events in the console, using a command like:
babeltrace /path/to/trace > /dev/null
Alternatively, the graphical viewer Trace Compass displays lost events in its Statistics View.
In general, lost events happen when the machine is too loaded and the tracer cannot keep up with the events coming in. To reduce the chance of losing events, you can look at increasing the sub-buffer sizes and count (see the 'lttng' man page), or enabling fewer events in your tracing session (instead of doing 'lttng enable-event ... -a', only enable the events you need).
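An illustrative sketch of the channel tuning (the session, channel, and provider names are placeholders; check the 'lttng' man page for the exact option defaults on your version):

lttng create mysession
lttng enable-channel --userspace --subbuf-size=2M --num-subbuf=8 bigchannel
lttng enable-event --userspace --channel=bigchannel 'myapp:*'
lttng start

Larger and more numerous sub-buffers trade memory for a lower chance of the tracer discarding events under load.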