I'm new here and I have a lot of questions about JMeter. How can I find out what goes wrong between JMeter and the application under test when I run a test and the results aren't clear? For example, when I get 100% errors. Can someone help me a little, please?
Open the .jtl results file in a listener like View Results Tree; it normally contains the HTTP status code and HTTP status message, or another error message like connection timeout, connection reset, etc. Looking into it you should be able to figure out what's wrong.
It is also possible to save the full request/response data for the JMeter samplers. You can do this either by adding the following lines to the user.properties file:
jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.response_data=true
jmeter.save.saveservice.samplerData=true
jmeter.save.saveservice.requestHeaders=true
jmeter.save.saveservice.responseHeaders=true
jmeter.save.saveservice.url=true
Once done, on the next run of your test you will be able to see what's going on under the hood. An alternative option is using the Simple Data Writer listener, where the same fields can be enabled via its Configure button.
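If you would rather capture this information programmatically during the run, a minimal JSR223 Listener sketch in Groovy (added at Test Plan level purely as an illustration, not a required step) could write the status code and message of every failed sampler to jmeter.log:

// sampleResult is the result of the sampler which has just completed
if (!sampleResult.isSuccessful()) {
    log.error('Sampler "' + sampleResult.getSampleLabel() + '" failed with code '
            + sampleResult.getResponseCode() + ': ' + sampleResult.getResponseMessage())
}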
I am trying to make a request that returns a very large amount of data. When I make the request from Katalon Studio or JMeter, I get a response after 7-8 seconds. But when I try to make the same request from Swagger, it takes 2-3 minutes to return the data, and if it needs more than that, the page dies.
Can you help me understand why the response from Swagger is so slow and from Katalon so fast? I think the problem is in displaying the large amount of data?
In JMeter the data is not shown fully.
I can provide whatever is needed.
This is the data from JMeter.
With regards to "swagger so slow" - most probably it's your browser issue, it might fail to render big amounts of data. Consider using a command-line tool like Curl which can output the response as plain text or save it into a file
With regards to "In JMeter the data is not shown fully": by default JMeter truncates the response data displayed in the View Results Tree listener at 10 megabytes in order to save memory; this is controllable via the view.results.tree.max_size property. If you want to see the full response data in the View Results Tree listener, add the following line to the user.properties file:
view.results.tree.max_size=0
or pass the above property via the -J command-line argument, like:
jmeter -Jview.results.tree.max_size=0 -t test.jmx ....
See the Apache JMeter Properties Customization Guide for more information on JMeter properties and the ways of setting/overriding them.
You can also consider using the Save Responses to a file listener to store the response in a file of your choice.
I am using JMeter for load testing. How can I consolidate the response codes from the generated JTL files?
Are there any best practices for generating the JTL file? (For example, to get the request and response of failed samplers.)
If you open the .jtl results file (by default it is a normal CSV file) using Microsoft Excel or an equivalent (like LibreOffice Calc), you will see a responseCode column which holds the HTTP status code for every sampler.
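If you prefer to consolidate the codes outside a spreadsheet, a small Groovy sketch along the following lines would count how many samples finished with each response code; the results.jtl file name is a placeholder, the CSV is assumed to have a header row (print_field_names=true), and the comma splitting is deliberately naive (it will break on labels containing commas):

// count occurrences of each responseCode in a CSV .jtl file
def counts = [:].withDefault { 0 }
def lines = new File('results.jtl').readLines()
def header = lines.head().split(',')
int codeIndex = header.findIndexOf { it == 'responseCode' }
lines.tail().each { line ->
    def fields = line.split(',')                 // naive split, assumes no commas inside fields
    if (codeIndex >= 0 && fields.size() > codeIndex) {
        counts[fields[codeIndex]]++
    }
}
counts.each { code, count -> println "$code: $count" }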
The best practice for generating .jtl files is to store the absolute minimum of data, and especially to avoid saving response data, as it creates massive disk IO overhead and may ruin your test. However, if you really need to store the response data, add the following lines to the user.properties file:
jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.response_data.on_error=true
This will switch the .jtl file output format to XML and "tell" JMeter to save responses for failed requests only. See Configuring JMeter and the Apache JMeter Properties Customization Guide to learn how to precisely tune your JMeter instance using properties.
General recommendations on JMeter usage can be found in the JMeter Best Practices guide.
When increasing user threads in my JMeter performance test, I found that some requests are called before login, so those requests fail.
My test plan is below:
In a .csv file I added 1000 users with {email, password} and fed it to the Login HTTP Request via a CSV Data Set Config. I'm extracting {uid} from the login response and storing it in uid.
Second, a Random Controller has multiple HTTP Requests, and every request needs {uid}.
This works below 50 user threads, but when I increase the threads, some Random Controller ==> HTTP Request samples are sent without uid.
See, this one is with uid:
But this one is without uid:
Can anyone help me with how to achieve this?
I can see 2 possible reasons:
Your application gets overloaded, therefore your Login request doesn't return a correct uid, so your extractor fails
Your JMeter instance gets overloaded, so JMeter cannot handle 50+ users because you are running it in a non-optimal way
I would start with point 1 by temporarily enabling the storing of request and response data in the .jtl results file, adding the following lines to the user.properties file:
jmeter.save.saveservice.response_data=true
jmeter.save.saveservice.samplerData=true
jmeter.save.saveservice.requestHeaders=true
jmeter.save.saveservice.url=true
jmeter.save.saveservice.responseHeaders=true
This way you will be able to see the request and response data for failed samplers in the View Results Tree listener and identify where the test fails.
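As an extra safety net (this is an assumption about your test plan, not something the steps above require), you could add a short JSR223 Assertion in Groovy under the Login request that explicitly fails the login sample whenever uid was not extracted, which makes the broken iterations easy to spot in the results:

// fail the Login sample if the uid JMeter variable is missing or empty
def uid = vars.get('uid')
if (uid == null || uid.trim().isEmpty() || uid == 'NOT_FOUND') {   // 'NOT_FOUND' is a hypothetical extractor default value
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage('uid was not extracted from the login response')
}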
Other things worth checking/implementing:
Check your application logs
Set up monitoring of baseline OS health metrics (CPU, RAM, Disk/Network IO) on both the JMeter and the application under test sides; you can use the JMeter PerfMon Plugin for this
Make sure you're following JMeter Best Practices, i.e.:
Run your test in command-line non-GUI mode
Increase the JVM heap size (see the example after this list)
Switch to JSR223 test elements instead of Beanshell
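For reference, on Linux or macOS you can raise the heap for a single run by overriding the HEAP environment variable which the jmeter startup script reads; the 2 GB value below is just an example, size it according to your machine:

HEAP="-Xms2g -Xmx2g" jmeter -n -t /path/to/testplan.jmx -l /path/to/results.jtl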
I need to write the responses and requests for all samplers in my threads.
I have a "View Results Tree" under my entire tests sets (as long with one I have inside every thread), and I know the option for "write results to file" I have in the results tree. The problem is that it writes the logs only when all tests has finished running.
Is there a way to write the responses and requests to a file without waiting for everything to finish running?
You can amend the JMeter configuration so it stores request and response details in its .jtl results file during test execution.
Add the following lines to the user.properties file (located under the /bin folder of your JMeter installation):
jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.response_data=true
jmeter.save.saveservice.samplerData=true
jmeter.save.saveservice.requestHeaders=true
jmeter.save.saveservice.url=true
jmeter.save.saveservice.responseHeaders=true
Start JMeter in non-GUI mode as
jmeter -n -t /path/to/your/testplan.jmx -l /path/to/testresults.jtl
Monitor testresults.jtl in real time with your favourite tool.
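For instance, on a Unix-like system something as simple as:

tail -f /path/to/testresults.jtl

will print new results to the console as JMeter appends them.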
Remember that saving request and response details causes massive disk IO and may have a negative impact on JMeter performance, hence your test results might not be reliable.
See Apache JMeter Properties Customization Guide for more information on what you can control with the help of different JMeter Properties.
I have JMeter loading my web service at around 2000 events per second.
I want to log all my requests. Is there any way I can write all my outgoing requests periodically? Basically, I want to write the outgoing requests in CSV format every 15 minutes.
Is it possible?
I am new to JMeter, so a detailed answer will help a lot.
Thanks
You can configure JMeter to store request details by adding the following few lines to the user.properties file (which lives under the /bin folder of your JMeter installation):
jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.url=true
jmeter.save.saveservice.samplerData=true
So if you run JMeter in command-line non-GUI mode, the .jtl results file will contain all the request details.
See the Apache JMeter Properties Customization Guide for more details on the various JMeter properties and the ways of setting/overriding them.
Just for reference, here are the other properties which can be used to define which values are stored in the results file:
jmeter.save.saveservice.output_format=csv
jmeter.save.saveservice.assertion_results_failure_message=false
jmeter.save.saveservice.assertion_results=none
jmeter.save.saveservice.data_type=true
jmeter.save.saveservice.label=true
jmeter.save.saveservice.response_code=true
jmeter.save.saveservice.response_data=false
jmeter.save.saveservice.response_data.on_error=false
jmeter.save.saveservice.response_message=true
jmeter.save.saveservice.successful=true
jmeter.save.saveservice.thread_name=true
jmeter.save.saveservice.time=true
jmeter.save.saveservice.subresults=true
jmeter.save.saveservice.assertions=true
jmeter.save.saveservice.latency=true
jmeter.save.saveservice.samplerData=false
jmeter.save.saveservice.responseHeaders=false
jmeter.save.saveservice.requestHeaders=false
jmeter.save.saveservice.encoding=false
jmeter.save.saveservice.bytes=true
jmeter.save.saveservice.url=false
jmeter.save.saveservice.filename=false
jmeter.save.saveservice.hostname=false
jmeter.save.saveservice.thread_counts=false
jmeter.save.saveservice.sample_count=false
jmeter.save.saveservice.idle_time=false
jmeter.save.saveservice.timestamp_format=ms
#jmeter.save.saveservice.timestamp_format=yyyy/MM/dd HH:mm:ss.SSS
jmeter.save.saveservice.default_delimiter=,
#jmeter.save.saveservice.default_delimiter=\t
jmeter.save.saveservice.print_field_names=false
jmeter.save.saveservice.xml_pi=<?xml-stylesheet type="text/xsl" href="../extras/jmeter-results-detail-report_21.xsl"?>
jmeter.save.saveservice.base_prefix=~/
jmeter.save.saveservice.autoflush=false
It is possible, but quite complex, and it may not improve throughput as much as you'd hope.
You can add a Beanshell listener to your test plan.
Within the Beanshell code you can measure the elapsed time and record the current sample to memory.
When enough time has elapsed, you can write the buffered samples to a file and flush the memory.
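For illustration only, a rough JSR223 Listener sketch in Groovy (JSR223 being generally preferable to Beanshell performance-wise) that buffers one CSV line per sample and appends the buffer to a file every 15 minutes could look like the following; the requests.csv file name, the fields written and the flush interval are all assumptions you would adapt:

import java.util.concurrent.ConcurrentLinkedQueue

long now = System.currentTimeMillis()

// shared, thread-safe buffer and last-flush marker kept in JMeter properties so every thread sees them
props.putIfAbsent('requestBuffer', new ConcurrentLinkedQueue<String>())
props.putIfAbsent('lastFlush', String.valueOf(now))
def buffer = (ConcurrentLinkedQueue<String>) props.get('requestBuffer')

// one CSV line per sample: timestamp, label, URL - adapt the fields to whatever you need
buffer.add(sampleResult.getTimeStamp() + ',' + sampleResult.getSampleLabel() + ',' + sampleResult.getUrlAsString())

synchronized (buffer) {
    long lastFlush = props.getProperty('lastFlush') as long
    if (now - lastFlush > 15 * 60 * 1000) {                  // flush every 15 minutes
        props.put('lastFlush', String.valueOf(now))
        def out = new File('requests.csv')                   // hypothetical output file name
        String line
        while ((line = buffer.poll()) != null) {
            out.append(line + System.lineSeparator())
        }
    }
}

Note that samples buffered after the last flush will still be in memory when the test ends, so with this approach you would typically also flush whatever remains once the test finishes.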
It appears you are trying to keep the performance cost of client-side logging from affecting the server under test. As an alternative to logging at intervals (which is going to make your test results look like a sawtooth), you could try distributed JMeter testing and have enough clients running to test your server thoroughly, rather than trying to make a single client perform better.
Then you can just use the 'Save responses to a file' listener.
If your tests are limited by client capability, you simply need more client power to test the server, or you need to look at other ways of improving client/test plan performance.