Recently I have been evaluating performance with perf, and I found that the results fluctuate depending on which event options I give it. For example,
When I'm running perf like:
perf record -F 1000 -e cycles ${COMMANDSPEC}
where ${COMMANDSPEC} refers to the command runcpu --config=default.cfg --size=train --noreportable --action=run 505.mcf_r, the event count reported by perf is
# Event count (approx.): 193306241321
But when I'm running perf like:
perf record -F 1000 -e cycles,cpu-clock,L1-dcache-load-misses,branch-misses,cache-misses,instructions,branch-instructions,branch-misses,emulation-faults,cs ${COMMANDSPEC}
The event count of perf is
# Event count (approx.): 141720411904
There is an obvious difference between these two results, and the difference varies with the number of events added to the -e option. Why could adding some events cause such a large difference in the results?
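(A likely factor worth checking: the PMU has only a limited number of hardware counters, so when more events are requested than counters exist, the kernel time-multiplexes the events and extrapolates the counts. As a sketch, running perf stat with the same event list prints a scaling percentage next to each multiplexed count:
perf stat -e cycles,cpu-clock,L1-dcache-load-misses,branch-misses,cache-misses,instructions,branch-instructions,emulation-faults,cs ${COMMANDSPEC}
A percentage below 100% next to an event means that event was only counted for part of the run and its total was extrapolated.)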
Related: with JMeter performance testing, how can we know the number of users that took the maximum time in the summary report? We can only see that one of the samples took this maximum time, but we cannot find the number of users or samples that experienced it.
For this you can generate a dashboard report from the JMeter command line (see Generating-Dashboard), using either of the following commands, depending on whether you want to run the test and build the report in one go or generate it from an existing results file.
jmeter -n -t <path_to.jmx> -l <log.jtl> -e -o <dashboard_folder>
or
jmeter -g <log.jtl> -o <dashboard_folder>
Then check the Response Time Over Time graph.
It seems that you're looking for the Response Times vs Threads chart, which allows you to analyse how response time per request/transaction correlates with an increasing number of threads (virtual users).
You can install the Response Times vs Threads chart as part of the KPI vs KPI Graphs bundle using the JMeter Plugins Manager.
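If you prefer the command line, the Plugins Manager also ships a command-line runner; a minimal sketch, assuming the bundle's plugin ID is jpgc-graphs-vs (check jmeter-plugins.org for the exact ID):
# from JMeter's bin directory: install the KPI vs KPI Graphs bundle
./PluginsManagerCMD.sh install jpgc-graphs-vs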
I have a performance test in JMeter.
I have some SSH listeners that should retrieve CPU and RAM usage. I want a clear explanation of the delay JMeter uses between gathering the listener values while the test is running.
Is it possible for the user to set that delay value? If yes, what is the minimum value that JMeter supports?
The current data gathering by the listeners seems somewhat random, which isn't good at all. I don't get a similar number of entries in the results, although both listeners contain the same number of commands.
I tried setting jmeter.sshmon.interval in jmeter.properties to 100 and to 3000 ms, but that didn't help.
The measurements I did gave the following:
Remark 1:
* CPU CSV usage file has 1211 entries
* RAM CSV usage file has 1201 entries
* Number of used threads CSV file has 1276 entries
This is although the three listeners in my test plan have exactly the same number of SSH commands (15) and are placed at the same level in the test plan.
Remark 2:
The time taken to execute each set of SSH commands retrieving CPU usage values varies. I measured it using timestamp differences, and the durations differ remarkably from one run to the next.
Remark 3:
When I compare the time taken to execute the set of SSH commands retrieving CPU usage against the set retrieving RAM usage, I see a big difference in duration.
I found this link by the plugin owner: https://github.com/tilln/jmeter-sshmon, but it didn't resolve my issue.
Thanks
As per the link you provided:
Samples are collected by a single thread, so if a command takes more than an insignificant amount of time to run, the frequency of sample collection will be limited. Even more so if more than one command is sampled. In this case, use a separate monitor for each sample command.
So basically, on each pass JMeter has to execute 45 SSH commands (15 commands in each of the 3 listeners), and according to the above explanation some results might be discarded.
So I would suggest the following workarounds:
Use a separate Thread Group with a single sampler which does nothing and has a fixed execution time, e.g. the Dummy Sampler. In this case you can control the interval by adding a Constant Timer and defining the desired monitoring poll interval.
Go for the JMeter PerfMon Plugin, which doesn't require establishing an SSH connection and executing commands; only plain metrics (numbers) are passed via TCP or UDP channels (see the sketch below). The approach from point 1 is still highly recommended.
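As an illustration of point 2, a minimal sketch of starting the PerfMon ServerAgent on the monitored host, assuming the default port 4444 (the agent is a separate download from jmeter-plugins.org):
# on the monitored host: unpack ServerAgent and start it listening
# for the PerfMon Metrics Collector on TCP and UDP port 4444
./startAgent.sh --tcp-port 4444 --udp-port 4444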
I'm trying to stress test my Spring Boot application, but when I run the following command, what ab does is push towards the maximum my application can hold. What I need instead is to check whether my application can hold a specific number of requests per second.
ab -p req.json -T application/json -k -c 1000 -n 500000 http://myapp.com/customerTrack/v1/send
The requests per second reported by the above command is 4000, but in reality a lot of records get buffered in my application, which means it can't actually sustain that rate. Could anyone tell me how to set a specific requests-per-second rate in ab? Thanks!
I don't think you can get what you want from ab. There are a lot of other tools out there.
Here's a simple one that might do exactly what you want.
https://github.com/rakyll/hey
Note that hey's -q flag is a rate limit in queries per second per worker, so the total rate is roughly the -c value multiplied by the -q value. For rate limiting to about 100 requests per second overall, something like the command below should work (-m POST is needed because hey defaults to GET):
hey -m POST -D req.json -T application/json -c 10 -q 10 -n 500000 http://myapp.com/customerTrack/v1/send
Apache Bench is a single-threaded program that can only take advantage of one processor on your client's machine. In extreme conditions, the tool can misrepresent results if the parameters of your test exceed the capabilities of the environment it runs in. According to your description, the rps has already reached your hardware's limit.
A lot of records are buffered in my application which means it can't hold that much rps
It is very hard to control requests per second on a single machine.
You can find better performance testing tools here: HTTP(S) Benchmark Tools.
If you have the budget you can try goad, an AWS Lambda-powered, highly distributed load-testing tool built in Go for the 2016 Gopher Gala. Goad allows you to load test your websites from all over the world while costing you the tiniest fractions of a penny, by using AWS Lambda in multiple regions simultaneously.
I am testing a load of at least 2000 threads in JMeter in command-line mode. I am also using the Graphs Generator to get nice graphs. But at the end of the execution, I get an aggregated result inside the generated graphs. What I actually want is the time taken by each thread, in a nice format, either as CSV or as a graph.
The command I am using is
sh jmeter -n -t /Project/Tests/test.jmx -l /Project/Tests/results.csv
The results.csv contains everything, but not in a nice format. Can someone suggest any better options, if available? My program expects each thread to return within 7 seconds, otherwise it discards that thread; hence I need to know how many threads returned within 7 seconds.
Actually you should already have what you need.
You can figure out the threads' response times from the .jtl results file: look at the elapsed column. You can sort it to see the most time-consuming sample results and count how many of them exceed 7000 ms, for example with the one-liner below.
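A quick sketch, assuming the default JTL CSV layout where elapsed is the second column and the first row is a header:
# count the samples that took longer than 7000 ms
awk -F, 'NR > 1 && $2 > 7000' /Project/Tests/results.csv | wc -l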
There is the Response Times Over Time graph, which can show the trend of response times while the test is running.
There is the Response Times Distribution graph, which can show the statistics of response times per number of requests executed.
Both plugins can be installed using the JMeter Plugins Manager.
And finally, you can add a Duration Assertion so that JMeter automatically fails requests which last longer than 7 seconds.
I need to find out the total time a session spends waiting while it is active.
For this I used a query like the one below:
SELECT (SUM (wait_time + time_waited) / 1000000)
FROM v$active_session_history
WHERE session_id = 614
But I feel I'm not getting what I want from this query.
The first time I ran it I got 145.980962, the second time 145.953926, and the third time 127.706429.
Ideally the time should stay the same or increase, but as you can see, the value returned decreases every time.
Please correct me where I'm going wrong.
It does not contain the whole history: v$active_session_history "forgets" older rows. Think of it as a ring of buffers; once all buffers are written, it restarts from the first buffer.
To get the wait events of a session, look at v$session_event. To get the current (active) event of an active session, use v$session_wait (in recent Oracle versions you can also find this info in v$session).
NOTE: the v$session_event view will not show you CPU time (which is not an event, but can be seen in v$active_session_history). You can add it, for example, from v$sesstat if needed.
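A minimal sketch of both lookups, assuming the SID 614 from the question (these views report times in centiseconds):
-- cumulative wait time per event for one session (no CPU time here)
SELECT event, total_waits, time_waited
FROM v$session_event
WHERE sid = 614
ORDER BY time_waited DESC;
-- CPU time for the same session, taken from v$sesstat
SELECT s.value AS cpu_used_centiseconds
FROM v$sesstat s
JOIN v$statname n ON n.statistic# = s.statistic#
WHERE s.sid = 614
AND n.name = 'CPU used by this session';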
Your bloomer is that you have not understood the nature of v$active_session_history: it is a sample not a log. That is, each record in ASH is a point in time, and doesn't refer back to previous records.
Don't worry, it's a common mistake.
This is a particular problem with WAIT_TIME. This is the total time waited for that specific occurrence of that event. So if the wait event stretches across two samples, in the first record WAIT_TIME will be 1 (one second) and in the next sample it will be 2 (two seconds). However, a SUM(WAIT_TIME) would produce a total of 3, which is too much. Of course this is an arithmetic progression, so if the wait event stretches to ten samples (ten seconds), a SUM(WAIT_TIME) would produce a total of 55.
Basically, WAIT_TIME is a flag - if it is 0 the session is ON CPU and if it's greater than zero it is WAITING.
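A sketch of using WAIT_TIME that way, with the SID from the question:
-- classify each ASH sample as ON CPU or WAITING via the WAIT_TIME flag
SELECT CASE WHEN wait_time = 0 THEN 'ON CPU' ELSE 'WAITING' END AS state,
       COUNT(*) AS samples
FROM v$active_session_history
WHERE session_id = 614
GROUP BY CASE WHEN wait_time = 0 THEN 'ON CPU' ELSE 'WAITING' END;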
TIME_WAITED is only populated when the event has stopped waiting. So a SUM(TIME_WAITED) wouldn't give an inflated value. In fact just the opposite: it will only be populated for wait events which were ongoing at the sample time. So there can be lots of waits which fall between the interstices of the samples which won't show up in that SUM.
This is why ASH is good for highlighting big performance issues and bad for identifying background niggles.
So why doesn't the total time increase each time you run your query? Because ASH is a circular buffer. Older records get aged out to make way for new samples. AWR stores a percentage of the ASH records on disk; they are accessible through DBA_HIST_ACTIVE_SESS_HISTORY (the default is one record in ten). So probably ASH purged some samples with high wait times between the second and third times you ran your query. You could check that by including MIN(SAMPLE_TIME) in the select list.
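For example, a sketch of that check using the SID from the question:
-- if MIN(sample_time) advances between runs, older samples
-- have been aged out of the circular buffer
SELECT MIN(sample_time), MAX(sample_time), COUNT(*)
FROM v$active_session_history
WHERE session_id = 614;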
Finally, bear in mind that SIDs get reused. The primary key for identifying a session is (SID, Serial#). Your query only filters by SID, so it may use data from several different sessions.
There is a useful presentation called "Sifting through the ASHes" by Graham Wood, one of the Oracle gurus who worked on ASH. Although it would be better to hear Graham speaking, the slide deck on its own still provides some useful insights. Find it here.
tl;dr
ASH is a sample not a log. Use it for COUNTs not SUMs.
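As an illustration of the counting approach, and assuming the SID from the question: each ASH row is a roughly one-second sample of an active session, so counting rows approximates the session's active database time:
-- approximate seconds of active DB time for one session
SELECT COUNT(*) AS approx_active_seconds
FROM v$active_session_history
WHERE session_id = 614;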
"Anything wrong in the way query these tables? "
As I said above, but perhaps didn't make clear enough, DBA_HIST_ACTIVE_SESSION_HIST only holds a fraction of the records from ASH. So it is even less meaningful to run SUM() on its columns than on the live ASH.
Whereas V$SESSION_EVENT is an actual log of events. Its wait times are reliable and accurate. That's why you pay the overhead of enabling timed statistics. Having said which, V$SESSION_EVENT only gives us aggregated values per session, so it's not particularly useful in diagnosis.