What does Concurrency mean in Siege and how is it calculated?

I am new to siege and I am confused by the Concurrency value in the siege results.
In the FAQ, https://www.joedog.org/siege-faq/#a17a, its formula looks very simple:
completed transactions / elapsed time.
But when I check https://www.joedog.org/siege-manual/#a08, the numbers do not match that formula.
I also found the transaction rate. What is the difference between Concurrency and the transaction rate?
Can anybody help clarify this? Thanks in advance.

The source code is at https://github.com/JoeDog/siege/tree/master/src; main.c and client.c show how this concurrency value is calculated.
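As far as I can tell from the FAQ, the manual, and that source, the two metrics are computed differently: the transaction rate divides the number of completed transactions by the elapsed time, while concurrency divides the total time spent inside transactions by the elapsed time, which makes it the average number of simultaneous connections. A minimal sketch of that arithmetic with invented numbers (the variable names are mine, not siege's):

    #include <cstdio>

    // Sketch of how the numbers in a siege report relate to each other.
    // The figures below are invented sample data, not actual siege output.
    int main() {
        int completedTransactions = 100;        // "Transactions" in the report
        double elapsedSeconds = 10.0;           // wall-clock "Elapsed time"
        double totalTransactionSeconds = 25.0;  // sum of the durations of all requests

        // Transaction rate: transactions completed per second of wall-clock time.
        double transactionRate = completedTransactions / elapsedSeconds;

        // Concurrency: average number of requests in flight at any moment,
        // i.e. total time spent inside requests divided by wall-clock time.
        double concurrency = totalTransactionSeconds / elapsedSeconds;

        std::printf("Transaction rate: %.2f trans/sec\n", transactionRate);
        std::printf("Concurrency:      %.2f\n", concurrency);
        return 0;
    }

Intuitively, if 25 seconds of request time were served within 10 seconds of wall-clock time, then on average 2.5 requests must have been open at once, which is why concurrency rises as the server slows down while the transaction rate falls.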

Related

JMeter: Response time decreased AND throughput also decreased

I have been running my JMeter script for almost a week and observed an interesting thing today. Below is the scenario:
Overview: I am gradually increasing the load on the application. In my last test I put a load of 100 users on the app, and today I increased the load to 150 users.
Result of 150 users test:
Response time of the requests decreased compared to the last test (which is a good sign).
Throughput decreased drastically, to half of what I got in the previous test with a lower load.
Received 225 errors while executing the test.
My questions are:
What could be the possible reason for such strange throughput behaviour? Why did throughput decrease instead of increasing with the increasing load?
Did I get a good response time only because many of my requests failed?
NOTE: Up to the 100-user test, throughput was increasing with the increasing user load.
Can anyone please help me with this question? I am a newbie in performance testing. Thanks in advance!
Also, I would appreciate suggestions for good articles/sites on finding performance bottlenecks and learning the crucial aspects of performance testing.
Most probably those 225 failed requests returned a failure almost immediately, which is why the average response time decreased. That is why you should be looking at, for example, the Response Times Over Time chart and paying more attention to percentiles, as the mean response time can mask the real problem.
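To see why the mean can hide what those fast failures do to the picture, here is a toy calculation with invented numbers comparing the mean with a nearest-rank 90th percentile:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <numeric>
    #include <vector>

    // Toy illustration with invented numbers: fast failures drag the mean down,
    // while the percentiles still show how slow the successful requests were.
    int main() {
        // 7 successful requests around 2 s and 3 failures that return almost instantly (ms)
        std::vector<double> responseTimesMs = {1900, 2100, 2000, 2200, 1950, 2050, 2150, 40, 35, 50};

        double mean = std::accumulate(responseTimesMs.begin(), responseTimesMs.end(), 0.0)
                      / responseTimesMs.size();

        std::sort(responseTimesMs.begin(), responseTimesMs.end());
        // Simple nearest-rank 90th percentile
        std::size_t index = static_cast<std::size_t>(std::ceil(0.90 * responseTimesMs.size())) - 1;
        double p90 = responseTimesMs[index];

        std::printf("Mean: %.0f ms, 90th percentile: %.0f ms\n", mean, p90);
        // Prints roughly: Mean: 1448 ms, 90th percentile: 2150 ms
        return 0;
    }

The mean looks almost acceptable only because the failures were fast; the 90th percentile still exposes how slow the successful requests were.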
With regard to bottleneck discovery, make sure to collect as much information from the server side as you can, e.g.:
CPU, RAM, Network, Disk usage from JMeter PerfMon Plugin
Slow queries log from the database
"heaviest" functions and largest objects from the profiling tool for your application

How to Load Test ideally using JMeter tool?

I am completely new to Performance testing and JMeter and hence my question may sound silly to some people.
We have identified some flows of the application, such as Login, SignUp, and Perform Transaction. Basically, we are trying to test our APIs' performance, so we have used the HTTP Request Sampler heavily. Now that I have scripted all these flows in JMeter, how can I get answers to the following?
How can we decide the benchmark for this system? There is no one in the organisation who can help with numbers right now, and we have to identify the number of users beyond which our system will crash.
For example, if we say that 1,00,000 (i.e. 100,000) users are expected to visit our website within one hour's time, how can we execute this in JMeter? Should a Forever loop be used with 3600 seconds (60 minutes) of Ramp-Up, or should I go ahead with Number of Threads as 1,00,000, Ramp-Up as 3600, and Loop Count as 1? What is the ideal way to test this?
What has been done so far?
1. We used to run the above-mentioned flows with Loop Count as 1. However, as far as I know, it is completely based on how much Ramp-Up time I give, and JMeter decides accordingly how many threads it requires in parallel to complete the task. The results were not helpful in our case, as there was not much load on the system.
2. Then we changed the approach and tried Loop Count as Forever for some 100 users and ran the test for a duration of 10 minutes. After continuing with such tests for some time, we got a higher Standard Deviation in JMeter's Summary Report, which was fixed by tuning our DB and applying some indexes. We continued this way, but I am still confused whether this can really simulate a realistic scenario.
Thanks in advance!
Please refer to my answer and comments on the similar question below:
performance-testing-in-production-environment-using-jmeter
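Regarding the 1,00,000-users-in-one-hour figure from the question above: one way to sanity-check any Thread Group configuration is to work out the arrival rate and steady-state concurrency it implies. A back-of-the-envelope sketch, assuming arrivals are spread evenly over the hour and that one pass through the flow takes about 60 seconds (both of these are assumptions, not measurements):

    #include <cstdio>

    // Back-of-the-envelope arithmetic for "100,000 users arriving within one hour".
    // Assumes a uniform arrival pattern and a 60-second flow duration, both of which
    // are simplifying assumptions rather than measured values.
    int main() {
        double users = 100000;
        double windowSeconds = 3600;                       // one hour

        double arrivalsPerSecond = users / windowSeconds;  // ~27.8 new users per second

        // If one iteration of the scripted flow takes about 60 s end to end, then
        // Little's law (N = arrival rate x time in system) gives the number of users
        // active at any moment once the test reaches a steady state.
        double flowDurationSeconds = 60;
        double concurrentUsers = arrivalsPerSecond * flowDurationSeconds;

        std::printf("Arrival rate:             %.1f users/sec\n", arrivalsPerSecond);
        std::printf("Approx. concurrent users: %.0f\n", concurrentUsers);
        return 0;
    }

On those assumptions you would need on the order of 1,700 active threads rather than 100,000, plus something like JMeter's Constant Throughput Timer (or the Throughput Shaping Timer plugin) to hold the request rate at the target while the test runs.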

How to collect traffic data and macroscopic statistics in Veins?

Hello StackEx community.
I am implementing certain scenarios in Veins 3.0 and I wish to collect certain traffic statistics, such as the Average Waiting Time and the Average Energy Consumption, from my simulation.
Please help me with how to generate and interpret this information.
Thanks
TraCIMobility already records some statistics that you can directly use or build on. See, for example, totalCO2Emission. Other statistics you might have to implement yourself, e.g., after detecting that a car was stopped for a certain time. See the OMNeT++ user manual pages on result recording and analysis for general information on how to do that, and the TicToc tutorial for a concrete example.
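If you end up implementing a statistic yourself, the usual OMNeT++ pattern is to register a signal, emit a value whenever the event of interest occurs, and let a @statistic declaration in the NED file do the recording. A rough sketch (the module and signal names here are made up for illustration and are not part of the Veins API; it uses the OMNeT++ 5 namespace conventions, so drop the using line for OMNeT++ 4.x):

    #include <omnetpp.h>
    using namespace omnetpp;

    // Rough sketch, not part of Veins: emit a value every time we decide that a
    // vehicle has finished waiting, and let a NED @statistic record it, e.g.
    //   @signal[waitingTime](type=double);
    //   @statistic[waitingTime](title="waiting time"; record=mean,max,vector);
    class WaitingTimeObserver : public cSimpleModule {
      protected:
        simsignal_t waitingTimeSignal;

        virtual void initialize() override {
            waitingTimeSignal = registerSignal("waitingTime");
        }

        virtual void handleMessage(cMessage *msg) override {
            delete msg;  // this sketch does not process any messages itself
        }

        // Call this from wherever you detect that a car stopped and then resumed.
        void recordWaitingTime(double stoppedForSeconds) {
            emit(waitingTimeSignal, stoppedForSeconds);
        }
    };

    Define_Module(WaitingTimeObserver);

The recorded mean of such a signal then gives you the Average Waiting Time directly in the result files, without any manual bookkeeping in the module.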

How can I know my site has very good performance using JMeter?

Please give me instructions about JMeter: how can I test the performance of my site, check that its response is good, and confirm that it can bear 500 to 1000 users at the same time? Also, please give me scenarios that can be performed to test the performance of my site.
I have tested my site using JMeter, but I cannot understand what the results mean. Kindly tell me some reference results (response time, throughput, mean time, etc.) for sites with good performance, so that if I get similar results I will be satisfied that I am doing well.
What should the average response time, throughput, deviation, median, mean, etc. normally be for a website?
Thanks
While load testing, you have to take the help of some tools that perform resource monitoring.
For example, in Java there is jvisualvm; the path of this tool is Program Files\Java\jdk1.6.0_38\bin\jvisualvm.exe.
You may use it to determine your CPU utilization and memory consumption.
Hope it helps you.

In performance testing, is this a valid test? I need an expert opinion.

I am running a 1000-user test, and some of the flows have 25 users with an expected throughput of 0.000011574 per second.
The client is suggesting that I run it with about an 1800-second think time.
Using Little's law, I am getting a think time value of 2,160,000 seconds.
I am suggesting that we just use 1 user and give a 600-second think time, even though the calculation gives me an 86,400-second think time, since the flow has to be tested while under load.
What would be the correct approach? Go with the client's suggestion or go with my assumption?
Let me know your valuable thoughts.
0.000011574 of what per second?
This reads like a requirement from a server admin and not from "the business."
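For reference, here is the Little's law arithmetic behind the numbers in the question: N = X * (R + Z), where N is the number of users in the flow, X the throughput, R the response time and Z the think time. With such a tiny throughput, R is negligible:

    #include <cstdio>

    // Little's law: N = X * (R + Z)  =>  Z = N / X - R
    // N = users in the flow, X = throughput, R = response time, Z = think time.
    int main() {
        double throughput = 0.000011574;  // per second, as stated in the question
        double responseTime = 0.0;        // negligible next to think times this large

        double thinkTimeFor25Users = 25.0 / throughput - responseTime;  // ~2,160,000 s (~25 days)
        double thinkTimeFor1User   = 1.0 / throughput - responseTime;   // ~86,400 s (one day)

        std::printf("Think time for 25 users: %.0f s\n", thinkTimeFor25Users);
        std::printf("Think time for 1 user:   %.0f s\n", thinkTimeFor1User);
        return 0;
    }

The stated throughput works out to roughly one transaction per user per day, which is why the calculated think times are so much larger than either the 1800-second or the 600-second suggestion.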
