Configuration of JMeter settings

Does anyone know what the purpose of these settings is?
httpclient.socket.http.cps
httpclient.socket.https.cps
From what I can see they are meant to simulate slow connections, but does anyone have an example? For instance, to simulate 2 seconds, should I set the value to 2000?
Thanks

cps means characters per second.
For example, if you have a 1 megabyte response and you want it to be downloaded in no less than 2 seconds, you need to throttle JMeter's speed to 0.5 megabytes per second, which is 512 kilobytes, i.e. 524288 bytes (characters) per second.
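A minimal sketch of what that could look like in user.properties, assuming the 524288 figure worked out above (the value is in characters, i.e. bytes, per second):

# throttle HTTP and HTTPS traffic to ~512 KB/s
httpclient.socket.http.cps=524288
httpclient.socket.https.cps=524288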
References:
Controlling Bandwidth in JMeter to simulate different networks
How to Simulate Different Network Speeds in Your JMeter Load Test

Related

JMeter - Throughput Shaping Timer does not keep the requests/sec rate

I am using the Ultimate Thread Group with a fixed count of 1020 threads for the entire test duration of 520 seconds.
I've made a throughput diagram as follows:
The load increases over 10 seconds, so the spikes shouldn't be very steep. Since the max RPS is 405 and the max response time is around 25000 ms, 1020 threads should be enough.
However, when I run the test (jmeter -t spikes-nomiss.jmx -l spikes-nomiss.csv -e -o spikes-nomiss -n) I get the following graph for hits/second.
The threads are stopped for a few seconds and then suddenly 'wake up'. I can't find a reason for it. The final minute has a much higher call frequency. I've set the heap size to 2 GB and resources are available: CPU usage does not exceed 50% during peaks and memory is around 80% (4 GB of RAM on the machine). Seeking any help to fix the freezes.
Make sure to monitor JMeter's JVM using JConsole, as it might be the case that JMeter is not capable of creating spikes due to insufficient resources. The slowdowns can be caused by excessive garbage collection.
It might also be the case that 1020 threads are not enough to reach the desired throughput, as that depends mainly on your application's response time. If your application's response time is higher than 300 milliseconds, you will not be able to get 405 RPS out of 1020 threads. It might be a better idea to consider using the Concurrency Thread Group, which can be connected to the Throughput Shaping Timer via the Schedule Feedback function, as sketched below.
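A minimal sketch of how the two could be wired together, assuming a Throughput Shaping Timer element named tst (the name, the minimum of 1 thread, the maximum of 1020 threads and the 10 spare threads are illustrative values, not taken from the original test plan): in the Concurrency Thread Group, set Target Concurrency to

${__tstFeedback(tst,1,1020,10)}

so the number of active threads is adjusted automatically to whatever is needed to keep up with the RPS schedule defined in the timer.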

Why is the throughput of bandwidth simulation with cps = 0 almost the same as with cps = 12800000 in JMeter?

I have tested bandwidth simulation in JMeter (version 3.1) with non-GUI execution, but got an unexpected result: the throughput with cps = 0 is almost the same as with cps = 12800000.
I just added these 2 parameters in jmeter.properties and user.properties:
httpclient.socket.http.cps=12800000
httpclient.socket.https.cps=12800000
Here is my test plan and the result:
Thread Group
Users = 100
Ramp Up = 1
Loop Count = 100
HTTP Request
Server Name or IP = jmeter.apache.org
Result
CPS = 0
CPS = 12800000
And the weird thing is that the throughput with cps = 12800000 is greater than the throughput with cps = 0. It should be cps = 0 > cps = 12800000.
Please advise.
Thanks,
Rio
According to the How to simulate network bandwidth in JMeter? article:
Fast Ethernet: 100 Mbit/s → 12800000 cps
So you are trying to limit the bandwidth to 100 Mbit/s which is approximately 12.5 megabytes per second.
In both cases you receive ~400 kilobytes in 4 seconds, which is 100 kilobytes per second, leaving roughly 12.4 megabytes per second of headroom, so your throttling setting doesn't have any impact. You need to set the desired simulated bandwidth lower than the ~100 kilobytes per second you actually get (i.e. below roughly 102400 cps) in order to see the throttling effect.
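For example, a sketch of capping the connection at roughly 50 kilobytes per second (an arbitrary value chosen to be well below the ~100 KB/s observed above) in user.properties:

# 50 KB/s = 51200 characters (bytes) per second
httpclient.socket.http.cps=51200
httpclient.socket.https.cps=51200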
In regards to the "Throughput" - according to JMeter Glossary
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
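For example, if a test issued 10000 requests and the time from the start of the first sample to the end of the last sample was 100 seconds, the reported throughput would be 10000 / 100 = 100 requests per second (illustrative numbers, not taken from the results above).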
So please don't be confused: requests per second and bytes per second are different beasts. The latter can be monitored with the Bytes Throughput Over Time listener, but remember, you would need to exceed 12.5 megabytes per second before a 12800000 cps setting has any visible impact.
See How to Simulate Different Network Speeds in Your JMeter Load Test article for comprehensive information and example scenarios.
P.S. Don't load test public websites without the explicit permission of the site owners; at the very least you may get banned for attempting a DoS attack.

How to calculate the throughput in the "throughput" example of INET?

I am considering using INET/OMNeT++ to evaluate a routing algorithm we are working on. Since I am using the tool for the first time, I was executing some examples and reading the source code.
Then I found an example which ships with INET: /inet/examples/wireless/throughput.
The problem is that I don't get the same values.
In the README file one can read:
"Throughput is measured by the "sink" submodule of the AP. It is recorded
into the output scalar file, but can also be inspected during runtime.
The Excel sheet includes throughput measured by the simulation, and compares
it to the theoretical maximum which is roughly 5.12 Mbps (at 11 Mbps bitrate
and 1000-byte packets). The theoretical value and the simulation output
are very close, the difference being less than 1 kbps."
The same value is presented in Timing.xls
However, I obtain a different value when I execute the simulation: 846266 bit/sec
Do I need to perform some additional calculation to obtain the final value of throughput?
Is that a bug?
Is the value no longer valid due to some modification in INET?
The default bitrate for the throughput example is 1 Mbps, so the value you obtained is correct.
To change the bitrate, edit this line in omnetpp.ini in the throughput directory:
**.wlan*.bitrate = 1Mbps
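A sketch of the change, assuming you want to reproduce the 11 Mbps scenario described in the README quoted above:

**.wlan*.bitrate = 11Mbps

With that setting the measured throughput should come out close to the ~5.12 Mbps theoretical maximum for 1000-byte packets mentioned in the README.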

Netty TrafficCounter

I am currently using io.netty.handler.traffic.ChannelTrafficShapingHandler and io.netty.handler.traffic.TrafficCounter to measure performance across a Netty client and server. I consistently see a discrepancy between the Current Write value on the server and the Current Read value on the client. How can I account for this difference, considering the Write/Read KB/s are close to matching all the time?
2014-10-28 16:57:50,099 [Timer-4] INFO PerfLogging 130 - Netty Traffic stats TrafficShaping with Write Limit: 0 Read Limit: 0 and Counter: Monitor ChannelTC431885482 Current Speed Read: 3049 KB/s, Write: 0 KB/s Current Read: 90847 KB Current Write: 0 KB
2014-10-28 16:57:42,230 [ServerStreamingLogging] DEBUG c.f.s.r.l.ServerStreamingLogger:115 - Traffic Statistics WKS226-39843-MTY6NDU6NTAvMDAwMDAw TrafficShaping with Write Limit: 0 Read Limit: 0 and Counter: Monitor ChannelTC385810078 Current Speed Read: 0 KB/s, Write: 3049 KB/s Current Read: 0 KB Current Write: 66837 KB
Is there some sort of compression between client and server?
I can see that my client-side value is approximately 3049 * 30 = 91470 KB, where 30 is the number of seconds over which the cumulative figure is calculated.
Scott is right; there is a fix in the works that also takes this into consideration.
Some explanation:
read is the actual read bandwidth and read byte count (since the system is not the origin of the reads it receives)
for write events, the system is the source of them and manages them, so there are 2 kinds of writes (and will be, in the next fix):
proposed writes, which are not yet sent but which, before the fix, are already counted in the bandwidth (lastWriteThroughput) and in the current write count (currentWrittenBytes)
real writes, when they are effectively pushed to the wire
Currently the issue is that currentWrittenBytes can be higher than the real writes, since they are mostly scheduled in the future; they depend on the write speed of the handler, which is the source of the write events.
After the fix, we will be more precise about what is "proposed/scheduled" and what is really "sent":
proposed writes are counted in lastWriteThroughput and currentWrittenBytes
real write operations are counted in realWriteThroughput and realWrittenBytes when the writes occur on the wire (or at least in the pipeline)
Now there is a second element: if you set the checkInterval to 30s, this implies the following:
the bandwidth (global average, and therefore the traffic-shaping control) is computed over those 30s (read or write)
every 30s the "small" counters (currentXxx) are reset to 0, while the cumulative counters are not: if you compare the cumulative counters, you should see that bytes received and bytes sent are almost the same, while the "small" counters are reset to 0 every 30s
The smaller the checkInterval, the more accurate the bandwidth computation, but don't set it too small, to prevent too-frequent resets and too much thread activity for the bandwidth computations. In general, the default of 1s is quite efficient.
There is also the fact that the 30s tick of the sender is not (and need not be) synchronized with the 30s tick of the receiver. So, according to your numbers: when the receiver (read side) resets its counters on its 30s tick, the writer resets its own counters about 8s later, which at ~3049 KB/s accounts for roughly the 24 010 KB difference you observe. A sketch of reading the cumulative counters instead is shown below.
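A minimal Java sketch of installing the handler with a 1s checkInterval and logging the cumulative counters rather than the current ones; the class and handler names are illustrative, only the Netty calls themselves are from the library:

import io.netty.channel.socket.SocketChannel;
import io.netty.handler.traffic.ChannelTrafficShapingHandler;
import io.netty.handler.traffic.TrafficCounter;

public final class TrafficStats {

    // Install a shaping handler with no limits (0/0) and a 1s checkInterval,
    // so it only measures the traffic without throttling it.
    public static ChannelTrafficShapingHandler install(SocketChannel ch) {
        ChannelTrafficShapingHandler traffic = new ChannelTrafficShapingHandler(0, 0, 1000);
        ch.pipeline().addLast("traffic", traffic);
        return traffic;
    }

    // Log cumulative counters: unlike the "current" ones they are never reset
    // on the checkInterval tick, so client and server snapshots taken at
    // different moments can still be compared.
    public static void log(ChannelTrafficShapingHandler traffic) {
        TrafficCounter c = traffic.trafficCounter();
        System.out.printf("cumulative read=%d B, written=%d B, last read=%d B/s, last write=%d B/s%n",
                c.cumulativeReadBytes(), c.cumulativeWrittenBytes(),
                c.lastReadThroughput(), c.lastWriteThroughput());
    }

    private TrafficStats() {
    }
}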

How fast is data transfer between instances in Windows Azure?

Suppose I create a Windows Azure application that consists of multiple instances talking to each other by starting a server on each instance and exchanging big chunks of data.
What data transfer speed should I expect from the underlying infrastructure?
It depends a bit on what size your instances are:
XS instance: 5 Mbps max
S: 100 Mbps sustained, ~250 Mbps bursts
M: 200 Mbps sustained, ~400-500 Mbps bursts
L: 400 Mbps sustained, up to 800 Mbps bursts
XL: 800 Mbps - you get the whole NIC
Those are the limits. There are other factors as well of course:
Are you communicating within a datacenter (sub-region)? Assuming yes here.
Are you using affinity groups? That would put you in the same stamp and you could minimize switch traffic - not a huge deal typically, as the NIC is the slowest part, but it would help latency a tiny bit. If this is all within a single role, you are definitely in the same affinity group and the same deployment.
Are you writing to disk to buffer communication? Disk I/O speeds differ between instance sizes as well. If you are buffering large files or something to disk, you will see overall I/O drop as the disk tries to keep up. XL instances have the best I/O performance.
There are likely other factors as well, but these are what I can think of off the top of my head.
