JMeter: Reduce CPU and Internet Bandwidth, not working

I want to test my website's main pages (homepage, payment page, registration page) under lower bandwidth. For that, I followed this tutorial https://qautomation.blog/2019/05/29/how-to-simulate-network-bandwidth-in-jmeter/ and the command is
jmeter -httpclient.socket.http.cps=<cps value> -n -t <path of .jmx>
Here I tried CPS values of 64, 128, and 256, but the minimum, maximum, and throughput values in the aggregate report are always the same.
Am I doing something wrong?
Also, how can I increase or decrease CPU allocation in the same command?

You're missing the -J key, which is used to pass a JMeter property as a command-line override.
Depending on the protocol you're using, you might need to add another property for HTTPS.
Assuming all of the above:
jmeter -Jhttpclient.socket.http.cps=xxx -Jhttpclient.socket.https.cps=xxx -n -t <path of .jmx>
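As an alternative to passing -J flags on every run, the same properties can be persisted in user.properties in JMeter's bin directory. A sketch, with an example value (65536 cps ≈ 512 kbit/s, since cps is bits per second divided by 8):

```shell
# Sketch: persist the bandwidth throttle in JMeter's bin/user.properties
# instead of passing -J flags on every run. 65536 is an example value:
# characters per second = target bandwidth in bits/s / 8, so 512 kbit/s -> 65536.
cat >> user.properties <<'EOF'
httpclient.socket.http.cps=65536
httpclient.socket.https.cps=65536
EOF
# then simply: jmeter -n -t <path of .jmx>
```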
See How to Simulate Different Network Speeds in Your JMeter Load Test for more detailed steps.
With regards to "decrease CPU allocation": follow the "Reducing resource requirements" chapter of the JMeter Best Practices. If you still lack CPU, consider going for Distributed Testing. If you cannot get another machine and still need to reduce CPU usage, look into changing CPU affinity for your OS, but be aware that in that case the throughput will decrease as well.

Related

Hardware configuration to run 200 users for Jmeter

Hi guys, I want to run a load test with almost 200 users. I need to know the exact hardware configuration I would require: RAM, disk space, and so on.
Can someone help me with this? Thanks in advance.
(P.S.: please don't tell me "it depends on your users" and so on; I'm new to JMeter and don't know anything yet.)
Actually it depends on what your test is doing: the number of Samplers, PreProcessors, PostProcessors, and Assertions, the request and response sizes, etc.
Minimal configuration would be:
At least 512 KB per thread plus around 100 MB for JMeter itself to operate, which gives ~200 MB of RAM for 200 threads
At least ~50 MB of disk space for JMeter, plus ~400 MB for the Java SDK/JRE, plus whatever extra space you need for test data and test results
However, only you can get the exact answer, by running your actual test and measuring JMeter's footprint using e.g. JVisualVM or the JMeter PerfMon Plugin.
Also make sure you're following JMeter Best Practices
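The back-of-the-envelope RAM arithmetic above can be sketched for 200 threads (the figures are the rough estimates from this answer, not measurements):

```shell
# Rough RAM estimate for a 200-thread JMeter test plan.
# The per-thread and base figures are estimates, not measurements.
THREADS=200
PER_THREAD_KB=512      # ~512 KB of working memory per thread
JMETER_BASE_MB=100     # ~100 MB for JMeter itself to operate
echo $(( THREADS * PER_THREAD_KB / 1024 + JMETER_BASE_MB ))   # prints 200 (MB)
```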
@Mehvish It is impossible to state the correct hardware requirements exactly, but a normal machine of this generation might support it. The only way to validate is to actually run the test.
Please refer to this article, as it has some good info which might help:
https://sqa.stackexchange.com/questions/15178/what-is-recommended-hardware-infrastructure-for-running-heavy-jmeter-load-tests

JMeter parameterized tests with large CSV files

I'm using JMeter to run performance tests, but my sample data set is huge.
We want to simulate production-like traffic, and in order to do that, we need to have a large variety of requests replayed from production logs.
In turn, this causes us to have a huge sample dataset. So the questions are:
What's the recommended CSV sample size for large input samples?
Is CSV Data Config enough to use files that contain 300MB - 500MB or more worth of HTTP request payloads?
Can I just increase JVM memory limits?
Is this method good enough? Is there a better alternative?
Thanks!
The size of the CSV has no impact on JMeter's memory usage provided you use the CSV Data Set Config, which reads the file line by line instead of loading it into memory.
Just don't use the CSVRead function, as per the note in the documentation.
By the way, I see you tagged the question as JMeter 3.2; if you are indeed using it, you should upgrade to JMeter 4.0, the latest version at the time of writing.
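If you later distribute the load across several injector machines, one common practice (an aside, not from this answer) is to split the big CSV so each machine replays a distinct slice of the production traffic, e.g. with split(1). A sketch with a small stand-in file; in practice requests.csv would be your multi-hundred-MB log extract:

```shell
# Generate a small stand-in for the large production-log CSV.
printf 'GET /page%d\n' $(seq 1 10) > requests.csv

# Carve it into fixed-size slices, one per injector machine.
# -l: lines per slice, -d: numeric suffixes (requests_part_00, _01, ...).
split -l 5 -d requests.csv requests_part_

wc -l requests_part_*   # two slices of 5 lines each
```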

jmeter - Is there any simple way to handle dynamic viewstates when the viewstate count is more?

My request has 3800 viewstates that come from the previous request's response. It's very hard to capture the values one by one using regular expressions and replace them with variables.
Is there any simple way to handle them?
There is an alternative way of recording a JMeter test using a cloud-based proxy service. It is capable of exporting recordings in SmartJMX format with automatic detection and correlation of dynamic parameters so you won't have to handle them manually - the necessary PostProcessors and variables substitutions will be added to the test plan automatically.
Check out How to Cut Your JMeter Scripting Time by 80% article for more details.
In general I would recommend talking to your application developers, as almost 4,000 dynamic parameters is too many: it creates massive network I/O overhead to pass them back and forth, and immense CPU/RAM usage to parse them on both sides.

Limiting memory of V8 Context

I have a script server that runs arbitrary JavaScript code on our servers. At any given time multiple scripts can be running, and I would like to prevent one misbehaving script from eating up all the RAM on the machine. I could run each script in its own process and have an off-the-shelf monitoring tool watch the RAM usage of each process, killing and restarting the ones that get out of hand. I don't want to do this because I would like to avoid the cost of restarting the binary every time one of these scripts goes crazy. Is there a way in V8 to set a per-context/isolate memory limit that I can use to sandbox the running scripts?
It should be easy to do now:
context.EstimatedSize() to get the estimated size of the context
isolate.TerminateExecution() when the context goes beyond acceptable memory/CPU usage/whatever
To regain control when there is an infinite loop (or something else blocking, like a heavy CPU calculation), I think you could use isolate.RequestInterrupt()
A single process can run multiple isolates; if you keep a 1-isolate-to-1-context ratio, you can easily:
restrict memory usage per isolate
get heap stats
See some examples in this commit:
https://github.com/discourse/mini_racer/commit/f7ec907547e9a6ea888b2587e4edee3766752dd3
In particular you have:
v8::HeapStatistics stats;
isolate->GetHeapStatistics(&stats);  // then inspect stats.used_heap_size() etc.
There are also fancy features like memory allocation callbacks you can use.
This is not reliably possible.
All JavaScript contexts in the process share the same object heap.
WebKit/Chromium tries some tricks to disable contexts after a context OOMs:
http://code.google.com/searchframe#OAMlx_jo-ck/src/third_party/WebKit/Source/WebCore/bindings/v8/V8Proxy.cpp&exact_package=chromium&q=V8Proxy&type=cs&l=361
Sources:
http://code.google.com/p/v8/source/browse/trunk/src/heap.h?r=11125&spec=svn11125#280
http://code.google.com/p/chromium/issues/detail?id=40521
http://code.google.com/p/chromium/issues/detail?id=81227

Performance tuning in Linux for a TCP based server application

I have written an application that communicates over TCP using a proprietary protocol. I have a test client that starts a thousand threads which each make requests, and I noticed I can only get about 100 requests/second, even with fairly simple operations. These requests all come from the same client, so that might be relevant.
I'm trying to understand how I can make things faster. I've read a bit about performance tuning in this area, but I'd like to know what I need to understand to tune network applications like this. Where should I start? Which Linux settings do I need to override, and how do I do it?
Help is greatly appreciated, thanks!
Have you considered using asynchronous methods for the test instead of spawning lots of threads? Each time one thread stops and another starts on the same CPU core (a context switch), there can be very significant overhead. If you want a quick example of networking using asynchronous methods, check out networkComms.net and look at how the NetworkComms.ConnectionListenModeUseSync property is used here. Obviously if you're running on Linux you would have to use Mono to run networkComms.net.
Play around with the sysctls and socket options of the TCP stack: see man 7 tcp. E.g. you can change the send and receive buffers of TCP or switch on TCP_NODELAY. To tune the TCP stack itself you should know how TCP works: slow start, congestion control, the congestion window, etc. But these affect transmit/receive performance and buffering; the bottleneck may just as well be in your own process's handling.
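As a sketch of what such tuning can look like as a sysctl fragment (the numbers are illustrative only, not recommendations; measure before and after any change):

```
# /etc/sysctl.d/99-tcp-tuning.conf -- illustrative values, not recommendations
net.core.rmem_max = 16777216              # cap on receive buffer size (bytes)
net.core.wmem_max = 16777216              # cap on send buffer size (bytes)
net.ipv4.tcp_rmem = 4096 87380 16777216   # min/default/max per-socket receive
net.ipv4.tcp_wmem = 4096 65536 16777216   # min/default/max per-socket send
```

Apply with sysctl --system. TCP_NODELAY, by contrast, is a per-socket option set with setsockopt(), not a sysctl.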
You need to understand the following Linux utility commands:
uptime - Tell how long the system has been running.
uptime gives a one line display of the following information. The current time, how long the system has been running, how many users are currently logged on, and the system load averages for the past 1, 5, and 15 minutes.
top provides an ongoing look at processor activity in real time. It displays a listing of the most CPU-intensive tasks on the system, and can provide an interactive interface for manipulating processes
The mpstat command writes to standard output activities for each available processor, processor 0 being the first one. Global average activities among all processors are also reported. The mpstat command can be used both on SMP and UP machines, but in the latter, only global average activities will be printed. If no activity has been selected, then the default report is the CPU utilization report.
iostat - used for monitoring system input/output device loading by observing the time the devices are active in relation to their average transfer rates.
vmstat reports information about processes, memory, paging, block IO, traps, and CPU activity. The first report produced gives averages since the last reboot.
free - display information about free and used memory on the system
ping - uses the ICMP protocol's mandatory ECHO_REQUEST datagram to elicit an ICMP ECHO_RESPONSE from a host or gateway
dstat - allows you to view all of your system resources instantly; you can e.g. compare disk usage with interrupts from your IDE controller, or compare network bandwidth numbers directly with disk throughput (over the same interval)
Then you need to know how the TCP protocol works, and learn how to identify network latency and where the problem is: in the SYN, SYN-ACK, or ACK of the handshake, in the data transfer, or in the connection release.
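To see where that time actually goes, socket-level counters and packet captures help. A sketch of the relevant commands (port 9000 is a placeholder for the proprietary service's port; tcpdump needs root):

```shell
# Snapshot of TCP socket counters: established, time-wait, orphaned, etc.
ss -s

# Per-connection internals (RTT, cwnd, retransmits) for one destination port.
# -t: TCP, -i: internal TCP info, -n: numeric; :9000 is a placeholder port.
ss -tin dst :9000

# Full packet capture for offline handshake/latency analysis (needs root):
#   tcpdump -i any -w capture.pcap 'tcp port 9000'
```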