What is the difference between the following JMeter test architectures?
Case 1:
--Test Plan (Run Thread Groups Consecutively not checked)
--Thread Group 1(No of Threads(users) : 1, Ramp up Period(in Seconds) : 1, Loop count: 1)
--Thread Group 2(No of Threads(users) : 1, Ramp up Period(in Seconds) : 1, Loop count: 1)
--Thread Group 3(No of Threads(users) : 1, Ramp up Period(in Seconds) : 1, Loop count: 1)
--Thread Group 4(No of Threads(users) : 1, Ramp up Period(in Seconds) : 1, Loop count: 1)
--Thread Group 5(No of Threads(users) : 1, Ramp up Period(in Seconds) : 1, Loop count: 1)
--Thread Group 6(No of Threads(users) : 1, Ramp up Period(in Seconds) : 1, Loop count: 1)
--Thread Group 7(No of Threads(users) : 1, Ramp up Period(in Seconds) : 1, Loop count: 1)
--Thread Group 8(No of Threads(users) : 1, Ramp up Period(in Seconds) : 1, Loop count: 1)
--Thread Group 9(No of Threads(users) : 1, Ramp up Period(in Seconds) : 1, Loop count: 1)
--Thread Group 10(No of Threads(users) : 1, Ramp up Period(in Seconds) : 1, Loop count: 1)
Case 2:
--Test Plan (Run Thread Groups Consecutively not checked)
--Thread Group 1(No of Threads(users) : 10, Ramp up Period(in Seconds) : 1, Loop count: 1)
Case 3:
--Test Plan (Run Thread Groups Consecutively not checked)
--Thread Group 1(No of Threads(users) : 2, Ramp up Period(in Seconds) : 1, Loop count: 1)
--Thread Group 2(No of Threads(users) : 2, Ramp up Period(in Seconds) : 1, Loop count: 1)
--Thread Group 3(No of Threads(users) : 2, Ramp up Period(in Seconds) : 1, Loop count: 1)
--Thread Group 4(No of Threads(users) : 2, Ramp up Period(in Seconds) : 1, Loop count: 1)
--Thread Group 5(No of Threads(users) : 2, Ramp up Period(in Seconds) : 1, Loop count: 1)
In my scenario, I want to load test my solution where different test cases are executed by different users. The users have very little chance of performing the same operation at the same time, so I use a different thread group for each test case. Increasing the Number of Threads (users) within a single thread group therefore does not seem logical to me. So I left Run Thread Groups Consecutively unchecked in my Test Plan, but I am not sure whether this really gives me a concurrency test or not.
If you check Run Thread Groups Consecutively, the thread groups will fire up consecutively. That means JMeter will start Thread Group 1 first, then Thread Group 2, and so on. This option instructs JMeter to run the thread groups serially rather than in parallel.
So if you leave Run Thread Groups Consecutively unchecked in your Test Plan, the thread groups run in parallel, which does give you concurrency in your test.
In case 1 you are guaranteed to have a concurrency of 10 threads right from the start of execution, since each of the 10 thread groups starts its single thread immediately.
In case 2 the concurrency of 10 threads is reached after about 1 second, with threads starting at a rate of 1 per 100 ms (provided no threads finish their execution within that time).
Case 3 mixes both options: 5 threads start immediately and the remaining 5 during the 1-second ramp-up, so the concurrency of 10 threads is reached within 1 second.
Note that when I say "concurrency is reached" I mean on the JMeter side. Depending on the setup of the script, concurrency on the server side may be reached later.
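As a rough illustration of when the threads come up in each case, here is a sketch only, assuming JMeter's documented ramp-up behaviour of spacing thread starts evenly (a delay of ramp-up / number-of-threads between the threads of a group):

def start_times(threads_per_group, groups, ramp_up_s):
    # Approximate start offsets (in seconds) of all threads in the test plan.
    delay = ramp_up_s / threads_per_group
    return sorted(i * delay for _ in range(groups) for i in range(threads_per_group))

print(start_times(1, 10, 1.0))   # case 1: all 10 threads start at 0.0 s
print(start_times(10, 1, 1.0))   # case 2: 0.0, 0.1, ..., 0.9 s (1 thread per 100 ms)
print(start_times(2, 5, 1.0))    # case 3: 5 threads at 0.0 s, 5 more at 0.5 s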
What is also important is that you have "Loop count: 1". If all users run that single loop iteration for a long time (minutes or hours), or there is a loop controller inside the thread group, the difference between the 3 options is insignificant for the statistics. But if that iteration is short (seconds to a few minutes), or each operation within each thread is unique, then with options 2 and 3 (especially 2) your statistics will be skewed: many operations of the threads that started earlier will already have completed before the concurrency of 10 threads is reached, and vice versa.
JMeter acts as follows:
For each Thread Group, it starts threads during the ramp-up period
Threads start executing samplers from top to bottom, or according to Logic Controllers
When a thread has no more samplers to execute and no more loops to iterate, it is shut down
So you can run into a situation where the 1st thread has already finished all its work and been shut down while the 10th thread hasn't even started yet.
If you want guaranteed concurrency, provide enough loops at the Thread Group level. You can also use e.g. the Ultimate Thread Group, which has extra features for flexibly defining the user arrival rate and visualising the anticipated concurrency.
By the way, you can have different virtual users executing different scenarios even within the same Thread Group by using a Throughput Controller. See the Running JMeter Samplers with Defined Percentage Probability article for a detailed explanation of the possible approaches.
I have this code:
n := time.Now()
i := 0
for range time.Tick(5 * time.Second) {
    fmt.Printf("start %d %s \n ", i, time.Since(n))
    time.Sleep(10)
    fmt.Printf("end %d %s \n", i, time.Since(n))
    i++
}
output:
start 0 5.001125s
end 0 5.001386916s
start 1 10.001112041s
end 1 10.001232416s
start 2 15.001064s
end 2 15.001094958s
The ticker fires every 5 seconds and the work in the loop is supposed to take 10 seconds, but I found that the sleep does not take effect: "end" is printed right after "start" every 5 seconds, without waiting 10 seconds.
Questions:
What is the reason for this?
I need to wait until the work in each loop iteration is complete before the 5-second interval starts counting again. What do I do?
What is the reason for this?
time.Sleep takes a parameter of type time.Duration, which under the covers is an int64 representing nanoseconds.
As a result, this sleeps for 10 nanoseconds:
time.Sleep(10)
To sleep for 10 seconds, multiply by one of the predefined duration constants:
time.Sleep(10 * time.Second)
I need to wait until the work in each loop iteration is complete before the 5-second interval starts counting again
If you want to wait at least 5 s between loop iterations, then don't use a Ticker. A ticker fires on a regular interval regardless of how long the work in the loop takes. Some examples:
if a loop task takes 2 s, the next iteration will begin 3 s later (5 minus 2);
if the task takes 8 s (missing a tick), there will be zero wait between iterations;
if the task takes 12 s (missing 2 ticks), again there will be zero wait.
So if you want to ensure a consistent pause between iterations, put a sleep at the end of the loop instead.
https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/hostmetricsreceiver/internal/scraper/processscraper/documentation.md
I have been using this library, which gives me 3 values for a single process:
user time, system time & wait time
One example set of values is: 0.05, 0.01, 0.00
How can I calculate the CPU percentage of a particular process?
To calculate the CPU load/utilization percent of the process during a period, you need to compute ( "system CPU time used during the period" + "user CPU time used during the period" ) / "period length", and multiply by 100 to express it as a percentage.
In your case, suppose you take a sample every 2 seconds; then for every sample you calculate:
= ( (process.cpu.time.sys - previous_process.cpu.time.sys) + (process.cpu.time.user - previous_process.cpu.time.user) ) / 2
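A minimal sketch of that calculation, assuming you already have two consecutive samples of the cumulative process.cpu.time counters (the function, variable names and sample values here are made up for illustration):

def process_cpu_percent(prev_sys, prev_user, cur_sys, cur_user, period_seconds):
    # prev_*/cur_* are cumulative CPU-time counters in seconds, taken period_seconds apart.
    busy = (cur_sys - prev_sys) + (cur_user - prev_user)
    return busy / period_seconds * 100.0

# Hypothetical pair of samples taken 2 s apart:
print(process_cpu_percent(prev_sys=0.01, prev_user=0.05,
                          cur_sys=0.02, cur_user=0.09,
                          period_seconds=2.0))  # -> 2.5 (%)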
The problem scenario:
The number of tasks (n) is greater than the number of workers (m).
I need to assign multiple tasks to a single worker.
Here is the cost matrix
I have 6 tasks and 3 workers available.
C(i,j) = 1 for a cell indicates that the worker can be assigned to the task.
C(i,j) = 1000 for a cell indicates that the worker cannot be assigned to the task.
The cost matrix
TASK/WORKER WORKER1 WORKER2 WORKER3
TASK 1 1 1000 1000
TASK 2 1000 1 1000
TASK 3 1000 1000 1000
TASK 4 1 1000 1000
TASK 5 1000 1 1000
TASK 6 1000 1000 1
Here, worker 1 can do tasks TASK 1 and TASK 4,
worker 2 can do tasks TASK 2 and TASK 5,
worker 3 can do task TASK 6.
To create a square matrix, I added dummy workers (DWORKER1, DWORKER2 and DWORKER3) as follows and assigned a very large value (1000000) to those cells.
TASK/WORKER WORKER1 WORKER2 WORKER3 DWORKER1 DWORKER2 DWORKER3
TASK 1 1 1000 1000 1000000 100000 1000000
TASK 2 1000 1 1000 1000000 100000 1000000
TASK 3 1000 1000 1000 1000000 100000 1000000
TASK 4 1 1000 1000 1000000 100000 1000000
TASK 5 1000 1 1000 1000000 100000 1000000
TASK 6 1000 1000 1 1000000 100000 1000000
I used the SciPy function scipy.optimize.linear_sum_assignment, as follows:
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[1,    1000, 1000, 1000000, 100000,  1000000],
                 [1000, 1,    1000, 1000000, 1000000, 1000000],
                 [1000, 1000, 1000, 1000000, 100000,  1000000],
                 [1,    1000, 1000, 1000000, 1000000, 1000000],
                 [1000, 1,    1000, 1000000, 100000,  1000000],
                 [1000, 1000, 1,    1000000, 1000000, 1000000]])
row_ind, col_ind = linear_sum_assignment(cost)
The output for col_ind is array([5, 3, 4, 0, 1, 2])
The output indicates (if I am not wrong) that col_ind[i] is the column assigned to task i, i.e.:
- Task 1 is assigned to dummy worker 3
- Task 2 is assigned to dummy worker 1
- Task 3 is assigned to dummy worker 2
- Task 4 is assigned to worker 1
- Task 5 is assigned to worker 2
- Task 6 is assigned to worker 3
What I am expecting is that tasks 1, 2 and 3 are assigned to the real workers, not to the dummy workers.
Is that possible with this implementation, or am I missing something here?
The Hungarian algorithm solves the assignment problem, in which exactly one task is assigned to each worker. With the trick you propose, one task will indeed be assigned to each dummy worker as well.
If you only want to assign tasks to real workers, and a worker may take multiple tasks, the problem is much easier: for each task, simply select the worker with the smallest cost. In your example this means worker 1 will do tasks 1 and 4, worker 2 will do tasks 2 and 5, worker 3 will do task 6, and task 3 will be done by any one of the three workers (depending on how you handle the tie).
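A minimal sketch of that per-task rule, using only the real-worker columns of the cost matrix from the question (numpy's argmin resolves the TASK 3 tie by picking the first minimum, i.e. worker 1):

import numpy as np

# Real workers only (columns: WORKER1, WORKER2, WORKER3)
cost = np.array([[1,    1000, 1000],
                 [1000, 1,    1000],
                 [1000, 1000, 1000],
                 [1,    1000, 1000],
                 [1000, 1,    1000],
                 [1000, 1000, 1]])

best_worker = cost.argmin(axis=1)  # cheapest worker for each task
for task, worker in enumerate(best_worker, start=1):
    print("TASK %d -> WORKER %d" % (task, worker + 1))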
I can't figure out how to solve this task using the PLC ladder language. The PLC program must compute the actual flow of water. There is a moving gate (time window) whose length is set to 5 minutes, and within this gate the impulses are counted (each impulse has a weight, for example 1 m^3). The overall time (the span over which the gate can move) is set to 1 hour.
Gate time: for example 5 minutes
Overall time: 1 hour
The impulses are triggered by me manually.
Example: if I trigger input I1 three times (one impulse has a weight of 1 m^3) within a gate of 5 minutes (5 minutes is 1/12 of an hour), then 3 * 1 m^3 divided by (1/12) h = 36 m^3/h, which gives us the actual water flow. I can only use a TON timer and I have 2 binary inputs.
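To make the arithmetic of one gate period explicit (this is only a sketch of the calculation, not a ladder/TON implementation; the function name is made up):

def actual_flow_m3_per_h(pulse_count, pulse_weight_m3=1.0, gate_minutes=5.0):
    # Volume measured during one gate period.
    volume_m3 = pulse_count * pulse_weight_m3
    # Gate length expressed in hours (5 min = 1/12 h).
    gate_hours = gate_minutes / 60.0
    # Flow rate in m^3 per hour.
    return volume_m3 / gate_hours

print(actual_flow_m3_per_h(3))  # -> 36.0, matching the example above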
Do you have any idea how to start this?
Exercise
It's a logger example I tried to base my solution on, but now I don't know what to do next.
#include "MT-101.h"
IF FS1_fs MOVE 0, REG2
IF FS1_fs MOVE RTC_Sec, REG2
NE RTC_Sec, REG2, Q1
IF NOT Q1 EXT
MOVE RTC_Sec, REG2
IF NOT I1 EXT
TCPY XREG1, 511, XREG2
MOVE AN1, XREG1
I use the percolator (Elasticsearch 2.3.3) and I have ~100 term queries. When I percolate 1 document from 1 thread, it takes ~500 ms:
{u'total': 0, u'took': 452, u'_shards': {u'successful': 12, u'failed': 0, u'total': 12}} TIME 0.467885982513
There are 4 CPUs, so I want to percolate from 4 processes. But when I launch them, each request takes ~2000 ms:
{u'total': 0, u'took': 1837, u'_shards': {u'successful': 12, u'failed': 0, u'total': 12}} TIME 1.890885982513
Why?
I use the Python module elasticsearch 2.3.0.
I have tried varying the number of shards (from 1 to 12), but the result is the same.
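For reference, a minimal sketch of the kind of call each worker process makes (the index name, mapping type and documents are made up here; es.percolate is the percolate method exposed by the elasticsearch-py 2.x client):

from multiprocessing import Pool

from elasticsearch import Elasticsearch


def percolate_once(doc):
    # Each process opens its own connection and percolates a single document.
    es = Elasticsearch(["localhost:9200"])
    return es.percolate(index="my-index",      # hypothetical index holding the ~100 queries
                        doc_type="my-type",    # hypothetical mapping type
                        body={"doc": doc})


if __name__ == "__main__":
    docs = [{"message": "sample text %d" % i} for i in range(4)]
    pool = Pool(processes=4)                   # one worker process per CPU core
    for result in pool.map(percolate_once, docs):
        print(result["took"], result["_shards"])
    pool.close()
    pool.join()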
When I try to percolate from 20 threads, Elasticsearch fails with this error:
RemoteTransportException[[test_node01][192.168.69.142:9300][indices:data/read/percolate[s]]];
nested: EsRejectedExecutionException[rejected execution of org.elasticsearch.transport.TransportService$4#7906da8a on EsThreadPoolExecutor[percolate, queue capacity = 1000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor#31a1c278[Running, pool size = 16, active threads = 16, queued tasks = 1000, completed tasks = 156823]]];
Caused by: EsRejectedExecutionException[rejected execution of org.elasticsearch.transport.TransportService$4#7906da8a on EsThreadPoolExecutor[percolate, queue capacity = 1000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor#31a1c278[Running, pool size = 16, active threads = 16, queued tasks = 1000, completed tasks = 156823]]]
The server has 16 CPUs and 32 GB of RAM.