I am using a JMeter distributed environment and distributing load across multiple slave machines.
I am running the jmeter -g <csv file> -o <output folder> command to generate an HTML report from the output CSV file.
In the report, the Time Vs Threads graph shows only one slave machine's thread count on the X axis instead of the combined thread count.
For example, if slave 1 and slave 2 run 10 threads each, the generated graph shows 10 active threads on the X axis, but it should be 20.
Try "KPI vs KPI Graphs" plugin. You can install it using plugins manager.
I want to run JMeter distributed testing and have JMeter write INFO logs to a log file, but in distributed mode it only gives us logs related to the connection; it does not really give the execution log.
How can I get the actual logs?
Thanks in advance.
The execution log is written on the slave side. If you run the slave via jmeter-server.bat or jmeter-server, you should see a jmeter-server.log file in the folder from which you launched the slave instance.
If you don't see the log file, you can specify its name and location via the -j command-line argument, like:
jmeter -s -j jmeter-server.log ......
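If the default verbosity is not enough, you can also override the log level with the -L option; a sketch, using DEBUG as an arbitrary example level:
jmeter -s -j jmeter-server.log -LDEBUG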
More information:
Remote Testing
How to Perform Distributed Testing in JMeter
JMeter Distributed Testing Step-by-step
I am running JMeter in distributed mode. What JMeter does is distribute the number of threads (users) equally between the slaves. What I want is to distribute them unevenly, e.g. total users - 10, slave 1 - 8, slave 2 - 2.
JMeter slaves are totally independent, therefore if you have 10 threads in the Thread Group, the 1st slave will execute 10 threads and the 2nd slave will execute 10 threads, so you will have 20 threads in total.
If you want to distribute the load between slaves in an uneven way, you can do it as follows:
Define the number of threads in the Thread Group using the __P() function, like:
${__P(threads,)}
On each remote slave, set this threads property in the user.properties file (located in JMeter's "bin" folder), like:
on slave 1:
threads=8
on slave 2:
threads=2
Alternatively, you can pass the property value via the -J command-line argument, like:
on slave 1:
jmeter -Jthreads=8 -s .....
on slave 2:
jmeter -Jthreads=2 -s .....
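As a side note, __P() also accepts a default value as its second argument, so the plan can still run when the property is not set at all (the default of 1 here is just an illustration):
${__P(threads,1)}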
See Apache JMeter Properties Customization Guide for more information on setting and overriding JMeter properties.
I want to run multiple worker daemons on a single machine. As per damienfrancois's answer on "what is the minimum number of computers for a slurm cluster", it can be done. The problem is that currently I am able to run only one worker daemon on a machine. For example, when I run
sudo slurmd -N linux1 -cDvv
sudo slurmd -N linux2 -cDvv
linux1 goes down when I run linux2. Is it possible to run multiple worker daemons on one machine?
Here is my slurm.conf file
As your intention seems to be just testing the behavior of Slurm, I would recommend using front-end mode, which lets you create dummy compute nodes on the same machine.
The Slurm FAQ has more details, but basically you must configure your installation to work in this mode:
./configure --enable-front-end
and configure the nodes in slurm.conf:
NodeName=test[1-100] NodeHostName=localhost
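For a self-contained test setup, the relevant slurm.conf lines might look like the following sketch (the CPU count, node range, and partition name are illustrative):
NodeName=test[1-100] NodeHostName=localhost CPUs=1 State=UNKNOWN
PartitionName=debug Nodes=test[1-100] Default=YES MaxTime=INFINITE State=UP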
That guide also explains how to launch more than one real daemon on the same node by changing the ports, but for my testing purposes it was not necessary.
Good luck!
I had the same issue as you; I resolved it by modifying the file paths as mentioned under "multiple slurmd support".
In your slurm.conf, for example:
SlurmdLogFile=/var/log/slurm/slurmd.log
SlurmdPidFile=/var/run/slurmd.pid
SlurmdSpoolDir=/var/spool/slurmd
must be
SlurmdLogFile=/var/log/slurm/slurmd.%n.log
SlurmdPidFile=/var/run/slurmd.%n.pid
SlurmdSpoolDir=/var/spool/slurmd.%n
Now you can launch multiple slurmd daemons.
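With -N linux1 and -N linux2, %n expands to the node name, so (assuming the paths above) you get /var/log/slurm/slurmd.linux1.log and /var/log/slurm/slurmd.linux2.log, and likewise for the PID file and spool directory.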
Note: I tried with your slurm.conf, and I think some parameters are missing, such as defining two NodeName entries instead of one and specifying which Port each node should use.
This works for me:
# COMPUTE NODES
NodeName=linux[1-10] NodeHostname=linux0 Port=17004 CPUs=1 State=UNKNOWN
NodeName=linux[11-19] NodeHostname=linux0 Port=17005 CPUs=1 State=UNKNOWN
# PARTITIONS
PartitionName=main Nodes=linux1 Default=YES MaxTime=INFINITE State=UP
PartitionName=dev Nodes=linux11 Default=YES MaxTime=INFINITE State=UP
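With a configuration like that, you can start one slurmd per node name (in separate terminals, since -D keeps them in the foreground) and submit a quick test to each partition; a sketch reusing the node and partition names above:
sudo slurmd -N linux1 -cDvv
sudo slurmd -N linux11 -cDvv
srun -p main hostname
srun -p dev hostname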
How can I control the number of threads running on each JMeter slave machine?
I.e. if I have 300 threads in total and 2 slave machines, I want the load to be distributed evenly across both slave machines: 150 threads on slave machine A and 150 threads on slave machine B.
I have also tried running in non-GUI mode with the command below:
Jmeter -n -t TESTING.jmx -R 10.27.30.93 -J 6
to make it run 6 threads on a specific slave server, but it's not working.
It starts the same number of threads that is saved in the test plan.
Set "Number of Threads" for the Thread Group(s) using the __P() function, like:
${__P(threads,)}
Amend your JMeter startup script invocation as follows:
jmeter -n -t TESTING.jmx -R 10.27.30.93 -Gthreads=6
As per JMeter command-line help:
-G, --globalproperty <argument>=<value>
Define Global properties (sent to servers)
e.g. -Gport=123
or -Gglobal.properties
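Applied to the scenario in the question (300 threads split evenly across two engines), the invocation could look like the following; the second slave IP here is hypothetical:
jmeter -n -t TESTING.jmx -R 10.27.30.93,10.27.30.94 -Gthreads=150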
Another option is to configure the desired number of threads for each remote engine in the user.properties file (which lives under the /bin folder of the JMeter installation).
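For example, putting threads=150 into user.properties on each engine reproduces the even split from the question, while different per-engine values give an uneven split.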
See Apache JMeter Properties Customization Guide for more information on setting and/or overriding JMeter Properties.
This question already has answers here:
How to use start-all.sh to start standalone Worker that uses different SPARK_HOME (than Master)?
I'm setting up a [somewhat ad-hoc] cluster of Spark workers: namely, a couple of lab machines that I have sitting around. However, I've run into a problem when I attempt to start the cluster with start-all.sh: Spark is installed in different directories on the various workers, but the master invokes $SPARK_HOME/sbin/start-all.sh on each one using the master's definition of $SPARK_HOME, even though the path is different for each worker.
Assuming I can't install Spark at the same path on each worker as on the master, how can I get the master to recognize the different worker paths?
EDIT #1: Hmm, I found this thread on the Spark mailing list, which strongly suggests that this is the current implementation; it assumes $SPARK_HOME is the same for all workers.
I'm playing around with Spark on Windows (my laptop) and have two worker nodes running by starting them manually with a script that contains the following:
set SPARK_HOME=C:\dev\programs\spark-1.2.0-worker1
set SPARK_MASTER_IP=master.brad.com
spark-class org.apache.spark.deploy.worker.Worker spark://master.brad.com:7077
I then create a copy of this script with a different SPARK_HOME defined, and run my second worker from it. When I kick off a spark-submit, I see this on Worker_1:
15/02/13 16:42:10 INFO ExecutorRunner: Launch command: ...C:\dev\programs\spark-1.2.0-worker1\bin...
and this on Worker_2:
15/02/13 16:42:10 INFO ExecutorRunner: Launch command: ...C:\dev\programs\spark-1.2.0-worker2\bin...
So it works. In my case I duplicated the Spark installation directory, but you may be able to get around this.
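If you cannot duplicate the installation directory, you may be able to launch each worker directly from its own installation using the same spark-class invocation; for example, on a Linux worker (a sketch: the path is illustrative and the master URL reuses the one from the script above):
export SPARK_HOME=/home/<user>/spark
$SPARK_HOME/bin/spark-class org.apache.spark.deploy.worker.Worker spark://master.brad.com:7077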
You might want to consider setting the worker directory by changing the SPARK_WORKER_DIR line in the spark-env.sh file.
A similar question was asked here
The solution I used was to create a symbolic link on each worker node mimicking the master node's installation path, so that when start-all.sh running on the master node SSHes into a worker node, it sees identical paths for running the worker scripts.
For example, in my case I had 2 Macs and 1 Linux machine. Both Macs had Spark installed under /Users/<user>/spark, whereas the Linux machine had it under /home/<user>/spark. One of the Macs was the master node, so running start-all.sh errored each time on the Linux machine due to the pathing (error: /Users/<user>/spark does not exist).
The simple solution was to mimic the Mac's pathing on the Linux machine using a symbolic link:
Open a terminal, then run:
cd /                   # go to the root of the drive
sudo ln -s home Users  # create a symlink "Users" pointing to the actual "home" directory
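You can then verify on the Linux machine that the master's path resolves through the link (assuming the layout above):
ls /Users/<user>/spark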