Merge two *.jtl files of a test running on different machines - performance

How can I merge reports of the same script running on different machines using JMeter?
I am avoiding remote testing, but I'm having trouble getting the combined results in one place while the script runs on all the machines.

Use a decent merge program like Beyond Compare.
Write a merge script (a sketch follows below).
Use remote testing, as recommended.
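
If you go the merge-script route: CSV-format .jtl files can simply be concatenated, keeping one header row and re-sorting by timestamp. Below is a minimal sketch in Python, assuming every machine saved its results as CSV with a header line and the same column configuration; the file names are placeholders.

#!/usr/bin/env python3
# merge_jtl.py - concatenate CSV-format .jtl result files into one report.
# Assumes every file was produced with the same saveservice settings,
# i.e. identical column layout and a header row on the first line.
import csv
import glob
import sys

def merge_jtl(pattern, out_path):
    header = None
    rows = []
    for path in sorted(glob.glob(pattern)):
        with open(path, newline="") as f:
            reader = csv.reader(f)
            file_header = next(reader)   # header row of this file
            if header is None:
                header = file_header
            rows.extend(reader)          # data rows only
    # Sort by the timeStamp column so samples from all machines interleave chronologically.
    ts = header.index("timeStamp")
    rows.sort(key=lambda r: int(r[ts]))
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)

if __name__ == "__main__":
    # e.g. python merge_jtl.py "results_*.jtl" combined.jtl
    merge_jtl(sys.argv[1], sys.argv[2])

The combined file can then be fed to a listener or to the HTML report generator just like a single-machine result.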

Related

Writing a bash script to run on the GPU

I am writing a script to run a sequence of bash scripts, and it would be ideal to run the programs embedded in them on the GPU to speed this up. I haven't found any obvious way to make this happen without resorting to Python. I am running this on Ubuntu 18 and don't have access to a network to run a grid system, and I haven't been able to set this up to run in parallel successfully either. It would be ideal to cut some time off if possible.
Ideas?

Julia invoke script on existing REPL from command line

I want to run a Julia script from the Windows command line, but it seems that every time I run > julia code.jl, a new instance of Julia is created, and the initialization time (package loading, compiling?) is quite long.
Is there a way for me to skip this initialization time by running the script on the current REPL/Julia instance? (This usually saves me 50% of the running time.)
I am using Julia 1.0.
Thank you.
You can use include:
julia> include("code.jl")
There are several possible solutions. All of them involve different ways of sending commands to a running Julia session. The first few that come to my mind are:
use sockets as explained in https://docs.julialang.org/en/v1/manual/networking-and-streams/#A-simple-TCP-example-1
set up a HTTP server e.g. using https://github.com/JuliaWeb/HTTP.jl
use named pipes, as explained in Named pipe does not wait until completion in bash
communicate through the file system (e.g. make Julia scan some folder for .jl files; when it finds them, they get executed and then moved to another folder or deleted) - this is probably the simplest to implement correctly (a minimal sketch follows at the end of this answer)
In all of these solutions you can send the command to Julia by executing some shell command.
No matter which approach you prefer, the key challenge is handling errors properly (i.e. the situation when you send some command to the Julia session and it crashes, or when you send requests faster than Julia is able to handle them). This is especially important if you want the Julia server to be detached from the terminal.
As a side note: when using the Distributed module from stdlib in Julia for multiprocessing you actually do a very similar thing (but the communication is Julia to Julia) so you can also have a look how this module is implemented to get the feeling how it can be done.
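
To make the file-system option above concrete, here is a minimal sketch of such a server; the jobs/ and done/ folder names and the one-second polling interval are arbitrary choices, not anything Julia prescribes.

# command_server.jl - run once in a long-lived Julia session.
# Packages loaded here are loaded a single time, so scripts dropped into
# the watched folder skip the usual startup cost.

const JOBS = "jobs"   # incoming *.jl scripts (folder names are arbitrary)
const DONE = "done"   # processed scripts are moved here

mkpath(JOBS); mkpath(DONE)

while true
    for name in sort(readdir(JOBS))
        endswith(name, ".jl") || continue
        path = joinpath(JOBS, name)
        try
            include(path)                   # execute the submitted script
        catch err
            @warn "script failed" name err  # keep the server alive on errors
        end
        mv(path, joinpath(DONE, name), force=true)
    end
    sleep(1)   # poll once per second
end

With this running, a script is submitted simply by copying it into the watched folder (e.g. copy code.jl jobs\ on Windows), and the package-loading cost is paid only once.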

Running Julia code on multiple machines

I have parallelized my algorithm using pmap. The performance improvement on one machine using the -p option is great. Now I would like to run on multiple machines.
I used the --machinefile option when starting Julia. It works, but it launches only one process per remote machine. I would like to have multiple processes running on each machine. The -p option enables multiple processes only on the local machine. Is there a way to specify the number of processes on the remote machines?
On Julia 0.3 you have to list each remote machine multiple times in the machine file to start multiple Julia workers on it.
On Julia 0.4 (unreleased at the time of writing) you can put a count next to each address; see this pull request.
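
For completeness, on current Julia versions the same thing can be requested from within a running session via addprocs, which accepts (host, count) tuples; the host names below are placeholders and passwordless SSH access to the machines is assumed.

using Distributed   # a standard library since Julia 0.7; in 0.3/0.4 these functions lived in Base

# The tuple form (machine_spec, count) starts count workers on each host.
addprocs([("node1", 6), ("node2", 6)])

# pmap now spreads work across all remote workers.
results = pmap(i -> sum(rand(10^6)), 1:100)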

Stale NFS file handle issue on a remote cluster

I need to run a bunch of simulations using a tool called ngspice, and since I want to run a million simulations, I am distributing them across a cluster of machines (a master + a slave to start with, which have 12 cores each).
This is the command:
ngspice deck_1.sp; ngspice deck_2.sp; etc.
Step 1: A python script is used to generate these sp files.
Step 2: Python invokes GNU parallel to distribute the sp files across the master/slave and run the simulations using ngspice
Step 3: I post-process the results (python script).
I generate and process only 1000 files at a time to save disk space, so Steps 1 to 3 above are repeated in a loop until a million files have been simulated.
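
For reference, here is a stripped-down sketch of steps 1 and 2 in Python; the deck template, the nodes.txt host file and the batch size stand in for whatever the real scripts use.

#!/usr/bin/env python3
# Sketch of steps 1-2: generate a batch of decks, then fan them out over SSH
# with GNU parallel. Template, host file and batch size are placeholders.
import subprocess
from pathlib import Path

BATCH = 1000
TEMPLATE = Path("deck_template.sp").read_text()   # hypothetical template with an {index} placeholder

def generate(batch_start):
    names = []
    for i in range(batch_start, batch_start + BATCH):
        name = f"deck_{i}.sp"
        Path(name).write_text(TEMPLATE.format(index=i))
        names.append(name)
    return names

def simulate(names):
    # nodes.txt lists the machines (e.g. ":" for the master and "user@slave").
    # --transfer ships each deck to the node that runs it; --cleanup removes it afterwards.
    cmd = ["parallel", "--sshloginfile", "nodes.txt",
           "--transfer", "--cleanup", "ngspice -b {}", ":::"] + names
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    decks = generate(0)
    simulate(decks)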
Now, my problem is:
When I execute the loop the first time, I have no problem. The files are distributed across the master/slave until the 1000 simulations are complete. When the loop starts for the second time, I clear the existing sp files and regenerate them (step 1). Now, when I execute step 2, for some strange reason some files are not detected. After some debugging, the errors I get are "Stale NFS file handle" and "No such file or directory deck_21.sp" etc., for certain sp files created in step 1.
I paused my Python script and did an 'ls' in the directory, and I can see that the files actually exist, but as the error points out, the problem is the stale NFS file handle. This link recommends remounting the client etc., but I am logged into a machine on which I have no admin privileges to mount anything.
Is there a way I can resolve this?
Thanks!
No. You need admin privileges to fix this.

Performance problems reading lots of small files using libssh2

I am trying to read lots of small files using libssh2.
I am currently using libssh2_scp_recv/libssh2_channel_read and I have also tried libssh2_sftp_open/libssh2_sftp_read.
With large files, I am able to get a speed similar to scp. But with small files, most of the time is spent opening a handle to the remote file (libssh2_scp_recv) rather than downloading the file (libssh2_channel_read).
How does scp do it?
Is there a simple way to batch download multiple files so I will be able to saturate my connection?
Not unless you write your own SFTP layer on top of libssh that is able to use pipelining.
Maybe an easier solution would be to use several threads, each establishing an independent SSH connection, in order to retrieve several files in parallel.
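
A sketch of that threaded approach, written with Python's paramiko instead of libssh2 purely to keep the example short; the host, authentication and file list are placeholders. The same pattern applies with libssh2: one session per thread, each on its own socket, so the per-file round trips overlap.

#!/usr/bin/env python3
# Download many small files over several independent SSH connections.
import os
import queue
import threading
import paramiko

HOST, USER = "files.example.com", "me"   # placeholders
REMOTE_FILES = [f"/data/small_{i}.txt" for i in range(1000)]
N_CONNECTIONS = 8

work = queue.Queue()
for path in REMOTE_FILES:
    work.put(path)

def worker():
    # One SSH connection and one SFTP session per thread, so the per-file
    # open/close round trips of the different threads overlap.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username=USER)   # assumes key-based auth or an agent
    sftp = client.open_sftp()
    while True:
        try:
            remote = work.get_nowait()
        except queue.Empty:
            break
        sftp.get(remote, os.path.basename(remote))
    sftp.close()
    client.close()

threads = [threading.Thread(target=worker) for _ in range(N_CONNECTIONS)]
for t in threads:
    t.start()
for t in threads:
    t.join()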
