I am trying to measure the resources used by my application. I am using the time utility as advised here: stackoverflow.com/questions/560089/unix-command-for-benchmarking-code-running-k-times
I used the command time (for i in $(seq 100); do ./mycode; done) as suggested by one of the answers.
Problem: the instances run one by one, not in parallel. I need to run mycode 100/1000 times in parallel. Any suggestions on how to run the application in parallel more than once? In other words, how do I run the above command 100 times so that 100x100 instances are running at the same time?
I am also not able to add any format switch when timing the for loop.
I tried time -v (for i in $(seq 100); do ./mycode; done) for verbose output.
Note: I also tried /usr/bin/time and the long switch --verbose.
EDIT: I changed my code as per the instructions in Cyrus's reply, and the simultaneous running of mycode is solved. But I am still looking for an answer to my second question: how do I get verbose output from the time utility using -v or --verbose?
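The -v switch fails because time in that command is the bash keyword, which accepts no options; -v/--verbose belongs to the separate GNU binary /usr/bin/time, which expects a single command rather than shell syntax such as a subshell. A minimal sketch combining both goals, assuming GNU time is installed (the BSD time shipped with macOS has no --verbose):

# & backgrounds each instance so all 100 run at once; wait keeps the
# timed shell alive until every instance has finished
/usr/bin/time --verbose bash -c 'for i in $(seq 100); do ./mycode & done; wait'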
I have an application that can run commands in parallel. It can work on a cluster, using SLURM to get the resources; internally, I assign each of the tasks I need performed to a different CPU/worker. Now I want to run this application on my laptop (macOS) through the command line. The same code (except for the SLURM part) works fine, with the only difference being that it performs only one task at a time.
I have run code in parallel in MATLAB using the commands parcluster, parfor, etc. In this code, I can get up to 16 workers to work in parallel on my laptop.
I was hoping there is a similar solution outside MATLAB for running other code in parallel, especially for assigning the resources; my application itself is built to manage them.
If it is of any help, I run my application from the command line as follows:
chmod +x ./bin/OpenSees
./bin/OpenSees Run.tcl
I have read about GNU parallel or even using SLURM on my laptop, but I am not sure if these are the best (or feasible) solutions.
I tried using GNU parallel:
chmod +x ./bin/OpenSees
parallel -j 4 ./bin/OpenSees ::: Run.tcl
but it continues doing one task at a time. Do you have any suggestions?
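GNU parallel starts one job per argument after :::, and a lone Run.tcl yields exactly one job, which is why nothing runs concurrently. A sketch of two ways around this (the -N0 switch tells parallel to read one input token per job but append nothing to the command line):

# four identical concurrent runs; the tokens 1..4 only count jobs
parallel -j 4 -N0 ./bin/OpenSees Run.tcl ::: 1 2 3 4
# or give each instance its own input file (file names are illustrative)
parallel -j 4 ./bin/OpenSees ::: Run1.tcl Run2.tcl Run3.tcl Run4.tcl

Note that identical runs writing the same output files will clobber each other, so the second form is usually what you want.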
Given a project that consists of a large number of bash scripts launched periodically from crontab, how can one track the execution time of each script?
There is a straightforward approach: edit each of those files and add date calls.
But what I really want is some kind of daemon that could track execution times and submit the results somewhere several times a day.
So the question is:
Is it possible to gather information about the execution time of 200 bash scripts without editing each of them?
The time utility is considered a fallback solution if nothing better can be found.
Depending on your system's cron implementation, you may define the log level of the cron daemon. For Ubuntu's default vixie-cron, setting the log level makes it log the start and end of each job execution, which can then be analyzed.
On current LTS Ubuntu this works by defining the log level in /etc/init/cron,
appending the -L 3 option to the exec line so that it looks like:
exec cron -L 3
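With -L 3 both the start and the end of each job are written to syslog, so a job's duration is the gap between its two timestamps. A quick way to pull the relevant lines out, assuming Ubuntu's default rsyslog location:

grep CRON /var/log/syslog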
You could change your cron to run your scripts under time?
time scriptname
And pipe the output to your logs.
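As a sketch, a crontab entry along these lines runs the script under GNU time and appends the timing report to a log (-o names the output file, -a appends instead of overwriting; the script and log paths are illustrative):

*/15 * * * * /usr/bin/time -o /var/log/script-times.log -a /opt/scripts/backup.sh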
I'm trying to find the execution time of a script in milliseconds.
I'm running a bunch of queries that are placed inside a for loop. Each query here is actually a script. I want to find the time taken by each query.
Can someone help me find the execution time in milliseconds, please?
I have tried a bunch of approaches with the 'time' and 'date' commands but couldn't land on a precise solution.
Thanks.
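A minimal sketch using GNU date, whose %N specifier gives sub-second precision (%s%3N prints epoch milliseconds; BSD/macOS date lacks %N, and the script names here are illustrative):

for query in ./query1.sh ./query2.sh; do
    start=$(date +%s%3N)              # epoch time in milliseconds
    "$query"
    end=$(date +%s%3N)
    echo "$query took $((end - start)) ms"
done

Alternatively, bash's built-in time prints three decimal places of elapsed seconds when TIMEFORMAT='%3R' is set.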
You could try:
time -v script_name
/usr/bin/time -v script_name # if your system ships the GNU time binary
https://coderwall.com/p/er_zca
the equivalent of strace for macOS, which should let you see the time taken by each system call.
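Assuming the tool in question is dtruss (the usual strace analogue on macOS), a typical invocation looks like the sketch below; the -e switch prints elapsed time per system call, and System Integrity Protection may block it on recent macOS versions:

sudo dtruss -e ./script_name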
On my old machine with GNU parallel version 20121122, after I run
parallel -j 15 < jobs.txt
I see the output of the jobs being run, which is very handy for debugging purposes.
On my new machine with parallel version 20140122, when I execute the above-mentioned command I see nothing in the terminal.
From another SO thread I found out about the --tollef flag, which fixes the problem for me, but it is soon to be retired. How do I keep things working after the retirement of the --tollef flag?
--ungroup (if half-lines are allowed to mix, as they are with --tollef).
--line-buffer (if you only want complete lines printed, i.e. no half-line mixing).
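As a sketch, either replacement drops in where --tollef used to go:

parallel --ungroup -j 15 < jobs.txt      # immediate output; half-lines may mix
parallel --line-buffer -j 15 < jobs.txt  # prints only complete lines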
I'm trying to step through an OpenFOAM application (in this case, icoFoam, but this question is in general for any OpenFOAM app).
I'd like to use gdb to step through an analysis running in parallel (let's say, 2 procs).
To simply launch the app in parallel, I type:
mpirun -np 2 icoFoam -parallel
Now I want to step through it in gdb. But I'm having trouble launching icoFoam in parallel and debugging it, since I can't figure out how to set a breakpoint before the application begins to execute.
One thing I know I could do is insert a section of code after MPI_Init that waits (an endless loop) until I change some variable in gdb. Then I'd run the app in parallel, attach a gdb session to each of those PIDs, and happily debug. But I'd rather not alter the OpenFOAM source and recompile.
So, how can I start the application running in parallel, somehow get it to stop (say, at the beginning of main), and then step through it in gdb? All without changing the original source code?
Kindest regards,
Madeleine.
You could try this resource
It looks like the correct command is:
mpirunDebug -np 2 xxxFoam -parallel
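If mpirunDebug is unavailable, a common alternative that also leaves the source untouched is to give each MPI rank its own xterm running gdb, stopped at main before stepping (this sketch assumes a working X display):

mpirun -np 2 xterm -e gdb --ex 'break main' --ex run --args icoFoam -parallel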