I'm currently trying to benchmark a bash script in 4 different versions. Each one does a giant rsync job and usually takes a very long time to finish. There are many steps in the bash script which involve setting up and tearing down the environment to rsync to.
However, when I ran strace on the bash scripts, I got surprisingly short results, which leads me to believe that strace is not actually tracing the time spent waiting for a command like rsync (which might be spawned in a subshell and so not recorded by strace at all), or that it's waking up intermittently and sleeping for stretches of time that strace is not counting. Here's a snippet:
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 99.98   12.972555      120116       108        52 wait4
  0.01    0.000751          13        56           clone
  0.00    0.000380           1       553           rt_sigprocmask
  0.00    0.000303           2       197        85 stat
  0.00    0.000274           2       134           read
  0.00    0.000223          19        12           open
  0.00    0.000190          48         4           getdents
  0.00    0.000110           1        82         8 close
  0.00    0.000110           1       153           rt_sigaction
  0.00    0.000084           1        61           getegid
  0.00    0.000074           4        19           write
So what tools can I use that are similar to strace, or maybe I'm missing some kind of recursive/follow flag in strace, to correctly find out where my bash script is spending its time waiting?
I would like something along the lines of:
% time command
------ --------
... rsync
... ls
Any suggestions would be appreciated. Thank you!
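(On the recursive-flag idea: strace does have -f, which follows child processes created via fork/clone, so a summary run should at least fold the time rsync and other children spend into the counts. A minimal sketch, with ./myscript.sh as a placeholder for one of the four versions:

strace -f -c ./myscript.sh

That still aggregates per syscall rather than per command, so by itself it wouldn't produce the table sketched above.)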
I am trying to use parallel in a bash script to verify whether S3 paths exist or not, for multiple S3 paths, by counting the objects in each path. If the object count is zero, the script should continue to the next date in the for loop; with parallel it is not working as expected.
For the date range I provided in the for loop, we actually don't have those folders in the S3 bucket, and in the function checkS3Path, if the S3 path doesn't exist, I create a 0 KB file, but I don't see those 0 KB files being created after the script is executed. In the output of the script, I am seeing S3 Path Consists CSV Files, Proceeding to next step folder1:+2019-10-03 instead of S3 Path Doesnt Exists folder1:+2019-10-03. Please see the output below.
Please let me know what might be the issue.
Here is the sample code.
#!/bin/bash
#set -x
s3Bucket=testbucket
version=v20
Array=(folder1 folder2 folder3)
checkS3Path() {
  fldName=$1
  date=$2
  objectNum=$(aws s3 ls s3://${s3Bucket}/${version}/${fldName}/date=${date}/ | wc -l)
  echo $objectNum
  if [ "$objectNum" -eq 0 ]
  then
    echo "S3 Path Doesnt Exists ${fldName}:${date}" >> /app/${fldName}.log
    touch /home/ubuntu/${fldName}_${date}.txt
    continue
  else
    echo "S3 Path Consists csv Files, Proceeding to next step ${fldName}:${date}"
  fi
}
final() {
  fldName=$1
  date=$2
  checkS3Path $fldName $date
  function2 $fldName $date
  function3 $fldName $date
}
export -f final checkS3Path
for date in 2019-10-{01..03}
do
  # final folder1 $date
  parallel --jobs 4 --eta final ::: "${Array[@]}" ::: +"$date"
done
Here is the output I am seeing.
$ ./test.sh
Academic tradition requires you to cite works you base your article on.
When using programs that use GNU Parallel to process data for publication
please cite:
O. Tange (2011): GNU Parallel - The Command-Line Power Tool,
;login: The USENIX Magazine, February 2011:42-47.
This helps funding further development; AND IT WON'T COST YOU A CENT.
If you pay 10000 EUR you should feel free to use GNU Parallel without citing.
To silence this citation notice: run 'parallel --citation'.
Computers / CPU cores / Max jobs to run
1:local / 4 / 4
Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
ETA: 0s Left: 14 AVG: 0.00s local:4/0/100%/0.0s 202
S3 Path Consists CSV Files, Proceeding to next step folder1:+2019-10-01
ETA: 0s Left: 13 AVG: 0.00s local:4/1/100%/2.0s 202
S3 Path Consists CSV Files, Proceeding to next step folder2:+2019-10-01
ETA: 0s Left: 12 AVG: 0.00s local:4/2/100%/1.0s 202
S3 Path Consists CSV Files, Proceeding to next step folder3:+2019-10-01
Academic tradition requires you to cite works you base your article on.
When using programs that use GNU Parallel to process data for publication
please cite:
O. Tange (2011): GNU Parallel - The Command-Line Power Tool,
;login: The USENIX Magazine, February 2011:42-47.
This helps funding further development; AND IT WON'T COST YOU A CENT.
If you pay 10000 EUR you should feel free to use GNU Parallel without citing.
To silence this citation notice: run 'parallel --citation'.
Computers / CPU cores / Max jobs to run
1:local / 4 / 4
Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
ETA: 0s Left: 14 AVG: 0.00s local:4/0/100%/0.0s 202
S3 Path Consists CSV Files, Proceeding to next step folder1:+2019-10-02
ETA: 0s Left: 13 AVG: 0.00s local:4/1/100%/0.0s 202
S3 Path Consists CSV Files, Proceeding to next step folder2:+2019-10-02
ETA: 6s Left: 12 AVG: 0.50s local:4/2/100%/0.5s 202
S3 Path Consists CSV Files, Proceeding to next step folder3:+2019-10-02
ETA: 3s Left: 11 AVG: 0.33s local:4/3/100%/0.3s 202
Academic tradition requires you to cite works you base your article on.
When using programs that use GNU Parallel to process data for publication
please cite:
O. Tange (2011): GNU Parallel - The Command-Line Power Tool,
;login: The USENIX Magazine, February 2011:42-47.
This helps funding further development; AND IT WON'T COST YOU A CENT.
If you pay 10000 EUR you should feel free to use GNU Parallel without citing.
To silence this citation notice: run 'parallel --citation'.
Computers / CPU cores / Max jobs to run
1:local / 4 / 4
Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
ETA: 0s Left: 14 AVG: 0.00s local:4/0/100%/0.0s 202
S3 Path Consists CSV Files, Proceeding to next step folder1:+2019-10-03
ETA: 0s Left: 13 AVG: 0.00s local:4/1/100%/1.0s 202
S3 Path Consists CSV Files, Proceeding to next step folder2:+2019-10-03
ETA: 0s Left: 12 AVG: 0.00s local:4/2/100%/0.5s 202
S3 Path Consists CSV Files, Proceeding to next step folder3:+2019-10-03
ETA: 0s Left: 11 AVG: 0.00s local:4/3/100%/0.3s 202
$
Thanks
If checkS3Path works when run by hand, then you probably just need to:
export s3Bucket=testbucket
export version=v20
Each GNU Parallel job runs in its own shell (started from Perl), which is why you need to export variables if you want them to be visible to the job.
Also look at env_parallel to do this automatically.
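As a minimal sketch, here are the relevant pieces with the exports in place (using the function name final as defined above, and dropping the stray + in front of $date):

export s3Bucket=testbucket
export version=v20
export -f final checkS3Path
for date in 2019-10-{01..03}
do
  parallel --jobs 4 --eta final ::: "${Array[@]}" ::: "$date"
done

With the variables exported, each job's aws s3 ls sees a fully expanded s3:// URL.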
I'm running a KornShell script which originally has 61 input arguments:
./runOS.ksh 2.8409 24 40 0.350 0.62917 8 1 2 1.00000 4.00000 0.50000 0.00 1 1 4900.00 1.500 -0.00800 1.500 -0.00800 1 100.00000 20.00000 4 1.0 0.0 0.0 0.0 1 90 2 0.10000 0.10000 0.10000 1.500 -0.008 3.00000 0.34744 1.500 -0.008 1.500 -0.008 0.15000 0.21715 1.500 -0.008 0.00000 1 1.334 0 0.243 0.073 0.642 0.0229 38.0 0.03071 2 0 15 -1 20 1
I only vary 6 of them. Would it make a difference in performance if I fixed the remaining 55 arguments inside the script and passed only the variable ones, say:
./runOS.ksh 2.8409 24 40 0.350 0.62917 8
If anyone has a quick/general answer to this, it will be highly appreciated, since it might take me a long time to fix the 55 extra arguments inside the script and I'm afraid it won't change anything.
There's no performance impact from what you're asking about, but I see other issues:
What is the command-line limitation on your system? You mention 61 input parameters, some of them 8 characters long. If the number of input parameters grows, you might run into the maximum command-line length.
Are you really running 440 million script executions? That's far, far too many. You need to reconsider why you're doing this: you mention needing to wait ±153 days for their execution to finish, which is far too long (and unpredictable).
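If you do fold the constants in, a minimal sketch (hypothetical variable names and values; only the six varying inputs stay on the command line):

#!/bin/ksh
# the six varying inputs still come from the command line
v1=$1 v2=$2 v3=$3 v4=$4 v5=$5 v6=$6
# the remaining 55 arguments become fixed assignments inside the script
c1=1.00000
c2=4.00000
# ...and so on for the rest

As for the command-line limitation, you can check your system's limit with:

getconf ARG_MAX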
I would like to know how the OS prioritises the execution of background processes in Linux.
Suppose I have the command below; would it be executed right away, or would the OS give it a lower priority because it runs in the background?
nohup /bin/bash /tmp/kill_loop.sh &
Thanks
All processes running at the same nice value will get an equal cpu-timeslice.
Here is a simple test that launches 2 processes, both performing the exact same operations. One is launched in the background and the other in the foreground.
dd if=/dev/zero of=/dev/null bs=1 &
dd if=/dev/zero of=/dev/null bs=1
The relevant extract from subsequently running the top command
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1366 root 20 0 1576 532 436 R 100 0.0 0:30.79 dd
1365 root 20 0 1576 532 436 R 100 0.0 0:30.79 dd
Next, if both the processes are restricted to the same CPU,
taskset -c 0 dd if=/dev/zero of=/dev/null bs=1 &
taskset -c 0 dd if=/dev/zero of=/dev/null bs=1
Again the relevant extract from subsequently running the top command shows
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1357 root 20 0 1576 532 436 R 50 0.0 0:38.74 dd
1358 root 20 0 1576 532 436 R 50 0.0 0:38.74 dd
both processes compete for CPU timeslices and are equally prioritised.
Finally,
kill -SIGINT 1357 &
kill -SIGINT 1358 &
kill -SIGINT 1365 &
kill -SIGINT 1366 &
results in similar amounts of data copied and throughput.
25129255+0 records in
25129255+0 records out
25129255 bytes (25 MB) copied, 34.883 s, 720 kB/s
Slight discrepancies in throughput may occur due to differences in the exact moment each individual process responds to the interrupt signal and stops running.
However, also note that sched_autogroup_enabled exists.
If enabled, sched_autogroup_enabled changes how CPU timeslices are distributed: fairness is enforced between individual shells (sessions) rather than between individual processes, distributing CPU equally amongst the various active shells.
Thus if a shell launches 1 process A,
and another shell launches 2 processes B and C,
then the CPU execution timeslice will typically be distributed as
A <-- 50% <---- shell1 50%
B <-- 25% <-.
C <-- 25% <--`- shell2 50%
(though all 3 processes A, B & C are running at the same nice level.)
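On kernels built with CONFIG_SCHED_AUTOGROUP, you can check and toggle this behaviour at runtime:

cat /proc/sys/kernel/sched_autogroup_enabled      # 1 = autogrouping active
echo 0 > /proc/sys/kernel/sched_autogroup_enabled # as root: disable it, making fairness per-process again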
Process priority in the Linux kernel is governed by nice values.
Refer to the link
http://en.wikipedia.org/wiki/Nice_(Unix)
Nice values (ranging from -20 to +19) define process priority, with -20 being the highest priority. User-space processes usually get a default nice value of 0. You can check the nice values of the processes running on your shell using the command below.
ps -al
F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD
0 S 1039 1268 16889 0 80 0 - 11656 poll_s pts/8 00:00:08 vim
0 S 1047 1566 17683 0 80 0 - 2027 wait pts/18 00:00:00 arm-linux-andro
0 R 1047 1567 1566 21 80 0 - 9143 ? pts/18 00:00:00 cc1
0 R 1031 1570 15865 0 80 0 - 2176 - pts/24 00:00:00 ps
0 R 1031 17357 15865 99 80 0 - 2597 - pts/24 00:03:29 top
In the output above, the 'NI' column shows the nice values. When I tried running a background process, it too got a nice value of 0 (top is that process, with PID 17357). That means it is queued up and scheduled just like a foreground process.
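If you want a background job to actually yield CPU to interactive work, you must lower its priority explicitly, for example by reusing the command from the question:

nice -n 10 nohup /bin/bash /tmp/kill_loop.sh &   # start it at nice value 10
renice -n 10 -p 17357                            # or raise the nice value of a running PID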
Need to recruit the help of any budding bioinformaticians who are lurking in the shadows here.
I am currently in the process of formatting some .fasta files for use in a set of grouping programs, but I cannot for the life of me get them to work. First things first, all the files have to have a 3 or 4 character name, such as the following:
PP41.fasta
PP59.fasta
PPBD.fasta
...etc...
The files must have a header for each gene sequence that looks like so: >xxxx|yyyyyyyyyy, where xxxx is the same 3 or 4 letter 'taxon' identifier as in the file names above and yyyyyyyyyy is a numerical identifier for each of the proteins within each taxon (the pipe symbol can also be replaced with an _, as below). I then cat all of these into one file, which has headers that look correct, like so:
>PP49_00001
MIENFNENNDMSDMFWEVEKGTGEVINLVPNTSNTVQPVVLMRLGLFVPTLKSTKRGHQG
EMSSMDATAELRQLAIVKTEGYENIHITGARLDMDNDFKTWVGIIHSFAKHKVIGDAVTL
SFVDFIKLCGIPSSRSSKRLRERLGASLRRIATNTLSFSSQNKSYHTHLVQSAYYDMVKD
TVTIQADPKIFELYQFDRKVLLQLRAINELGRKESAQALYTYIESLPPSPAPISLARLRA
RLNLRSRVTTQNAIVRKAMEQLKGIGYLDYTEIKRGSSVYFIVHARRPKLKALKSSKSSF
KRKKETQEESILTELTREELELLEIIRAEKIIKVTRNHRRKKQTLLTFAEDESQ*
>PP49_00002
MQNDIILPINKLHGLKLLNSLELSDIELGELLSLEGDIKQVSTGNNGIVVHRIDMSEIGS
FLIIDSGESRFVIKAS*
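(As a sanity check on the concatenated file, a sketch assuming the underscore form of the headers shown above:

grep -c '^>' allproteins.fasta                                      # total number of sequences
grep '^>' allproteins.fasta | grep -Evc '^>[A-Z0-9]{3,4}_[0-9]+$'   # headers NOT matching the pattern; should print 0

Adjust the bracket expression if your taxon identifiers use other characters.)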
The next step is to construct a BLAST database, which I do as follows, using the formatdb tool from NCBI BLAST:
formatdb -i allproteins.fasta -p T -o T
This produces a set of files for the database. Next I conduct an all-vs-all BLAST of the concatenated proteins against the database I made from them, like so, which outputs a tabular file; I suspect this is where my issues are beginning to arise:
blastall -p blastp -d allproteins.fasta -i allproteins.fasta -a 6 -F '0 S' -v 100000 -b 100000 -e 1e-5 -m 8 -o plasmid_allvsall_blastout
These files have 12 columns and look like the below. It appears correct to me, but my supervisor suspects the error is in the BLAST file; I don't know what I'm doing wrong, however.
PP49_00001 PP51_00025 100.00 354 0 0 1 354 1 354 0.0 552
PP49_00001 PP49_00001 100.00 354 0 0 1 354 1 354 0.0 552
PP49_00001 PPTI_00026 90.28 288 28 0 1 288 1 288 3e-172 476
PP49_00001 PPNP_00026 90.28 288 28 0 1 288 1 288 3e-172 476
PP49_00001 PPKC_00016 89.93 288 29 0 1 288 1 288 2e-170 472
PP49_00001 PPBD_00021 89.93 288 29 0 1 288 1 288 2e-170 472
PP49_00001 PPJN_00003 91.14 79 7 0 145 223 2 80 8e-47 147
PP49_00002 PPTI_00024 100.00 76 0 0 1 76 1 76 3e-50 146
PP49_00002 PPNP_00024 100.00 76 0 0 1 76 1 76 3e-50 146
PP49_00002 PPKC_00018 100.00 76 0 0 1 76 1 76 3e-50 146
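(One cheap check before handing this file on, a sketch: verify every row really has 12 tab-separated columns, since a space-for-tab substitution here is easy to miss:

awk -F'\t' 'NF != 12 {print NR": "NF" fields"}' plasmid_allvsall_blastout | head

No output means all rows have the expected shape.)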
SO, this is where the problems really begin. I now pass the above file to a program called orthAgogue, which analyses the paired sequences above using parameters laid out in the manual (still no idea if I'm doing anything wrong); all I know is that the several output files produced are all just nonsense/empty.
Command looks like so:
orthAgogue -i plasmid_allvsall_blastout -t 0 -p 1 -e 5 -O .
Any and all ideas welcome! (Hope I've covered everything - sorry about the long post!)
EDIT: I never did manage to find a solution to this and had to use an alternative piece of software. If admins wish to close this, please do, unless it is worth keeping open for someone else (though I suspect it's a pretty niche issue).
I first discovered this issue (with orthAgogue) today. Though my reply may be old, I hope it helps future users.
The issue is due to a missing parameter: it seems you forgot to specify the separator with -s '_', i.e., the following set of command-line parameters should do the trick*:
orthAgogue -i plasmid_allvsall_blastout -t 0 -p 1 -e 5 -O -s '_'
(* Under the assumption that your input file is a tab-separated file of columns.)
A brief update after a comment made by Joe:
In brief, the problem described in the initial error report (by Joe) is (in most cases) not a bug. Instead, it is one of the core properties of the InParanoid algorithm which orthAgogue implements: if your ortholog result file is empty (though constructed), this (in most cases) implies that there is no reciprocal best match between any protein pair from two different taxa/species.
One (of many) explanations for this could be that your blastp scores are too similar, a case where I would suggest a combined tree-based/homology clustering, as in TREEFAM.
Therefore, when I receive your data, I'll send it to one of the biologists I'm working with, with the goal of identifying the proper tool for your data: hope my last comment makes your day ;)
Ole Kristian Ekseth, developer of orthAgogue
I implemented some code, which runs in a loop:
loop do
..
end
In that loop, I handle keypresses with the Curses library. If I press N and enter something, I start a new thread, which counts time (a loop do .. end again).
The question is: why does loop (or while true) cause 100% CPU load on one of the CPU cores? Is the problem actually in loop?
Is there a way to do an infinite loop with lower CPU consumption in Ruby?
The full sources are available here
UPD - Strace
$ strace -c -p 5480
Process 5480 attached - interrupt to quit
^CProcess 5480 detached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 51.52    0.002188           0    142842           ioctl
 24.21    0.001028           0     71421           select
 14.22    0.000604           0     47614           gettimeofday
 10.05    0.000427           0     47614           rt_sigaction
  0.00    0.000000           0        25           write
  0.00    0.000000           0        16           futex
------ ----------- ----------- --------- --------- ----------------
100.00    0.004247                309532           total
After some thinking and suggestions from user2246674, I managed to resolve the issue. It was not inside the threads; it was the main loop.
I had code like this inside the main loop (the else branch below shows the fix):
c = Curses.getch
unless c.nil?
  # input handling
else
  sleep 1
end
Adding sleep 1 in an else branch resolved the problem: when there's no input from Curses, the loop now does nothing for a second before checking again, which stops it from actively polling STDIN and generating high CPU load.