I know that time sends its timing statistics to stderr, but somehow I can't capture them, either in a bash script or into a file via redirection:
time $cmd 1>/dev/null 2>file
$output=`cat file`
Or
$output=`time $cmd 1>/dev/null`
I'm only interested in the timing, not the direct output of the command. I've read some posts here but still had no luck finding a viable solution. Any suggestions?
Thanks!
Try:
(time $cmd) 1>/dev/null 2>file
so that (time $cmd) is executed in a subshell environment and you can then redirect its output.
(Or use GNU time, /usr/bin/time, rather than the bash builtin.) (Thanks @Michael Krelin)
(Or invoke it as \time.) (Thanks @Sorpigal; if I ever knew this I'd entirely forgotten it)
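If you want the timing in a shell variable rather than a file, here is a minimal sketch of the same subshell idea (assuming $cmd holds your command; note that the command's own stderr gets captured as well):
output=$( (time $cmd >/dev/null) 2>&1 )   # timing (and any stderr from $cmd) ends up in $output
echo "$output"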
How about using the -o and maybe -a command line options:
-o FILE, --output=FILE
Do not send the results to stderr, but overwrite the specified file.
-a, --append
(Used together with -o.) Do not overwrite but append.
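For example, a quick sketch calling GNU time directly (assuming it is installed as /usr/bin/time; time.log is just a placeholder name):
/usr/bin/time -o time.log $cmd >/dev/null      # overwrite time.log with the timing
/usr/bin/time -a -o time.log $cmd >/dev/null   # append instead of overwriting
output=$(cat time.log)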
I had a similar issue where I wanted to benchmark optimizations. The idea was to run the program several times and then output statistics on the run durations.
I used the following command lines:
1st run: (time ./myprog)2>times.log
Next runs: (time ./myprog)2>>times.log
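The same thing can be wrapped in a loop; a small sketch (the run count of 10 is arbitrary):
rm -f times.log
for i in $(seq 1 10); do
    (time ./myprog) 2>> times.log
done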
Note that my (bash?) built-in time outputs statistics in the form:
real 0m2.548s
user 0m7.341s
sys 0m0.007s
Then I ran the following Perl script to retrieve statistics:
#!/usr/bin/perl -w

open FH, '<', './times.log' or die "ERROR: ", $!;

my $usercpt  = 0;   # number of "user" lines parsed
my $useracc1 = 0;   # sum of user times
my $useracc2 = 0;   # sum of squared user times
my $usermean = 0;
my $userdev  = 0;
my $temp     = 0;

while (<FH>)
{
    if ($_ =~ /user/)
    {
        if ($_ =~ /(\d+)m(\d{1,2})\.(\d{3})s/)
        {
            $usercpt++;
            $temp = $1*60 + $2 + $3*0.001;   # minutes, seconds, milliseconds -> seconds
            $useracc1 += $temp;
            $useracc2 += $temp**2;
        }
    }
}
close FH;

if ($usercpt != 0)
{
    $usermean = $useracc1 / $usercpt;
    $userdev  = sqrt($useracc2 / $usercpt - $usermean**2);
    $usermean = int($usermean*1000)/1000;
    $userdev  = int($userdev*1000)/1000;
}
else
{
    $usermean = "---";
    $userdev  = "---";
}

print "User: ", $usercpt, " runs, avg. ", $usermean, "s, std.dev. ", $userdev, "s\n";
Of course, the regular expressions may require adjustments depending on your time output format. The script can also easily be extended to include real and system statistics.
Related
I have something like this in a Jenkinsfile (Groovy) and I want to record the stdout and the exit code in a variable in order to use that information later.
sh "ls -l"
How can I do this, especially as it seems that you cannot really run any kind of Groovy code inside the Jenkinsfile?
The latest version of the pipeline sh step allows you to do the following:
// Git committer email
GIT_COMMIT_EMAIL = sh (
    script: 'git --no-pager show -s --format=\'%ae\'',
    returnStdout: true
).trim()
echo "Git committer email: ${GIT_COMMIT_EMAIL}"
Another feature is the returnStatus option.
// Test commit message for flags
BUILD_FULL = sh (
    script: "git log -1 --pretty=%B | grep '\\[jenkins-full]'",
    returnStatus: true
) == 0
echo "Build full flag: ${BUILD_FULL}"
These options were added based on this issue.
See the official documentation for the sh command.
For declarative pipelines (see comments), you need to wrap the code in a script step:
script {
    GIT_COMMIT_EMAIL = sh (
        script: 'git --no-pager show -s --format=\'%ae\'',
        returnStdout: true
    ).trim()
    echo "Git committer email: ${GIT_COMMIT_EMAIL}"
}
The current Pipeline version natively supports returnStdout and returnStatus, which make it possible to get the output or status from sh/bat steps.
An example:
def ret = sh(script: 'uname', returnStdout: true)
println ret
See the official documentation.
The quick answer is this:
sh "ls -l > commandResult"
result = readFile('commandResult').trim()
I think there is a feature request for getting the result of the sh step, but as far as I know there is currently no other option.
EDIT: JENKINS-26133
EDIT2: Not quite sure since which version, but sh/bat steps can now return the std output; simply:
def output = sh returnStdout: true, script: 'ls -l'
If you want to get the stdout AND know whether the command succeeded or not, just use returnStdout and wrap it in an exception handler:
Scripted pipeline:
try {
    // Fails with non-zero exit if dir1 does not exist
    def dir1 = sh(script:'ls -la dir1', returnStdout:true).trim()
} catch (Exception ex) {
    println("Unable to read dir1: ${ex}")
}
output:
[Pipeline] sh
[Test-Pipeline] Running shell script
+ ls -la dir1
ls: cannot access dir1: No such file or directory
[Pipeline] echo
unable to read dir1: hudson.AbortException: script returned exit code 2
Unfortunately hudson.AbortException lacks any useful method to obtain that exit status, so if the actual value is required you'd need to parse it out of the message (ugh!)
Contrary to the Javadoc https://javadoc.jenkins-ci.org/hudson/AbortException.html the build is not failed when this exception is caught. It fails when it's not caught!
Update:
If you also want the STDERR output from the shell command, Jenkins unfortunately fails to properly support that common use-case. A 2017 ticket JENKINS-44930 is stuck in a state of opinionated ping-pong whilst making no progress towards a solution - please consider adding your upvote to it.
As to a solution now, there could be a couple of possible approaches:
a) Redirect STDERR to STDOUT with 2>&1 - but it's then up to you to parse it out of the main output, and you won't get the output at all if the command failed, because you're in the exception handler.
b) Redirect STDERR to a temporary file (the name of which you prepare earlier) with 2>filename (but remember to clean up the file afterwards), i.e. the main code becomes:
def stderrfile = 'stderr.out'
try {
    def dir1 = sh(script:"ls -la dir1 2>${stderrfile}", returnStdout:true).trim()
} catch (Exception ex) {
    def errmsg = readFile(stderrfile)
    println("Unable to read dir1: ${ex} - ${errmsg}")
}
c) Go the other way: set returnStatus=true instead, dispense with the exception handler, and always capture output to a file, i.e.:
def outfile = 'stdout.out'
def status = sh(script:"ls -la dir1 >${outfile} 2>&1", returnStatus:true)
def output = readFile(outfile).trim()
if (status == 0) {
    // output is directory listing from stdout
} else {
    // output is error message from stderr
}
Caveat: the above code is Unix/Linux-specific - Windows requires completely different shell commands.
Here is a sample case, which I believe will make sense!
node('master') {
    stage('stage1') {
        def commit = sh (returnStdout: true, script: '''echo hi
echo bye | grep -o "e"
date
echo lol''').split()
        echo "${commit[-1]} "
    }
}
For those who need to use the output in subsequent shell commands, rather than groovy, something like this example could be done:
stage('Show Files') {
    environment {
        MY_FILES = sh(script: 'cd mydir && ls -l', returnStdout: true)
    }
    steps {
        sh '''
        echo "$MY_FILES"
        '''
    }
}
I found the examples on code maven to be quite useful.
All the above methods will work, but to use the variable as an environment variable inside your code you need to export it first.
script {
    sh " 'shell command here' > command"
    command_var = readFile('command').trim()
    sh "export command_var=$command_var"
}
Replace the shell command with the command of your choice. Now, if you are using Python code, you can just call os.getenv("command_var"), which will return the output of the shell command executed previously.
How to read a shell variable in Groovy / how to assign a shell return value to a Groovy variable.
Requirement: open a text file, read the lines using shell, store the values in Groovy, and get the parameters from each line.
Here, ',' is the delimiter.
Ex: releaseModules.txt
./APP_TSBASE/app/team/i-home/deployments/ip-cc.war/cs_workflowReport.jar,configurable-wf-report,94,23crb1,artifact
./APP_TSBASE/app/team/i-home/deployments/ip.war/cs_workflowReport.jar,configurable-temppweb-report,394,rvu3crb1,artifact
========================
Here we want to get the module name (2nd parameter, configurable-wf-report), the build number (3rd parameter, 94), and the commit id (4th parameter, 23crb1).
def module = sh(script: """awk -F',' '{ print \$2 "," \$3 "," \$4 }' releaseModules.txt | sort -u """, returnStdout: true).trim()
echo module
List lines = module.split( '\n' ).findAll { !it.startsWith( ',' ) }
def buildid
def Modname
lines.each {
    List det1 = it.split(',')
    buildid = det1[1].trim()
    Modname = det1[0].trim()
    tag = det1[2].trim()
    echo Modname
    echo buildid
    echo tag
}
If you don't have a single sh command but a block of sh commands, returnStdout won't work.
I had a similar issue where I applied something that is not a clean way of doing this, but it eventually worked and served the purpose.
Solution:
In the shell block, echo the value and write it to a file.
Outside the shell block and inside the script block, read this file, trim it and assign it to any local/params/environment variable.
Example:
steps {
    script {
        sh '''
        echo $PATH > path.txt
        # I am using '>' because I want to create a new file every time to get the newest value of PATH
        '''
        path = readFile(file: 'path.txt')
        path = path.trim() // local groovy variable assignment
        // One can assign these values to env and params as below -
        env.PATH = path // if you want to assign it to an env var
        params.PATH = path // if you want to assign it to a params var
    }
}
The easiest way is this:
my_var=`echo 2`
echo $my_var
Output:
2
Note that this is not a simple single quote but a backquote (`).
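An equivalent sketch using the $( ) form of command substitution, which avoids any confusion with single quotes:
my_var=$(echo 2)
echo $my_var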
I've written this script (called SpeedTest.pl) to log internet speed, in order to resolve a problem with my ISP.
It works well, but only if I use the Perl interpreter (i.e. if I double-click on the script). I want to compile it to generate a stand-alone executable to run on a different PC without Perl installed.
Well, I've tried both pp and Perl2Exe, but when I launch SpeedTest.exe I see a lot of processes called "SpeedTest.exe" in Task Manager. If I don't kill all these processes, the OS will crash (a pop-up says: "the memory can't be written", blah blah blah).
Any ideas?
This is the script:
#!/usr/local/bin/perl
use strict;
use warnings;
use App::SpeedTest;
my($day, $month_temp, $year_temp)=(localtime)[3,4,5];
my $year = $year_temp+1900;
my $month = $month_temp+1;
my $date = "0"."$day"."-"."0"."$month"."-"."$year";
my $filename = "Speed Test - "."$date".".csv";
if (-e $filename) {
goto SPEEDTEST;
} else {
goto CREATEFILE;
}
CREATEFILE:
open(FILE, '>', $filename);
print FILE "Date".";"."Time".";"."Download [Mbit/s]".";"."Upload [Mbit/s]".";"."\n";
close FILE;
goto SPEEDTEST;
SPEEDTEST:
my $download = qx(speedtest -Q -C --no-upload);
my $upload = qx(speedtest -Q -C --no-download);
my @download_chars = split("", $download);
my @upload_chars = split("", $upload);
my $time = "$download_chars[12]"."$download_chars[13]"."$download_chars[14]"."$download_chars[15]"."$download_chars[16]";
my $download_speed = "$download_chars[49]"."$download_chars[50]"."$download_chars[51]"."$download_chars[52]"."$download_chars[53]";
my $upload_speed = "$upload_chars[49]"."$upload_chars[50]"."$upload_chars[51]"."$upload_chars[52]"."$upload_chars[53]";
my $output = "$date".";"."$time".";"."$download_speed".";"."$upload_speed".";";
open(FILE, '>>', $filename);
print FILE $output."\n";
close FILE;
sleep 300;
my($day_check, $month_temp_check, $year_temp_check)=(localtime)[3,4,5];
my $year_check = $year_temp_check+1900;
my $month_check = $month_temp_check+1;
my $date_check = "0"."$day_check"."-"."0"."$month_check"."-"."$year_check";
my $filename_check = "Speed Test - "."$date_check".".csv";
if ($filename eq $filename_check) {
goto SPEEDTEST;
} else {
$filename = $filename_check;
goto CREATEFILE;
}
Well, Steffen really answered this by way of a Comment, but here it is as an Answer. Just compile your Perl into an EXE that does NOT have the same name as the one that the Perl script is calling, for example:
speedtest.pl compiled into myspeedtest.exe, which calls speedtest.exe
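For example, if pp (PAR::Packer) is the packer being used, the output name can be chosen with its -o option (myspeedtest.exe is just an example name):
pp -o myspeedtest.exe SpeedTest.pl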
See the discussion at Is `command -v` option required in a POSIX shell? Is posh compliant with POSIX?. It describes that type, as well as the command -v option, is optional in POSIX.1-2004.
The answer marked correct at Check if a program exists from a Bash script doesn't help either. Just like type, hash is also marked as XSI in POSIX.1-2004. See http://pubs.opengroup.org/onlinepubs/009695399/utilities/hash.html.
Then what would be a POSIX compliant way to write a shell script to find if a command exists on the system or not?
How do you want to go about it? You can look for the command in the directories in the current value of $PATH; you could look in the directories specified by default for the system PATH (getconf PATH, as long as getconf exists on PATH).
Which implementation language are you going to use? (For example: I have a Perl implementation that does a decent job finding executables on $PATH — but Perl is not part of POSIX; is it remotely relevant to you?)
Why not simply try running it? If you're going to deal with Busybox-based systems, lots of the executables can't be found by searching — they're built into the shell. The major caveat is if a command does something dangerous when run with no arguments — but very few POSIX commands, if any, do that. You might also need to determine what command exit statuses indicate that the command is not found versus the command objecting to not being called with appropriate arguments. And there's little guarantee that all systems will be consistent on that. It's a fraught process, in case you hadn't gathered.
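As a rough illustration of the $PATH-search approach in plain POSIX sh (only a sketch: the function name is mine, shell builtins are deliberately ignored, and the handling of empty PATH entries is simplified):
# Print the full path of the first executable named "$1" found on $PATH;
# exit status 0 if found, 1 otherwise. Shell builtins are not detected.
find_on_path() (            # the body is a subshell, so the IFS change stays local
    cmd=$1
    IFS=:
    for dir in $PATH; do
        [ -n "$dir" ] || dir=.                    # an empty PATH entry means "."
        if [ -f "$dir/$cmd" ] && [ -x "$dir/$cmd" ]; then
            printf '%s\n' "$dir/$cmd"
            exit 0                                # exits only the subshell body
        fi
    done
    exit 1
)

if find_on_path awk >/dev/null; then
    echo "awk is on PATH"
else
    echo "awk not found on PATH"
fi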
Perl implementation: pathfile
#!/usr/bin/env perl
#
# @(#)$Id: pathfile.pl,v 3.4 2015/10/16 19:39:23 jleffler Exp $
#
# Which command is executed
# Loosely based on 'which' from Kernighan & Pike "The UNIX Programming Environment"
#use v5.10.0; # Uses // defined-or operator; not in Perl 5.8.x
use strict;
use warnings;
use Getopt::Std;
use Cwd 'realpath';
use File::Basename;
my $arg0 = basename($0, '.pl');
my $usestr = "Usage: $arg0 [-AafhqrsVwx] [-p path] command ...\n";
my $hlpstr = <<EOS;
-A Absolute pathname (determined by realpath)
-a Print all possible matches
-f Print names of files (as opposed to symlinks, directories, etc)
-h Print this help message and exit
-q Quiet mode (don't print messages about files not found)
-r Print names of files that are readable
-s Print names of files that are not empty
-V Print version information and exit
-w Print names of files that are writable
-x Print names of files that are executable
-p path Use PATH
EOS
sub usage
{
print STDERR $usestr;
exit 1;
}
sub help
{
print $usestr;
print $hlpstr;
exit 0;
}
sub version
{
my $version = 'PATHFILE Version $Revision: 3.4 $ ($Date: 2015/10/16 19:39:23 $)';
# Beware of RCS hacking at RCS keywords!
# Convert date field to ISO 8601 (ISO 9075) notation
$version =~ s%\$(Date:) (\d\d\d\d)/(\d\d)/(\d\d) (\d\d:\d\d:\d\d) \$%\$$1 $2-$3-$4 $5 \$%go;
# Remove keywords
$version =~ s/\$([A-Z][a-z]+|RCSfile): ([^\$]+) \$/$2/go;
print "$version\n";
exit 0;
}
my %opts;
usage unless getopts('AafhqrsVwxp:', \%opts);
version if ($opts{V});
help if ($opts{h});
usage unless scalar(@ARGV);
# Establish test and generate test subroutine.
my $chk = 0;
my $test = "-x";
my $optlist = "";
foreach my $opt ('f', 'r', 's', 'w', 'x')
{
if ($opts{$opt})
{
$chk++;
$test = "-$opt";
$optlist .= " -$opt";
}
}
if ($chk > 1)
{
$optlist =~ s/^ //;
$optlist =~ s/ /, /g;
print STDERR "$arg0: mutually exclusive arguments ($optlist) given\n";
usage;
}
my $chk_ref = eval "sub { my(\$cmd) = \@_; return -f \$cmd && $test \$cmd; }";
my @PATHDIRS;
my %pathdirs;
my $path = defined($opts{p}) ? $opts{p} : $ENV{PATH};
#foreach my $element (split /:/, $opts{p} // $ENV{PATH})
foreach my $element (split /:/, $path)
{
$element = "." if $element eq "";
push @PATHDIRS, $element if $pathdirs{$element}++ == 0;
}
my $estat = 0;
CMD:
foreach my $cmd (@ARGV)
{
if ($cmd =~ m%/%)
{
if (&$chk_ref($cmd))
{
print "$cmd\n" unless $opts{q};
next CMD;
}
print STDERR "$arg0: $cmd: not found\n" unless $opts{q};
$estat = 1;
}
else
{
my $found = 0;
foreach my $directory (@PATHDIRS)
{
my $file = "$directory/$cmd";
if (&$chk_ref($file))
{
$file = realpath($file) if $opts{A};
print "$file\n" unless $opts{q};
next CMD unless defined($opts{a});
$found = 1;
}
}
print STDERR "$arg0: $cmd: not found\n" unless $found || $opts{q};
$estat = 1;
}
}
exit $estat;
I'm reading How do we capture CTRL ^ C - Perl Monks, but I cannot seem to get the right info to help with my problem.
The thing is - I have an infinite loop and a 'multiline' printout to the terminal (I'm aware I'll be told to use ncurses instead - but for short scripts, I'm more comfortable writing a bunch of printfs). I'd like to trap Ctrl-C in such a way that the script will terminate only after this multiline printout has finished.
The script is (Ubuntu Linux 11.04):
#!/usr/bin/env perl
use strict;
use warnings;
use Time::HiRes;
binmode(STDIN); # just in case
binmode(STDOUT); # just in case
# to properly capture Ctrl-C - so we have all lines printed out
# unfortunately, none of this works:
my $toexit = 0;
$SIG{'INT'} = sub {print "EEEEE"; $toexit=1; };
#~ $SIG{INT} = sub {print "EEEEE"; $toexit=1; };
#~ sub REAPER { # http://www.perlmonks.org/?node_id=436492
#~ my $waitedpid = wait;
#~ # loathe sysV: it makes us not only reinstate
#~ # the handler, but place it after the wait
#~ $SIG{CHLD} = \&REAPER;
#~ print "OOOOO";
#~ }
#~ $SIG{CHLD} = \&REAPER;
#~ $SIG{'INT'} = 'IGNORE';
# main
# http://stackoverflow.com/questions/14118/how-can-i-test-stdin-without-blocking-in-perl
use IO::Select;
my $fsin = IO::Select->new();
$fsin->add(\*STDIN);
my ($cnt, $string);
$cnt=0;
$string = "";
while (1) {
    $string = ""; # also, re-initialize
    if ($fsin->can_read(0)) { # 0 timeout
        $string = <STDIN>;
    }
    $cnt += length($string);
    printf "cnt: %10d\n", $cnt;
    printf "cntA: %10d\n", $cnt+1;
    printf "cntB: %10d\n", $cnt+2;
    print "\033[3A"; # in bash - go three lines up
    print "\033[1;35m"; # in bash - add some color
    if ($toexit) { die "Exiting\n"; }
}
Now, if I run this and press Ctrl-C, I either get something like this (note: the _ indicates the position of the terminal cursor after the script has terminated):
MYPROMPT$ ./test.pl
cnEEEEEcnt: 0
MYPROMPT$ _
cntB: 2
Exiting
or:
MYPROMPT$ ./test.pl
cncnt: 0
MYPROMPT$ _
cntB: 2
Exiting
... however, I'd like to get:
MYPROMPT$ ./test.pl
cncnt: 0
cntA: 1
cntB: 2
Exiting
MYPROMPT$ _
Obviously, the handlers are running - but not quite with the timing (or in the order) I expect. Can someone clarify how I can fix this, so I get the output I want?
Many thanks in advance for any answers,
Cheers!
Hmmm... it seems the solution was easier than I thought :) Basically, the check for the "trapped exit" should run after the lines are printed - but before the characters for "go three lines up" are printed; that is, that section should be:
printf "cnt: %10d\n", $cnt;
printf "cntA: %10d\n", $cnt+1;
printf "cntB: %10d\n", $cnt+2;
if ($toexit) { die "Exiting\n" ; } ;
print "\033[3A"; # in bash - go three lines up
print "\033[1;35m"; # in bash - add some color
... and then the output upon Ctrl-C seems to be like:
MYPROMPT$ ./test.pl
cnt: 0
^CcntA: 1
cntB: 2
Exiting
MYPROMPT$ _
Well, hope this may help someone,
Cheers!
I'm trying to emulate RapidCRC's ability to check crc32 values within filenames on Windows Vista Ultimate 64-bit. However, I seem to be running into some kind of argument limitation.
I wrote a quick Perl script, created a batch file to call it, then placed a shortcut to the batch file in %APPDATA%\Microsoft\Windows\SendTo
This works great when I select about 20 files or less, right-click and "send to" my batch file script. However, nothing happens at all when I select more than that. I suspect there's a character or number of arguments limit somewhere.
Hopefully I'm missing something simple and that the solution or a workaround isn't too painful.
References:
batch file (crc32_inline.bat):
crc32_inline.pl %*
Perl notes:
I'm using (strawberry) perl v5.10.0
I have C:\strawberry\perl\bin in my path, which is where crc32.bat exists.
perl script (crc32_inline.pl):
#!/usr/bin/env perl
use strict;
use warnings;
use Cwd;
use English qw( -no_match_vars );
use File::Basename;
$OUTPUT_AUTOFLUSH = 1;
my $crc32_cmd = 'crc32.bat';
my $failure_report_basename = 'crc32_failures.txt';
my %failures = ();
print "\n";
foreach my $arg (@ARGV) {
# if the file has a crc, check to see if it matches the calculated
# crc.
if (-f $arg and $arg =~ /\[([0-9a-f]{8})\]/i) {
my $crc = uc $1;
my $basename = basename($arg);
print "checking ${basename}... ";
my $calculated_crc = uc `${crc32_cmd} "${arg}"`;
chomp($calculated_crc);
if ($crc eq $calculated_crc) {
print "passed.\n";
}
else {
print "FAILED (calculated ${calculated_crc})\n";
my $dirname = dirname($arg);
$failures{$dirname}{$basename} = $calculated_crc;
}
}
}
print "\nReport Summary:\n";
if (scalar keys %failures == 0) {
print " All files OK\n";
}
else {
print sprintf(" %d / %d files failed crc32 validation.\n" .
" See %s for details.\n",
scalar keys %failures,
scalar @ARGV,
$failure_report_basename);
my $failure_report_fullname = $failure_report_basename;
if (defined -f $ARGV[0]) {
$failure_report_fullname
= dirname($ARGV[0]) . '/' . $failure_report_basename;
}
$OUTPUT_AUTOFLUSH = 0;
open my $fh, '>' . $failure_report_fullname or die $!;
foreach my $dirname (sort keys %failures) {
print {$fh} $dirname . "\n";
foreach my $basename (sort keys %{$failures{$dirname}}) {
print {$fh} sprintf(" crc32(%s) basename(%s)\n",
$failures{$dirname}{$basename},
$basename);
}
}
close $fh;
$OUTPUT_AUTOFLUSH = 1;
}
print sprintf("\n%s done! (%d seconds elapsed)\n" .
"Press enter to exit.\n",
basename($0),
time() - $BASETIME);
<STDIN>;
I recommend just putting a shortcut to your script in the "Send To" directory instead of doing it via a batch file (which is subject to cmd.exe's limits on command line length).