Suppressing system command called from awk script - Windows

I am currently running this script in Windows 7.
So, I have a program that is meant to color-code output from another command (mkmk) and tally up varying numbers of errors and other notable stats. Right now it starts as a batch file which:
Turns off echo
Sets some color values to variables
Calls the mkmk command and the awk script together
The awk script then parses the output line by line and calls a function which in turn calls an exe that does the colorizing. It works as follows:
/: error/ {
    CntError++ ;
    TraError[CntError] = $0;
    colorf(cErrorDcb,$0) ;
    next
}

function colorf(fg, str ) {
    if ( Couleur < 1 )
    {
        printf("%s\n",str);
    }
    else
    {
        if ( System == "UNIX" )
        {
            printf("%s%s%s%s\n",fg,bg,str,NORMAL);
        }
        else
        {
            system("Colorize.exe /c:" fg " \"" str "\"");
            printf("\n");
        }
    }
}
So, everything works, EXCEPT that every time system("Colorize.exe") is called (which is a lot), the command itself is echoed to the terminal and clutters up the output. It doesn't appear to be affected by the @echo off command in my batch file, since it is called from inside the awk script. Is there any way to hide only these system commands but keep the rest of my awk output?

I have no idea what the magical Windows incantations are to control what displays on the terminal, but in general, instead of calling system() and letting the command it runs produce its own output that gets mixed in with the awk output, use getline to read the result of the call into a variable and then print that from awk. Something like this:
/: error/ {
    TraError[++CntError] = $0
    print colorf(cErrorDcb,$0)
    next
}

function colorf(fg, str,    cmd, line, colorStr) {
    if ( Couleur < 1 ) {
        colorStr = str
    }
    else if ( System == "UNIX" ) {
        colorStr = fg bg str NORMAL
    }
    else {
        cmd = "Colorize.exe /c:" fg " \"" str "\""
        colorStr = ( (cmd | getline line) > 0 ? line : str )
        close(cmd)
    }
    return colorStr
}
I got rid of all of the useless semi-colons too.
Best advice: get Cygwin!

Related

Writing multiline variable to file cmd

I am writing multiple lines from a variable to a file as part of my build step in a Groovy Jenkins pipeline.
def output = GetProfiles()
String[] rev = output.toString().split("\n");
for (int i = 0; i < rev.length; i++) {
    String cred = rev[i]
    bat "@echo off && echo $cred >> \"$tmpDir\\credentials\""
}
GetProfiles returns multiple lines of data and the above code works. However, it takes a lot of time because it writes to the file line by line.
Is there a better approach that writes the entire set of lines in the variable to the file in one go? I tried wrapping the echo command in (), but that doesn't work:
bat "@echo off && (echo $output) > \"$tmpDir\\credentials\""
Skip the shell loop entirely and use the writeFile pipeline step, which writes the whole string in one call:
def output = GetProfiles().toString()
writeFile( file: "$tmpDir/credentials", text: output )

Executing a Perl script from the Windows command line with 2 arguments

This is my Perl script:
use strict;
use warnings;
use XML::Twig;
use Data::Dumper;

sub xml2array {
    my $path = shift;
    my $twig = XML::Twig->new->parsefile($path);
    return map { $_->att('VirtualPath') } $twig->get_xpath('//Signals');
}

sub compareMappingToArray {
    my $mapping    = shift;
    my $signalsRef = shift;
    my $i = 1;
    print "In file : $mapping\n";
    open(my $fh, $mapping);
    while (my $r = <$fh>) {
        chomp $r;
        if ($r =~ /\'(ModelSpecific.*)\'/) {
            my $s = $1;
            my @matches = grep { /^$s$/ } @{$signalsRef};
            print "line $i : not found - $s\n" if scalar @matches == 0;
            print "line $i : multiple $s\n"    if scalar @matches > 1;
        }
        $i = $i + 1; # keep line index
    }
}

my $mapping = "C:/Users/HOR1DY/Desktop/Global/TA_Mapping/CAN/CAN_ESP_002_mapping.pm";
my @virtualpath = xml2array("SignalModel.xml");
compareMappingToArray($mapping, \@virtualpath);
The script works well. Its aim is to compare the files "SignalModel.xml" and "CAN_ESP_002_mapping.pm" and to write the lines that didn't match to a .TXT file. Here is what the .TXT file looks like:
In file : C:/Users/HOR1DY/Desktop/Global/TA_Mapping/CAN/CAN_ESP_002_mapping.pm
line 331 : not found - ModelSpecific.EID.NET.CAN_Engine.VCU.Transmit.VCU_202.R2B_VCU_202__byte_3
line 348 : not found - ModelSpecific.EID.NET.CAN_Engine.CMM_WX.Transmit.CMM_HYB_208.R2B_CMM_HYB_208__byte_2
line 368 : not found - ModelSpecific.EID.NET.CAN_Engine.VCU.Transmit.VCU_222.R2B_VCU_222__byte_0
But for this script, the two files that need to be compared are hard-coded inside the code. Instead of doing that, I would like to run the script from the Windows command line with something like:
C:\Users>perl CANMappingChecker.pl -'file 1' 'file 2'
All the files are in a .zip file, so if the script could go inside the archive and take the 2 files that I need for the comparison, it would be perfect.
I really don't know how to do this or what to put inside my script to make it work from the Windows command line. Thanks for your help!
Program (or script) parameters are stored in the @ARGV array. shift and pop without an argument work on @ARGV when used outside of a sub; inside a sub they operate on @_.
See Archive::Zip for zip file handling.
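As a minimal sketch of how the two fit together (the argument order, the usage message, and the idea of extracting into the current directory are my assumptions, not something from your script):
#!/usr/bin/env perl
use strict;
use warnings;
use Archive::Zip qw( :ERROR_CODES );

# Hypothetical usage:
#   perl CANMappingChecker.pl archive.zip SignalModel.xml CAN_ESP_002_mapping.pm
die "Usage: perl CANMappingChecker.pl <archive.zip> <xml member> <mapping member>\n"
    unless @ARGV == 3;
my ($zipPath, $xmlName, $mappingName) = @ARGV;

my $zip = Archive::Zip->new();
$zip->read($zipPath) == AZ_OK or die "Cannot read $zipPath\n";

# Extract the two members into the current directory; the extracted files can
# then be fed to the xml2array() and compareMappingToArray() subs from the question.
$zip->extractMember($xmlName)     == AZ_OK or die "Cannot extract $xmlName\n";
$zip->extractMember($mappingName) == AZ_OK or die "Cannot extract $mappingName\n";
Archive::Zip can also return a member's contents as a string if you prefer not to extract to disk.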

bash time output processing

I know that time will send timing statistics output to stderr. But somehow I couldn't capture it either in a bash script or into a file via redirection:
time $cmd 1>/dev/null 2>file
$output=`cat file`
Or
$output=`time $cmd 1>/dev/null`
I'm only interested in the timing, not the direct output of the command. I've read some posts over here but still had no luck finding a viable solution. Any suggestions?
Thanks!
Try:
(time $cmd) 1>/dev/null 2>file
so that (time $cmd) is executed in a subshell environment and you can then redirect its output.
(Use GNU time, /usr/bin/time, rather than the bash builtin. Thanks @Michael Krelin)
(Or invoke it as \time. Thanks @Sorpigal; if I ever knew this I'd entirely forgotten it)
How about using the -o and maybe -a command line options:
-o FILE, --output=FILE
Do not send the results to stderr, but overwrite the specified file.
-a, --append
(Used together with -o.) Do not overwrite but append.
I had a similar issue where I wanted to benchmark optimizations. The idea was to run the program several times and then output statistics on the run durations.
I used the following command lines:
1st run: (time ./myprog) 2>times.log
Next runs: (time ./myprog) 2>>times.log
Note that my (bash?) built-in time outputs statistics in the form:
real 0m2.548s
user 0m7.341s
sys 0m0.007s
Then I ran the following Perl script to retrieve statistics:
#!/usr/bin/perl -w

open FH, './times.log' or die "ERROR: ", $!;

my $usercpt  = 0;   # number of "user" lines found
my $useracc1 = 0;   # sum of user times
my $useracc2 = 0;   # sum of squared user times
my $usermean = 0;
my $userdev  = 0;
my $uservar  = 0;
my $temp     = 0;

while (<FH>) {
    if ("$_" =~ /user/) {
        if ("$_" =~ /(\d+)m(\d{1,2})\.(\d{3})s/) {
            $usercpt++;
            $temp      = $1*60 + $2 + $3*0.001;
            $useracc1 += $temp;
            $useracc2 += $temp**2;
        }
    }
}
close FH;

if ($usercpt != 0) {
    $usermean = $useracc1 / $usercpt;
    $userdev  = sqrt($useracc2 / $usercpt - $usermean**2);
    $usermean = int($usermean*1000)/1000;
    $userdev  = int($userdev*1000)/1000;
}
else {
    $usermean = "---";
    $userdev  = "---";
}

print "User: ", $usercpt, " runs, avg. ", $usermean, "s, std.dev. ", $userdev, "s\n";
Of course, the regular expressions may require adjustments depending on your time output format. The script can also easily be extended to include real and system statistics.
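For instance, here is a minimal, self-contained sketch of that extension (the %sum/%count accumulators and the single-pass layout are my own; the pattern is the same idea, applied to both the real and user lines):
#!/usr/bin/perl -w
use strict;

# Accumulate "real" and "user" times from times.log in one pass.
my (%sum, %count);
open my $fh, '<', './times.log' or die "ERROR: $!";
while (<$fh>) {
    if (/^(real|user)\s+(\d+)m(\d{1,2})\.(\d{3})s/) {
        $sum{$1}   += $2*60 + $3 + $4*0.001;
        $count{$1} += 1;
    }
}
close $fh;

for my $kind (qw(real user)) {
    printf "%s: %d runs, avg. %.3fs\n", $kind, $count{$kind} || 0,
        $count{$kind} ? $sum{$kind} / $count{$kind} : 0;
}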

Perl: Weird Tie::File behaviour in Windows as opposed to Unix

I have this Perl script that uses Tie::File.
On Linux (Ubuntu), when I invoke the script via Bash, it works as expected, but on Windows, when I invoke the script via PowerShell, it behaves weirdly (see the P.S. below).
Code:
#!/usr/bin/perl -T
use strict;
use warnings;
use Tie::File;
use CommonStringTasks;

if ( @ARGV != 4 ) {
    print "ERROR:Inadequate/Redundant arguments.\n";
    print "Usage: perl <pl_executable> <path/to/peer_main.java> <peer_main.java>\n";
    print " <score_file_index> <port_step_index>\n";
    print $ARGV[0], "\n";
    print $ARGV[1], "\n";
    print $ARGV[2], "\n";
    print $ARGV[3], "\n";
    exit 1;
}

my $PEER_DIR     = $ARGV[0];
my $PEER_FILE    = $ARGV[1];
my $PEER_PACKAGE = "src/planetlab/app";
my $PEER_PATH    = "${PEER_DIR}/${PEER_PACKAGE}/${PEER_FILE}";

# Check if args are tainted ...
# Check $PEER_PATH file permissions ...

open(my $file, "+<", "$PEER_PATH")
    or die("File ", $PEER_FILE, " could not be opened for editing:$!");

# Edit the file and change variables for debugging/deployment setup.
# Number demanglers:
# -flock -> arg2 -> 2 stands for FILE_EX
# Options (critical!):
# -Memory: Inhibit caching as this will allow record changes on the fly.
tie my @fileLines,
    'Tie::File',
    $file,
    memory => 0
    or die("File ", $PEER_FILE, " could not be tied with Tie::File:$!");
flock $file, 2;

my $i = 0;
my $scoreLine         = "int FILE_INDEX = " . $SCORE . ";";
my $portLine          = "int SERVER_PORT = " . $PORT . ";";
my $originalScoreLine = "int FILE_INDEX =";
my $originalPortLine  = "int SERVER_PORT =";

(tied @fileLines)->defer;
while (my $line = <$file>) {
    if ( ($line =~ m/($scoreLine)/) && ($SCORE+1 > 0) ) {
        print "Original line (score): ", "\n", $scoreLine, "\n";
        chomp $line;
        $line = substr($line, 0, -($scoreDigits+1));
        $line = $line . (++$SCORE) . ";";
        print "Editing line (score): ", $i, "\n", trimLeadSpaces($fileLines[$i]), "\n";
        $fileLines[$i] = $line;
        print "Line replaced with:\n", trimLeadSpaces($line), "\n";
        next;
    }
    if ( ($line =~ m/($portLine)/) && ($PORT > 0) ) {
        print "Original line (port): ", "\n", $portLine, "\n";
        chomp $line;
        $line = substr($line, 0, -($portDigits+1));
        $line = $line . (++$PORT) . ";";
        print "Editing line (port): ", $i, "\n", trimLeadSpaces($fileLines[$i]), "\n";
        $fileLines[$i] = $line;
        print "Line replaced with:\n", trimLeadSpaces($line), "\n";
        last;
    }
    # Restore original settings.
    if ( ($line =~ m/($originalScoreLine)/) && ($SCORE < 0) ) {
        print "Restoring line (score) - FROM: ", "\n", $fileLines[$i], "\n";
        $fileLines[$i] = " private static final int FILE_INDEX = 0;";
        print "Restoring line (score) - TO: ", "\n", $fileLines[$i], "\n";
        next;
    }
    if ( ($line =~ m/($originalPortLine)/) && ($PORT < 0) ) {
        print "Restoring line (port) - FROM: ", "\n", $fileLines[$i], "\n";
        $PORT = abs($PORT);
        $fileLines[$i] = " private static final int SERVER_PORT = " . $PORT . ";";
        print "Restoring line (port) - TO: ", "\n", $fileLines[$i], "\n";
        last;
    }
} continue {
    $i++;
}
(tied @fileLines)->flush;
untie @fileLines;
close $file;
The Perl version in both OSes is 5+ (on Windows, ActiveState Perl with CPAN modules).
Could it be the way I open the filehandle? Any ideas, anyone?
P.S.: The first version had a while (<$file>) loop and used the $_ variable instead of $line. When I did that, specific lines would not be edited; instead the file would get appended with a hundred newlines or so, followed by the (correctly) edited line, and so on. I also got a warning about $fileLines[$i] being uninitialized! Clearly something is wrong with the Tie::File structure on Windows, or something else that I am not aware of. The same erratic behaviour takes place with the current changes, while on Linux (Ubuntu) the behaviour is again as expected.
The OP's question is vague and lacks input and expected output, so I will simply note some of my concerns:
First, using Tie::File and <$file> and flock on the same handle seems to be both overkill and dangerous. I would recommend simply using Tie::File to iterate and to edit, such as:
#!/usr/bin/env perl
use strict;
use warnings;
use Tie::File;

tie my @lines, 'Tie::File', 'filename';
foreach my $linenum ( 0..$#lines ) {
    if ($lines[$linenum] =~ /something/) {
        $lines[$linenum] = 'somethingelse';
    }
}
Perhaps better than editing in place, as Tie::File allows, is to copy the file to a backup, iterate over its lines using <$file>, then write to a new file with the old name.
#!/usr/bin/env perl
use strict;
use warnings;
use File::Copy 'move';

my $infile = $ARGV[0];
move( $infile, "$infile.bak");
open my $inhandle,  '<', "$infile.bak";
open my $outhandle, '>', $infile;
while( my $line = <$inhandle> ) {
    if ($line =~ /something/) {
        $line = 'somethingelse';
    }
    print $outhandle $line;
}
Second, the -MModule flag simply translates to a use Module; at the top of the script. Therefore -MCPAN is use CPAN;. However, loading the CPAN module does nothing for the script; CPAN.pm gives a script the ability to install modules.
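For example, a throwaway script (the name dump_args.pl and the choice of Data::Dumper are just for illustration) showing the equivalence:
# dump_args.pl - the "use" line below has the same effect as dropping it and
# invoking the script as:  perl -MData::Dumper dump_args.pl foo bar
use Data::Dumper;
print Dumper( \@ARGV );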
Third, we will be able to help better if you give an example input, an expected output, and a stripped-down script that clearly shows how this operation is supposed to perform while still failing in the same way that the actual script does.
I found out the source of my problems. The reason was the record separator!
On Windows, Tie::File expects a \r\n record separator, so it read the whole file as a single record. My files are in UTF-8, with Unix line endings.
That is why, when I was traversing @fileLines and accessed any index beyond 0, Perl warned that the value was uninitialized. I fixed the problem and now I am ready to go on! :D
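For reference, a minimal sketch of one explicit fix using Tie::File's recsep option (the file name comes from the question; whether the OP solved it this way or by converting the line endings is not stated):
use strict;
use warnings;
use Tie::File;

# On Win32, Tie::File defaults to a "\r\n" record separator; force "\n" so an
# LF-only file is split into individual lines instead of one big record.
tie my @fileLines, 'Tie::File', 'peer_main.java', recsep => "\n"
    or die "Could not tie file: $!";
print scalar(@fileLines), " records\n";
untie @fileLines;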
P.S.: Mr Joel Berger I am marking your answer as valid/appropriate because you really tried helping me and I followed your first advice about the file handle :).
Thank you everyone for assisting me xD xD xD

Why can't I use more than 20 files with my Perl script and Windows's SendTo?

I'm trying to emulate RapidCRC's ability to check crc32 values within filenames on Windows Vista Ultimate 64-bit. However, I seem to be running into some kind of argument limitation.
I wrote a quick Perl script, created a batch file to call it, then placed a shortcut to the batch file in %APPDATA%\Microsoft\Windows\SendTo
This works great when I select about 20 files or less, right-click and "send to" my batch file script. However, nothing happens at all when I select more than that. I suspect there's a character or number of arguments limit somewhere.
Hopefully I'm missing something simple and that the solution or a workaround isn't too painful.
References:
batch file (crc32_inline.bat):
crc32_inline.pl %*
Perl notes:
I'm using Strawberry Perl v5.10.0
I have C:\strawberry\perl\bin in my path, which is where crc32.bat exists.
perl script (crc32_inline.pl):
#!/usr/bin/env perl
use strict;
use warnings;
use Cwd;
use English qw( -no_match_vars );
use File::Basename;

$OUTPUT_AUTOFLUSH = 1;

my $crc32_cmd = 'crc32.bat';
my $failure_report_basename = 'crc32_failures.txt';
my %failures = ();

print "\n";

foreach my $arg (@ARGV) {
    # if the file has a crc, check to see if it matches the calculated
    # crc.
    if (-f $arg and $arg =~ /\[([0-9a-f]{8})\]/i) {
        my $crc = uc $1;
        my $basename = basename($arg);
        print "checking ${basename}... ";
        my $calculated_crc = uc `${crc32_cmd} "${arg}"`;
        chomp($calculated_crc);
        if ($crc eq $calculated_crc) {
            print "passed.\n";
        }
        else {
            print "FAILED (calculated ${calculated_crc})\n";
            my $dirname = dirname($arg);
            $failures{$dirname}{$basename} = $calculated_crc;
        }
    }
}

print "\nReport Summary:\n";

if (scalar keys %failures == 0) {
    print " All files OK\n";
}
else {
    print sprintf(" %d / %d files failed crc32 validation.\n" .
                  " See %s for details.\n",
                  scalar keys %failures,
                  scalar @ARGV,
                  $failure_report_basename);

    my $failure_report_fullname = $failure_report_basename;
    if (defined -f $ARGV[0]) {
        $failure_report_fullname
            = dirname($ARGV[0]) . '/' . $failure_report_basename;
    }

    $OUTPUT_AUTOFLUSH = 0;
    open my $fh, '>' . $failure_report_fullname or die $!;
    foreach my $dirname (sort keys %failures) {
        print {$fh} $dirname . "\n";
        foreach my $basename (sort keys %{$failures{$dirname}}) {
            print {$fh} sprintf(" crc32(%s) basename(%s)\n",
                                $failures{$dirname}{$basename},
                                $basename);
        }
    }
    close $fh;
    $OUTPUT_AUTOFLUSH = 1;
}

print sprintf("\n%s done! (%d seconds elapsed)\n" .
              "Press enter to exit.\n",
              basename($0),
              time() - $BASETIME);
<STDIN>;
I recommend just putting a shortcut to your script in the "Send To" directory instead of going through a batch file (which is subject to cmd.exe's limits on command-line length).
