Perl script hangs for no reason - Windows

So I have this small script which checks two log files for a specific line and compares those lines.
The script is used on several different Windows Bamboo agents, but on one of them it just hangs and doesn't exit. Since the script is part of a Bamboo job, the whole job hangs when the script doesn't exit.
When I access the machine remotely and kill the script, the job continues until it reaches the script again.
This is the script, which is started by another script.
#! /usr/bin/perl
my $naluresult = 2;
my $hevcresult = 2;
my $hevcfailed = 0;
use strict;
use warnings;
#---------------------------------------------
#check for $ARGV[0] and $ARGV[1]
open( my $nalulog, "<", $ARGV[1] )
    or die "cannot open File:$!\n\n";
while (<$nalulog>) {
    chomp;
    $_ =~ s/\s+//g;
    if ( $_ =~ m/MD5:OK/ ) {
        $naluresult = 1;
    } else {
        if ( $_ =~ m/MD5:MISSING/ ) {
            $naluresult = 0;
        }
    }
}
close $nalulog;
#---------------------------------------------
open( my $hevclog, "<", $ARGV[0] )
    or die "cannot open File:$!\n\n";
while (<$hevclog>) {
    chomp;
    $_ =~ s/\s+//g;
    if ( $_ =~ m/MD5check:OK/ ) {
        $hevcresult = 1;
        last;
    } else {
        if ( $_ =~ m/MD5check:FAILED/ ) { $hevcfailed = 1; }
    }
    if ( $hevcfailed == 1 ) {
        #do stuff
    }
}
close $hevclog;
#---------------------------------------------
if ( $hevcresult == 2 ) {
    print("Missing MD5 status in HEVC Output");
    exit(-1);
} elsif ( $naluresult == 2 ) {
    print("Missing MD5 status in NALU Output");
    exit(-2);
} else {
    if ( $naluresult == $hevcresult ) { exit(0); }
    else {
        #different if-statements to print() to log
        exit(1);
    }
}
#---------------------EOF---------------------
#---------------------EOF---------------------

If your files are normal disk files that aren't being written to by other processes, locked, or anything like that, then there is nothing in the code you have here that should hang. If both files are of reasonable size, the script should read through them and finish.
However, if one of the files is locked, or is immensely large, or if you have other code that can get stuck in an infinite loop, that would explain why your program is hanging.
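To narrow it down on the misbehaving agent, one option is to add progress tracing around the suspect steps. The sketch below is only an illustration (it is not part of the original script); it prints to STDERR so you can see whether it is the open() or one of the read loops that never returns:
#!/usr/bin/perl
# Hypothetical tracing aid: report each step to STDERR so the hanging
# step can be identified on the problem agent.
use strict;
use warnings;
warn "opening NALU log $ARGV[1]\n";
open( my $nalulog, "<", $ARGV[1] ) or die "cannot open File:$!\n";
warn "reading NALU log\n";
while (<$nalulog>) {
    warn "  still reading, line $.\n" if $. % 100000 == 0;    # heartbeat for huge files
    # ... existing matching code goes here ...
}
warn "finished NALU log\n";
close $nalulog;
The same pattern can be repeated for the HEVC log; whichever message is printed last points at the step that hangs.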

Related

I need to create a file with all HashiCorp Vault key-value pair data using a shell script

I need to create a file with all HashiCorp Vault key-value pair data using a shell script.
I want to dump all the data from Vault to a flat file.
Please advise on the best way to do it.
Thanks in advance,
Prudhvi
Just for keys and values you can use my little Perl script 'vault-backup', which collects the data with the appropriate vault commands and freezes it with Storable.
Please note that this does NOT create a full backup of your Vault! No methods or any other (unlistable) things outside the secrets are backed up. It's only usable for simple keys and values, and it probably isn't usable for multiline or binary values. You can patch the script to support that, if you like. ;)
#!/usr/bin/perl
#
# Usage: vault-backup [<PATH> [stdout]]
use Data::Dumper;
use Storable qw(freeze thaw);

# Set vault environment variables
# Always end with a " && " for the actual command
my $setenv =
    "VAULT_ADDR=https://myvault.somewhere.com:8200 && ".
    "VAULT_CA_PATH=/etc/yourcertificates/ && ";

my $path = $ARGV[0] || "secret/";
if ($path!~/\/$/) {
    $path="$path/";
}

push @list, getData($path);

if ($ARGV[1] eq "stdout") {
    print Dumper(\@list);
} else {
    my $fn="vault-backup-frozen-".time().".dat";
    open W,">$fn";
    print W freeze(\@list);
    close W;
    print STDERR "Wrote data to $fn\n";
}

sub getData {
    my $path=shift;
    print STDERR "Starting getData($path)\n";
    my @ret=();
    my $command="$setenv vault kv list -tls-skip-verify $path | tail -n+3 ";
    print STDERR "starting command: $command\n";
    my @lines = `$command`;
    chomp @lines;
    foreach my $line (@lines) {
        if ($line=~/\/$/) {
            my @result = getData($path.$line);
            if (scalar(@result)>0) {
                # Find deeper results
                push @ret, @result;
            } else {
                # empty final dir, no values
                push @ret, { path => $path.$line };
            }
        } else {
            # Found a key!
            my $command="$setenv vault kv get -tls-skip-verify $path$line";
            print STDERR "starting command: $command\n";
            my $values = `$command`;
            push @ret, {path=>$path.$line, value=>$values};
        }
    }
    return @ret;
}
To restore the data, you can use the script below. It handles data only; it does not act on metadata.
#!/usr/bin/perl
# Usage: vault-restore <backup-filename>
use Data::Dumper;
use Storable qw(thaw);

my %all_entries;

# Set vault environment variables
# Always end with a " && " for the actual command
my $setenv =
    "VAULT_ADDR=https://myothervault.somewhere.com:8200 && ".
    "VAULT_CA_PATH=/etc/mycertificates/ && ";

# Read the data
my $fn = $ARGV[0] || die("I need a filename with the frozen data");
open F,"<$fn";
my @list = @{ thaw(join("",<F>)) };
close F;
print STDERR "Read ".scalar(@list)." entries.\n";

# Process the data
foreach my $entry (@list) {
    print STDERR "\n# adding entry -> $entry->{path}\n";
    addEntry($entry);
}

foreach my $path (keys %all_entries) {
    my $keyvalues="";
    foreach my $key (keys %{$all_entries{$path}}) {
        my $value=$all_entries{$path}{$key};
        $keyvalues.="'$key=$value' ";
    }
    print STDERR "vault kv put $path $keyvalues\n";
    # `$command`;
}

sub addEntry {
    my $entry=shift;
    my $path = $entry->{'path'};
    if ($entry->{'value'}) {
        my $values = $entry->{value};
        my @list=split("\n", $values);
        my $metadata_engage=0;
        my $data_engage=0;
        foreach my $keyvalue (@list) {
            if ($keyvalue=~/==== Metadata ====/) {
                $metadata_engage=1;
                $data_engage=0;
            } elsif ($keyvalue=~/==== Data ====/) {
                $metadata_engage=0;
                $data_engage=1;
            } elsif ($data_engage) {
                my ($key,$value)=($keyvalue=~/^([^ ]+) +(.*)$/);
                if ($key ne "Key" && $key ne "---") {
                    # print STDERR "key=$key ; value=$value\n";
                    $all_entries{$path}{$key}=$value;
                } else {
                    # print STDERR "-- separator\n";
                }
            }
        }
    } else {
        print STDERR "Found a final but empty path: $path\n";
    }
}
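Assuming the two scripts are saved as vault-backup and vault-restore and made executable (the timestamp in the file name below is just a placeholder), a run might look roughly like this, with progress messages omitted:
$ ./vault-backup secret/
Wrote data to vault-backup-frozen-1234567890.dat
$ ./vault-restore vault-backup-frozen-1234567890.dat
Note that the restore script only prints the vault kv put commands to STDERR; the line that would actually execute them is commented out, so you can review everything before running it against the target Vault.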

Error executing SQL using Perl

I am trying to execute some open source code which finds the list of tables involved in a SQL statement.
I am working from 'Retrieve table names from Oracle queries'.
I understood the expressions and commands to some extent and tried it.
Details of my execution:
GetTable.pl file
Same as in the link.
test.sql file
I didn't use the one in the link. Instead I had only a single SQL statement for testing.
SELECT emp_name FROM load_tables.temp;
Executed in Strawberry Perl
I tried the following
$ perl GetTable.pl
Usage : GetTable <sql query file>
$ perl test.sql
Can't locate object method "FROM" via package "load_tables" (perhaps you forgot to load "load_tables"?) at test.sql line 1
Can someone help me execute it? I'm not sure if there is a problem with the code, as I could see that two other people have executed it successfully.
Perl code
#!/usr/bin/perl
use warnings;

# Function which gets the table names and formats and prints them.
sub printTable {
    my $tab = shift;
    $tab =~ s/,\s+/,/g;
    $tab =~ s/\s+,/,/g;
    my @out = split( /,/, $tab );
    foreach ( @out ) {
        $_ =~ s/ .*//;
        print $opr, $_, "\n";
    }
}

# Function which gets the individual queries and separates the table
# names from the queries. Sub-queries, co-related queries, etc.
# will also be handled.
sub process {
    local $opr;
    my $line = shift;
    $line =~ s/\n/ /g;
    if ( $line =~ m/^\s*(select|delete)/i ) {
        if ( $line =~ m/^\s*select/i ) {
            $opr = "SELECT: ";
        }
        else {
            $opr = "DELETE: ";
        }
        if ( $line =~ m/from.*where/i ) {
            while ( $line =~ m/from\s+(.*?)where/ig ) {
                &printTable( $1 );
            }
        }
        elsif ( $line =~ m/from.*;/i ) {
            while ( $line =~ m/from\s+(.*);/ig ) {
                &printTable( $1 );
            }
        }
    }
    elsif ( $line =~ m/^\s*update\s+(\w+)\s+/i ) {
        $opr = "UPDATE: ";
        &printTable( $1 );
    }
    elsif ( $line =~ m/^\s*insert\s+into\s+(\w+)\s+/i ) {
        $opr = "INSERT: ";
        &printTable( $1 );
    }
}

# The main part which reads the file, collects each
# query into a variable and sends it to the process function.
if ( @ARGV != 1 ) {
    print "Usage: GetTable <sql query file>\n";
    exit 1;
}

open QFILE, $ARGV[0] or die "File $ARGV[0]: $! \n";
my $flag = 0;
my $query = "";
my $conds = "select|insert|update|delete";
while ( <QFILE> ) {
    next if ( /^$/ );
    if ( $flag == 1 ) {
        $query .= $_;
        if ( /;\s*$/ ) {
            $flag = 0;
            &process( $query );
        }
    }
    elsif ( /^\s*($conds).*;\s*/i ) {
        &process( $_ );
    }
    elsif ( /^\s*($conds)/i ) {
        $flag = 1;
        $query = $_;
    }
}
close QFILE;
Two important skills to learn as a programmer are a) accuracy in following instructions and b) reading error messages carefully.
You started by running GetTable.pl. But that program requires a parameter (the name of an SQL file to analyse), and the error message tried to tell you that.
I don't know why, but instead of doing what the error message told you to do (which would have been to run perl GetTable.pl test.sql), you decided to ask Perl to run your SQL file.
The second error message you got was the Perl compiler trying to make sense of the SQL that you asked it to run. But the Perl compiler doesn't understand SQL; it understands Perl. So it's no surprise that it got confused.
To fix it, do what your first error message suggested and run the command
$ perl GetTable.pl test.sql
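With the one-line test.sql above, the SELECT branch of process() matches and the expected output (reconstructed from the code, not a captured run) is a single line:
SELECT: load_tables.temp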

Keep shared entries among many files

I have hundreds of files, each with a different number of entries (>xxxx), and I want to keep only the entries shared among all files, separately per file. I'm not sure what the best method to do this is, maybe Perl! I used bash's sort and uniq, but I didn't get the correct answer. The ID format is a > followed by 4 characters, the same across all files.
1.fa
>abcd
CTGAATGCC
2.fa
>abcd
AAATGCGCG
>efgh
CGTAC
3.fa
>abcd
ATGCAATA
>efgh
TAACGTAA
>ijkl
TGCAA
Final results, of this example would be:
1.fa
>abcd
CTGAATGCC
2.fa
>abcd
AAATGCGCG
3.fa
>abcd
ATGCAATA
This Perl program will do as you ask. It uses Perl's built-in edit-in-place functionality and renames the original files to 1.fa.bak etc. It shouldn't have a problem with blank lines in your data, as long as the sequence is always on one line immediately following the ID.
use strict;
use warnings 'all';

my @files = glob '*.fa';
printf "Processing %d file%s\n", scalar @files, @files == 1 ? "" : "s";

exit if @files < 2;

my %ids;
{
    local @ARGV = @files;
    while ( <> ) {
        ++$ids{$1} if /^>(\S+)/;
    }
}

# remove keys that aren't in all files
delete @ids{ grep { $ids{$_} < @files } keys %ids };

my $n = keys %ids;
printf "%d ID%s common to all files\n", $n, $n == 1 ? '' : "s";

exit unless $n;

{
    local @ARGV = @files;
    local $^I = '.bak';
    while ( <> ) {
        next unless /^>(\S+)/ and $ids{$1};
        print;
        print scalar <>;
    }
}
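With the three example files above, the progress messages would be (reconstructed from the printf calls, not a captured run):
Processing 3 files
1 ID common to all files
Each .fa file is then rewritten in place to keep only the >abcd entry and its sequence, with the original kept as 1.fa.bak and so on.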
Here is a Perl solution that may help you:
use feature qw(say);
use strict;
use warnings;

my $file_dir = 'files';
chdir $file_dir;

my @files = <*.fa>;
my $num_files = scalar @files;
my %ids;

for my $file (@files) {
    open ( my $fh, '<', $file) or die "Could not open file '$file': $!";
    while (my $id = <$fh>) {
        chomp $id;
        chomp (my $sequence = <$fh>);
        $ids{$id}++;
    }
    close $fh;
}

for my $file (@files) {
    open ( my $fh, '<', $file) or die "Could not open file '$file': $!";
    my $new_name = $file . '.new';
    open ( my $fh_write, '>', $new_name ) or die "Could not open file '$new_name': $!";
    while (my $id = <$fh>) {
        chomp $id;
        chomp (my $sequence = <$fh>);
        if ( $ids{$id} == $num_files ) {
            say $fh_write $id;
            say $fh_write $sequence;
        }
    }
    close $fh_write;
    close $fh;
}
It assumes that all the .fa files are located in the directory named $file_dir, and it writes the new sequences to new files in the same directory. The new file names get the .new extension.

How to compare the content of multiple txt files in bash shell and delete the one (file) which is a duplicate

I am trying to achieve this on macOS; I tried something similar using fdupes, but it didn't work. Here is what I am trying to achieve:
There are 100 files in directory 'alpha'
Pick one file A and compare it with each remaining file in the directory 'alpha'
If the content of file A matches any file (duplicate), delete the duplicate file
Move to file B, compare it with the remaining files, and do the same (check for duplicates)
Repeat until all files are checked for duplicates. The remaining files should be unique
Update
I adapted something similar I found here, but I have to run it multiple times to remove all the duplicates; it does not catch every duplicate in a single run. I am not sure if it is working correctly.
use Digest::MD5;
%check = ();
while (<*>) {
    -d and next;
    $fname = "$_";
    print "checking .. $fname\n";
    $md5 = getmd5($fname) . "\n";
    if ( !defined( $check{$md5} ) ) {
        $check{$md5} = "$fname";
    }
    else {
        print "Found duplicate files: $fname and $check{$md5}\n";
        print "Deleting duplicate $check{$md5}\n";
        unlink $check{$md5};
    }
}
sub getmd5 {
    my $file = "$_";
    open( FH, "<", $file ) or die "Cannot open file: $!\n";
    binmode(FH);
    my $md5 = Digest::MD5->new;
    $md5->addfile(FH);
    close(FH);
    return $md5->hexdigest;
}
You should limit the number of times that you have to read each file's contents:
1. Inventory the files using Path::Class or some similar method.
   a. Build a hash relating file size and MD5 digest to a list of file names.
2. Compare likely duplicates only: matching file size and digest.
The following is untested:
use strict;
use warnings;
use Path::Class;
use Digest::MD5;

my $dir = dir('.');
my %files_per_digest;

# Inventory Directory
while ( my $file = $dir->next ) {
    next if $file->is_dir;    # skip '.', '..' and subdirectories
    my $size   = $file->stat->size;
    my $digest = do {
        my $md5 = Digest::MD5->new;
        $md5->addfile( $file->openr );
        $md5->hexdigest;
    };
    push @{ $files_per_digest{"$size - $digest"} }, $file;
}

# Compare likely duplicates only
for my $files ( grep { @$_ > 1 } values %files_per_digest ) {
    # Sort by alpha
    @$files = sort @$files;
    print "Comparing: @$files\n";
    for my $i ( reverse 0 .. $#$files ) {
        for my $j ( 0 .. $i - 1 ) {
            my $fh1  = $files->[$i]->openr;
            my $fh2  = $files->[$j]->openr;
            my $diff = 0;
            while ( !eof($fh1) && !eof($fh2) ) {
                $diff = 1, last if scalar(<$fh1>) ne scalar(<$fh2>);
            }
            # Duplicates only if no line differed and both files ended together
            if ( !$diff and eof($fh1) and eof($fh2) ) {
                print "   $files->[$i] ($i) is duplicate of $files->[$j] ($j)\n";
                $files->[$i]->remove();
                splice @$files, $i, 1;
                last;    # file $i is gone; move on to the next $i
            }
        }
    }
}
I've used rdfind in the past with very good success. It's very accurate, fast, and seems to run leaner than fdupes. According to RDFind's web site (http://rdfind.pauldreik.se/), it can be installed using MacPorts.

Why can't I use more than 20 files with my Perl script and Windows's SendTo?

I'm trying to emulate RapidCRC's ability to check crc32 values within filenames on Windows Vista Ultimate 64-bit. However, I seem to be running into some kind of argument limitation.
I wrote a quick Perl script, created a batch file to call it, then placed a shortcut to the batch file in %APPDATA%\Microsoft\Windows\SendTo
This works great when I select about 20 files or fewer, right-click, and "send to" my batch file script. However, nothing happens at all when I select more than that. I suspect there's a limit somewhere on the number of characters or arguments.
Hopefully I'm missing something simple and that the solution or a workaround isn't too painful.
References:
batch file (crc32_inline.bat):
crc32_inline.pl %*
Perl notes:
I'm using (strawberry) perl v5.10.0
I have C:\strawberry\perl\bin in my path, which is where crc32.bat exists.
perl script (crc32_inline.pl):
#!/usr/bin/env perl
use strict;
use warnings;

use Cwd;
use English qw( -no_match_vars );
use File::Basename;

$OUTPUT_AUTOFLUSH = 1;

my $crc32_cmd = 'crc32.bat';
my $failure_report_basename = 'crc32_failures.txt';
my %failures = ();

print "\n";

foreach my $arg (@ARGV) {
    # if the file has a crc, check to see if it matches the calculated
    # crc.
    if (-f $arg and $arg =~ /\[([0-9a-f]{8})\]/i) {
        my $crc = uc $1;
        my $basename = basename($arg);
        print "checking ${basename}... ";
        my $calculated_crc = uc `${crc32_cmd} "${arg}"`;
        chomp($calculated_crc);
        if ($crc eq $calculated_crc) {
            print "passed.\n";
        }
        else {
            print "FAILED (calculated ${calculated_crc})\n";
            my $dirname = dirname($arg);
            $failures{$dirname}{$basename} = $calculated_crc;
        }
    }
}

print "\nReport Summary:\n";

if (scalar keys %failures == 0) {
    print "  All files OK\n";
}
else {
    print sprintf("  %d / %d files failed crc32 validation.\n" .
                  "  See %s for details.\n",
                  scalar keys %failures,
                  scalar @ARGV,
                  $failure_report_basename);

    my $failure_report_fullname = $failure_report_basename;
    if (defined -f $ARGV[0]) {
        $failure_report_fullname
            = dirname($ARGV[0]) . '/' . $failure_report_basename;
    }

    $OUTPUT_AUTOFLUSH = 0;
    open my $fh, '>' . $failure_report_fullname or die $!;
    foreach my $dirname (sort keys %failures) {
        print {$fh} $dirname . "\n";
        foreach my $basename (sort keys %{$failures{$dirname}}) {
            print {$fh} sprintf("  crc32(%s) basename(%s)\n",
                                $failures{$dirname}{$basename},
                                $basename);
        }
    }
    close $fh;
    $OUTPUT_AUTOFLUSH = 1;
}

print sprintf("\n%s done! (%d seconds elapsed)\n" .
              "Press enter to exit.\n",
              basename($0),
              time() - $BASETIME);
<STDIN>;
I recommend just putting a shortcut to your script in the "Send To" directory instead of going through a batch file (which is subject to cmd.exe's limit on command-line length, about 8,191 characters).
