Error executing SQL using Perl - oracle

I am trying to execute open-source code which finds the list of tables involved in a SQL statement.
I am working from Retrieve table names from Oracle queries.
I understood the expressions and commands to some extent and tried it.
Details of my execution:
GetTable.pl file
same as in the link
test.sql file
I didn't use the one in the link. Instead I had only a single SQL statement for testing.
SELECT emp_name FROM load_tables.temp;
Executed in Strawberry Perl
I tried the following
$ perl GetTable.pl
Usage : GetTable <sql query file>
$ perl test.sql
Can't locate object method "FROM" via package "load_tables" (perhaps you forgot to load "load_tables"?) at test.sql line 1
Can someone help me execute it? I'm not sure if there is a problem with the code, as I could see that two people have executed it successfully.
Perl code
#!/usr/bin/perl
use warnings;
#Function which gets the table names and formats and prints them.
sub printTable {
my $tab = shift;
$tab =~ s/,\s+/,/g;
$tab =~ s/\s+,/,/g;
my @out = split( /,/, $tab );
foreach ( @out ) {
$_ =~ s/ .*//;
print $opr, $_, "\n";
}
}
# Function which gets the individual queries and separates the table
# names from the queries. Sub-queries, co-related queries, etc.
# will also be handled.
sub process {
local $opr;
my $line = shift;
$line =~ s/\n/ /g;
if ( $line =~ m/^\s*(select|delete)/i ) {
if ( $line =~ m/^\s*select/i ) {
$opr = "SELECT: ";
}
else {
$opr = "DELETE: ";
}
if ( $line =~ m/from.*where/i ) {
while ( $line =~ m/from\s+(.*?)where/ig ) {
&printTable( $1 );
}
}
elsif ( $line =~ m/from.*;/i ) {
while ( $line =~ m/from\s+(.*);/ig ) {
&printTable( $1 );
}
}
}
elsif ( $line =~ m/^\s*update\s+(\w+)\s+/i ) {
$opr = "UPDATE: ";
&printTable( $1 );
}
elsif ( $line =~ m/^\s*insert\s+into\s+(\w+)\s+/i ) {
$opr = "INSERT: ";
&printTable( $1 );
}
}
#The main function which reads the files and reads the
#query into a variable and sends it to process function.
if ( @ARGV != 1 ) {
print "Usage: GetTable <sql query file>\n";
exit 1;
}
open QFILE, $ARGV[0] or die "File $ARGV[0]: $! \n";
my $flag = 0;
my $query = "";
my $conds = "select|insert|update|delete";
while ( <QFILE> ) {
next if ( /^$/ );
if ( $flag == 1 ) {
$query .= $_;
if ( /;\s*$/ ) {
$flag = 0;
&process( $query );
}
}
elsif ( /^\s*($conds).*;\s*/i ) {
&process( $_ );
}
elsif ( /^\s*($conds)/i ) {
$flag = 1;
$query = $_;
}
}
close QFILE;

Two important skills to learn as a programmer are a) following instructions accurately and b) reading error messages carefully.
You started by running GetTable.pl. But that program requires a parameter (the name of an SQL file to analyse) and the error message tried to tell you that.
I don't know why, but instead of doing what the error message told you to do (which would have been to run perl GetTable.pl test.sql) you decided to ask Perl to run your SQL file.
The second error message you got was the Perl compiler trying to make sense of the SQL that you asked it to run. But the Perl compiler doesn't understand SQL, it understands Perl. So it's no surprise that it got confused.
To fix it, do what your first error message suggested—run the command
$ perl GetTable.pl test.sql
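Once the invocation is fixed, tracing the posted code suggests the single statement in test.sql should produce one line naming the table found after FROM, roughly:
$ perl GetTable.pl test.sql
SELECT: load_tables.temp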

Related

How to pass parameters to Perl module from Bash shell script, and retrieve values?

A function in a Perl module takes 3 parameters. The value of the first parameter determines the values of the other two, which are passed back to the caller. It is defined like this:
package MyModule;
sub MyFunction
{
my $var_0 = $_[0];
my $var_1;
my $var_2;
if ($var_0 =~ /WA/) {
$var_1 = "Olympia";
$var_2 = "Population is 53,000";
}
elsif ($var_0 =~ /OR/) {
$var_1 = "Salem";
$var_2 = "Population is 172,000";
}
$_[1] = $var_1;
$_[2] = $var_2;
return 0; # no error
}
Calling this function from the bash shell script:
VAL=`perl -I. -MMyModule -e 'print MyModule::MyFunction("WA")'`
echo $VAL
Problem: $VAL only stores the value of the last variable, $var_2.
Question: How can I retrieve the values of both $var_1 and $var_2 for use later in this bash script (assuming the Perl function cannot be modified)? Thanks for your help.
Your function modifies the values of its second and third arguments ($_[1] and $_[2]), so you can pass variables to it and print them:
perl -l -I. -MMyModule -e 'MyModule::MyFunction("WA",$a,$b); print $a; print $b;'
You print the value returned by MyFunction, which is 0. So that's why 0 is assigned to $VAL.
You should return the values instead of assigning them to $_[1] and $_[2].
package MyModule;
use v5.14;
use warnings;
use Exporter qw( import );
our @EXPORT = qw( MyFunction );
sub MyFunction {
my $var_0 = shift;
if ( $var_0 eq "WA" ) {
return "Olympia", "Population is 53,000";
}
elsif ( $var_0 eq "OR" ) {
return "Salem", "Population is 172,000";
}
}
perl -I. -MMyModule -le'print for MyFunction( @ARGV )' WA
You probably want the two values in different shell vars. You could use the shell's read with the above, or you could use the following:
package MyModule;
use v5.14;
use warnings;
use Exporter qw( import );
use String::ShellQuote qw( shell_quote );
our @EXPORT = qw( MyWrappedFunction );
sub MyFunction {
my $var_0 = shift;
my ( $var_1, $var_2 );
if ( $var_0 eq "WA" ) {
( $var_1, $var_2 ) = ( "Olympia", "Population is 53,000" );
}
elsif ( $var_0 eq "OR" ) {
( $var_1, $var_2 ) = ( "Salem", "Population is 172,000" );
}
return VAL1 => $var_1, VAL2 => $var_2;
}
sub MyWrappedFunction {
my %d = MyFunction( @_ );
say "$_=" . shell_quote( $d{$_} ) for keys( %d );
}
eval "$( perl -I. -MMyModule -e'MyWrappedFunction( #ARGV )' WA )"
(I'm assuming a sh-like shell is used.)
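For the read approach mentioned above, a minimal sketch (assuming bash; VAL1 and VAL2 are illustrative variable names) that captures the two printed lines into separate shell variables:
{ IFS= read -r VAL1; IFS= read -r VAL2; } < <( perl -I. -MMyModule -le'print for MyFunction( @ARGV )' WA )
echo "$VAL1"   # Olympia
echo "$VAL2"   # Population is 53,000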

Kaldi librispeech data preparation error

I'm trying to build an ASR system. I'm using the Kaldi manual and the LibriSpeech corpus.
In the data preparation step I get this error:
utils/data/get_utt2dur.sh: segments file does not exist so getting durations
from wave files
utils/data/get_utt2dur.sh: could not get utterance lengths from sphere-file
headers, using wav-to-duration
utils/data/get_utt2dur.sh: line 99: wav-to-duration: command not found
And here is the piece of code where this error occurs:
if cat $data/wav.scp | perl -e '
while (<>) { s/\|\s*$/ |/; # make sure final | is preceded by space.
@A = split;
if (!($#A == 5 && $A[1] =~ m/sph2pipe$/ &&
$A[2] eq "-f" && $A[3] eq "wav" && $A[5] eq "|")) { exit (1); }
$utt = $A[0]; $sphere_file = $A[4];
if (!open(F, "<$sphere_file")) { die "Error opening sphere file $sphere_file"; }
$sample_rate = -1; $sample_count = -1;
for ($n = 0; $n <= 30; $n++) {
$line = <F>;
if ($line =~ m/sample_rate -i (\d+)/) { $sample_rate = $1; }
if ($line =~ m/sample_count -i (\d+)/) { $sample_count = $1;
}
if ($line =~ m/end_head/) { break; }
}
close(F);
if ($sample_rate == -1 || $sample_count == -1) {
die "could not parse sphere header from $sphere_file";
}
$duration = $sample_count * 1.0 / $sample_rate;
print "$utt $duration\n";
} ' > $data/utt2dur; then
echo "$0: successfully obtained utterance lengths from sphere-file headers"
else
echo "$0: could not get utterance lengths from sphere-file headers,
using wav-to-duration"
if ! command -v wav-to-duration >/dev/null; then
echo "$0: wav-to-duration is not on your path"
exit 1;
fi
In the file wav.scp I have lines like this:
6295-64301-0002 flac -c -d -s /home/tinin/kaldi/egs/librispeech/s5/LibriSpeech/dev-clean/6295/64301/6295-64301-0002.flac |
In this dataset I have only FLAC files (downloaded via the provided script), so I don't understand why it is looking for wav files. How do I run data preparation correctly? (I didn't change the source code from the manual.)
Also, if you could explain what is happening in this code, I would be very grateful, because I'm not familiar with bash and Perl.
Thank you a lot!
The problem I see from this line
utils/data/get_utt2dur.sh: line 99: wav-to-duration: command not found
is that you have not added the Kaldi tools to your PATH.
Check the file path.sh and see if the directories it adds to your PATH are correct (it contains ../../.., which might not match your current folder setup).
As for the Perl script, it counts the samples in the sound file and divides by the sample rate to get the duration. Don't worry about the word 'wav'; your files may be in another format, it's just the name of the Kaldi tools.
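A quick sanity check (assuming the standard Kaldi egs layout, where path.sh sits next to run.sh and sets KALDI_ROOT) is to source path.sh and see whether the binary is then found:
$ . ./path.sh
$ command -v wav-to-duration || echo "still not on PATH: check KALDI_ROOT in path.sh"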

Perl script hangs for no reason

So I have this small script which checks two log files for a specific line and compares the lines.
The script is used on several different Windows Bamboo agents, but on one it just hangs and doesn't exit. Since the script is used in Bamboo, the whole job hangs when this script doesn't exit.
When I check the computer via remote access and kill the script, the job continues until it reaches the script again.
This is the script, which is started by another script.
#! /usr/bin/perl
my $naluresult = 2;
my $hevcresult = 2;
my $hevcfailed = 0;
use strict;
use warnings;
#---------------------------------------------
#check for $ARGV[0] and $ARGV[1]
open( my $nalulog, "<", $ARGV[1] )
or die "cannot open File:$!\n\n";
while (<$nalulog>) {
chomp;
$_ =~ s/\s+//g;
if ( $_ =~ m/MD5:OK/ ) {
$naluresult = 1;
} else {
if ( $_ =~ m/MD5:MISSING/ ) {
$naluresult = 0;
}
}
}
close $nalulog;
#---------------------------------------------
open( my $hevclog, "<", $ARGV[0] )
or die "cannot open File:$!\n\n";
while (<$hevclog>) {
chomp;
$_ =~ s/\s+//g;
if ( $_ =~ m/MD5check:OK/ ) {
$hevcresult = 1;
last;
} else {
if ( $_ =~ m/MD5check:FAILED/ ) { $hevcfailed = 1; }
}
if ( $hevcfailed == 1 ) {
#do stuff
}
}
close $hevclog;
#---------------------------------------------
if ( $hevcresult == 2 ) {
print("Missing MD5 status in HEVC Output");
exit(-1);
} elsif ( $naluresult == 2 ) {
print("Missing MD5 status in NALU Output");
exit(-2);
} else {
if ( $naluresult == $hevcresult ) { exit(0); }
else {
#different if-statements to print() to log
exit(1);
}
}
#---------------------EOF---------------------
If your files are just normal disk files that aren't being simultaneously written to by other processes, or locked, or anything like that, then there is nothing in the code you have here that should hang. If the files are both reasonable sizes, the code you have here should read through the files and finish.
However, if one of the files is locked, or is immensely large, or if you have other code that can get stuck in an infinite loop, that would explain why your program is hanging.
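If it is not obvious which of the two files the loop is stuck on, a small diagnostic sketch like the following (not part of the original script; the 10000-line interval is arbitrary) prints unbuffered progress so the Bamboo log shows how far the read gets before stalling:
use strict;
use warnings;

$| = 1;    # unbuffered STDOUT so progress appears in the Bamboo log immediately

open( my $nalulog, "<", $ARGV[1] ) or die "cannot open File: $!\n";
print "reading $ARGV[1]\n";
while (<$nalulog>) {
    print "  at line $.\n" if $. % 10000 == 0;    # progress marker for very large files
    # ... the existing MD5 checks go here ...
}
print "finished $ARGV[1] after $. lines\n";
close $nalulog;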

How to compare the content of multiple txt file in bash shell and delete the one (file) which is duplicate

I am trying to achieve this on macOS; I tried something similar using fdupes but it didn't work. Here is what I am trying to achieve:
There are 100 files in directory 'alpha'
Pick one file A and compare it with each remaining file in the directory 'alpha'
If content of file A matches any file (duplicate), delete the duplicate file
Move to file B, compare it with the remaining files, and do the same (check for duplicates)
Repeat until all files are checked for duplicates. The remaining files should be unique
Update
I slightly modified something similar I found here, but I have to run it multiple times to remove all the duplicates. It is not detecting every duplicate in a single run. Not sure if it is working correctly.
use Digest::MD5;
%check = ();
while (<*>) {
-d and next;
$fname = "$_";
print "checking .. $fname\n";
$md5 = getmd5($fname) . "\n";
if ( !defined( $check{$md5} ) ) {
$check{$md5} = "$fname";
}
else {
print "Found duplicate files: $fname and $check{$md5}\n";
print "Deleting duplicate $check{$md5}\n";
unlink $check{$md5};
}
}
sub getmd5 {
my $file = "$_";
open( FH, "<", $file ) or die "Cannot open file: $!\n";
binmode(FH);
my $md5 = Digest::MD5->new;
$md5->addfile(FH);
close(FH);
return $md5->hexdigest;
}
You should limit the number of times that you have to read each file's contents:
1. Inventory the files using Path::Class or some similar method.
a. Build a hash relating file size and MD5 digest to a list of file names.
2. Compare likely duplicates only (matching file size and digest).
The following is untested:
use strict;
use warnings;
use Path::Class;
use Digest::MD5;
my $dir = dir('.');
my %files_per_digest;
# Inventory Directory
while ( my $file = $dir->next ) {
next if $file->is_dir;    # skip subdirectories (and '.'/'..')
my $size = $file->stat->size;
my $digest = do {
my $md5 = Digest::MD5->new;
$md5->addfile( $file->openr );
$md5->hexdigest;
};
push @{ $files_per_digest{"$size - $digest"} }, $file;
}
# Compare likely duplicates only
for my $files ( grep { @$_ > 1 } values %files_per_digest ) {
# Sort by alpha
@$files = sort @$files;
print "Comparing: @$files\n";
for my $i ( reverse 0 .. $#$files ) {
for my $j ( 0 .. $i - 1 ) {
my $fh1 = $files->[$i]->openr;
my $fh2 = $files->[$j]->openr;
my $diff = 0;
while ( !eof($fh1) && !eof($fh2) ) {
$diff = 1, last if scalar(<$fh1>) ne scalar(<$fh2>);
}
if ( !$diff and eof($fh1) and eof($fh2) ) {
print " $files->[$i] ($i) is a duplicate of $files->[$j] ($j)\n";
$files->[$i]->remove();
splice @$files, $i, 1;
last;    # $files->[$i] no longer exists; move on to the next $i
}
}
}
}
I've used rdfind in the past with very good success. It's very accurate, fast, and seems to run leaner than fdupes. According to RDFind's web site (http://rdfind.pauldreik.se/), it can be installed using MacPorts.
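Roughly (check rdfind's man page for the exact options in your version), installing it with MacPorts and removing duplicates in a directory looks like:
$ sudo port install rdfind
$ rdfind -deleteduplicates true /path/to/alpha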

Perl: Weird Tie::File behaviour in Windows as opposed to Unix

I have this Perl script that uses Tie::File.
On Linux (Ubuntu), when I invoke the script via Bash, it works as expected, but on Windows, when I invoke it via PowerShell, it behaves weirdly (see the P.S. below).
Code:
#!/usr/bin/perl -T
use strict;
use warnings;
use Tie::File;
use CommonStringTasks;
if ( @ARGV != 4 ) {
print "ERROR:Inadequate/Redundant arguments.\n";
print "Usage: perl <pl_executable> <path/to/peer_main.java> <peer_main.java>\n";
print " <score_file_index> <port_step_index>\n";
print $ARGV[0], "\n";
print $ARGV[1], "\n";
print $ARGV[2], "\n";
print $ARGV[3], "\n";
exit 1;
}
my $PEER_DIR = $ARGV[0];
my $PEER_FILE = $ARGV[1];
my $PEER_PACKAGE = "src/planetlab/app";
my $PEER_PATH = "${PEER_DIR}/${PEER_PACKAGE}/${PEER_FILE}";
# Check if args are tainted ...
# Check $PEER_PATH file permissions ...
open(my $file, "+<", "$PEER_PATH")
or
die("File ", $PEER_FILE, " could not be opened for editing:$!");
# Edit the file and change variables for debugging/deployment setup.
# Number demanglers:
# -flock -> arg2 -> 2 stands for FILE_EX
# Options (critical!):
# -Memory: Inhibit caching as this will allow record changes on the fly.
tie my @fileLines,
'Tie::File',
$file,
memory => 0
or
die("File ", $PEER_FILE, " could not be tied with Tie::File:$!");
flock $file, 2;
my $i = 0;
my $scoreLine = "int FILE_INDEX = " . $SCORE . ";";
my $portLine = "int SERVER_PORT = " . $PORT . ";";
my $originalScoreLine = "int FILE_INDEX =";
my $originalPortLine = "int SERVER_PORT =";
(tied @fileLines)->defer;
while (my $line = <$file>) {
if ( ($line =~ m/($scoreLine)/) && ($SCORE+1 > 0) ) {
print "Original line (score): ", "\n", $scoreLine, "\n";
chomp $line;
$line = substr($line, 0, -($scoreDigits+1));
$line = $line . (++$SCORE) . ";";
print "Editing line (score): ", $i, "\n", trimLeadSpaces($fileLines[$i]), "\n";
$fileLines[$i] = $line;
print "Line replaced with:\n", trimLeadSpaces($line), "\n";
next;
}
if ( ($line =~ m/($portLine)/) && ($PORT > 0) ) {
print "Original line (port): ", "\n", $portLine, "\n";
chomp $line;
$line = substr($line, 0, -($portDigits+1));
$line = $line . (++$PORT) . ";";
print "Editing line (port): ", $i, "\n", trimLeadSpaces($fileLines[$i]), "\n";
$fileLines[$i] = $line;
print "Line replaced with:\n", trimLeadSpaces($line), "\n";
last;
}
# Restore original settings.
if ( ($line =~ m/($originalScoreLine)/) && ($SCORE < 0) ) {
print "Restoring line (score) - FROM: ", "\n", $fileLines[$i], "\n";
$fileLines[$i] = " private static final int FILE_INDEX = 0;";
print "Restoring line (score) - TO: ", "\n", $fileLines[$i], "\n";
next;
}
if ( ($line =~ m/($originalPortLine)/) && ($PORT < 0) ) {
print "Restoring line (port) - FROM: ", "\n", $fileLines[$i], "\n";
$PORT = abs($PORT);
$fileLines[$i] = " private static final int SERVER_PORT = " . $PORT . ";";
print "Restoring line (port) - TO: ", "\n", $fileLines[$i], "\n";
last;
}
} continue {
$i++;
}
(tied @fileLines)->flush;
untie @fileLines;
close $file;
The Perl version on both OSes is 5+ (on Windows, ActiveState Perl with CPAN modules).
Could it be the way I open the filehandle? Any ideas anyone?
P.S.: The first version had a while (<$file>) loop and used $_ instead of $line. With that version, specific lines would not be edited; instead the file would get appended with a hundred or so newlines, followed by the (correctly) edited line, and so on. I also got a warning about $fileLines[$i] being uninitialized! Clearly something is wrong with the Tie::File structure on Windows, or with something else I am not aware of. The same erratic behaviour occurs with the current changes, while on Linux (Ubuntu) the behaviour is again as expected.
The OP's question is vague and lacks input and expected output, so I will simply note some of my concerns:
First, using Tie::File and <$file> and flock on the same handle seems to be both overkill and dangerous. I would recommend simply using Tie::File to iterate and to edit, such as:
#!/usr/bin/env perl
use strict;
use warnings;
use Tie::File;
tie my @lines, 'Tie::File', 'filename';
foreach my $linenum ( 0..$#lines ) {
if ($lines[$linenum] =~ /something/) {
$lines[$linenum] = 'somethingelse';
}
}
Perhaps better than editing in place, as Tie::File allows, is to copy the file to a backup, iterate over the lines using <$file>, and then write to a new file with the old name.
#!/usr/bin/env perl
use strict;
use warnings;
use File::Copy 'move';
my $infile = $ARGV[0];
move( $infile, "$infile.bak");
open my $inhandle, '<', "$infile.bak";
open my $outhandle, '>', $infile;
while( my $line = <$inhandle> ) {
if ($line =~ /something/) {
$line = 'somethingelse';
}
print $outhandle $line;
}
Second, the -MModule flag simply translates to a use Module; at the top of the script. Therefore -MCPAN is use CPAN;, however loading the CPAN module does nothing for the script. CPAN.pm gives a script the ability to install modules.
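For illustration (the module name here is arbitrary), running
$ perl -MTie::File script.pl
is the same as starting script.pl with
use Tie::File;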
Third, we will be able to help better if you give an example input, an expected output, and a stripped-down script that clearly shows how this operation is to perform while still failing in the same way that the actual script does.
I found out the source of my problems. The reason was the record separator!
On Windows, Tie::File expected a \r\n record separator, so it read the whole file as a single record. My files are in UTF-8, with Unix line endings.
That is why, when I was traversing @fileLines and accessed any index beyond 0, Perl warned that the value was uninitialized. Fixed the problem and now I am ready to go on! :D
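For reference, Tie::File documents a recsep option, so one way to make the script behave the same on both platforms is to state the record separator explicitly (a sketch based on the tie call above):
tie my @fileLines, 'Tie::File', $file, recsep => "\n", memory => 0
or die( "File ", $PEER_FILE, " could not be tied with Tie::File: $!" );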
P.S.: Mr Joel Berger I am marking your answer as valid/appropriate because you really tried helping me and I followed your first advice about the file handle :).
Thank you everyone for assisting me xD xD xD
