Perl code doesn't run from a bash script scheduled via crontab - bash

I want to schedule my Perl code to run every day at a specific time, so I put the code below in a bash file:
Automate.sh
#!/bin/sh
perl /tmp/Taps/perl.pl
The schedule is specified in the crontab like this:
10 17 * * * sh /tmp/Taps/Automate.sh > /tmp/Taps/result.log
When the time reaches 17:10, the .sh file doesn't run. However, when I run ./Automate.sh manually, it runs and I see the result. I don't know what the problem is.
Perl Code
#!/usr/bin/perl -w
use strict;
use warnings;
use Data::Dumper;
use XML::Dumper;
use TAP3::Tap3edit;
$Data::Dumper::Indent=1;
$Data::Dumper::Useqq=1;
my $dump = new XML::Dumper;
use File::Basename;
my $perl='';
my $xml='';
my $tap3 = TAP3::Tap3edit->new();
foreach my $file(glob '/tmp/Taps/X*')
{
$files= basename($file);
$tap3->decode($files) || die $tap3->error;
}
my $filename=$files.".xml\n";
$perl = $tap3->structure;
$dump->pl2xml($perl, $filename);
print "Done \n";
error:
No such file or directory for file X94 at /tmp/Taps/perl.pl line 22.
X94.xml

foreach my $file(glob 'Taps/X*') -- when you're running from cron, your current directory is /. You'll want to provide the full path to that Taps directory. Also specify the output directory for Out.xml

Cron uses a minimal environment and a short $PATH, which may not include the expected path to perl. Try specifying the full path to perl, or source your shell settings before running the script.
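If you are not sure what cron actually provides, one quick (hypothetical) debugging step is to have cron run a tiny Perl script that dumps its environment to a file, then compare that with your login shell:
#!/usr/bin/perl
use strict;
use warnings;

# Dump the environment cron gives us so we can compare it with a login shell.
open my $log, '>', '/tmp/cron-env.log' or die "cannot open log: $!";
print {$log} "$_=$ENV{$_}\n" for sort keys %ENV;
close $log;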

There are a lot of things that can go wrong here. The most obvious and certain one is this: if you use a glob to find the file in the directory "Taps" and then remove the directory from the file name using basename, Perl cannot find the file. It is not quite clear what you are trying to achieve there. The file names from the glob will be, for example, Taps/Xfoo, a relative path to the working directory. If you try to access Xfoo from the working directory, that file will not be found (or the wrong file will be found).
This should also (probably) lead to a fatal error, which should be reported in your error log. (Assuming that the decode function returns a false value upon error, which is not certain.) If no errors are reported in your error log, that is a sign the program does not run at all. Or it could be that decode does not return false on missing file, and the file is considered to be empty.
I assume that when you test the program, you cd to /tmp and run it, or your "Taps" directory is in your home directory. So you are making assumptions about where your program looks for the files. You should be certain where it looks for files, probably by using only absolute paths.
Another simple error might be that the cron job does not have permission to execute the file, or lacks read access to "Taps".
Edit:
Other complications in your code:
You include Data::Dumper, but never actually use that module.
$xml variable is not used.
$files variable not declared (this code would never run with use strict)
Your $files variable is outside your foreach loop, which means it will only run once. Since you use glob I assumed you were reading more than one file, in which case this solution will probably not do what you want. It is also possible that you are using a glob because the file name can change, e.g. X93, X94, etc. In that case you will read the last file name returned by the glob. But this looks like a weak link in your logic.
You add a newline \n to a file name, which is strange.
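For reference, here is a minimal rewrite addressing the points above. This is a sketch, not a drop-in replacement: it assumes you want one .xml dump per decoded input file, written next to the input, and it keeps absolute /tmp/Taps paths so it also works under cron.
#!/usr/bin/perl
use strict;
use warnings;
use XML::Dumper;
use TAP3::Tap3edit;

my $dump = XML::Dumper->new;
my $tap3 = TAP3::Tap3edit->new;

foreach my $file ( glob '/tmp/Taps/X*' ) {
    # keep the absolute path -- do not strip it with basename()
    $tap3->decode($file) or die $tap3->error;
    # write one XML dump per input, next to the input file
    $dump->pl2xml( $tap3->structure, "$file.xml" );
}
print "Done\n";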

Related

In Perl Script Not able to add a directory to Archive

I have to add a Windows directory to an archive using a Perl script. After executing the script below, only the directory name is archived; the directory contents under "C:\Software\Postgres" are not included in the archive. Can you please help me find where I am making a mistake in the script below?
use warnings;
use strict;
use Archive::Tar;
tar_BKUP();
sub tar_BKUP{
my $src_D = 'C:\Software\Postgres';
my $dst_Tar = 'C:\temp\Postgres.tar';
my $tar = Archive::Tar->new();
$tar->add_files($src_D);
$tar->write($dst_Tar);
}
Here is an example using File::Find::Rule:
use warnings;
use strict;
use Archive::Tar;
use File::Find::Rule;
tar_BKUP();
sub tar_BKUP{
my $src_D = 'C:/Software/Postgres';
my $dst_Tar ='C:/temp/Postgres.tar';
my $tar = Archive::Tar->new();
my @files = File::Find::Rule->file->in($src_D);
$tar->add_files(@files);
$tar->write($dst_Tar);
}
See also How can I archive a directory in Perl like tar does in UNIX?
The documentation for the add_files() method says this:
$tar->add_files( @filenamelist )
Takes a list of filenames and adds them to the in-memory archive.
So you pass it a list of filenames and those files get added to the archive. It looks like you think you can pass it a directory and get all of the files in that directory added in one go. But it's not documented to work like that.
If you know there are no subdirectories below your source directory, then you could do something like this:
$tar->add_files( glob( "$src_D/*" ) );
But if you need to include the contents of subdirectories, then Håkon's answer using File::Find::Rule is a good approach.
If a Perl module isn't working how you expect it to, then checking the documentation is always a good first step :-)
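If you would rather avoid a non-core dependency, the core File::Find module can build the same file list. A sketch, assuming the same $src_D and $tar variables as above:
use File::Find;

my @files;
# recursively collect plain files under the source directory
find( sub { push @files, $File::Find::name if -f }, $src_D );
$tar->add_files(@files);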

random file name in perl generated with unusual characters

Using this perl code below, I try to output some names in a random generated file. But the files are created with weird characters like this:
"snp-list-boo.dwjEUq5Wu^J.txt"
And, obviously, when my code looks for these files it says there is no such file. Also, when I try to open the files using "vi", they open like this:
vi 'temporary-files/snp-list-boo.dwjEUq5Wu
.txt'
i.e. with a "new line" in the file name. Someone please help me understand and solve this weird issue. Thanks much!
code:
my $tfile = `mktemp boo.XXXXXXXXX`;
my $fh = "";
foreach my $keys (keys %POS_HASH){
open ($fh, '>>', "temporary-files/snp-list-$tfile.txt");
print $fh "$keys $POS_HASH{$keys}\n";
close $fh;
}
mktemp's output ends with a line feed character that you need to chop() or chomp() off first.
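In the code above, that is a one-line fix:
chomp( my $tfile = `mktemp boo.XXXXXXXXX` );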
Instead of using the external mktemp program, why don't you go with File::Temp instead?
Using external programs unnecessarily is a bad idea for a few reasons.
The external program that you use might not be available on all of the systems where your code runs. You are therefore making your program less portable.
Spawning a new sub-shell to run an external program is a lot slower than just doing the work in your current environment.
The values you get back from the external program are likely to have a newline character attached. And you might forget to remove it.
It's the last one that is burning you here. But the others still apply as well.
Perl's standard library has, for many, many years included the File::Temp module which creates temporary files for you without the need to use an external program.
use File::Temp qw/ tempfile /;
# It even opens the file and gives you the filehandle.
my ($fh, $filename) = tempfile();
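Applied to the question's code, it might look like this (a sketch: the template and directory mirror the question, %POS_HASH is assumed to exist, and the temporary-files directory must already exist):
use File::Temp qw/ tempfile /;

# creates and opens temporary-files/snp-list-boo.XXXXXXXXX.txt safely,
# with no trailing newline to worry about
my ($fh, $filename) = tempfile(
    'snp-list-boo.XXXXXXXXX',
    DIR    => 'temporary-files',
    SUFFIX => '.txt',
);
foreach my $keys (keys %POS_HASH) {
    print $fh "$keys $POS_HASH{$keys}\n";
}
close $fh;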

sql loader without .dat extension

Oracle's sqlldr defaults to a .dat extension, which I want to override. I don't want to rename the file. When I googled, I found suggestions to use a trailing dot, like data='fileName.', but that is not working. Please share your ideas.
The error message is: fileName.dat is not found.
SQL*Loader has default extensions for all of its input files:
data = .dat
log = .log
control = .ctl
bad = .bad
PARFILE = .par
But you have to pass the filenames without apostrophes and without the dot:
sqlldr user/pass@db control=control data=data
SQL*Loader will add the extensions itself: control.ctl, data.dat.
Nevertheless, I do not understand why you do not want to specify the extension.
You can't, at least in Unix/Linux environments. In Windows you can use the trailing period trick, specifying either INFILE 'filename.' in the control file or DATA=filename. on the command line. Windows file name handling allows that; you can, for instance, do DIR filename. at a command prompt and it will list the file with no extension (as will DIR filename). But you can't do that with *nix, from a shell prompt or anywhere else.
You said you don't want to copy or rename the file. Temporarily renaming it might be the simplest solution, but as you may have a reason not to do that even briefly you could instead create a hard or soft link to the file which does have an extension, and use that link as the target instead. You could wrap that in a shell script that takes the file name argument:
# set variable from correct positional parameter; if you pass in the control
# file name or other options, this might not be $1 so adjust as needed
# if the temporary file won't be in the same directory, this needs to be a full path
filename=$1
# optionally check file exists, is readable, etc. but overkill for demo
# can also check temporary file does not already exist - stop or remove
# create soft link somewhere it won't impact any other processes
ln -s ${filename} /tmp/${filename##*/}.dat
# run SQL*Loader with soft link as target
sqlldr user/password@db control=file.ctl data=/tmp/${filename##*/}.dat
# clean up
rm -f /tmp/${filename##*/}.dat
You can then call that as:
./scriptfile.sh /path/to/filename
If you can create the link in the same directory then you only need to pass the file name, but if it's somewhere else - which may be necessary depending on why renaming isn't an option, and desirable either way - then you need to pass the full path of the data file so the link works. (If the temporary file will be in the same filesystem you could use a hard link, and you wouldn't have to pass the full path then either, but it's still cleaner to do so.)
As you haven't shown your current command line options you may have to adjust that to take into account anything else you currently specify there rather than in the control file, particularly which positional argument is actually the data file path.
I have the same issue. I get a monthly download of reference data used in a medical application, and the 485 downloaded files (~2 GB) don't have file extensions. Unless I can load without file extensions, I have to copy the files with .dat extensions and load from there.

Detecting that files are being copied in a folder

I am running a script which copies a folder from a specific location if it does not exist (or is not consistent). The problem appears when I run the script two or more times concurrently: as the first instance is trying to copy the files, the second comes along and tries the same thing, resulting in a mess. How can I avoid this situation? Something like a system-wide mutex.
I tried a simple test with -w: I manually copied the folder, and while it was copying I ran the script:
use strict;
use warnings;
my $filename = 'd:\\folder_to_copy';
if (-w $filename) {
print "i can write to the file\n";
} else {
print "yikes, i can't write to the file!\n";
}
Of course this won't work, because I still have write access to that folder.
Any idea how I could check whether the folder is being copied, in Perl or using batch commands?
Sounds like a job for a lock file. There are myriad CPAN modules that implement lock files, but most of them don't work on Windows. Here are a few that seem to support Windows according to CPAN Testers:
File::Lockfile
File::TinyLock
File::Flock::Tiny
After having a quick view at the source code, the only module I can recommend is File::Flock::Tiny. The others seem racy.
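A sketch of how File::Flock::Tiny might be used here (the lock-file path is arbitrary):
use File::Flock::Tiny;

# trylock() returns a lock object, or undef if another process holds the lock
my $lock = File::Flock::Tiny->trylock('/tmp/copy-folder.lock')
    or die "another instance is already running\n";

# ... copy the folder here ...

$lock->release;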
If you need a systemwide mutex, then one "trick" is to (ab)use a directory. The command mkdir is usually atomic and either works or doesn't (if the directory already exists).
Change your script as follows:
my $mutex_dir = '/tmp/my-mutex-dir';
if ( mkdir $mutex_dir ) {
# run your copy-code here
# when finished:
rmdir $mutex_dir;
} else {
print "another instance is already running.\n";
}
The only thing you need to make sure is that you're allowed to create a directory in /tmp (or wherever).
Note that I intentionally do NOT firstly test for the existence of $mutex_dir because between the if (not -d $mutex_dir) and the mkdir someone else could create the directory and the mkdir would fail anyway. So simply call mkdir. If it worked then you can do your stuff. Don't forget to remove the $mutex_dir after you're done.
That's also the downside of this approach: If your copy-code crashes and the script prematurely dies then the directory isn't deleted. Presumably the lock file mechanism suggested in nwellnhof's answer behaves better in that case and automatically unlocks the file.
As the first script is trying to copy the files, the second comes and
tries the same thing resulting in a mess
The simplest approach would be to create a file which contains 1 if another instance of the script is running. Then you can add a conditional based on that:
my $data;
{ local $/; open my $fh, '<', 'flag' or die $!; $data = <$fh>; }
die "another instance of script is running" if $data == 1;
Another approach would be to set an environment variable within the script and check it in BEGIN block.
You can use Windows mutex or semaphore objects from the Win32::IPC distribution:
http://search.cpan.org/~cjm/Win32-IPC-1.11/
use Win32::Mutex;
use Digest::MD5 qw (md5_hex);
my $mutex = Win32::Mutex->new(0, md5_hex $filename);
if ($mutex) {
do_your_job();
$mutex->release
} else {
#fail...
}

Determine write access in Windows from ActivePerl

I have a script written using ActivePerl that creates files where the user specifies. If the directory doesn't already exist, it uses mkpath to attempt to create it and traps any error conditions (such as not having permission to create the directory there). This seems fine. What I'm running into trouble with is determining the permissions of a directory that already exists. I don't want a user to be able to specify a protected folder that they can only read from (c:\windows\system32 comes to mind) and have the script silently fail while attempting to create its files there.
From other perl examples on the web I've tried using the following, but I'm always given 0777 as the result for any existing directory:
use File::stat;
#
#...
#
my $info = stat($candiDirectory);
my $retMode = $info->mode;
my $mymode = sprintf("0%o", $retMode & 07777);
print "retMode for $candiDirectory is $mymode \n";
While this seems reasonable for unix/Linux, I'd be surprised if it didn't require something different under Win32 or 64.
from perldoc perlfunc:
-w $filename
unless (-w $filename) {
say "i can't write this file";
}
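Note that on Windows, -w (like the stat mode bits) mainly reflects the file's read-only attribute, not the ACLs that actually protect directories like c:\windows\system32. A more reliable, if blunt, check is simply to try creating a file there. A sketch, assuming $candiDirectory from the question:
use File::Temp qw/ tempfile /;

# try to actually create (and auto-delete) a file in the candidate directory;
# this reflects real permissions better than mode bits do on Windows
my $writable = eval {
    my ($fh, $tmp) = tempfile( DIR => $candiDirectory, UNLINK => 1 );
    close $fh;
    1;
};
print $writable ? "directory is writable\n" : "directory is not writable\n";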
