Perl script not able to add a directory to an archive on Windows

I have to add a Windows directory to an archive using a Perl script. After executing the script below, only the directory name is archived; the directory contents under "C:\Software\Postgres" are not included in the archive. Can you please tell me where I am making a mistake in the script below?
use warnings;
use strict;
use Archive::Tar;

tar_BKUP();

sub tar_BKUP {
    my $src_D   = 'C:\Software\Postgres';
    my $dst_Tar = 'C:\temp\Postgres.tar';
    my $tar     = Archive::Tar->new();
    $tar->add_files($src_D);
    $tar->write($dst_Tar);
}

Here is an example using File::Find::Rule:
use warnings;
use strict;
use Archive::Tar;
use File::Find::Rule;
tar_BKUP();
sub tar_BKUP {
    my $src_D   = 'C:/Software/Postgres';
    my $dst_Tar = 'C:/temp/Postgres.tar';
    my $tar     = Archive::Tar->new();
    my @files   = File::Find::Rule->file->in($src_D);
    $tar->add_files(@files);
    $tar->write($dst_Tar);
}
See also How can I archive a directory in Perl like tar does in UNIX?

The documentation for the add_files() method says this:
$tar->add_files( @filenamelist )
Takes a list of filenames and adds them to the in-memory archive.
So you pass it a list of filenames, and those files get added to the archive. It looks like you think you can pass it a directory and have all of the files in that directory added in one go, but it's not documented to work like that.
If you know there are no subdirectories below your source directory, then you could do something like this:
$tar->add_files( glob( "$src_D/*" ) );
But if you need to include the contents of subdirectories, then Håkon's answer using File::Find::Rule is a good approach.
If a Perl module isn't working how you expect it to, then checking the documentation is always a good first step :-)

Related

Perl code doesn't run from a bash script scheduled with crontab

I want to schedule my Perl code to run every day at a specific time, so I put the code below in a bash file:
Automate.sh
#!/bin/sh
perl /tmp/Taps/perl.pl
The schedule is specified in the crontab as follows:
10 17 * * * sh /tmp/Taps/Automate.sh > /tmp/Taps/result.log
When the time reached 17:10, the .sh file didn't run. However, when I run ./Automate.sh manually, it runs and I see the result. I don't know what the problem is.
Perl Code
#!/usr/bin/perl -w
use strict;
use warnings;
use Data::Dumper;
use XML::Dumper;
use TAP3::Tap3edit;
$Data::Dumper::Indent=1;
$Data::Dumper::Useqq=1;
my $dump = new XML::Dumper;
use File::Basename;
my $perl='';
my $xml='';
my $tap3 = TAP3::Tap3edit->new();
foreach my $file (glob '/tmp/Taps/X*')
{
    $files = basename($file);
    $tap3->decode($files) || die $tap3->error;
}
my $filename=$files.".xml\n";
$perl = $tap3->structure;
$dump->pl2xml($perl, $filename);
print "Done \n";
error:
No such file or directory for file X94 at /tmp/Taps/perl.pl line 22.
X94.xml
foreach my $file (glob 'Taps/X*') -- when you're running from cron, your current directory is /. You'll want to provide the full path to that Taps directory. Also specify a full path for the output .xml file.
Cron uses a minimal environment and a short $PATH, which may not include the expected path to perl. Try specifying that path fully, or source your shell settings before running the script.
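As a sketch, a crontab entry that pins PATH and uses absolute paths everywhere might look like this (the file locations are the ones from the question; the PATH value is an assumption to adjust for your system):

```
# m  h  dom mon dow  command
PATH=/usr/bin:/bin:/usr/local/bin
10 17 * * * /usr/bin/perl /tmp/Taps/perl.pl >> /tmp/Taps/result.log 2>&1
```

Redirecting stderr with 2>&1 also makes any die messages show up in result.log instead of disappearing.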
There are a lot of things that can go wrong here. The most obvious and certain one is that if you use a glob to find the file in directory "Taps", then remove the directory from the file name by using basename, then Perl cannot find the file. Not quite sure what you are trying to achieve there. The file names from the glob will be for example Taps/Xfoo, a relative path to the working directory. If you try to access Xfoo from the working directory, that file will not be found (or the wrong file will be found).
This should also (probably) lead to a fatal error, which should be reported in your error log. (Assuming that the decode function returns a false value upon error, which is not certain.) If no errors are reported in your error log, that is a sign the program does not run at all. Or it could be that decode does not return false on missing file, and the file is considered to be empty.
I assume that when you test the program, you cd to /tmp and run it, or your "Taps" directory is in your home directory. So you are making assumptions about where your program looks for the files. You should be certain where it looks for files, probably by using only absolute paths.
Another simple error might be that crontab does not have permission to execute the file, or no read access to "Taps".
Edit:
Other complications in your code:
You include Data::Dumper, but never actually use that module.
$xml variable is not used.
$files variable not declared (this code would never run with use strict)
You use $files outside your foreach loop, which means the conversion will only run once, for the last file name returned by the glob. Since you use glob I assumed you were reading more than one file, in which case this solution will probably not do what you want. It is also possible that you are using a glob because the file name can change, e.g. X93, X94, etc. Either way, this looks like a weak link in your logic.
You add a newline \n to a file name, which is strange.

Detecting that files are being copied in a folder

I am running a script which copies a folder from a specific location if it does not exist (or is not consistent). The problem appears when I run the script two or more times concurrently. As the first script is trying to copy the files, the second comes along and tries the same thing, resulting in a mess. How could I avoid this situation? Something like a system-wide mutex.
I tried a simple test with -w: I manually copied the folder, and while it was copying I ran the script:
use strict;
use warnings;
my $filename = 'd:\\folder_to_copy';
if (-w $filename) {
    print "i can write to the file\n";
}
else {
    print "yikes, i can't write to the file!\n";
}
Of course this won't work, because I still have write access to that folder.
Any idea of how I could check whether the folder is being copied, in Perl or using batch commands?
Sounds like a job for a lock file. There are myriad CPAN modules that implement lock files, but most of them don't work on Windows. Here are a few that do seem to support Windows according to CPAN Testers:
File::Lockfile
File::TinyLock
File::Flock::Tiny
After having a quick view at the source code, the only module I can recommend is File::Flock::Tiny. The others seem racy.
If you need a systemwide mutex, then one "trick" is to (ab)use a directory. The command mkdir is usually atomic and either works or doesn't (if the directory already exists).
Change your script as follows:
my $mutex_dir = '/tmp/my-mutex-dir';

if ( mkdir $mutex_dir ) {
    # run your copy-code here
    # when finished:
    rmdir $mutex_dir;
}
else {
    print "another instance is already running.\n";
}
The only thing you need to make sure is that you're allowed to create a directory in /tmp (or wherever).
Note that I intentionally do NOT first test for the existence of $mutex_dir, because between the if (not -d $mutex_dir) and the mkdir someone else could create the directory, and the mkdir would fail anyway. So simply call mkdir: if it worked, you can do your stuff. Don't forget to remove $mutex_dir after you're done.
That's also the downside of this approach: If your copy-code crashes and the script prematurely dies then the directory isn't deleted. Presumably the lock file mechanism suggested in nwellnhof's answer behaves better in that case and automatically unlocks the file.
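The same mkdir trick is easy to try outside Perl; here is a minimal shell sketch, assuming /tmp is writable (the lock path matches the answer above; the echo messages are illustrative):

```shell
#!/bin/sh
# mkdir either creates the directory or fails atomically,
# so it can serve as a crude system-wide lock.
LOCKDIR=/tmp/my-mutex-dir

if mkdir "$LOCKDIR" 2>/dev/null; then
    echo "lock acquired"
    # ... run the copy here ...
    rmdir "$LOCKDIR"    # release the lock
else
    echo "another instance is already running"
fi
```

A trap such as `trap 'rmdir "$LOCKDIR"' EXIT` helps with the stale-lock case where the script dies before the rmdir.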
As the first script is trying to copy the files, the second comes and
tries the same thing resulting in a mess
The simplest approach would be to create a file that contains 1 if another instance of the script is running, then add a conditional based on that:
my $data = do { local $/; open my $fh, '<', 'flag' or die $!; <$fh> };
die "another instance of script is running" if $data == 1;
Another approach would be to set an environment variable within the script and check it in BEGIN block.
You can use the Windows Mutex or Semaphore objects provided by the Win32::IPC package:
http://search.cpan.org/~cjm/Win32-IPC-1.11/
use Win32::Mutex;
use Digest::MD5 qw(md5_hex);

my $mutex = Win32::Mutex->new(0, md5_hex($filename));
if ($mutex) {
    do_your_job();
    $mutex->release;
}
else {
    # fail...
}

Create new files from existing ones but change their extension

In shell, what is a good way to duplicate files in an existing directory so that the result is the same file but with a different extension? So taking something like:
path/view/blah.html.erb
And adding:
path/view/blah.mobile.erb
So that in the path/view directory, there would be:
path/view/blah.html.erb
path/view/blah.mobile.erb
I'd ideally like to perform this at a directory level and not create the file if it already has both extensions, but that isn't necessary.
You can do:
cd /path/view/
for f in *.html.erb; do
    cp "$f" "${f/.html./.mobile.}"
done
PS: This replaces the first instance of .html. with .mobile.; the syntax is bash-specific (let me know if you're not using bash).
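If the substitution syntax is unfamiliar, it can be tried in isolation; the file name below is just the example from the question:

```shell
#!/bin/bash
# ${var/pattern/string} replaces the first match of pattern in var (bash only)
f=blah.html.erb
echo "${f/.html./.mobile.}"    # prints blah.mobile.erb
```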

Calling Bash commands from Ruby and returning output?

The following code includes a command and a string:
files = `ls /tmp`
I would like /tmp to be a variable instead of a static string, ideally something like:
dir = '/tmp'
command = 'ls ' + dir
files = `command`
What is the correct Ruby syntax to achieve this?
Use string interpolation:
dir = '/tmp'
files = `ls #{dir}`
Or, interpolating a variable that holds the whole command:
files = `#{command}`
Is that what you are looking for?
Use the standard shellwords library. It will take care of proper escaping, which will help to protect you from shell injection attacks.
require 'shellwords'

command = [
  'ls',
  dir
].shelljoin
files = `#{command}`
If dir comes from untrusted input, the above code still allows someone to see any directory on your system. However, using shelljoin protects you from someone injecting, for example, a "delete all files on my hard drive" command.
In the particular case of listing a directory, The built-in class Dir will do that rather well:
files = Dir[File.join(dir, '*')]
Here we append a glob to the end of the directory path using File.join; Dir.[] then returns the paths of the files in that directory.

Change file names in windows folders

Hi, I am trying to change file names in some of my folders on a Windows machine.
I have a bunch of files whose names start with a capital letter, for example
"Hello.html", but I want to change that to "hello.html". Since there are thousands of files I cannot just go and change them manually. I am looking for a script and just need some help getting started, and with what I should start.
I have access to a Linux machine, so I can copy the files over there and run any scripts. I would really appreciate it if someone could guide me in getting started in either the Linux or Windows environment.
On some Linux systems you can use the rename command, which accepts regular expressions. Try the following:
rename 's/^([A-Z])/\l$1/' *
This should replace any uppercase char at the beginning with a lower case one.
Otherwise, if you're not running a system that has such a command, you can write your own little Perl script:
#!/usr/bin/perl
use strict;
use warnings;
use File::Copy;
my @files = `ls`;
foreach (@files) {
    chomp($_);
    if ($_ =~ m/^[A-Z]/) {
        my $newname = $_;
        $newname =~ s/^([A-Z])/\l$1/;
        move($_, $newname);
    }
}
exit 0;
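If the rename command is unavailable, the same first-letter lowercasing can also be sketched in portable shell; lowercase_first is a hypothetical helper name, and only standard tools (printf, tr, mv) are used:

```shell
#!/bin/sh
# Lowercase the first character of each file name given as an argument.
lowercase_first() {
    for f in "$@"; do
        [ -e "$f" ] || continue                    # skip unmatched globs
        first=$(printf '%.1s' "$f" | tr '[:upper:]' '[:lower:]')
        rest=${f#?}                                # name minus first char
        [ "$f" = "$first$rest" ] || mv -- "$f" "$first$rest"
    done
}

# Usage: lowercase_first [A-Z]*.html
```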
A very easy to use option is ReNamer.
Once installed, simply add the files to be renamed and add a case rule to change names to lower case, or a regex rule for advanced cases.