How to OR file names in an open command - macOS

I want to pass two file names to a file open command, so that if one file doesn't exist, the other one is opened.
Is there any way to do it in a single open command? Below is my code:
open FILE, "/Library/xampp/Documents/$var";
I want to pass $var such that it will hold something like xxx | /Library/Documents/xyz. Is there any way to do this?

Does this help?
my $fh;
open $fh, '<', '/Library/xampp/Documents/xxx'
or open $fh, '<', '/Library/Documents/xyz'
or die "Unable to open files for reading: $!";
or perhaps
use List::Util 'first';
my @files = qw( /Library/xampp/Documents/xxx /Library/Documents/xyz );
open my $fh, '<', first { -f } @files or die $!;
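Note that if neither candidate exists, first returns undef and the open then fails with a less helpful message. A small sketch (the variable name $found is mine) that checks for that case explicitly:
use List::Util 'first';

my @files = qw( /Library/xampp/Documents/xxx /Library/Documents/xyz );

# first() returns undef when none of the candidates pass the -f test
my $found = first { -f } @files
    or die "None of the candidate files exist: @files\n";

open my $fh, '<', $found
    or die "Unable to open '$found' for reading: $!";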

If I understand the question correctly, yes, you can provide $var as "../../../any/arbitrary/file" and open a file that is not under "/Library/xampp/Documents" (though if Library, xampp, or Documents is a symlink rather than a real directory, you may need a different number of ..s).

You are trying to find the path to a file given an absolute path or a path relative to a directory other than the current working directory.
use Path::Class qw( dir file );
my $file_qfn = 'xxx'; # /Library/xampp/Documents/xxx
-or-
my $file_qfn = '/Library/Documents/xyz'; # /Library/Documents/xyz
-or-
my $file_qfn = '../Docs/zzz'; # /Library/Docs/xampp/zzz
my $abs_file_qfn = file($file_qfn)->absolute(dir('/Library/xampp/Documents'));
open(my $fh, '<', $abs_file_qfn)
or die("Can't open \"$abs_file_qfn\": $!\n");
You could also chdir to /Library/xampp/Documents, but I dislike doing that.
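For comparison, the chdir version would look something like this (a sketch only; the chdir affects every relative path used afterwards, which is the reason to dislike it):
use Cwd qw( getcwd );

my $old_cwd = getcwd();

chdir('/Library/xampp/Documents')
    or die "Can't chdir to /Library/xampp/Documents: $!";

open(my $fh, '<', $file_qfn)
    or die("Can't open \"$file_qfn\": $!\n");

chdir($old_cwd)
    or die "Can't chdir back to $old_cwd: $!";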

Check the file exists first and if it does not choose a different path.
Then open that path
my $path = -e $ARGV[0] ? $ARGV[0] : "default";
open(my $fh, '<', $path) or die "Can't open $path: $!";
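The same idea applied to the two paths from the question would be something like this (a sketch):
my $path = -e '/Library/xampp/Documents/xxx'
         ? '/Library/xampp/Documents/xxx'
         : '/Library/Documents/xyz';

open(my $fh, '<', $path) or die "Can't open $path: $!";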

Related

How to get the header from a CSV file and write it to another file?

How can I get the header from a CSV file and write it to another file?
This code works if I don't have many columns in the CSV file, but it doesn't work when my CSV file contains 200+ columns. It just echoes the column headers (not all of them; they are truncated) to the screen.
#echo off
set /p "header="<book1.csv
echo %header% > "book3.csv"
#!/usr/bin/perl
use warnings;
use strict;
use Text::CSV_XS;
my $csv = 'Text::CSV_XS'->new({ binary => 1, escape_char => '\\' });
open my $fh, '<', 'book1.csv' or die $!;
my $h = $csv->getline($fh);    # parse only the first record (the header row)
my $out = 'Text::CSV_XS'->new({ eol => $/, escape_char => '\\' });
open my $fho, '>', 'book3.csv' or die $!;
$out->say($fho, $h);           # write the header back out, properly quoted
Tested with
"Header 1","Header, 2","Header \"3\"","Header
4"
1,2,3,4
If you are running PowerShell you should be able to use the -TotalCount parameter (aliases -Head and -First) of Get-Content (alias gc) to get just the first line of a file.
gc book1.csv -head 1|sc book3.csv

How can I reduce this to a single file open?

Using Strawberry Perl 5.22.0 in Windows 7. Is there a more "perlish" way to write this snippet of code? I hate the duplication of file open sections, but cannot think of a way to make it only open once because of the requirement to test the creation time.
...
my $x;
my $fh;
my $sentinelfile = "Logging.yes"; #if this file exists then enable logging
my $logfile = "transfers.log";
my $log = 0; #default to NO logging
$log = 1 if -e $sentinelfile; #enable logging if sentinel file exists
if($log){
    #logfile remains open after this so remember to close at end of program!
    if (-e $logfile) { #file exists
        open($fh, "<", $logfile); #open for read will NOT create if not exist
        chomp ($x = <$fh>); #grab first row
        close $fh;
        if (((scalar time - $x)/3600/24) > 30) { #when ~30 days since created
            rename($logfile, $logfile . time); #rename existing logfile
            open($fh, ">", $logfile); #open for write and truncate
            print $fh time,"\n"; #save create date
            print $fh "--------------------------------------------------\n";
        } else { #file is not older than 30 days
            open($fh, ">>", $logfile); #open for append
        }
    } else { #file not exist
        open($fh, ">", $logfile); #open new for write
        print $fh time,"\n"; #save create date
        print $fh "--------------------------------------------------\n";
    }
} #if $log
...
To recap: logfile logs stuff. First row of file contains the logfile creation date. Second row contains horizontal rule. Rest of file contains text. Around 30 days after file was created, rename file and start a new one. After the above chunk of code the logfile is open and ready for logging stuff. It gets closed at the end of the rest of the program.
There are other, non-cosmetic problems with your code: a) You do not ever check if your calls to open succeeded; b) You are creating a race condition. The file can come into existence after the -e check has failed. The subsequent open $fh, '>' ... would then clobber it; c) You don't check if your rename call succeeded etc.
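One way to close that particular clobbering race (not part of the answers below, just an illustrative sketch that only covers the create-vs-append decision) is to let the create itself fail when the file already exists, using sysopen with O_EXCL:
use Fcntl qw( O_WRONLY O_CREAT O_EXCL );

my $fh;
if (sysopen $fh, $logfile, O_WRONLY | O_CREAT | O_EXCL) {
    # we created the file just now, so write the header
    print $fh time, "\n", '-' x 50, "\n";
}
else {
    # the file already existed (or the create failed); open for append instead
    open $fh, '>>', $logfile
        or die "Cannot open '$logfile' for appending: $!";
}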
The following would be a partial improvement on your existing code:
if ($log) {
    if (open $fh, '<', $logfile) { #file exists
        chomp ($x = <$fh>);
        close $fh
            or die "Failed to close '$logfile': $!";
        if (((time - $x)/3600/24) > 30) {
            my $rotated_logfile = join '.', $logfile, time;
            rename $logfile => $rotated_logfile
                or die "Failed to rename '$logfile' to '$rotated_logfile': $!";
            open $fh, '>', $logfile
                or die "Failed to create '$logfile': $!";
            print $fh time, "\n", '-' x 50, "\n";
        }
        else {
            open $fh, '>>', $logfile
                or die "Cannot open '$logfile' for appending: $!";
        }
    }
    else {
        open $fh, '>', $logfile
            or die "Cannot create '$logfile': $!";
        print $fh time, "\n", '-' x 50, "\n";
    }
}
It would be better to abstract every bit of discrete functionality into suitably named functions.
For example, here is a completely untested re-write:
use autouse Carp => qw( croak );

use constant SENTINEL_FILE    => 'Logging.yes';
use constant ENABLE_LOG       => -e SENTINEL_FILE;
use constant HEADER_SEPARATOR => '-' x 50;
use constant SECONDS_PER_DAY  => 24 * 60 * 60;
use constant ROTATE_AFTER     => 30 * SECONDS_PER_DAY;

my $fh;

if (ENABLE_LOG) {
    if (my $age = read_age( $logfile )) {
        if ( is_time_to_rotate( $age ) ) {
            rotate_log( $logfile );
        }
        else {
            $fh = open_log( $logfile );
        }
    }
    unless ($fh) {
        $fh = create_log( $logfile );
    }
}

sub is_time_to_rotate {
    my $age = shift;
    return $age > ROTATE_AFTER;
}

sub rotate_log {
    my $file = shift;
    my $saved_file = join '.', $file, time;
    rename $file => $saved_file
        or croak "Failed to rename '$file' to '$saved_file': $!";
    return;
}

sub create_log {
    my $file = shift;
    open my $fh, '>', $file
        or croak "Failed to create '$file': $!";
    print $fh time, "\n", HEADER_SEPARATOR, "\n"
        or croak "Failed to write header to '$file': $!";
    return $fh;
}

sub open_log {
    my $file = shift;
    open my $fh, '>>', $file
        or croak "Failed to open '$file': $!";
    return $fh;
}

sub read_age {
    my $file = shift;
    open my $fh, '<', $file
        or return;
    defined (my $creation_time = <$fh>)
        or croak "Failed to read creation time from '$file': $!";
    return time - $creation_time;
}
If you need to read a line of a file, rename it and then work with it, you have to open it twice.
However, you can also do away with using that first line.
On Windows, according to perlport (Files and Filesystems), the inode change time-stamp (ctime) "may really" mark the file creation time. This is likely to be completely suitable for a log file that doesn't get manipulated and moved around. It can be obtained with the -C file-test operator
my $days_float = -C $filename;
Now you can numerically test this against 30. Then there is no need to print the file's creation time to its first line (but you may as well if it is useful for viewing or other tools).
Also, there is the module Win32API::File::Time, with the purpose to
provide maximal access to the file creation, modification, and access times under MSWin32
Please do read the docs for some caveats. I haven't used it but it seems tailored for your need.
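Going by the module's synopsis (I have not tested this), reading the creation time and testing its age would look roughly like this:
use Win32API::File::Time qw( GetFileTime );

# Per the synopsis, GetFileTime returns (access, modification, creation)
# times as epoch seconds; untested sketch.
my ($atime, $mtime, $ctime) = GetFileTime($logfile);

my $days_old = (time - $ctime) / (24 * 60 * 60);
if ($days_old > 30) {
    # rotate the log as shown below
}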
A good point is raised in a comment: apparently the OS retains the original time-stamp as the file is being renamed. In that case, when the file is too old, copy it to a new one (with the new name) and delete the original, instead of using rename. Then open that log file anew, so with a new time-stamp.
Here is a complete example
archive_log($logfile) if -f $logfile and -C $logfile > 30;
open my $fh_log, '>>', $logfile or die "Can't open $logfile: $!";
say $fh_log "Log a line";
sub archive_log {
    my ($file) = @_;

    require POSIX; POSIX->import('strftime');
    my $ts = strftime("%Y%m%d_%H:%M:%S", localtime); # 20170629_12:44:10

    require File::Copy; File::Copy->import('copy');

    my $archive = $file . "_$ts";
    copy ($file, $archive) or die "Can't copy $file to $archive: $!";
    unlink $file or die "Can't unlink $file: $!";
}
The archive_log sub archives the current log by copying it and then removing it.
So after that we can just open for append, which creates the file if not there.
The -C operator returns undef when the file doesn't exist, and since its output is used in a numerical test we need the -f check first.
Since this happens once a month I load the modules at runtime, with require and import, once the log actually needs to be rotated. If you already use File::Copy then there is no need for this. As for the time-stamp, I threw in something to make this a working example.
I tested this on UNIX, by changing -C to -M and tweaking the timestamp by touch -t -c.
Better yet, to reduce the caller's code even further, move the tests into the sub as well:
my $fh_log = open_log($logfile);
say $fh_log "Log a line";
sub open_log {
    my ($file) = @_;

    if (-f $file and -C $file > 30) {
        # code from archive_log() above, to copy and unlink $file
    }

    open my $fh_log, '>>', $file or die "Can't open $file: $!";
    return $fh_log;
}
Note: on UNIX the file's creation time is not kept anywhere. The closest notion is the ctime above, but that is of course different. For one thing, it changes with many operations, for instance mv, ln, chmod, chown, chgrp (and probably others).
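For reference, the ctime just mentioned is available from stat as field 10 (a sketch; 86400 is the number of seconds in a day):
# ctime: inode change time, not a creation time
my $ctime = (stat $logfile)[10];
my $days_since_change = (time - $ctime) / 86400;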

Search for specific lines from a file

I have an array that contains the data from a text file.
I want to filter the array and copy some information to another array, but grep doesn't seem to work.
Here's what I have
$file = 'files.txt';
open (FH, "< $file") or die "Can't open $file for read: $!";
@lines = <FH>;
close FH or die "Cannot close $file: $!";
chomp(@lines);
foreach $y (@lines){
    if ( $y =~ /(?:[^\\]*\\|^)[^\\]*$/g ) {
        print $1, pos $y, "\n";
    }
}
files.txt
public_html
Trainings and Events
General Office\Resources
General Office\Travel
General Office\Office Opperations\Contacts
General Office\Office Opperations\Coordinator Operations
public_html\Accordion\dependencies\.svn\tmp\prop-base
public_html\Accordion\dependencies\.svn\tmp\props
public_html\Accordion\dependencies\.svn\tmp\text-base
The regular expression should take the last one or two folders and put them into their own array for printing.
A regex can get very picky for this. It is far easier to split the path into components and then count off as many as you need. And there is a tool for this exact purpose, the core module File::Spec, as mentioned by xxfelixxx in a comment.
You can use its splitdir to break up the path, and catdir to compose one.
use warnings 'all';
use strict;
use feature 'say';
use File::Spec::Functions qw(splitdir catdir);
my $file = 'files.txt';
open my $fh, '<', $file or die "Can't open $file: $!";
my @dirs;

while (<$fh>) {
    next if /^\s*$/;    # skip empty lines
    chomp;
    my @path = splitdir $_;
    push @dirs, (@path >= 2 ? catdir @path[-2,-1] : @path);
}
close $fh;

say for @dirs;
I use the module's functional interface; for heavier work you would want its object-oriented one. Reading the whole file into an array has its uses, but in general process line by line. The list manipulations can be done more elegantly but I went for simplicity.
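For reference, calling the same operations as File::Spec class methods (the object-oriented style) looks like this, with nothing imported (a sketch):
use File::Spec;

my @path     = File::Spec->splitdir($_);
my $last_two = File::Spec->catdir(@path[-2, -1]);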
I'd like to add a few general comments
Always start your programs with use strict and use warnings
Use lexical filehandles, my $fh instead of FH
Being aware of (at least) a dozen-or-two of most used modules is really helpful. For example, in the above code we never had to even mention the separator \.
I can't write a full answer because I'm using my phone. In any case zdim has mostly answered you. But my solution would look like this
use strict;
use warnings 'all';
use feature 'say';
use File::Spec::Functions qw/ splitdir catdir /;
my $file = 'files.txt';
open my $fh, '<', $file or die qq{Unable to open "$file" for input: $!};
my @results;

while ( <$fh> ) {
    next unless /\S/;
    chomp;
    my @path = splitdir($_);
    shift @path while @path > 2;
    push @results, catdir @path;
}

print "$_\n" for @results;

Perl Reading from one file, writing contents to another file on Windows

I am very new to Perl and its syntax. I've done a bit of research about reading from one file and writing to another. I've written a short piece of code that doesn't seem to be giving me any error, but it also doesn't write to the file. Some help would be greatly appreciated.
#!/usr/bin/perl
use strict;
use warnings;
my $defaultfile = 'C:\\Glenn Scott C\\AUTO IOX\\IOMETER FILES\\test.txt';
my $mainfile = 'C:\\Glenn Scott C\\AUTO IOX\\IOMETER FILES\\IOMETERFILECREATOR.txt';
open FILE, $defaultfile;
open FILE2, $mainfile;
while (my $line = <FILE>)
{
    print FILE2($line);
}
close FILE;
close FILE2;
Close, but not quite.
open is best done with 3 arguments. open ( my $default_fh, '<', $defaultfile ) or die $!;
print to a file handle doesn't work like that. It's print {$main_fh} $line;
you should test open for success. An or die $! is sufficient.
So this would be what you need:
#!/usr/bin/perl
use strict;
use warnings;
my $defaultfile = 'C:\\Glenn Scott C\\AUTO IOX\\IOMETER FILES\\test.txt';
my $mainfile =
'C:\\Glenn Scott C\\AUTO IOX\\IOMETER FILES\\IOMETERFILECREATOR.txt';
open( my $default_fh, "<", $defaultfile ) or die $!;
open( my $main_fh, ">", $mainfile ) or die $!;
while ( my $line = <$default_fh> ) {
    print {$main_fh} $line;
}
close $default_fh;
close $main_fh;

Perl: Bad Symbol for dirhandle

This is my code:
opendir(DIR, $directoryPath) or die "Cant open $directoryPath$!";
my @files = readdir(DIR); #Array of file names
closedir (DIR) or die "Cant close $directoryPath$!";
I'm using @files to create an array of the file names within the directory for renaming later in the program.
The problem is:
I am getting the error "Bad Symbol for dirhandle" at the closedir line.
If I don't closedir to avoid this, I don't have permission to change file names (I'm using Windows).
I tried an alternative way of renaming the files (below), doing the renames inside the dirhandle loop, but this just repeats the permission errors.
opendir(DIR, $directoryPath) or die "Cant open $directoryPath$!";
while( (my $filename = readdir(DIR)))
{
    rename($filename, $nFileName . $i) or die "Cant rename file $filename$!";
    $i++;
}
closedir (DIR) or die "Cant close $directoryPath$!";
From a quick bit of research I think the permission error is a Windows security feature so you can't edit a file while it's open, but I haven't been able to find a solution simple enough for me to understand.
An answer to point 1. or point 3. is preferable, but an answer to point 2. will also be useful.
Full code used in points 1. and 2. below
use 5.16.3;
use strict;
print "Enter Directory: ";
my $directoryPath = <>;
chomp($directoryPath);
chdir("$directoryPath") or die "Cant chdir to $directoryPath$!";
opendir(DIR, $directoryPath) or die "Cant open $directoryPath$!";
my @files = readdir(DIR); #Array of file names
closedir (DIR) or die "Cant close $directoryPath$!";
my $fileName = "File ";
for my $i (0 .. @files)
{
    rename($files[$i], $fileName . ($i+1)) or die "Cant rename file $files[$i]$!";
}
chdir; #return to home directory
I can input the path correctly, but then error message (copied exactly) is:
Can't rename file .Permission denied at C:\path\to\file\RenameFiles.pl line 19, <> line 1.
The error
Can't rename file .Permission denied at C:\path\to\file\RenameFiles.pl line 19, <> line 1.
says that you are trying to rename the file ., which is a special file that is a shortcut for "current directory". You should add exceptions to your code so it does not rename this file, nor the one called .. (the parent directory). Something like:
next if $files[$i] =~ /^\./;
would do. This will skip over any file whose name begins with a period (.). Alternatively you can skip directories:
next if -d $files[$i]; # skip directories (includes . and ..)
As TLP has already pointed out, readdir returns . and .., which correspond to the current and parent directory.
You'll need to filter those out in order to avoid renaming directories.
use strict;
use warnings;
use autodie;
print "Enter Directory: ";
chomp( my $dirpath = <> );
opendir my $dh, $dirpath or die "Can't open $dirpath: $!";
my $number = 0;
while ( my $file = readdir($dh) ) {
    next if $file =~ /^\.+$/;
    my $newfile = "$dirpath/File " . ++$number;
    rename "$dirpath/$file", $newfile or die "Cant rename file $file -> $newfile: $!";
}
closedir $dh;
Cross Platform Compatibility using Path::Class
One way to simplify this script and logic is to use Path::Class to handle file and directory operations.
use strict;
use warnings;
use autodie;
use Path::Class;
print "Enter Directory: ";
chomp( my $dirname = <> );
my $dir = dir($dirname);
my $number = 0;
for my $file ( $dir->children ) {
    next if $file->is_dir();
    my $newfile = $dir->file( "File" . ++$number );
    $file->move_to($newfile);
}
