My firefox is installed in C:\Program Files\Mozilla Firefox.
I am trying to read the Firefox version like this:
open (IN, "\"C:\\Program\ Files\\Mozilla\ Firefox\\firefox.exe\" --version|") or die "Couldn't fork - $!\n";
my $aarray = (<IN>);
print $fh "=========Array Content========> $aarray <====\n";
close(IN);
This works perfectly. But when I put the path or command in a variable and try to execute it, it fails with "Couldn't fork":
my $ffox = 'C:\Program Files\Mozilla Firefox\firefox.exe';
$ffox =~ s/\\/\\\\/g; #Replacing single \ with double \\
$ffox =~ s/\ /\\ /g; # Adding escape character before space
chop($firefoxVer); # remove last \n chr.
$ffox =~ s/$/\\\"/g; # With the below two commands, massaging the command to become
$ffox =~ s/^/\\\"/g; # "C:\\Program\ Files\\Mozilla\Firefox\\firefox.exe\"
$firefoxVer = $ffox.' --version|'; # Adding "version|" to the above command
$firefoxVer =~ s/$/\"/g; # and make it look like:
$firefoxVer =~ s/^/\"/g; # "\"C:\\Program\ Files\\Mozilla\\Firefox\\firefox.exe\" --version|"
open (IN, $firefoxVer) or die "Couldn't fork - $!\n"; ==> Fails
With backticks you can get the result directly, without the open. Quote the parts of the command line that need to preserve their spaces:
use v5.10;
my $version = `"C:\\Program Files\\Mozilla Firefox\\firefox.exe" -version`;
say "version is $version";
If that directory is in your path, you don't need to fool around with that:
my $firefox = 'C:\\Program Files\\Mozilla Firefox';
$ENV{PATH} .= ";$firefox";
my $version = `firefox -version`;
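If you want to keep the question's pipe-open but build the command in a variable, note that in the working literal the \\ and \  sequences are Perl string escapes that collapse to a single backslash and a plain space, whereas the s/// substitutions insert literal extra backslashes into the command string. A minimal sketch that just wraps the path in double quotes (same path as in the question, untested):
use strict;
use warnings;

my $ffox = 'C:\Program Files\Mozilla Firefox\firefox.exe';   # single quotes: backslashes stay literal
my $cmd  = qq{"$ffox" --version|};                           # "C:\Program Files\Mozilla Firefox\firefox.exe" --version|
open(my $in, $cmd) or die "Couldn't fork - $!\n";
my $version = <$in>;
close $in;
print $version;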
Code snippet:
my $node = shift;
my $runCmd = "cmviewcl -v -f line -p ".$package_name." | awk -F \"[:|=]\" \'(\$1 == \"script_log_file\") { print \$2 }\'";
my $logfile = $output[0];
chomp $logfile;
@DC1_list = utils::getDC1Host($hash_ref);
@DC2_list = utils::getDC2Host($hash_ref);
foreach $node1 (@DC1_list) {
$runCmd = "cmexec $node1 echo \"\" > ".$logfile;
Please let me know what this line means:
$runCmd = "cmexec $node1 echo \"\" > ".$logfile;
it was written before as:
$runCmd = "cmexec $node1 rm -rf ".$logfile;
which probably means remove the file named in the logfile variable (forced, recursive), but it was later changed to the above. So what is it doing?
Removing a file is different from emptying a file.
The first option keeps the file but overwrites its content with "" (two double quotes); the second one removes the file.
Maybe your application needs the file to exist, which is why you cannot remove it.
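To make the difference concrete, here is what the two Perl lines build, with invented values for $node1 and $logfile (hypothetical, just for illustration):
my $node1   = 'nodeA';               # hypothetical node name
my $logfile = '/var/log/pkg.log';    # hypothetical log file path

my $truncate = "cmexec $node1 echo \"\" > " . $logfile;   # cmexec nodeA echo "" > /var/log/pkg.log  (empties the file)
my $remove   = "cmexec $node1 rm -rf " . $logfile;        # cmexec nodeA rm -rf /var/log/pkg.log     (removes the file)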
If you have really copied this line verbatim, it is pretty nonsensical.
Let's assume that the variables mentioned here have the following values:
runCmd has value FOO
node1 has value BAR
logfile has value BAZ
After parameter expansion and making the quoting a bit more legible, this leaves you with a line equivalent to
FOO = 'cmexec BAR echo "" >' .BAZ
This means that a command named FOO is invoked. It must either be an executable file in the PATH, or a function. This command gets three parameters:
First parameter: a lonely equal sign
Second parameter: the string cmexec BAR echo "" >
Third parameter: the string .BAZ
I don't believe that anybody would seriously write such a command; my guess is that you made a typo or an error when copying and pasting this command.
I have a Perl script written on Server 2008. It works fine here.
I have copied it to my laptop, which is running Windows 10 Home Edition, Version 1903, OS Build 18362.959.
I also downloaded ActivePerl, for the very first time.
The script basically takes an input file and applies regular expressions to the content and then outputs the results to a file.
On Windows 10 it is not writing to the file, the file is not even created.
I have done a search on this issue but have not found a solution.
I tried the following code, found in one of the replies to the same issue, but it does not create or write to the file. It works fine on Server 2008. Am I missing something?
As mentioned, this is the first time I have downloaded ActivePerl.
Perl Version Details are as follows:
This is perl 5, version 28, subversion 1 (v5.28.1) built for MSWin32-x64-multi-thread
(with 1 registered patch, see perl -V for more detail)
Copyright 1987-2018, Larry Wall
Binary build 0000 [58a1981e] provided by ActiveState http://www.ActiveState.com
Built Apr 10 2020 17:28:14
Perl code is as follows:
use strict;
use IO::File;
my $FileHandle = new IO::File;
$FileName = "C:\\Users\\moons\\Documents\\Personal Planning\\Shopping\\ShoppingList.txt";
open ($FileHandle, "<$FileName") || print "Cannot open $FileName\n";
local $/;
my $FileContents = <$FileHandle>;
close($FileHandle);
$FileContents =~ s/(Add|Bad|Limit|Each).*\n|Add$|\nWeight\n\d{1,}\nea|\$\d{1,}\.\d\d\/100g\n//g;
# Do more regular expressions.
$FileContents =~ s/(.*)\n(.*)\n(\$\d{1,}\.\d\d)/$1,$3,$2/g;
printf $FileContents;
Above code works. Code below does not create or write to file.
$OutFile = "C:\\Users\\moons\\Documents\\Personal Planning\\Shopping\\test.txt";
$FileHandle = new IO::File;
open ($FileHandle, ">$OutFile") || print "Cannot open $OutFile\n";
printf $FileHandle $FileContents;
close($FileHandle);
Always use use strict; use warnings;.
my $OutFile = "C:\Users\moons\Documents\Personal Planning\Shopping\test.txt";
results in
Unrecognized escape \m passed through at a.pl line 3.
Unrecognized escape \D passed through at a.pl line 3.
Unrecognized escape \P passed through at a.pl line 3.
Unrecognized escape \S passed through at a.pl line 3.
You could use
my $OutFile = "C:\\Users\\moons\\Documents\\Personal Planning\\Shopping\\test.txt";
Whole thing:
use strict;
use warnings;
my $in_qfn = "C:\\Users\\moons\\Documents\\Personal Planning\\Shoppin\\ShoppingList.txt";
my $out_qfn = "C:\\Users\\moons\\Documents\\Personal Planning\\Shopping\\test.txt";
open(my $in_fh, '<', $in_qfn)
or die("Can't open \"$in_qfn\": $!\n");
open(my $out_fh, '>', $out_qfn)
or die("Can't create \"$out_qfn\": $!\n");
my $file;
{
local $/;
$file = <$in_fh>;
}
for ($file) {
s/(Add|Bad|Limit|Each).*\n|Add$|\nWeight\n\d{1,}\nea|\$\d{1,}\.\d\d\/100g\n//g;
s/(.*)\n(.*)\n(\$\d{1,}\.\d\d)/$1,$3,$2/g;
}
print $file;             # echo the transformed content to STDOUT
print $out_fh $file;     # and write it to the output file
I have a FTP folder receiving files from a remote camera. The camera stores the video file name always as ./rec_YYYY-MM-DD_HH-MM.mkv. The video files are stored all in the same folder, the root folder from the FTP server.
I need to move these files to another folder, with this new scheme:
Remove rec_ from the file name.
Change date format to DD-MM-YY.
Remove date from the file name and make it a folder instead, where that same file and all the others in the same date will be stored in.
Final file path would be: ./DD-MM-YYYY/HH-MM.mkv.
The process would continue to all the files, putting them in the folder corresponding to the day it was created.
Summing up: ./rec_YYYY-MM-DD_HH-MM.mkv >> ./DD-MM-YYYY/HH-MM.mkv. The same should apply to all files that are in the same folder.
As I can't make it happen directly from the camera, this needs to be done with Bash on the server that is receiving the files.
So far, what I have is this script, which gets the file's creation date to make a folder, and then uses the creation time to move the file under a new name:
for f in *.mp4
do
mkdir "$f" "$(date -r "$f" +"%d-%m-%Y")"
mv -n "$f" "$(date -r "$f" +"%d-%m-%Y/%H-%M-%S").mp4"
done
I'm getting this output (with testfile 1.mp4):
It creates the folder based on the file's creation date;
it renames the file to its creation time;
Then, it returns mkdir: cannot create directory ‘1.mp4’: File exists
With two or more files, only one gets renamed and moved as described. The others stay the same and the terminal returns:
mkdir: cannot create directory ‘1.mp4’: File exists
mkdir: cannot create directory ‘2.mp4’: File exists
mkdir: cannot create directory ‘12-12-2018’: File exists
Could someone help me out? Better suggestions? Thanks!
Honestly I would just use Perl or Python for this. Here's how to embed either in a shell script.
Here's a Perl script that doesn't use any libraries, not even ones that ship with Perl (so it'll work without extra packages on distributions like CentOS that don't ship with the entire Perl library). The Perl script launches one new process per file to perform the copy.
perl -e '
    while (<"*.m{p4,kv}">) {
        my $path = $_;
        my ($prefix, $year, $month, $day, $hour, $minute, $ext) =
            split /[-._]/, $path;    # split on "-", "." and "_" (note: [.-_] would be a character range)
        my $sec = q[00];
        die "unexpected prefix ($prefix) in $path"
            unless $prefix eq q[rec];
        die "unexpected extension ($ext) in $path"
            unless $ext eq q[mp4] or $ext eq q[mkv];
        my $dir = "$day-$month-$year";
        my $name = "$hour-$minute-$sec" . q[.] . $ext;
        my $destpath = $dir . q[/] . $name;
        die "$destpath is unexpectedly a directory" if -d $destpath;
        mkdir $dir unless -d $dir;   # create the per-day directory the first time it is needed
        system("cp", "--", $path, $destpath);
    }
'
Here's a Python example; it does use the standard library (unlike the Perl one above) and, since it catches FileExistsError, it needs Python 3. The Python script does not spawn any additional processes.
python3 -c '
import os.path as path
import re
from glob import iglob
from itertools import chain
from os import mkdir
from shutil import copyfile
for p in chain(iglob("*.mp4"), iglob("*.mkv")):
    fields = re.split("[-]|[._]", p)
    prefix = fields[0]
    year = fields[1]
    month = fields[2]
    day = fields[3]
    hour = fields[4]
    minute = fields[5]
    ext = fields[6]
    sec = "00"
    assert prefix == "rec"
    assert ext in ["mp4", "mkv"]
    directory = "".join([day, "-", month, "-", year])
    name = "".join([hour, "-", minute, "-", sec, ".", ext])
    destpath = "".join([directory, "/", name])
    assert not path.isdir(destpath)
    try:
        mkdir(directory)
    except FileExistsError:
        pass
    copyfile(src=p, dst=destpath)
'
Finally, here's a bash solution. It splits paths using -, ., and _ and then extracts the individual fields from the positional parameters ("$@") inside a function. That trick is portable, although the pattern substitution on variables is a bash extension.
#!/bin/bash
# $1 $2 $3 $4 $5 $6 $7 $8
# path rec YY MM DD HH MM ext
process_file() {
mkdir "$5-$4-$3" &> /dev/null
cp -- "$1" "$5-$4-$3"/"$6-$7-00.$8"
}
for path in *.m{p4,kv}; do
[ -e "$path" ] || continue
# NOTE: two slashes are needed in the substitution to replace everything
# read -a ARRAYVAR <<< ... reads the words of a string into an array
IFS=' ' read -a f <<< "${path//[-_.]/ }"
process_file "$path" "${f[#]}"
done
If you cd /to/some/directory/containing_your_files then you could use the following script
#!/usr/bin/env bash
for f in rec_????-??-??_??-??.m{p4,kv} ; do
dir=${f:4:10} # skip 4 chars ('rec_') take 10 chars ('YYYY-MM-DD')
fnm=${f:15} # skip 15 chars, take the remainder
test -d "$dir" || mkdir "$dir"
mv "$f" "$dir"/"$fnm"
done
Note ① that I have not exchanged the years and the days; if you absolutely need to do the swap, you can extract the year like this: year=${dir::4}, etc. And ② note that this method of parameter substitution is a Bash-ism; it doesn't work in dash, for example.
Your problem is: mkdir creates a folder, but you are giving it a file name for the folder creation.
If you want to use the file name for folder creation, then use it without the extension.
The thing is, you are trying to create a folder with an already existing file name.
I want to move files from a server that my Windows computer is connected to, to the actual computer. I have tried the code on my mac and it works fine, so I suspect the problem has to do with the fact that the files I wish to move are on a server or perhaps with Windows (I am unfamiliar with this OS). It is important to me to be able to use File::Find::Rule because there are many subdirectories within subdirectories that need to be searched.
use strict;
use warnings;
use File::Find::Rule;
use File::Copy;
# directory where files live
# my $dir = "\\172.18\user\folder\folder2";
# directory where TextGrids will be moved to
my $outdir = "\users\lisa\desktop\test";
my @files;
@files = File::Find::Rule -> file()
-> name("*_clean.TextGrid")
-> maxdepth()
-> in($dir);
foreach my $file (@files) {
$file =~ /(.*\\)(.*)/;
my $name = $2;
copy("$file", "$outdir/$name") or die "Copy failed: $!";
}
Edit: OK, I've made some changes to the script below. But the strange thing is that when I ask it to print each file, it gives me something like \\172.18\user\folder\folder/255/file.txt. I changed the regex to (.*\/)(.*) and now the script works perfectly, though I don't know why!
use strict;
use warnings;
use File::Find::Rule;
use File::Copy;
# directory where files live
my $dir = "\\\\172.18\\user\\folder\\folder2";
# directory where TextGrids will be moved to
my $outdir = "C:\\Users\\lisa\\desktop\\test";
my @files;
@files = File::Find::Rule -> file()
-> name("*_clean.TextGrid")
-> maxdepth()
-> in($dir);
foreach my $file (@files) {
print "$file\n";
$file =~ /(.*\\)(.*)/;
my $name = $2;
copy("$file", "$outdir\\$name") or die "Copy failed: $!";
}
After your edit, the script works because the last directory separator in the string happens to be /, which is matched by the \/ in the regular expression. Even though you had \ in the input, the library you used to find the files added /s.
I have some suggestions:
You can avoid the need to escape (most) backslashes by using single quoted strings, unless you need the interpolation of the double quoted ones.
Escaping backslashes is optional unless followed by a single quote or another backslash:
my $outdir = '\users\lisa\desktop\test';
but
my $outdir = '\users\lisa\desktop\test\\';
$outdir = '\users\lisa\desktop\test\\\'ere is a path';
my $not_a_path = 'three backslashes\\\\\in between, all but the last need escaping';
'ere is a path is the last element in that path.
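For reference, printing the three example strings above should give:
\users\lisa\desktop\test\
\users\lisa\desktop\test\'ere is a path
three backslashes\\\in between, all but the last need escaping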
If you're dealing with Windows, consider using [\\/] in place of directory separator in regular expressions. (Or [\\\/] if you absolutely must use / as regular expression delimiter.)
Even if you have control over user input to only use \ in paths, libraries you use will usually add /, so it's better to be prepared for a combination of both.
$file =~ /(.*[\\\/])(.*)/;
$file =~ m{(.*[\\/])(.*)};
$file =~ m¤(.*[\\/])(.*)¤;
I also removed the superfluous quotes from around $file in the copy() call. Final result:
use strict;
use warnings;
use File::Find::Rule;
use File::Copy;
# directory where files live
my $dir = '\\\\172.18\user\folder\folder2';   # the leading double backslash of the UNC path still needs escaping: '\\\\' gives "\\"
# directory where TextGrids will be moved to
my $outdir = 'C:\Users\lisa\desktop\test';
my @files;
@files = File::Find::Rule -> file()
-> name("*_clean.TextGrid")
-> maxdepth()
-> in($dir);
foreach my $file (@files) {
print "$file\n";
$file =~ /(.*[\\\/])(.*)/;
my $name = $2;
copy($file, "$outdir\\$name") or die "Copy failed: $!";
}
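As an aside, the core File::Basename module already knows that Windows paths can use either separator, so the filename part can also be pulled out without a hand-rolled regex; a minimal sketch of the same loop body using it (same variables as above):
use File::Basename qw(basename);

foreach my $file (@files) {
    print "$file\n";
    my $name = basename($file);   # handles both "\" and "/" on MSWin32
    copy($file, "$outdir\\$name") or die "Copy failed: $!";
}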
use strict;
use warnings;
my $dir = "\\172.18\user\folder\folder2";
print("$dir\n");
my $outdir = "\users\lisa\desktop\test";
print("$outdir\n");
outputs
Unrecognized escape \d passed through at a.pl line 7.
\172.18Ser?older?older2
Sersisadesktop est
You need to escape your backslashes!
use strict;
use warnings;
my $dir = "\\\\172.18\\user\\folder\\folder2";
print("$dir\n");
my $outdir = "\\users\\lisa\\desktop\\test";
print("$outdir\n");
Don't ignore warnings.
I have a directory full of files (text exports of Dynamics NAV objects that have been exported) in Windows. Each file contains multiple objects. I need to split each file into separate files based on lines that begin with OBJECT, and name each file appropriately.
The purpose of this is to get our Dynamics NAV system into git.
I wrote a nifty perl program to do this that works great on linux. But it hangs on the while(<>) loop in Windows (Server 2012 if that matters).
So, I need to either figure out how to do this in the PowerShell script that I wrote that generates all of the files, or fix my Perl script that I'm calling from PowerShell. Does Windows Perl handle filehandles differently than Linux?
Here's my code:
#!/usr/bin/perl
use strict;
use warnings;
use File::Path qw(make_path remove_tree);
use POSIX qw(strftime);
my $username = getlogin || getpwuid($<);
my $datestamp = strftime("%Y%m%d-%H%M%S", localtime);
my $work_dir = "/temp/nav_export";
my $objects_dir = "$work_dir/$username/objects";
my $export_dir = "$work_dir/$username/$datestamp";
print "Objects being exported to $export_dir\n";
make_path("$export_dir/Page", "$export_dir/Codeunit", "$export_dir/MenuSuite", "$export_dir/Query", "$export_dir/Report", "$export_dir/Table", "$export_dir/XMLport");
chdir $objects_dir or die "Could not change to $objects_dir: $!";
# delete empty files
foreach(glob('*.*')) {
unlink if -f and !-s _;
}
my @files = <*>;
my $count = @files;
print "Processing $count files\n";
open (my $fh, ">-") or die "Could not open standard out: $!";
# OBJECT Codeunit 1 ApplicationManagement
while(<>)
{
if (m/^OBJECT ([A-Za-z]+) ([0-9]+) (.*)/o)
{
my $objectType = $1;
my $objectID = $2;
my $objectName = my $firstLine = $3;
$objectName =~ s/[\. \/\(\)\\]/_/g; # translate spaces, (, ), ., \ and / to underscores
$objectName =~ tr/\cM//d; # get rid of Ctrl-M
my $filename = $export_dir . "/" . $objectType . "/" . $objectType . "~" . $objectID . "~" . $objectName;
close $fh and open($fh, '>', $filename) or die "Could not open file '$filename' $!";
print $fh "OBJECT $objectType $objectID $firstLine\n";
next;
}
print $fh $_;
}
I've learned quite a bit of PowerShell in the past few days. There are some things that it really does quite well, and some (such as calling an executable with variables and command-line options that have spaces) that are maddeningly difficult to figure out. To call curl, this is what I resorted to:
$curl = "C:\Program Files (x86)\cURL\bin\curl"
$arg10 = '-s'
$arg1 = '-X'
$arg11 = 'post'
$arg2 = '-H'
$arg22 = '"Accept-Encoding: gzip,deflate"'
$arg3 = '-H'
$arg33 = '"Content-Type: text/xml;charset=UTF-8"'
$arg4 = '-H'
$arg44 = '"SOAPAction:urn:microsoft-dynamics-schemas/page/permissionrange:ReadMultiple"'
$arg5 = '--ntlm'
$arg6 = '-u'
$arg66 = 'username:password'
$arg7 = '-d'
$arg77 = '"#soap_envelope.txt"'
$arg8 = "http://$servicetier.corp.company.net:7047/$database/WS/DBDOC/Page/PermissionRange"
$arg9 = "-o"
$arg99 = "c:\temp\nav_export\$env:username\raw_list.xml"
&"$curl" $arg10 $arg1 $arg11 $arg2 $arg22 $arg3 $arg33 $arg4 $arg44 $arg5 $arg6 $arg66 $arg7 $arg77 $arg8 $arg9 $arg99
I realize that part is a bit of a tangent. But I've been working really hard at trying to figure this out and not have to bother you nice folk here at stackoverflow!
I'm ambivalent about making it work in PowerShell or fixing the Perl code at this point. I just need to make it work. But I'm hoping it's just some little difference in filehandle handling between linux and Windows.
It's hard to believe that the Perl code that you show does anything on Linux either. It looks like your while loop is supposed to be reading through all of the files in the @files array, but to make it do that you have to copy the names to @ARGV.
Also note that @files will contain directories as well as files.
I suggest you change the lines starting with my @files = <*> to this. There's no reason why it shouldn't work on both Windows and Linux.
our @ARGV = grep -f, glob '*';
my $count = @ARGV;
print "Processing $count files\n";
my $fh;
while (<>) {
s/\s+\z//; # Remove trailing whitespace (including CR and LF)
my @fields = split ' ', $_, 4;
if ( @fields == 4 and $fields[0] eq 'OBJECT' ) {
my ($object_type, $object_id, $object_name) = @fields[1,2,3];
$object_name =~ tr{ ().\\/}{_}; # translate spaces, (, ), ., \ and / to underscores
my $filename = "$export_dir/$object_type/$object_type~$object_id~$object_name";
open $fh, '>', $filename or die "Could not open file '$filename': $!";
}
print $fh "$_\n" if $fh;
if (eof) {
close $fh if $fh;
$fh = undef;
}
}
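For background on why assigning to @ARGV is enough: the empty <> operator reads, line by line, from the files named in @ARGV, and a bare eof is true at the end of each of those files, which is what the close at the bottom relies on. A tiny illustration with hypothetical file names:
our @ARGV = ('f1.txt', 'f2.txt');   # hypothetical input files
while (<>) {
    print "$ARGV: $_";              # $ARGV holds the name of the file currently being read
    print "-- end of $ARGV --\n" if eof;
}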