I have a lot of .txt files, and I want to add each file's name as its first line.
I had this command:
perl -i -pe 'BEGIN{undef $/;} s/^/$ARGV\n/' `find . -name '*.txt'`
This works perfectly on Linux but, on Windows, I get:
Can't find string terminator "'" anywhere before EOF at -e line 1.
I think the following script would work:
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;
sub wanted {
    if (m/\.txt\z/) {
        # find() chdirs into each directory it visits, so pass the basename
        # in $_; $File::Find::name is relative to the starting directory and
        # would not resolve from inside a subdirectory.
        system "perl", "-i.bak", "-0777", "-pe", 's/^/$ARGV\n/', $_;
    }
}
find(\&wanted, ".");
This script uses File::Find, as suggested in the comments on the question. find takes a callback and a list of directories to search. The callback, named wanted, checks whether the filename ends in ".txt". If it does, the script spawns a new perl interpreter that edits the file in place, inserting the filename before the first line.
There's probably a way to do it without using system.
Related
Attempting to move a directory which contains sub-directories to /opt using perl. Just perl.
#!/usr/bin/perl -w
use strict;
use warnings;
system(
'echo "Copy the application directory here and paste it here"
read apple
mv -f $apple /opt
');
This returns
mv: cannot stat "'/root/dump'": No such file or directory
My input for read apple is /root/dump, which is a directory. When I ran it directly from the terminal it worked, but from the perl file it doesn't. Could you be as descriptive as possible, as I'm not that familiar with bash scripting? Thank you in advance.
Update:
I tried the bash file I wrote
#!/bin/bash
echo "Copy the application directory here and paste it here"
read apple
mv -f $apple /opt
This also returns the same result as perl.
If it literally says "'/root/dump'", that means you have added literal quotes in addition to the syntactic ones. Make sure to quote "$apple" in your Bash script, and don't type quotes around the path you give to read.
If you want to use just perl, instead of system()ing out to a shell, use dirmove() from File::Copy::Recursive to move the directory tree.
If I understand your question, you just want to move a directory recursively to /opt. This script does just that.
#!/usr/bin/perl -w
use strict;
print "Copy the application directory here and paste it here: ";
chomp (my $apple = <STDIN>);
system 'mv', '-f', $apple, '/opt';  # list form avoids the shell, so spaces in $apple survive
using perl. Just perl.
I don't understand that. You are using system commands in your code provided above. If you want to use just perl use dirmove() from File::Copy::Recursive as Shawn mentioned already. Hope this helps.
I can remove file extensions if I know the extensions, for example to remove .txt from files:
foreach file (`find . -type f`)
mv $file `basename $file .txt`
end
However if I don't know what kind of file extension to begin with, how would I do this?
I tried:
foreach file (`find . -type f`)
mv $file `basename $file .*`
end
but it wouldn't work.
What shell is this? At least in bash you can do:
find . -type f | while read -r; do
mv -- "$REPLY" "${REPLY%.*}"
done
(The usual caveats apply: This doesn't handle files whose name contains newlines.)
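If newlines in names are a real concern, a NUL-delimited variant closes that gap too. This is a sketch using bash's `read -d ''` (not POSIX sh) on a throwaway directory; the file names are made up for illustration:

```shell
# NUL-delimited names from find survive spaces and newlines;
# "$f" stays quoted the whole way through.
dir=$(mktemp -d); cd "$dir"
touch 'report final.txt' notes.md
find . -type f -name '?*.*' -print0 | while IFS= read -r -d '' f; do
  mv -- "$f" "${f%.*}"
done
ls
```

The `-name '?*.*'` guard makes sure only files that actually have an extension are renamed.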
You can use sed to compute the base file name.
foreach file (`find . -type f`)
mv $file `echo $file | sed -e 's/^\(.*\)\.[^.]\+$/\1/'`
end
Be cautious: The command you seek to run could cause loss of data!
If you don't think your file names contain newlines or double quotes, then you could use:
find . -type f -name '?*.*' |
sed 's/\(.*\)\.[^.]*$/mv "&" "\1"/' |
sh
This generates your list of files (making sure that the names contain at least one character plus a .), runs each file name through the sed script to convert it into an mv command by effectively removing the material from the last . onwards, and then running the stream of commands through a shell.
Clearly, you test this first by omitting the | sh part. Consider running it with | sh -x to get a trace of what the shell's doing. Consider making sure you capture the output of the shell, standard output and standard error, into a log file so you've got a record of the damage that occurred.
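For instance, a dry run (the pipeline without the final `| sh`) on a couple of scratch files, sorted for a stable order; the directory and file names here are invented for illustration:

```shell
dir=$(mktemp -d); cd "$dir"
touch a.txt 'b c.tar'
# Without "| sh" this only prints the commands that *would* run:
find . -type f -name '?*.*' | sed 's/\(.*\)\.[^.]*$/mv "&" "\1"/' | sort
# mv "./a.txt" "./a"
# mv "./b c.tar" "./b c"
```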
Do make sure you've got a backup of the original set of files before you start playing with this. It need only be a tar file stored in a different part of the directory hierarchy, and you can remove it as soon as you're happy with the results.
You can choose any shell; this doesn't rely on any shell constructs except pipes and single quotes and double quotes (pretty much common to all shells), and the sed script is version neutral too.
Note that if you have files xyz.c and xyz.h before you run this, you'll only have a file xyz afterwards (and what it contains depends on the order in which the files are processed, which needn't be alphabetic order).
If you think your file names might contain double quotes (but not single quotes), you can play with changing the quotes in the sed script. If you might have to deal with both, you need a more complex sed script. If you need to deal with newlines in file names, then it is time to (a) tell your user(s) to stop being silly and (b) fix the names so they don't contain newlines. Then you can use the script above. If that isn't feasible, you have to work a lot harder to get the job done accurately; you probably need to make sure you've got a find that supports -print0, a sed that supports -z and an xargs that supports -0 (installing the most recent GNU versions if you don't already have the right support in place).
It's very simple:
$ set filename=/home/foo/bar.dat
$ echo ${filename:r}
/home/foo/bar
See more in man tcsh, in "History substitution":
r
Remove a filename extension '.xxx', leaving the root name.
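For comparison, the bash/ksh/zsh counterpart of tcsh's `:r` modifier is the `${var%.*}` parameter expansion, which strips the shortest suffix matching `.*`:

```shell
filename=/home/foo/bar.dat
echo "${filename%.*}"
# prints /home/foo/bar
```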
This is for the Apple platform. My end goal is to do a find and replace for a line inside of the firefox preference file "prefs.js" to turn off updates. I want to be able to do this for all accounts on the Mac, including the user template (didn't include that in the examples). So far I've been able to get a list of all the paths that have the prefs.js file with this:
find /Users -name prefs.js
I then put the old preference and new preference in variables:
oldPref='user_pref("app.update.enabled", false);'
newPref='user_pref("app.update.enabled", true);'
I then have a "for loop" with the sed command to replace the old preference with the new preference:
for prefs in `find /Users -name prefs.js`
do
sed "s/$oldPref/$newPref/g" "$prefs"
done
The problem I'm running into is that the "find" command returns the full paths with the stupid "Application Support" in the path name like this:
/Users/admin/Library/Application Support/Firefox/Profiles/437cwg3d.default/prefs.js
When the command runs, I get these errors:
sed: /Users/admin/Library/Application: No such file or directory
sed: Support/Firefox/Profiles/437cwg3d.default/prefs.js: No such file or directory
I'm assuming that I somehow need to get the "find" command to wrap the output path in quotes for the "sed" command to parse it correctly? Am I on the right track? I've tried to pipe the find command into sed to wrap quotes, but I can't get anything to work correctly. Please let me know if I should go about this differently. Thank you.
You don't want to for prefs in ... on a list of files that are output from find. For a more complete explanation of why this is bad, see Greg's wiki page about parsing ls. You would only use a for loop in bash if you could match the files using a glob, which is difficult if you want to do it recursively.
It would be better, if you can swing it, to use find ... -exec ... instead. Perhaps something like:
find /Users -name prefs.js -exec sed -i.bak -e "s/$oldPref/$newPref/" {} \;
The sed command line is executed once for each file found by find. The {} gets replaced with the filename. Sed's -i option lets you run it in-place, rather than requiring stdin/stdout. Check the man page for usage details.
(Grain of salt: I'm basing this on my experience with linux)
I think it has less to do with sed and more to do with the way the for loop's word list is formed. When the results of find are split into words, the space between "Application" and "Support" is treated as a delimiter.
There are several ways to work around this, but the easiest is probably to change the IFS variable. The IFS variable is an internal variable that your command line interpreter uses to separate fields (more info). You can change the IFS variable of the environment before running the find command.
Modified example from here:
#!/bin/bash
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")
for f in `find /Users -name prefs.js`
do
    echo "$f"
done
# restore $IFS
IFS=$SAVEIFS
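An alternative that needs no IFS juggling is NUL-delimited find output. Below is a self-contained sketch using a scratch directory with a space in the path as a stand-in for "Application Support"; on the Mac you would point find at /Users and the real prefs.js files instead:

```shell
# Demo tree with a space in the path, mirroring "Application Support".
dir=$(mktemp -d)
mkdir -p "$dir/Application Support/Firefox"
printf 'user_pref("app.update.enabled", false);\n' \
  > "$dir/Application Support/Firefox/prefs.js"

oldPref='user_pref("app.update.enabled", false);'
newPref='user_pref("app.update.enabled", true);'

# -print0 emits NUL-terminated paths; bash's read -d '' consumes them,
# so the embedded space never splits the name.
find "$dir" -name prefs.js -print0 | while IFS= read -r -d '' f; do
  sed -i.bak "s/$oldPref/$newPref/g" "$f"
done
cat "$dir/Application Support/Firefox/prefs.js"
# user_pref("app.update.enabled", true);
```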
I have a file called secure.txt in c:\temp. I want to run a Perl command from the command line to print the SHA1 hash of secure.txt. I'm using ActivePerl 5.8.2. I have not used Perl before, but it's the most convenient option available right now.
perl -MDigest::SHA1=sha1_hex -le "print sha1_hex <>" secure.txt
The command-line options to Perl are documented in perlrun. Going from left to right in the above command:
-MDigest::SHA1=sha1_hex loads the Digest::SHA1 module at compile time and imports sha1_hex, which gives the digest in hexadecimal form.
-l automatically adds a newline to the end of any print
-e introduces Perl code to be executed
The funny-looking diamond is a special case of Perl’s readline operator:
The null filehandle <> is special: it can be used to emulate the behavior of sed and awk. Input from <> comes either from standard input, or from each file listed on the command line. Here's how it works: the first time <> is evaluated, the @ARGV array is checked, and if it is empty, $ARGV[0] is set to "-", which when opened gives you standard input. The @ARGV array is then processed as a list of filenames.
Because secure.txt is the only file named on the command line, its contents become the argument to sha1_hex.
With Perl version 5.10 or later, you can shorten the above one-liner by five characters.
perl -MDigest::SHA=sha1_hex -E "say sha1_hex<>" secure.txt
The code drops the optional (with all versions of Perl) whitespace before <>, drops -l, and switches from -e to -E.
-E commandline
behaves just like -e, except that it implicitly enables all optional features (in the main compilation unit). See feature.
One of those optional features is say, which makes -l unnecessary.
say FILEHANDLE LIST
say LIST
say
Just like print, but implicitly appends a newline. say LIST is simply an abbreviation for
{ local $\ = "\n"; print LIST }
This keyword is only available when the say feature is enabled: see feature.
If you’d like to have this code in a convenient utility, say mysha1sum.pl, then use
#! /usr/bin/perl
use warnings;
use strict;
use feature qw(say);   # say must be enabled explicitly
use Digest::SHA;

die "Usage: $0 file ..\n" unless @ARGV;

foreach my $file (@ARGV) {
    my $sha1 = Digest::SHA->new(1);   # use 1 for SHA1, 256 for SHA256, ...
    $sha1->addfile($file);
    say($sha1->hexdigest, " $file");
}
This will compute a digest for each file named on the command line, and the output format is compatible with that of the Unix sha1sum utility.
C:\> mysha1sum.pl mysha1sum.pl mysha1sum.pl
8f3a7288f1697b172820ef6be0a296560bc13bae mysha1sum.pl
8f3a7288f1697b172820ef6be0a296560bc13bae mysha1sum.pl
You didn’t say whether you have Cygwin installed, but if you do, sha1sum is part of the coreutils package.
Try the Digest::SHA module.
C:\> perl -MDigest::SHA -e "print Digest::SHA->new(1)->addfile('secure.txt')->hexdigest"
As of this writing, both Strawberry Perl and ActiveState Perl include the shasum command that comes with Digest::SHA. If you chose the default options during setup, this will already be in your %PATH%.
If you really, really want to write your own wrapper for Digest::SHA, the other answers here are great.
If you just want to "use Perl" to get the SHA1 hash for a file, in the sense that you have ActiveState Perl, and it comes with the shasum command-line utility, it's as simple as:
shasum secure.txt
The default hashing algorithm is SHA1; add -a1 if you want to be explicit (not a bad idea).
The default output format is the hash, two spaces, then the filename. This is the same format as sha1sum and similar utilities, commonly found on Unix/Linux systems. If redirected into a file, that filename can be given to shasum -c later in order to verify the integrity whatever files you had hashed previously.
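A quick round trip with coreutils `sha1sum` (same output format; substitute `shasum` on systems that ship it instead) looks like this, using a scratch file with made-up contents:

```shell
dir=$(mktemp -d); cd "$dir"
printf 'hello\n' > secure.txt
sha1sum secure.txt > hashes.sha1
cat hashes.sha1
# f572d396fae9206628714fb2ce00f72e94f2258f  secure.txt
sha1sum -c hashes.sha1   # verifies the recorded hash
# secure.txt: OK
```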
If you really, really don't want to see the filename, just the SHA1 hash, either of these will chop off the filename part:
Using Powershell:
shasum secure.txt | %{ $_.Split()[0] }
Using Command Prompt (old-school batch scripting):
for /f %i in ('shasum secure.txt') do echo %i
For the second one, make sure you use %%i instead of %i (both places) if you're putting that in a .cmd or .bat file.
Use Digest::SHA1 like so:
Using the OO strategy:
#!/usr/bin/perl -w
use v5.10; # for 'say'
use strict;
require Digest::SHA1;
my $filename = 'secure.txt';
my $sha1 = Digest::SHA1->new->addfile($filename)->hexdigest;
say $sha1;
Note that calling ->hexdigest resets the object's state, destroying the digest it was computing. You can then re-use the $sha1 object.
You can also use the sha1_hex sub on the file contents:
#!/usr/bin/perl -w
use strict;
use Digest::SHA1 qw/ sha1_hex /;
my $filename = "secure.txt";
# open file
open my $fhi, '<', $filename or die "Cannot open file '$filename' for reading: $!";
# slurp all the file contents
my $file_contents;
{local $/; $file_contents = <$fhi>;}
close $fhi;
print sha1_hex($file_contents);
Note that you can use Digest::SHA instead, where new() takes the algorithm to use as parameter (e.g. 1 for SHA1, 256 for SHA256, ...):
require Digest::SHA;
my $sha1 = Digest::SHA->new(1)->addfile($filename)->hexdigest();
I'm thinking of using find or grep to collect the files, and maybe sed to make the change, but what to do with the output? Or would it be better to use "argdo" in vim?
Note: this question is asking for command line solutions, not IDE's. Answers and comments suggesting IDE's will be calmly, politely and serenely flagged. :-)
I am a huge fan of the following:
export MYLIST=`find . -type f -name '*.java'`
for a in $MYLIST; do
    mv $a $a.orig
    echo "import.stuff" >> $a
    cat $a.orig >> $a
    chmod 755 $a
done
mv is evil and eventually this will get you. But I use this same construct for a lot of things and it is my utility knife of choice.
Update: This method also backs up the files which you should do using any method. In addition it does not use anything but the shell's features. You don't have to jog your memory about tools you don't use often. It is simple enough to teach a monkey (and believe me I have) to do. And you are generally wise enough to just throw it away because it took four seconds to write.
You can use sed to insert a line before the first line of the file:
sed -i -e "1i import package.name.*;" YourClass.java
Use a for loop to iterate over all your files and run this expression on them. But be careful if you have packages, because the import statements must come after the package declaration. You can use a more complex sed expression if that's the case.
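A sketch of that loop with GNU sed (the one-line `1i text` form is a GNU extension; BSD/macOS sed wants the text on a backslash-continued line, as in the answer further down). The file name and import are invented for illustration:

```shell
dir=$(mktemp -d); cd "$dir"
printf 'public class One {}\n' > One.java
for f in *.java; do
  sed -i.bak '1i import package.name.*;' "$f"   # .bak keeps a backup of each file
done
head -1 One.java
# import package.name.*;
```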
I'd suggest sed -i to obviate the need to worry about the output. Since you don't specify your platform, check your man pages; the semantics of sed -i vary from Linux to BSD.
I would use sed if there were a decent way to say "do this for the first line only", but I don't know of one off the top of my head. Why not use perl instead? Something like:
find . -name '*.java' -exec perl -p -i.bak -e '
    print "import package.name.*;\n" if $. == 1;
' {} \;
should do the job. Check perlrun(1) for more details.
for i in *.java
do
    sed -i '.old' '1 i\
Your include statement here.
' "$i"
done
Should do it. -i does an in-place replacement and '.old' saves the old file just in case something goes wrong. Replace the *.java glob as necessary (maybe `find . -name '*.java'` or something instead).
You may also use the ed command to do in-file search and replace:
# delete all lines matching foobar
ed -s test.txt <<< $'g/foobar/d\nw'
see: http://bash-hackers.org/wiki/doku.php?id=howto:edit-ed
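Applied to this thread's goal, the same ed technique can prepend an import at line 1; the file name and contents here are made up for the demo:

```shell
dir=$(mktemp -d); cd "$dir"
printf 'public class Foo {}\n' > Foo.java
# 1i = insert before line 1; the lone . ends input mode; w writes the file
ed -s Foo.java <<< $'1i\nimport package.name.*;\n.\nw'
head -1 Foo.java
# import package.name.*;
```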
I've actually started to do it using "argdo" in vim. First of all, set the args:
:args **/*.java
The "**" traverses all the subdir, and the "args" sets them to be the arg list (as if you started vim with all those files in as arguments to vim, eg: vim package1/One.java package1/Two.java package2/One.java)
Then fiddle with whatever commands I need to make the transform I want, eg:
:/^package.*$/s/$/\rimport package.name.*;/
The "/^package.*$/" acts as an address for the ordinary "s///" substitution that follows it; the "/$/" matches the end of the package's line; the "\r" is to get a newline.
Now I can automate this over all files, with argdo. I hit ":", then uparrow to get the above line, then insert "argdo " so it becomes:
:argdo /^package.*$/s/$/\rimport package.name.*;/
This "argdo" applies that transform to each file in the argument list.
What is really nice about this solution is that it isn't dangerous: it hasn't actually changed the files yet, but I can look at them to confirm it did what I wanted. I can undo on specific files, or I can exit if I don't like what it's done (BTW: I've mapped ^n and ^p to :n and :N so I can scoot quickly through the files). Now, I commit them with ":wa" - "write all" files.
:wa
At this point, I can still undo specific files, or finesse them as needed.
This same approach can be used for other refactorings (e.g. change a method signature and calls to it, in many files).
BTW: This is clumsy: "s/$/\rtext/"... There must be a better way to append text from vim's commandline...