I am relatively new to bash scripting and have no experience with LaTeX. I've been asked to develop a script which will replace convenience shortcuts in LaTeX documents with their more cumbersome long-form equivalents.
My approach thus far has been to isolate both the shortcut and the long-form in separate variables and then try to replace them in the text by using sed. I've attached short example files below.
As it currently stands, the script takes 2 arguments: a file expr from which it retrieves the shortcuts and long-form terminology, and an infile in which it is supposed to make the appropriate changes. I know that the script is properly isolating both the shortcuts and long-forms and can return them, but it can't seem to execute the sed command.
I have tried searching this online and found multiple similar questions where the suggestion was that sed has difficulty recognizing variables and that various types of quotation combinations might solve the problem. I have tried many permutations and none appear to work. The long-form terminologies in many cases contain special characters such as '$' and '{}', so I suspect that this might be the issue, but I'm not sure. I am also very much open to other ideas about how to solve the problem. Please find below samples of both the script and the 2 argument files, expr and infile.
expr file containing shortcuts and long-forms
% a
\newcommand{\ao}{$^{18}$O}
\newcommand{\aodso}{$^{18}$O/$^{16}$O}
% b
\newcommand{\bea}{\begin{equation}}
\newcommand{\beaa}{\begin{eqnarray}}
% c
\newcommand{\cthree}{C$_3$}
\newcommand{\cfour}{C$_4$}
\newcommand{\coz}{CO$_2$}
infile containing shortcuts to be replaced by long-forms
This is my test {\ao}
{\aodso} my test is this
Does it work {\bea}
{\beaa} test test test
work work work {\cthree}
{\cfour} This is my test
my test is this {\coz}
Relevant subsection of script called with expr and infile as arguments
while read line; do
    if [[ $line == \newcommand* ]]; then
        temp=${line#*\{}
        sc=${temp%%\}*}
        templf=${temp#*\{}
        lf=${templf%\}}
        #echo $sc, $lf
        sed -i -e 's/${sc}/${lf}/g' ${infile}
    fi
done < ${expr}
UPDATE:
For clarification, this is what the desired result would be; the shortcuts present in infile would be substituted with the appropriate long-forms:
This is my test {$^{18}$O}
{$^{18}$O/$^{16}$O} my test is this
Does it work {\begin{equation}}
{\begin{eqnarray}} test test test
work work work {C$_3$}
{C$_4$} This is my test
my test is this {CO$_2$}
Code for GNU sed:
sed -r '/^%/d;s#.*\b(\{\\\w+\})(\{.*\})#\1 \2#;s#\\#\\\\#g;s#(\S+)\s(\S+)#\\|\1|s|\1|\2|g#' file1|sed -f - file2
$ cat file1
% a
\newcommand{\ao}{$^{18}$O}
\newcommand{\aodso}{$^{18}$O/$^{16}$O}
% b
\newcommand{\bea}{\begin{equation}}
\newcommand{\beaa}{\begin{eqnarray}}
% c
\newcommand{\cthree}{C$_3$}
\newcommand{\cfour}{C$_4$}
\newcommand{\coz}{CO$_2$}
$ cat file2
This is my test {\ao}
{\aodso} my test is this
Does it work {\bea}
{\beaa} test test test
work work work {\cthree}
{\cfour} This is my test
my test is this {\coz}
$ sed -r '/^%/d;s#.*\b(\{\\\w+\})(\{.*\})#\1 \2#;s#\\#\\\\#g;s#(\S+)\s(\S+)#\\|\1|s|\1|\2|g#' file1|sed -f - file2
This is my test {$^{18}$O}
{$^{18}$O/$^{16}$O} my test is this
Does it work {\begin{equation}}
{\begin{eqnarray}} test test test
work work work {C$_3$}
{C$_4$} This is my test
my test is this {CO$_2$}
Explanation:
There are two calls to sed; the first one turns the file with the search/replace patterns into a sed script:
sed -r '/^%/d;s#.*\b(\{\\\w+\})(\{.*\})#\1 \2#;s#\\#\\\\#g;s#(\S+)\s(\S+)#\\|\1|s|\1|\2|g#' file1
\|{\\ao}|s|{\\ao}|{$^{18}$O}|g
\|{\\aodso}|s|{\\aodso}|{$^{18}$O/$^{16}$O}|g
\|{\\bea}|s|{\\bea}|{\\begin{equation}}|g
\|{\\beaa}|s|{\\beaa}|{\\begin{eqnarray}}|g
\|{\\cthree}|s|{\\cthree}|{C$_3$}|g
\|{\\cfour}|s|{\\cfour}|{C$_4$}|g
\|{\\coz}|s|{\\coz}|{CO$_2$}|g
In the second call sed processes this script with the text file to make the replacements.
sed -f - file2
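As an aside on the loop in the question itself: the single quotes around the sed expression stop the shell from ever expanding ${sc} and ${lf}, and the special characters in the long forms ($, \, &) also need escaping before they are handed to sed. A minimal sketch of a corrected loop (my own addition, not part of the answer above; the escaping is simplistic and assumes the long forms contain no | characters):
while read -r line; do                      # -r keeps the backslashes in \newcommand intact
    if [[ $line == \\newcommand* ]]; then   # \\ so the pattern matches the leading backslash
        temp=${line#*\{}
        sc=${temp%%\}*}
        templf=${temp#*\{}
        lf=${templf%\}}
        # escape characters that are special in a sed pattern / replacement
        sc_esc=$(printf '%s' "{$sc}" | sed 's/[][\\.*^$]/\\&/g')
        lf_esc=$(printf '%s' "{$lf}" | sed 's/[\\&]/\\&/g')
        sed -i "s|$sc_esc|$lf_esc|g" "$infile"
    fi
done < "$expr"
Matching the shortcut together with its surrounding braces ({\ao}) avoids clobbering longer names such as \aodso, the same trick the generated sed script above relies on.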
There's a lot of discussion of this issue on this question at tex.SE. But I'll take the opportunity to note that the best answer there (IMO) is to use the de-macro program, which is a python script that comes with TeXLive. It's quite capable, and can handle arguments as well as simple replacements.
To use it, you move the macros that you want expanded into a <something>-private.sty file, include it in your document with \usepackage{<something>-private}, then run de-macro <mydocument>. It spits out <mydocument>-private.tex, which is the same as your original, but with your private macros replaced by their expansions.
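A minimal sketch of that workflow, using the macros from the question and a hypothetical file name mymacros-private.sty:
# 1. put the macros you want expanded into a *-private.sty file
cat > mymacros-private.sty <<'EOF'
\newcommand{\coz}{CO$_2$}
\newcommand{\cthree}{C$_3$}
EOF
# 2. load it from mydocument.tex with:  \usepackage{mymacros-private}
# 3. run de-macro on the document (it ships with TeX Live)
de-macro mydocument
# 4. the expanded copy is written to mydocument-private.tex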
I know that this question has been marked as answered for quite a while, and that you explicitly mention bash and sed as your desired tools.
However, in the interest of others, and if you don't insist on bash and sed, other options exist for your problem, e.g. the perl script TME (as suggested here on SO). Usage:
tme [ -c ] [ -D | -Dn ] [ macros.tex ... ] <input.tex >output.tex
Related
I have some text files $f resembling the following
function
%blah
%blah
%blah
code here
I want to append the following text before the first empty line:
%
%This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike
%3.0 Unported License. See notes at the end of this file for more information.
I tried the following:
top=$(cat ./PATH/text.txt)
top="${top//$'\n'/\\n}"
sed -i.bak 's#^$#'"$top"'\\n#' $f
where the second line (I think) preserves the new line in the text and the third line (I think) substitutes the first empty line with the text plus a new empty line.
Two problems:
1- My code appends the following text:
%n%This work is licensed under the Creative Commons
Attribution-NonCommercial-ShareAlike n%3.0 Unported License. See notes
at the end of this file for more information.\n
2- It appends it at the end of the file.
Can someone please help me understand the problems with my code?
If you are using GNU sed, the following would work.
Use ^$ to find the first empty line and then use sed to substitute in the text that you want.
# Define your replacement text in a variable
a="%\n%This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike\n%3.0 Unported License. See notes at the end of this file for more information."
Note that $a should include those \n sequences, which sed will interpret directly as newlines.
$ sed "0,/^$/s//$a/" inputfile.txt
In the above syntax, the 0,/^$/ address range (a GNU sed extension) ends at the first occurrence of an empty line, so only that first match is replaced.
Output:
function
%blah
%blah
%
%This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike
%3.0 Unported License. See notes at the end of this file for more information.
%blah
code here
You've included bash and sed tags in your question. Since I can't seem to come up with a way of doing this in sed, here's a bash-only solution. It's likely to perform the worst of all working solutions you might find.
The following works with your sample input:
$ while read -r x; do [[ -z "$x" ]] && cat boilerplate; printf '%s\n' "$x"; done < src
This will however insert the boilerplate before EVERY blank line, which is probably not what you're after. Instead, we should probably make this more than a one-liner:
#!/usr/bin/env bash
y=true
while read -r x; do
    if [[ -z "$x" ]] && $y; then
        cat boilerplate
        y=false
    fi
    printf '%s\n' "$x"
done < src
Note that unlike the code in your question, this doesn't store your boilerplate in a variable, it just cats it "at the right time".
Note that this sends the combined output to stdout. If your goal is to modify the original file, you'll need to wrap this in something that moves around temporary files. (Note that sed's -i option also doesn't really edit files in place, it only hides the moving-around-temp-files from you.)
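If modifying the file is what you need, a sketch of that wrapping (the temp-file handling is my addition, not part of the answer above):
tmp=$(mktemp) || exit 1
y=true
while read -r x; do
    if [[ -z "$x" ]] && $y; then
        cat boilerplate
        y=false
    fi
    printf '%s\n' "$x"
done < src > "$tmp" && mv "$tmp" src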
The following alternatives are probably a better idea.
A similar solution to the bash one might be achieved with better performance using awk:
awk 'NR==FNR{b=b $0 ORS;next} /^$/&&!y{printf "%s",b;y++} 1' boilerplate src
This awk solution obviously reads your boilerplate into a variable, though it's not a shell variable.
Notwithstanding non-standard platform-specific extensions, awk does not have any facility for editing files "in place" either. A portable solution using awk would still need to push temp files around.
And of course, the following old standard of ed is great to keep in your back pocket:
printf 'H\n/^$/\n-\n.r boilerplate\nw\nq\n' | ed src
In bash, of course, you could always use heretext, which might be clearer:
$ ed src <<< $'H\n/^$/\n-\n.r boilerplate\nw\nq\n'
The ed command is the non-stream version of sed. Or rather, sed is the stream version of ed, which has been around since before the dinosaurs and is still going strong.
The commands we're using are separated by newlines and fed to ed's standard input. You can discard stdout if you feel the urge. The commands shown here are:
H - instruct ed to print more useful errors, if it gets any.
/^$/ - search for the first empty line.
- - GO BACK ONE LINE. Awesome, right?
.r boilerplate - Read the boilerplate file in after the current line,
w - and write the file.
q - Quit.
Note that this does not keep a .bak file. You'll need to do that yourself if you really want one.
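Not part of the answer above, but if you want the same safety net as sed -i.bak, just copy the file first:
cp src src.bak && printf 'H\n/^$/\n-\n.r boilerplate\nw\nq\n' | ed src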
And if, as you suggested in comments, the filename you're reading is to be constructed from a variable, note that variable expansion does not happen inside ANSI-C quoting ($' .. '). You can either switch quoting mechanisms mid-script:
ed "$file" <<< $'H\n/^$/\n-\n.r ./TATTOO_'"$currn"$'/top.txt\nw\nq\n'
Or you could put the ed script in a variable constructed with printf:
printf -v scr 'H\n/^$/\n-\n.r ./TATTOO_%s/top.txt\nw\nq\n' "$currn"
ed "$file" <<< "$scr"`
Adding the text to a variable so you can interpolate the variable is wasteful and an unnecessary complication. sed can easily read the contents of a file by itself.
sed -i.bak '1r./PATH/text.txt' "$f"
Unfortunately, this part of sed is poorly standardized, so you may have to experiment a little bit. Some dialects require a newline (perhaps, or perhaps not, preceded by a backslash) before the filename.
sed -i.bak '1r\
./PATH/text.txt' "$f"
(Notice also the double quotes around the file name. You generally always want double quotes around variables which contain file names. More here.)
Adapting the recipe from here we can extend this to apply to the first empty line instead of the first line.
sed -i.bak -e '/^$/!b' -e 'r./PATH/text.txt' -e :a -e '$!{' -e n -e ba -e } "$f"
This adds the boilerplate after the first empty line but perhaps that's acceptable. Refactoring it to replace it or add an empty line after should not be too challenging anyway. (Maybe use sed -n and instead explicitly print everything except the empty line.)
In brief terms, this skips to the end (simply prints) up until we find the first empty line. Then, we read and print the file, and go into a loop which prints the remainder of the file without returning to the beginning of the script.
sed that I think works. Uses files for the extra bit to be inserted.
b='##\n## comment piece\n##'
sed --posix -ne '
1,/^$/ {
/^$/ {
x;
/^true$/ !{
x
s/^$/true/
i\
'"$b"'
};
x;
s/^.*$//
}
}
p
' file1
With the examples using ranges of 1,/^$/, an empty first line would result in the disclaimer being printed twice. To avoid this, I've set it up to put a flag in the hold space (x; s/^$/true/) that I can swap into the pattern space to check whether it's the first blank. Once there's a match for a blank line, i\ inserts the comment ($b) in front of the pattern space.
Thanks to ghoti for the initial plan.
I have an audacity.cfg file in which I want to script the substitution of two plugin paths. The paths were previously different, so I need to insert the updated ones. I will provide one below.
First, I want to locate this text, which begins the line in question:
FFmpegLibPath
Next, I want to replace that entire line with:
FFmpegLibPath=/Library/Application Support/audacity/libs/libavformat.55.dylib
That's it. It should not be so difficult, but it is. I have done lots of experimenting using sed and awk, but have not been able to get anything to work. While there are LOTS of examples of this online and in this forum, none of them have worked. They all produce errors relating to escape characters, as well as some random other things. I have spent hours experimenting and researching, but have not made any headway.
I realize that the slashes and spaces are likely causing issues, and I have spent considerable time attempting to solve this. I've tried all sorts of things, but as I've said, nothing works.
Does anyone have any ideas about this?
Thanks in advance for your help.
Edit:
I am running MacOS 10.10.5, and one of the things I saw in my research was using GNU sed, because some arguments do not work without it. While I am sure that would produce a better result, I cannot use it because my users would not have it. I think this is part of the reason why this is so difficult, because many of the solutions I have seen are utilizing arguments that I cannot use.
If everything else fails, you can always use the old-school ed solution. :) :)
#!/bin/bash
{
printf 'H\n'
printf '/^FFmpegLibPath[ \t=]/\n'
printf '%s\n' c 'FFmpegLibPath=/Library/Application Support/audacity/libs/libavformat.55.dylib' . w q
} | ed -s "/path/to/audacity.cfg" >/dev/null
The quotes and spaces are mandatory.
The above searches for a line starting with FFmpegLibPath followed by a space, tab, or =. This avoids collisions with similar prefixes like FFmpegLibPath2.
If such collisions are not possible, the above could be simply written as:
ed -s "/path/to/audacity.cfg" >/dev/null <<'EOF'
H
/^FFmpegLibPath/
c
FFmpegLibPath=/Library/Application Support/audacity/libs/libavformat.55.dylib
.
w
q
EOF
or
printf '%s\n' H '/^FFmpegLibPath/' c 'FFmpegLibPath=/Library/Application Support/audacity/libs/libavformat.55.dylib' . w q |
ed -s "/path/to/audacity.cfg" >/dev/null
You can escape the special character (forward slash) and assign it to a variable:
REPL=$(sed 's/[\/]/\\&/g' <<< "/Library/Application Support/audacity/libs/libavformat.55.dylib")
& is sed's meta-character to represent the pattern that was matched.
sed -E "s/(FFmpegLibPath=).+/\1$REPL/" audacity.cfg
Option -E is used to support extended regular expressions
output:
etc
FFmpegLibPath=/Library/Application Support/audacity/libs/libavformat.55.dylib
etc
etc
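As an aside (not part of the answer above): choosing a sed delimiter that cannot appear in the path avoids the escaping step entirely, for example:
sed -E "s#(FFmpegLibPath=).+#\1/Library/Application Support/audacity/libs/libavformat.55.dylib#" audacity.cfg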
If you preferred to maintain the updates in a separate text file:
cfg_update.txt
key_name1=value
key_name2=value
key_name3=value
# read each key=value pair, using = as the field delimiter for read
while IFS='=' read -r KEY VALUE; do
    # escape any slashes in the value so it is safe inside the sed replacement
    VALUE=$(sed 's/[\/]/\\&/g' <<< "$VALUE")
    sed -i -E "s/($KEY=).+/\1$VALUE/" audacity.cfg
done < cfg_update.txt
Option -i is used to edit the file in place.
Finally, be sure to make a backup before your tests, good luck!
I'm trying to use enscript to print PDFs from Mutt, and hitting character encoding issues. One way around them seems to be to just use sed to replace the problem characters: sed -ir 's/[“”]/"/g' {input}
My test input file is this:
“very dirty”
we’re
I'm hoping to get "very dirty" and we're but instead I'm still getting
â\200\234very dirtyâ\200\235
weâ\200\231re
I found a nice little post on printing to PDFs from Mutt that I used as a starting point. I have a bash script that I point to from my .muttrc with set print_command="$HOME/.mutt/print.sh" -- the script currently reads about like this:
#!/bin/bash
input="$1" pdir="$HOME/Desktop" open_pdf=evince
# Straighten out curly quotes
sed -ir 's/[“”]/"/g' $input
sed -ir "s/[’]/'/g" $input
tmpfile="`mktemp $pdir/mutt_XXXXXXXX.pdf`"
enscript --font=Courier8 $input -2r --word-wrap --fancy-header=mutt -p - 2>/dev/null | ps2pdf - $tmpfile
$open_pdf $tmpfile >/dev/null 2>&1 &
sleep 1
rm $tmpfile
It does a fine job of creating a PDF (and works fine if you give it a file as an argument) but I can't figure out how to fix the curly quotes.
I've tried a bunch of variations on the sed line:
input=sed -r 's/[“”]/"/g' $input
$input=sed -ir "s/[’]/'/g" $input
Per the suggestion at Can I use sed to manipulate a variable in bash? I also tried input=$(sed -r 's/[“”]/"/g' <<< $input) and I get an error: "Syntax error: redirection unexpected"
But none manages to actually change $input -- what is the correct syntax to change $input with sed?
Note: I accepted an answer that resolved the question I asked, but as you can see from the comments there are a couple of other issues here. enscript is taking in a whole file as a variable, not just the text of the file. So trying to tweak the text inside the file is going to take a few extra steps. I'm still learning.
On Editing Variables In General
BashFAQ #21 is a comprehensive reference on performing search-and-replace operations in bash, including within variables, and is thus recommended reading. On this particular case:
Use the shell's native string manipulation instead; this is far higher performance than forking off a subshell, launching an external process inside it, and reading that external process's output. BashFAQ #100 covers this topic in detail, and is well worth reading.
Depending on your version of bash and configured locale, it might be possible to use a bracket expression (i.e. [“”], as your original code did). However, the most portable thing is to treat “ and ” separately, which will work even without multi-byte character support available.
input='“hello ’cruel’ world”'
input=${input//'“'/'"'}
input=${input//'”'/'"'}
input=${input//'’'/"'"}
printf '%s\n' "$input"
...correctly outputs:
"hello 'cruel' world"
On Using sed
To provide a literal answer -- you almost had a working sed-based approach in your question.
input=$(sed -r 's/[“”]/"/g' <<<"$input")
...adds the missing syntactic double quotes around the parameter expansion of $input, ensuring that it's treated as a single token regardless of how it might be string-split or glob-expanded.
But All That May Not Help...
The below is mentioned because your test script is manipulating content passed on the command line; if that's not the case in production, you can probably disregard the below.
If your script is invoked as ./yourscript “hello * ’cruel’ * world”, then information about exactly what the user entered is lost before the script is started, and nothing you can do here will fix that.
This is because $1, in that scenario, will only contain “hello; ’cruel’ and world” are in their own argv locations, and the *s will have been replaced with lists of files in the current directory (each such file substituted as a separate argument) before the script was even started. Because the shell responsible for parsing the user's command line (which is not the same shell running your script!) did not recognize the quotes as valid at the time when it ran this parsing, by the time the script is running, there's nothing you can do to recover the original data.
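A quick way to see this for yourself (the argdemo script name is purely illustrative):
cat > argdemo <<'EOF'
#!/usr/bin/env bash
printf 'argc=%s first=%s\n' "$#" "$1"
EOF
chmod +x argdemo
./argdemo “hello ’cruel’ world”      # argc=3 first=“hello   -- the curly quotes mean nothing to the calling shell
./argdemo '“hello ’cruel’ world”'    # argc=1 first=“hello ’cruel’ world”   -- quoting on the caller's side keeps it intact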
Abstract: The way to use sed to change a variable is explored first, but what you really need is a way to read and edit a file; that is covered further down.
Sed
The (two) sed line(s) could be replaced with this (note that -i is not used, since the input here is not a file but a value):
input='“very dirty”
we’re'
sed 's/[“”]/\"/g;s/’/'\''/g' <<<"$input"
But it should be faster (for small strings) to use the internals of the shell:
input='“very dirty”
we’re'
input=${input//[“”]/\"}
input=${input//[’]/\'}
printf '%s\n' "$input"
$1
But there is an underlying problem with your script: you are trying to clean input received from the command line, using $1 as the source of the string. Once somebody writes:
./script “very dirty”
we’re
That input is lost. It is broken into the shell's tokens, and "$1" will be “very only.
But I do not believe that is what you really have.
file
However, you are also saying that the input comes from a file. If that is the case, then read it in with:
input="$(<infile)" # not $1
sed 's/[“”]/\"/g;s/’/'\''/g' <<<"$input"
Or, if you don't mind editing (changing) the file, do this instead:
sed -i 's/[“”]/\"/g;s/’/'\''/g' infile
input="$(<infile)"
Or, if you are clear and certain that what is being given to the script is a filename, like:
./script infile
You can use:
infile="$1"
sed -i 's/[“”]/\"/g;s/’/'\''/g' "$infile"
input="$(<"$infile")"
Other comments:
Quote your variables.
Do not use the very old `…` syntax, use $(…) instead.
Do not use variables in UPPER case, those are reserved for environment variables.
And (unless you actually meant sh) use a shebang (first line) that targets bash.
The command enscript most definitely requires a file, not a variable.
Maybe you should use evince to open the PS file directly; there is no need for the step that makes a PDF unless you know you really need it.
I believe it is better to use a file to store the output of enscript and ps2pdf.
Do not hide the errors printed by the commands until everything is working as desired; then, just call the script as:
./script infile 2>/dev/null
Or as required to make it less verbose.
Final script.
If you call the script with the name of the file that enscript is going to use, something like:
./script infile
Then, the whole script will look like this (it runs in both bash and sh):
#!/usr/bin/env bash
Usage(){ echo "$0; This script requires a source file"; exit 1; }
[ $# -lt 1 ] && Usage
[ ! -e "$1" ] && Usage
infile="$1"
pdir="$HOME/Desktop"
open_pdf=evince
# Straighten out curly quotes
sed -i 's/[“”]/\"/g;s/’/'\''/g' "$infile"
tmpfile="$(mktemp "$pdir"/mutt_XXXXXXXX.pdf)"
outfile="${tmpfile%.*}.ps"
enscript --font=Courier10 "$infile" -2r \
--word-wrap --fancy-header=mutt -p "$outfile"
ps2pdf "$outfile" "$tmpfile"
"$open_pdf" "$tmpfile" >/dev/null 2>&1 &
sleep 5
rm "$tmpfile" "$outfile"
Suppose I've got a list of files
file1
"file 1"
file2
a for...in loop splits it on whitespace, not newlines:
for x in $( ls ); do
echo $x
done
results:
file
1
file1
file2
I want to execute a command on each file. "file" and "1" above are not actual files. How can I do that if the filenames contain things like spaces or commas?
It's a little trickier than I think find -print0 | xargs -0 can handle, because I actually want the command to be something like "convert input/file1.jpg .... output/file1.jpg", so I need to permute the filename in the process.
Actually, Mark's suggestion works fine without even doing anything to the internal field separator. The problem is that running ls in a subshell, whether by backticks or $( ), causes the for loop to be unable to distinguish between spaces in names. Simply using
for f in *
instead of the ls solves the problem.
#!/bin/bash
for f in *
do
echo "$f"
done
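Tying this back to the convert example from the question, a usage sketch of the same pattern driving a per-file command (the output/ directory is assumed to exist):
for f in input/*.jpg; do
    # ${f##*/} strips the input/ prefix so the result lands in output/
    convert "$f" "output/${f##*/}"
done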
UPDATE BY OP: this answer sucks and shouldn't be on top ... #Jordan's post below should be the accepted answer.
one possible way:
ls -1 | while read x; do
echo $x
done
I know this one is LONG past "answered", and with all due respect to eduffy, I came up with a better way and I thought I'd share it.
What's "wrong" with eduffy's answer isn't that it's wrong, but that it imposes what for me is a painful limitation: there's an implied creation of a subshell when the output of the ls is piped and this means that variables set inside the loop are lost after the loop exits. Thus, if you want to write some more sophisticated code, you have a pain in the buttocks to deal with.
My solution was to take the "readline" function and write a program out of it in which you can specify any specific line number you want from any given command's output. ... As a simple example, starting with eduffy's:
ls_output=$(ls -1)
# cut keeps just the line count from wc's output
declare -i line_count=$(echo "$ls_output" | wc -l | cut -d ' ' -f 1)
declare -i cur_line=1
while [ $cur_line -le $line_count ] ;
do
    # NONE of the values in the variables inside this do loop are trapped here.
    filename=$(echo "$ls_output" | readline -n $cur_line)
    # Now filename contains a filename from the preceding ls command
    cur_line=cur_line+1
done
Now you have wrapped up all the subshell activity into neat little contained packages and can go about your shell coding without having to worry about the scope of your variable values getting trapped in subshells.
I wrote my version of readline in GNU C; if anyone wants a copy, it's a little big to post here, but maybe we can find a way...
Hope this helps,
RT
I'm thinking of using find or grep to collect the files, and maybe sed to make the change, but what to do with the output? Or would it be better to use "argdo" in vim?
Note: this question is asking for command line solutions, not IDE's. Answers and comments suggesting IDE's will be calmly, politely and serenely flagged. :-)
I am a huge fan of the following
export MYLIST=`find . -type f -name '*.java'`
for a in $MYLIST; do
    mv $a $a.orig
    echo "import.stuff" >> $a
    cat $a.orig >> $a
    chmod 755 $a
done;
mv is evil and eventually this will get you. But I use this same construct for a lot of things and it is my utility knife of choice.
Update: This method also backs up the files which you should do using any method. In addition it does not use anything but the shell's features. You don't have to jog your memory about tools you don't use often. It is simple enough to teach a monkey (and believe me I have) to do. And you are generally wise enough to just throw it away because it took four seconds to write.
You can use sed to insert a line before the first line of the file:
sed -i -e "1i import package.name.*;" YourClass.java
Use a for loop to iterate through all your files and run this expression on them. But be careful if you have packages, because the import statements must come after the package declaration. You can use a more complex sed expression if that's the case.
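For that case, a sketch (GNU sed syntax, my addition rather than part of the answer above) that appends the import after the package line instead of inserting it at line 1:
sed -i '/^package /a import package.name.*;' YourClass.java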
I'd suggest sed -i to obviate the need to worry about the output. Since you don't specify your platform, check your man pages; the semantics of sed -i vary from Linux to BSD.
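For illustration, the same first-line insertion written for both dialects; these are the common forms, but do check your local man page:
# GNU sed (Linux): the backup suffix, if any, must be attached to -i
sed -i.bak '1i import package.name.*;' YourClass.java
# BSD sed (macOS): give -i its suffix (possibly empty) as a separate word; i wants a backslash-newline
sed -i '.bak' $'1i\\\nimport package.name.*;' YourClass.java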
I would use sed if there were a decent way to say "do this for the first line only", but I don't know of one off the top of my head. Why not use perl instead? Something like:
find . -name '*.java' -exec perl -p -i.bak -e '
BEGIN {
print "import package.name.*;\n"
}' {} \;
should do the job. Check perlrun(1) for more details.
for i in `ls *java`
do
sed -i '.old' '1 i\
Your include statement here.
' $i
done
Should do it. -i does an in-place replacement and .old saves the old file just in case something goes wrong. Replace the iterator *java as necessary (maybe 'find . | grep java' or something instead).
You may also use the ed command to do in-file search and replace:
# delete all lines matching foobar
ed -s test.txt <<< $'g/foobar/d\nw'
see: http://bash-hackers.org/wiki/doku.php?id=howto:edit-ed
I've actually started doing it using "argdo" in vim. First of all, set the args:
:args **/*.java
The "**" traverses all the subdir, and the "args" sets them to be the arg list (as if you started vim with all those files in as arguments to vim, eg: vim package1/One.java package1/Two.java package2/One.java)
Then fiddle with whatever commands I need to make the transform I want, eg:
:/^package.*$/s/$/\rimport package.name.*;/
The "/^package.*$/" acts as an address for the ordinary "s///" substitution that follows it; the "/$/" matches the end of the package's line; the "\r" is to get a newline.
Now I can automate this over all files, with argdo. I hit ":", then uparrow to get the above line, then insert "argdo " so it becomes:
:argdo /^package.*$/s/$/\rimport package.name.*;/
This "argdo" applies that transform to each file in the argument list.
What is really nice about this solution is that it isn't dangerous: it hasn't actually changed the files yet, but I can look at them to confirm it did what I wanted. I can undo on specific files, or I can exit if I don't like what it's done (BTW: I've mapped ^n and ^p to :n and :N so I can scoot quickly through the files). Now, I commit them with ":wa" - "write all" files.
:wa
At this point, I can still undo specific files, or finesse them as needed.
This same approach can be used for other refactorings (e.g. change a method signature and calls to it, in many files).
BTW: This is clumsy: "s/$/\rtext/"... There must be a better way to append text from vim's commandline...