I'm re-writing an ancient and pretty broken build and ran across a rule with something in it I've never seen before. It looks like this:
%_ui.cc:
    ${SOME_UTILITY} ${*}
    sed '/\#include "${*}.h"/d' > tempstubs.cc ${*}_stubs.cc
    /bin/csh -c 'if (-w ${*}_stubs.cc ) cp -f tempstubs.cc ${*}_stubs.cc'
    -rm -f tempstubs.cc
The sed line is the one I'm referring to. I've never seen a redirection like that with two files after the >.
Nevermind, figured it out. The sed line could've been re-written as:
sed 'do whatever' ${*}_stubs.cc > tempstubs.cc
... and appears to be semantically identical.
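This follows from how the shell parses a command line: a redirection can appear anywhere among a command's words, and the shell removes it before the command ever sees its arguments. A quick sketch with made-up file names:

```shell
# A redirection can appear anywhere on the command line; both invocations
# below are equivalent. File names here are invented for the demo.
printf 'keep\n#include "x.h"\n' > stubs.cc
sed '/#include "x.h"/d' > out1.cc stubs.cc   # redirection before the operand
sed '/#include "x.h"/d' stubs.cc > out2.cc   # conventional order
cmp out1.cc out2.cc && echo identical
```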
Related
I am trying to provide a file to my shell script as input; the script should test whether the file contains a specific word and decide which command to execute. I have not yet figured out where the mistake might lie. Please find the shell script that I wrote:
#!/bin/(shell)
input_file="$1"
output_file="$2"
grep "val1" | awk -f ./path/to/script.awk $input_file > $output_file
grep "val2" | sh ./path/to/script.sh $input_file > $output_file
When I input the file that triggers the awk branch, everything gets executed as expected, but for the second command I don't even get an output file. Any help is much appreciated.
Cheers,
You haven't specified this in your question, but I'm guessing you have a file with the keyword, e.g. file cmdfile that contains x-g301. And then you run your script like:
./script "input_file" "output_file" < cmdfile
If so, the first grep command will consume the whole cmdfile on stdin while searching for the first pattern, and nothing will be left for the second grep. That's why the second grep, and then your second script, produces no output.
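This draining behaviour is easy to demonstrate in isolation. In the sketch below (sample data made up), the first grep reads the whole stream looking for "val1", so the second grep sees end-of-file immediately and can never match, even though its pattern is present:

```shell
# The first grep consumes all of stdin; the second one gets nothing.
# The || branch fires because the second grep finds no match.
printf 'x-g301\n' | { grep "val1"; grep "x-g301"; } \
    || echo "no match: stdin was already consumed"
```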
There are many ways to fix this, but choosing the right one depends on what exactly you are trying to do and what that cmdfile looks like. Assuming it's a larger file containing other things besides the command pattern, you could pass that file as a third argument to your script, like this:
./script "input_file" "output_file" "cmdfile"
And have your script handle it like this:
#!/bin/bash
input_file="$1"
output_file="$2"
cmdfile="$3"

if grep -q "X-G303" "$cmdfile"; then
    awk -f ./mno/script.awk "$input_file" > t1.json
fi

if grep -q "x-g301" "$cmdfile"; then
    sh ./mno/tm.sh "$input_file" > t2.json
fi
Here I'm also assuming that your awk and sh scripts don't really need the output from grep, since you're giving them the name of the input file.
Note that the proper way to use grep for an existence check is via its exit status (with output muted by -q). Instead of the if we could have used short-circuiting (grep ... && awk ...), but this way is probably more readable.
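For completeness, the short-circuit form mentioned above looks like this, with throwaway files standing in for the real cmdfile and scripts:

```shell
# Short-circuit variant: run the command only if grep finds the pattern.
# cmdfile and input.txt are stand-ins for the asker's real files.
printf 'some header\nx-g301\n' > cmdfile
printf 'payload\n' > input.txt
grep -q "x-g301" cmdfile && cat input.txt > t2.json
cat t2.json
```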
I am trying to pre-pend a text to a file using a Makefile. The following bash command works in terminal:
echo -e "DATA-Line-1\n$(cat input)" > input
But when I put the above command in a Makefile, it does not work:
copyHeader:
    @echo -e "DO NOT EDIT THIS FILE \n$(cat input)" > input
I guess $(cat input) does not work as expected in the Makefile.
I'd recommend sed for prepending a line of text to a file. The i command is kind of a pain; some clever use of the hold space does the same thing in a more complicated but less troublesome way:
copyHeader:
    sed -i "" '1{h;s/.*/NEW FIRST LINE/;G;}' input
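To see what the hold-space trick does, here is the same command without -i, printing to stdout so the effect is easy to inspect (the sample file is made up): on line 1, h saves the line, s replaces it with the new text, and G appends the saved original after a newline.

```shell
# Portable demonstration of the hold-space prepend, writing to stdout.
printf 'old line 1\nold line 2\n' > input.txt
sed '1{h;s/.*/NEW FIRST LINE/;G;}' input.txt
```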
But if you want to do it your way, I think an extra '$' will do the trick:
copyHeader:
    @echo -e "DO NOT EDIT THIS FILE \n$$(cat input)" > input
EDIT: Thanks to MadScientist for pointing out that this method (using $(cat input)) is unreliable.
A file function was added to make in 4.0; as of 4.2 it can also read from files.
The newline is a little bit hacky but this can be accomplished with make alone:
define n


endef

copyHeader:
    $(file > input,DATA-Line-1$n$(file < input))
After messing around with this for a while (the accepted answer does not work with the sed that comes with OSX 10.12, and make was too old for the file-manipulation options), I settled on the following (ugly) solution:
echo "DATA-Line-1" > line.tmp
mv input input.tmp
cat line.tmp input.tmp > input
rm input.tmp line.tmp
This works for me:
$ cat test
this
is
a
test
$ sed -i "1i new first line" test
$ cat test
new first line
this
is
a
test
I would like to delete the filenames from a text file so that the output contains only the folders.
Example:
Creature\FrostwolfPup\FrostWolfPup_Shadow.m2
Creature\FrostwolfPup\FrostWolfPup_Fire.m2
To
Creature\FrostwolfPup\
To match only the filenames I use [^\\]*$
Now I put it together with sed, where /d should delete the match:
D:\filetype\core\sed.exe -n -e "/^[^\\]*$/d" D:\filetype\listfile\archive\tmp\all.txt > D:\filetype\module\model_bruteforce\tmp\folders_tmp1.txt
But instead of a text file with my folders I got only an empty text file as output, so something must be wrong.
Tested on Linux, not Cygwin:
sed -r 's/[^\\]*$//g' /path/to/original/file > /path/to/new/file
Try:
sed.exe -e "s/[^\\]*$//" path/to/folders.txt
The command s/[^\\]*$// asks sed to remove everything after the last \ on a line to the end of the line.
Caveat: since I don't have a windows machine handy for testing, I am unsure if the backslashes need to be doubled as shown above.
Discussion
-n tells sed not to print anything unless we explicitly ask it to. The following command never asks sed to print:
sed.exe -n -e "/^[^\\]*$/d"
Consequently, it produces no output.
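The substitution can be checked against the sample lines from the question. This sketch assumes GNU sed on Linux; a quoted here-doc keeps the backslashes literal:

```shell
# Strip everything after the last backslash on each line, leaving the folder.
cat > all.txt <<'EOF'
Creature\FrostwolfPup\FrostWolfPup_Shadow.m2
Creature\FrostwolfPup\FrostWolfPup_Fire.m2
EOF
sed 's/[^\\]*$//' all.txt
```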
The problem I have is that some big playlists of mine contain some lines that are missing a newline.
What I want to do is parse the file and insert \n before /run/ if there is no new line. I tried:
text=$(< *.m3u)
text=${text//$'/run/}'/$'\n\n\r/run/}'}
printf "%s\n" "$text" > file.m3u
but it doesn't appear to work. I have tried some other approaches but they all fail, so I'm thinking perhaps I am missing something very obvious and basic.
OK line:
/run/.../../.. .../hippho.mp3
Defective line:
/run/.../../.. .../holla.amigo.mp3/run/.../../.. .../dodoh.mp3
In response to the first reply, this
sed -e 's#/run/#\n/run/#g' *.m3u > PLAYLIST
gives me a file with many \n\n/run/. I tried to fix it with
sed -e ':a;N;$!ba;s/\n\n/\n/g;p' PLAYLIST > PLAYLIST1
which removes them, but instead lists all files twice -- why is that?
My fix to remove the second listing of the files:
playlist='PLAYLIST1'
split -a 1 -d -n l/2 $playlist $playlist
cp PLAYLIST10 PLAYLIST
This finally gives me what I want, but there must be prettier ways.
There was:
sed -e 's#\(.\)/run/#\1\n/run/#g' *.m3u
does it all, thanks tripleee.
This uses sed, not bash, and it does add an extra newline at the beginning of the file.
sed -e 's#/run/#\n/run/#g' *.m3u
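The grouped version avoids that leading blank line: \(.\) requires a preceding character, so a /run/ that is already at the start of a line is left alone. A sketch with a made-up playlist (GNU sed, where \n in the replacement becomes a real newline):

```shell
# Insert a newline before /run/ only when something precedes it on the line.
cat > list.m3u <<'EOF'
/run/music/hippho.mp3
/run/music/holla.amigo.mp3/run/music/dodoh.mp3
EOF
sed 's#\(.\)/run/#\1\n/run/#g' list.m3u
```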
In order to simplify my work I usually do this:
for FILE in ./*.txt; do
    ID=`echo ${FILE} | sed 's/^.*\///'`
    bin/Tool ${FILE} > ${ID}_output.txt
done
Hence the process loops over all *.txt files.
Now I have two file groups, and my Tool uses two inputs (-a & -b). Is there any command to run Tool for every FILE_A against every FILE_B and name the output file as a combination of both?
I imagine it should look like something like this:
for FILE_A in ./filesA/*.txt; do
    for FILE_B in ./filesB/*.txt; do
        bin/Tool -a ${FILE_A} -b ${FILE_B} > output.txt
    done
done
So the process would run every *.txt in filesA against every *.txt in filesB.
And there is also the naming issue, which I don't even know where to fit in...
I hope it is clear what I am asking. I've never had to do such a task before, and a command line would be really helpful.
Looking forward!
NEWNAME="${FILE_A##*/}_${FILE_B##*/}_output.txt"
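Putting it together, the nested loop plus the combined-name variable could look like the sketch below. bin/Tool is the asker's own program, so a stub function stands in for it here so the loop can actually be run:

```shell
# Nested loop over two file groups, naming each output after both inputs.
mkdir -p filesA filesB
printf 'a\n' > filesA/one.txt
printf 'b\n' > filesB/two.txt
Tool() { cat "$2" "$4"; }    # stub for "bin/Tool -a fileA -b fileB"

for FILE_A in ./filesA/*.txt; do
    for FILE_B in ./filesB/*.txt; do
        # ${VAR##*/} strips the directory part, leaving just the basename.
        NEWNAME="${FILE_A##*/}_${FILE_B##*/}_output.txt"
        Tool -a "$FILE_A" -b "$FILE_B" > "$NEWNAME"
    done
done
ls ./*_output.txt
```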