Specifically, I'm using a combination of >> and tee in a custom alias to store new Homebrew updates in a text file, as well as show the output on screen:
alias bu="echo `date "+%Y-%m-%d at %H:%M"` \
>> ~/Documents/Homebrew\ Updates.txt && \
brew update | tee -a ~/Documents/Homebrew\ Updates.txt"
Question: What if I wish to prepend this output to my text file, i.e. place it at the beginning of the file rather than appending it to the end?
Edit1: As someone suggested in the answers below, using temp files might be a good approach, which at least partially helped me:
targetLog="~/Documents/Homebrew\ Updates.txt"
alias bu="(brew update | cat - $targetLog \
> /tmp/out1 && mv /tmp/out1 $targetLog \
&& echo `date "+%Y-%m-%d at %H:%M":%S` | \
cat - $targetLog > /tmp/out2 \
&& mv /tmp/out2 $targetLog)"
But the problem is the output to STDOUT (previously made possible by tee), which I'm not sure can be incorporated in this tempfile approach …?
sed will happily do that for you, using -i to edit in place, eg.
sed -i -e "1i `date "+%Y-%m-%d at %H:%M"`" some_file
Alternatively, this works by creating an output file:
Let's say we have the initial contents in file.txt:
echo "first line" > file.txt
echo "second line" >> file.txt
So file.txt is our 'bottom' text file. Now prepend into a new 'output' file:
echo "add new first line" | cat - file.txt > output.txt # <--- Just this command
Now output.txt has the contents the way we want. If you need your old name back:
mv output.txt file.txt
cat file.txt
The only simple and safe way to modify an input file using bash tools is to use a temp file; e.g. sed -i uses a temp file behind the scenes (though to be robust sed needs more than that).
Some of the methods used here have a subtle "can break things" trap: if, rather than running your command on the real data file, you run it on a symbolic link to the file you intend to modify, then unless this is catered for correctly you can break the link. The link gets converted into a real file that receives the modifications, while the original real file is left without the intended modifications and without its symlink, and no error exit code warns you.
To avoid this with sed, you need to use the --follow-symlinks option (GNU sed).
For other methods, just be aware that whatever you do needs to follow symlinks when you act on such a link. Creating a temp file and then moving it over "file" works only if "file" is not a symlink.
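For example, one way to sidestep the symlink trap is to build the new content in a temp file but write it back through a redirect rather than mv, since the redirect follows the link and keeps the original inode (file and the date line here are just placeholders):
# sketch: prepend via a temp file, writing back with a redirect so a
# symlinked "file" keeps its inode and the link stays intact
tmp=$(mktemp) &&
{ date "+%Y-%m-%d at %H:%M"; cat file; } > "$tmp" &&
cat "$tmp" > file &&
rm -f "$tmp"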
One safe way is to use sponge from the moreutils package. Quoting its man page:
Unlike a shell redirect, sponge soaks up all its input before opening the output file. This allows for constructing pipelines that read from and write to the same file.
sponge is a good general way to handle this type of situation. Here is an example using it:
hbu=~/'Documents/Homebrew Updates.txt'
{ date "+%Y-%m-%d at %H:%M"; cat "$hbu"; } | sponge "$hbu"
Simplest way IMO would be to use echo and cat:
echo "Prepend" | cat - inputfile > outputfile
Or for your example basically replace the tee -a ~/Documents/Homebrew\ Updates.txt with cat - ~/Documents/Homebrew\ Updates.txt > ~/Documents/Homebrew\ Updates.txt
Edit: As stated by hasturkun this won't work, try:
echo "Prepend" | cat - file | tee file
But this isn't the most efficient way of doing it any more...
Similar to the accepted answer; however, if you are coming here because you want to prepend to the existing first line, rather than prepend an entirely new line, then use this command:
sed -i "1 s/^/string_replacement/" some_file
The -i flag does the replacement in place (rather than writing to a new file).
The 1 restricts the replacement to line 1.
Finally, the s command is used, which has the syntax s/find/replacement/flags; in our case we don't need any flags.
The ^ is a caret, which matches the very start of the line, so the replacement text ends up prepended to the existing first line.
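For example, with GNU sed (on macOS/BSD sed, write sed -i '' instead of sed -i):
$ printf 'first line\nsecond line\n' > some_file
$ sed -i "1 s/^/string_replacement /" some_file
$ cat some_file
string_replacement first line
second line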
Try this http://www.unix.com/shell-programming-scripting/42200-add-text-beginning-file.html
There is no direct operator or command for this, AFAIK. You can use echo, cat, and mv to get the effect:
{ date; brew update | tee /dev/tty; cat updates.txt; } > updates.txt.new
mv updates.txt.new updates.txt
I've no idea why you want to do this. It's pretty standard that logs like this have later entries appearing, well, later in the file.
I keep text files with definitions in a folder. I like to convert them to spoken word so I can listen to them. I already do this manually by running a few commands to insert some pre-processing codes into the text files and then convert the text to spoken word like so:
sed 's/\..*$/[[slnc 2000]]/' input.txt    # inserts a control code after the first period
sed 's/$/[[slnc 2000]]/' input.txt        # inserts a control code at the end of each line
cat input.txt | say -v Alex -o input.aiff
Instead of having to retype these each time, I would like to create a Bash script that pipes the output of these commands to the final product. I want to call the script with the script name, followed by an input file argument for the text file. I want to preserve the original text file so that if I open it again, none of the control codes are actually inserted, as the only purpose of the control codes is to insert pauses in the audio file.
I've tried writing
#!/bin/bash
FILE=$1
sed 's/$/ [[slnc 2000]]/' FILE -o FILE
But I get hung up immediately as it says sed: -o: No such file or directory. Can anyone help out?
If you just want to use foo.txt to generate foo.aiff with control characters, you can do:
#!/bin/sh
for file; do
test "${file%.txt}" = "${file}" && continue
sed -e 's/\..*$/[[slnc 2000]]/' "$file" |
sed -e 's/$/[[slnc 2000]]/' |
say -v Alex -o "${file%.txt}".aiff
done
Call the script with your .txt files as arguments (eg, ./myscript *.txt) and it will generate the .aiff files. Be warned, if say overwrites files, then this will as well. You don't really need two sed invocations, and the sed that you're calling can be cleaned up, but I don't want to distract from the core issue here, so I'm leaving that as you have it.
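For reference, the two sed passes could be folded into a single invocation, applying the same substitutions in the same order (untested with say itself):
# equivalent single sed call, sketched under the same assumptions as the script above
sed -e 's/\..*$/[[slnc 2000]]/' -e 's/$/[[slnc 2000]]/' "$file" |
  say -v Alex -o "${file%.txt}".aiff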
This will:
a} Make a list of your text files to process in the current directory, with find.
b} Apply your sed commands to each text file in the list, but only for the current use, allowing you to preserve them intact.
c} Call "say" with the edited files.
I don't have say, so I can't test that or the control codes; but as long as you have Ed, the loop works. I've used it many times. I learned it as a result of exposure to FORTH, which is a language that still permits unterminated loops. I used to have problems with remembering to invoke next at the end of the script in order to start it, but I got over that by defining my words (functions) first, in FORTH style, and then always placing my single-use commands at the end.
#!/bin/sh
# next: if the stack file still has entries, process another one; otherwise clean up
next() {
    [ -s stack ] && main
    end
}
# main: read the top line of the stack (a filename), run it through the sed filters,
# speak the result, pop the stack, then recurse
main() {
    line=$(ed -s stack < edprint+.txt)
    infile=$(cat "${line}" | sed 's/\..*$/[[slnc 2000]]/' | sed 's/$/[[slnc 2000]]/')
    say "${infile}" -v Alex -o input.aiff
    ed -s stack < edpop+.txt
    next
}
# end: remove the work files and exit
end() {
    rm -v ./stack
    rm -v ./edprint+.txt
    rm -v ./edpop+.txt
    exit 0
}
# build the stack: one .txt filename per line
find *.txt -type f > stack
cat >> edprint+.txt << EOF
1
q
EOF
cat >> edpop+.txt << EOF
1d
wq
EOF
next
I need to create a file that lists all the files in a folder into a text file, along with a comma and the number 15 after. For example
My folder has video.mp4, video2.mp4, picture1.jpg, picture2.jpg, picture3.png
I need the text file to read as follows:
video.mp4,15
video2.mp4,15
picture1.jpg,15
picture2.jpg,15
picture3.png,15
No spaces, just filename.ext,15 on each line. I am using a Raspberry Pi. I am aware that the command ls > filename.txt would put all the file names into a file, but how would I get a ,15 after every line?
Thanks
bash one-liner:
for f in *; do echo "$f,15" >> filename.txt; done
To avoid opening the output file on each iteration you may redirect the entire output with > filename.txt:
for f in *; do echo "$f,15"; done > filename.txt
$ printf '%s,15\n' *
picture1.jpg,15
picture2.jpg,15
picture3.png,15
video.mp4,15
video2.mp4,15
This will work if those are the only files in the directory. The format specifier %s,15\n will be applied to each of printf's arguments (the names in the current directory) and they will be outputted with ,15 appended (and a newline).
If there are other files in the directory, the following would work too; printf just formats whatever arguments you give it, regardless of whether files with those names exist or not:
$ printf '%s,15\n' video.mp4 video2.mp4 picture1.jpg picture2.jpg "whatever this is"
video.mp4,15
video2.mp4,15
picture1.jpg,15
picture2.jpg,15
whatever this is,15
Or, on all MP4, PNG and JPEG files:
$ printf '%s,15\n' *.mp4 *.jpg *.png
video.mp4,15
video2.mp4,15
picture1.jpg,15
picture2.jpg,15
picture3.png,15
Then redirect this to a file with printf ...as above... >output.txt.
If you're using Bash, then this will not make use of any external utility, as printf is built into the shell.
You need to do something like this:
#!/bin/bash
for i in $(ls folder_name); do
    echo "$i,15" >> filename.txt
done
It's possible to do this in one line; however, if you want to create a script, consider code readability in the long run.
Edit 1: better solution
As #CristianRamon-Cortes suggested in the comments below, you should not rely on the output of ls because of the problems explained in this discussion: why not parse ls. As such, here's how you should write the script instead:
#!/bin/bash
cd folder_name
for i in *; do
    echo "$i,15" >> filename.txt
done
You can skip the part cd folder_name if you are already in the folder.
Edit 2: Enhanced solution:
As suggested by #kusalananda, you'd better do the redirection after done to avoid opening the file in each iteration of the for loop, so the script will look like this:
#!/bin/bash
cd folder_name
for i in *; do
    echo "$i,15"
done > filename.txt
Just one command line, using two msr commands to recursively (-r) search for specific files:
msr -rp your-dir1,dir2,dirN -l -f "\.(mp4|jpg|png)$" -PAC | msr -t .+ -o '$0,15' -PIC > save-file.txt
If you want to sort by time, add --wt to the first command, like: msr --wt -l -rp your-dirs
Sort by size? Add --sz, but if you use both --sz and --wt, only the prior one takes effect.
If you want to exclude some directories, add something like: --nd "^(test|garbage)$"
To remove trailing \r\n in save-file.txt: msr -p save-file.txt -S -t "\s+$" -o "" -R
See msr.exe / msr.gcc48 etc. in the tools directory of my open-source project https://github.com/qualiu/msr.
A solution without a loop:
ls | xargs -i echo {},15 > filename.txt
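Note that -i is a deprecated GNU xargs spelling; the POSIX form -I should behave the same here, and the result is still subject to the parsing-ls caveats discussed above:
ls | xargs -I {} echo {},15 > filename.txt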
I am trying to prepend text to a file using a Makefile. The following bash command works in the terminal:
echo -e "DATA-Line-1\n$(cat input)" > input
But when I put the above command in a Makefile, it does not work:
copyHeader:
#echo -e "DO NOT EDIT THIS FILE \n$(cat input)" > input
I guess $(cat input) does not work as expected in the Makefile.
I'd recommend sed for prepending a line of text to a file. The i command is kind of a pain; some clever use of the hold space does the same thing in a more complicated but less troublesome way:
copyHeader:
sed -i "" '1{h;s/.*/NEW FIRST LINE/;G;}' input
But if you want to do it your way, I think an extra '$' will do the trick:
copyHeader:
#echo -e "DO NOT EDIT THIS FILE \n$$(cat input)" > input
EDIT: Thanks to MadScientist for pointing out that this method (using $(cat input)) is unreliable.
A file function was added in GNU make 4.0, and as of 4.2 it can also read from files.
The newline is a little bit hacky but this can be accomplished with make alone:
# n expands to a single newline (the two blank lines between define and endef matter)
define n


endef
copyHeader:
$(file > input,DATA-Line-1$n$(file < input))
After messing around with this for a while (the accepted answer does not work with the sed that comes with OSX 10.12, and my make was too old for the file function), I settled on the following (ugly) solution:
echo "DATA-Line-1" > line.tmp
mv input input.tmp
cat line.tmp input.tmp > input
rm input.tmp line.tmp
This works for me:
$ cat test
this
is
a
test
$ sed -i "1i new first line" test
$ cat test
new first line
this
is
a
test
I am looking for a bash one-liner that duplicates stdin to stdout without interleaving. The only solution I have found so far is to use tee, but that produces interleaved output. Here is what I mean by this:
If e.g. a file f reads
a
b
I would like to execute
cat f | HERE_BE_COMMAND
to obtain
a
b
a
b
If I use tee - as the command, the output typically looks something like
a
a
b
b
Any suggestions for a clean solution?
Clarification
The cat f command is just an example of where the input can come from. In reality, it is a command that can (should) only be executed once. I also want to refrain from using temporary files, as the processed data is sort of sensitive and temporary files are always error-prone when the executed command gets interrupted. Furthermore, I am not interested in a solution that involves additional scripts (as stated above, it should be a one-liner) or preparatory commands that need to be executed prior to the actual duplication command.
Solution 1:
<command_which_produces_output> | { a="$(</dev/stdin)"; echo "$a"; echo "$a"; }
In this way, you're saving the content of standard input in the variable a (choose a better name, please), and then echoing it twice.
Notice $(</dev/stdin) is a similar but more efficient way to do $(cat /dev/stdin).
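A quick check of Solution 1, feeding it two lines:
$ printf 'a\nb\n' | { a="$(</dev/stdin)"; echo "$a"; echo "$a"; }
a
b
a
b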
Solution 2:
Use tee in the following way:
<command_which_produces_output> | tee >(echo "$(</dev/stdin)")
Here, you're first writing to standard output (that's what tee does), and also writing to a FIFO file created by process substitution:
>(echo "$(</dev/stdin)")
See for example the file it creates in my system:
$ echo >(echo "$(</dev/stdin)")
/dev/fd/63
Now, the echo "$(</dev/stdin)" part is just the way I found to read the entire input before printing it. It echoes the content read from the process substitution's standard input, but only once all the input has been read (unlike cat, which prints line by line).
Store the input in a temp file so it can be shown a second time:
cat f | tee /tmp/showlater
cat /tmp/showlater
rm /tmp/showlater
Update:
As noted in the comments (@j.a.), the solution above will need to be adjusted to the OP's real needs. Calling it will be easier from a function, and you should decide what to do about errors in your initial commands and in the tee/cat/rm.
I recommend tee /dev/stdout.
cat f | tee /dev/stdout
One possible solution I found is the following awk command:
awk '{d[NR] = $0} END {for (i=1;i<=NR;i++) print d[i]; for (i=1;i<=NR;i++) print d[i]}'
However, I feel there must be a more "canonical" way of doing this.
A simple bash script?
But this will store all of stdin in a variable; why not store the output in a file and read the file both times if you need to?
full=""
while read line
do
echo "$line"
full="$full$line\n"
done
printf $full
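For instance, if the loop above is saved as a script (duplicate.sh is just a placeholder name here):
$ printf 'a\nb\n' | bash duplicate.sh
a
b
a
b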
The best way would be to store the output in a file and show it later on. Using tee has the advantage of showing the output as it comes:
if tmpfile=$(mktemp); then
commands | tee "$tmpfile"
cat "$tmpfile"
rm "$tmpfile"
else
echo "Error creating temporary file" >&2
exit 1
fi
If the amount of output is limited, you can do this:
output=$(commands); echo "$output"; echo "$output"
I have a template file I want to copy and then edit from a script, inserting content at specific template points. For example, my template file might be something like,
...
rm -rf SomeDirectory
make install
#{INSERT-CONTENT-HERE}
do-something-else
...
In another script, I want to add content at "#{INSERT-CONTENT-HERE}" within a loop, i.e.
for i in c; do
# Write content to the template file copy at the correct point.
done
I think sed is the right tool, but I'm not familiar enough to know the syntax, and the man page isn't helping.
An example:
echo "Line #{INSERT-CONTENT-HERE}" | sed 's/#{INSERT-CONTENT-HERE}/---/'
To modify a file:
sed -i 's/#{INSERT-CONTENT-HERE}/---#{INSERT-CONTENT-HERE}/' filename
where -i means in-place editing, so be warned.
if you do:
sed -i.bak 's/#{INSERT-CONTENT-HERE}/---/' filename
it should back up the original as filename.bak.
Also, to make multiple substitutions on each line, use the g flag:
sed -i.bak 's/#{INSERT-CONTENT-HERE}/---/g' filename
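For instance, a quick end-to-end check of that substitution on a scratch copy of the template (copy_of_template is a hypothetical file name):
$ printf 'make install\n#{INSERT-CONTENT-HERE}\ndo-something-else\n' > copy_of_template
$ sed -i.bak 's/#{INSERT-CONTENT-HERE}/echo inserted step/' copy_of_template
$ cat copy_of_template
make install
echo inserted step
do-something-else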
You can collect the output of all the commands in a temporary file and then insert the contents of that entire file into the template file:
TEMPFILE=`mktemp` && (
    for i in c; do
        echo "SomeTextBasedOn $i" >> "$TEMPFILE"
    done
    sed -i '/{INSERT-CONTENT-HERE}/r '"$TEMPFILE" targetfile
    rm "$TEMPFILE"
)