bash redirection to files not working [duplicate]

This question already has answers here:
Why doesn't "sort file1 > file1" work?
(7 answers)
Closed 7 years ago.
I have these two files, yolo.txt and bar.txt:
yolo.txt:
a
b
c
bar.txt:
c
I have the following command, which gets me the desired output:
$ cat yolo.txt bar.txt | sort | uniq -u | sponge
a
b
But when I add the redirection (>) statement, the output changes:
$ cat yolo.txt bar.txt | sort | uniq -u | sponge > yolo.txt && cat yolo.txt
c
I expected the output to remain the same, and I am quite confused. Please help me.

The > yolo.txt shell redirect happens before any of the commands run. In particular, the shell opens yolo.txt for writing and truncates it before executing cat yolo.txt bar.txt. So by the time cat opens yolo.txt, yolo.txt is empty. Therefore the c line in bar.txt is unique, so uniq -u passes it through.
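You can see that the redirect is performed by the shell itself with a quick demo (any file name works):
$ echo hello > f
$ > f
$ cat f
$
The bare > f truncates f without running any command at all, which is exactly what happens to yolo.txt before cat gets a chance to read it.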
I guess you wanted to use sponge to avoid this problem, since that's what sponge is for. But you used it incorrectly. This is the correct usage:
cat yolo.txt bar.txt | sort | uniq -u | sponge yolo.txt && cat yolo.txt
Note that I just pass the output filename to sponge as a command-line argument, instead of using a shell redirect.
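If sponge isn't available, a temporary file achieves the same thing (a sketch; the .tmp name is arbitrary, and sort can read both files directly without cat):
sort yolo.txt bar.txt | uniq -u > yolo.tmp && mv yolo.tmp yolo.txt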

Related

Switching between "|" and "&&" symbols in a command based on input

I'm using a bash script to chain together the execution of a series of programs. Each of these programs produces output and has two flags that can be set as -iInputFile.txt and -oOutputFile.txt; if no flag is set, standard input and output are used automatically. Most of the time I simply chain my programs as
./Program1 | ./Program2 | ./Program3
but if I happen to need to save the data to a file, and then also access it from the next file I need to do
./Program1 | ./Program2 -oFile.txt && ./Program3 -iFile.txt
so my question is whether there is a way to provide an input, for example 010, and convert only the connector between programs 2 and 3 from | to && while leaving everything else untouched. Hard-coding every variant would be impossible: with up to 12 programs there are 11 connectors, giving 2^11 = 2048 combinations. It's my first time asking, so if anything is unclear I'll edit the question to provide whatever is needed. Thank you all in advance.
If you are scripting this, you can hard-code tee between the pipeline stages and use bash's default-value parameter expansion to turn the 'write to file' feature on and off:
./Program1 | tee ${outFile1:-/dev/null} | ./Program2 | tee ${outFile2:-/dev/null} | \
./Program3 | tee ${outFile3:-/dev/null}
Note that the last call to tee might be superfluous.
Proof of Concept
$ unset outFile; echo foo | tee ${outFile:-/dev/null} | cat - && cat ./tmp
foo
cat: ./tmp: No such file or directory
$ outFile=./tmp; echo foo | tee ${outFile:-/dev/null} | cat - && cat ./tmp
foo
foo
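To connect this back to the 010 input from the question, a small driver can map each digit of the flag string onto the corresponding outFileN variable before running the hard-coded pipeline (a sketch; the StageN.txt names are made up, and ./Program1 through ./Program3 are the programs from the question):
#!/bin/bash
# The Nth digit of $1 controls stage N: a 1 saves its output to StageN.txt
flags=$1
for (( i=0; i<${#flags}; i++ )); do
    [[ ${flags:i:1} == 1 ]] && declare "outFile$((i+1))=Stage$((i+1)).txt"
done
./Program1 | tee ${outFile1:-/dev/null} | ./Program2 | tee ${outFile2:-/dev/null} | \
./Program3 | tee ${outFile3:-/dev/null}
Invoked as ./pipeline.sh 010, only stage 2's output is saved to a file; every stage still feeds the next through the pipe.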

Overriding file in bash in one pipe

The result of cat file1 | cat > file1 is an empty file1.
The result of head file1 | cat > file1 is an empty file1 as well.
Of course, a real pipeline would have more steps (tools and operations) in between.
Is there any way to save the transformed content back to the file it was read from?
The real case is source .env && cat file1 | envsubst > file1
Thank you, but the correct answer is:
cat file1 | envsubst | sponge file1
This (from Veda's answer) is the wrong usage of sponge:
cat file1 | envsubst | sponge | cat > file1
The only option is something like (head file1 | cat > tmp_file) && mv tmp_file file1. In other words, you have to write to a temporary file and then replace the original file with the temporary.
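Applied to the envsubst case from the question, that looks like (a sketch, assuming .env exports the variables envsubst needs and that file1 exists):
source .env && envsubst < file1 > file1.tmp && mv file1.tmp file1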
Fundamentally bash has to open all the files and pipes before it starts execing each of the stages in the pipeline. Once it has opened your output file for overwrite it is, er, overwritten.
sponge handles that for you: it soaks up all of its input before opening and writing the output file. You need to install it separately though; it is not usually included in the OS.
cat test | sponge test
will leave you with the contents that were originally in file "test". Note that the output filename must be passed to sponge as an argument; piping sponge into a redirect, as in cat test | sponge | cat > test, reintroduces the truncation problem described above.
To get sponge on Ubuntu or Red Hat, you need to install the moreutils package.
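For example (the package name is the same on both families):
sudo apt-get install moreutils    # Debian/Ubuntu
sudo yum install moreutils        # Red Hat/CentOS (dnf on newer releases)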

tail -f | sed to file doesn't work [duplicate]

This question already has answers here:
write to a file after piping output from tail -f through to grep
(4 answers)
Closed 5 years ago.
I am having an issue with filtering a log file that is actively being written to, and writing the filtered output to another file (ideally using tee, so I can watch it working as it goes).
I can get it to output on stdout, but not write to a file, either using tee or >>.
I can also get it to write to the file, but only if I drop the -f option from tail, which I need.
So, here is an overview of the commands:
1. tail -f without writing to file: tail -f test.log | sed 's/a/b/' (works)
2. tail writing to file: tail test.log | sed 's/a/b/' | tee -a a.txt (works)
3. tail -f writing to file: tail -f test.log | sed 's/a/b/' | tee -a a.txt (produces no output on stdout and writes nothing to the file)
I would like 3. to work.
It's sed's output buffering. Use sed -u. From man sed:
-u, --unbuffered
load minimal amounts of data from the input files and flush the
output buffers more often
And here's a test for it (creates files foo and bar):
$ for i in {1..3} ; do echo a $i ; sleep 1; done >> foo &
[1] 12218
$ tail -f foo | sed -u 's/a/b/' | tee -a bar
b 1
b 2
b 3
Be quick or increase the {1..3} to suit your skillz.
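If your sed doesn't support -u, GNU coreutils' stdbuf can force line buffering instead (the same pipeline with a different buffering fix):
tail -f test.log | stdbuf -oL sed 's/a/b/' | tee -a a.txt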

Need help writing this specific bash script

Construct the pipe to execute the following job:
"Output of ls should be displayed on the screen and from this output the lines containing the word 'poem' should be counted and the count should be stored in a file."
If bash is allowed, use a process substitution as the receiver for tee:
ls | tee >( grep -c poem > number.of.poetry.files)
Your attempt was close:
ls | tee /dev/tty | grep poem | wc -l >number_of_poems
The tee /dev/tty copies all ls output to the terminal. This satisfies the requirement that "Output of ls should be displayed on the screen" while also sending ls's output on to grep's stdin.
This can be further simplified:
ls | tee /dev/tty | grep -c poem >number_of_poems
Note that neither of these solutions requires bash. Both will work with lesser shells and, in particular, with dash, which is the default /bin/sh on Debian-like systems.
This sounds like a homework assignment :)
#!/bin/bash
ls
ls -l | grep -c poem >> file.txt
The first ls displays the output on the screen.
The next line uses a pipe to count the files/directories whose names contain "poem".
If there were 5 files with poem in their names, file.txt would read 5. If file.txt already exists, the new count will be appended to the end. If you want to overwrite the file each time, change the line to read ls -l | grep -c poem > file.txt

How to get rid of duplicates? [duplicate]

This question already has answers here:
Remove duplicate entries in a Bash script [duplicate]
(4 answers)
Closed 8 years ago.
Hi, I am writing a bash script which reads the contents of the files in the current directory whose names contain the word "contact", sorts all the data in those files in alphabetical order, and writes it to a file called "out.txt". I was wondering whether there is any way to get rid of duplicate content. Any help would be appreciated.
The code I have written so far.
#!/bin/bash
cat $(ls | grep contact) > out.txt
sort out.txt -o out.txt
sort has option -u (long option: --unique) to output only unique lines:
sort -u out.txt -o out.txt
EDIT (thanks to tripleee): Your script, as written, suffers from the usual problems of parsing ls output.
This is a better substitute for what you are trying to do:
sort -u *contact* >out.txt
You can also use the uniq command (easier to remember than sort's flags):
#!/bin/bash
cat $(ls | grep contact) | sort | uniq > out.txt
or use the -u flag for sort, like this:
#!/bin/bash
cat $(ls | grep contact) | sort -u > out.txt
uniq may do what you need. It copies lines from input to output, omitting a line if it is identical to the line it just output.
Take a look at the "uniq" command, and pipe your output through it after sorting.
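Keep in mind that uniq only collapses adjacent duplicates, which is why the sort has to come first. A quick illustration:
$ printf 'b\na\nb\n' | uniq
b
a
b
$ printf 'b\na\nb\n' | sort | uniq
a
b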
