How to remove lines from the output of a command in a bash script [duplicate] - bash

This question already has answers here:
Get last line of shell output as a variable
(2 answers)
Ignoring the first line of stderr and keeping stdout intact
(1 answer)
Cronjob - How to output stdout, and ignore stderr
(3 answers)
Closed 4 years ago.
I am trying to run a command that outputs multiple lines, e.g. ldapwhoami, and I'd like to write a bash script that prints only the last line instead of all of the command's output.
I tried the following
#/bin/bash
$(ldapwhoami | sed 1d2d3d)
but it doesn't seem to work, any help would be appreciated.

To print only the final line use tail:
ldapwhoami | tail -n 1
To delete the first three lines with sed, change your command to:
ldapwhoami | sed '1d;2d;3d;'
Note the semicolons and the quotes.
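Equivalently, sed accepts an address range, which is more compact than listing each line number:
ldapwhoami | sed '1,3d'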
This is also possible with awk, whose NR variable holds the current line number, so NR > 3 prints every line after the third:
ldapwhoami | awk 'NR > 3'
The above assumes that all output goes to standard output. In Unix, though, every process is connected to two output streams: standard output (denoted by 1, used for the program's actual output) and standard error (denoted by 2, used for diagnostic and error messages). The reason for this separation is that it is often desirable not to "pollute" the output with diagnostic messages when it is processed by another script.
So for commands that generate output on both streams, if we want to capture both, we redirect the standard error to standard output using 2>&1, like this:
ldapwhoami 2>&1 | tail -n 1
(the same syntax works for awk and sed)
In bash, the above may be written using shorthand form as
ldapwhoami |& tail -n 1
If all you need is the standard output, and you don't care about standard error, you can redirect it to /dev/null
ldapwhoami 2> /dev/null
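Putting the pieces together for the original question, the following keeps only the last line of whatever ldapwhoami writes to standard output and silently discards any error messages:
ldapwhoami 2> /dev/null | tail -n 1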

Unexpected or empty output from tee command [duplicate]

This question already has answers here:
Why does reading and writing to the same file in a pipeline produce unreliable results?
(2 answers)
Closed 3 years ago.
echo "hello" | tee test.txt
cat test.txt
sudo sed -e "s|abc|def|g" test.txt | tee test.txt
cat test.txt
The output of the 2nd command and the last command are different, even though the command is the same.
Question:
The following line in the above script produces output, but why is it not redirected to the output file?
sudo sed -e "s|abc|def|g" test.txt
sudo sed -e "s|abc|def|g" test.txt | tee test.txt
Reading from and writing to test.txt in the same command line is error-prone. sed is trying to read from the file at the same time that tee wants to truncate it and write to it.
You can use sed -i to modify a file in place. There's no need for tee. (There's also no need for sudo. You made the file, no reason to ask for root access to read it.)
sed -e "s|abc|def|g" -i test.txt
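(A portability note: GNU sed accepts a bare -i, but BSD/macOS sed requires an argument to it, so there you would write sed -i '' -e "s|abc|def|g" test.txt.)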
You shouldn't use the same file for both input and output.
tee test.txt truncates the output file when it starts up. If this happens before sed reads the file, sed will see an empty file. Since you're running sed through sudo, it takes longer to start up, so this is very likely.
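If you want to keep the pipeline shape instead of using -i, a safe sketch is to write to a temporary file and rename it only if sed succeeds (test.txt.tmp is just an arbitrary name chosen here):
sed -e "s|abc|def|g" test.txt > test.txt.tmp && mv test.txt.tmp test.txt
The sponge utility from moreutils packages the same idea: sed -e "s|abc|def|g" test.txt | sponge test.txt soaks up all of its input before writing, so the race disappears.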

How can I delete empty lines from my output with grep? [duplicate]

This question already has answers here:
Remove empty lines in a text file via grep
(11 answers)
Closed 4 years ago.
Is there a way to remove empty lines with cat myfile | grep -w #something ?
I'm looking for a simple way to remove empty lines from my output, along the lines of the command presented above.
This really belongs on the codegolfing stackexchange because it's not related to how anyone would ever write a script. However, you can do it like this:
cat myfile | grep -w '.*..*'
It's equivalent to the more canonical grep ., but adds explicit .*s on either side so that it will always match the complete line, thereby satisfying the word boundary conditions imposed by -w
You can pipe your output to awk to easily remove empty lines
cat myfile | grep -w #something | awk NF
EDIT: so... you just want cat myfile | awk NF?
If you have to use grep, you can do grep -v '^[[:blank:]]*$' myfile
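Note the difference between the two patterns: '^$' matches only truly empty lines, while '^[[:blank:]]*$' also matches lines containing nothing but spaces or tabs. A quick sketch (the printf input is an arbitrary four-line sample):
printf 'a\n\n \nb\n' | grep -v '^$'               # keeps the space-only line
printf 'a\n\n \nb\n' | grep -v '^[[:blank:]]*$'   # drops it as well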

How to store the command output in a variable and redirect it to a file on the same line?

a=`cat /etc/redhat-release | awk '{print $2}' > /tmp/a.txt`
The above command does not store the output in the variable and redirect it to the file at the same time.
A command substitution captures stdout of the command contained. When you redirect that output to a file, it's no longer on stdout, so it's no longer captured.
Use tee to create two copies -- one in a file, one on stdout.
a=$(awk '{print $2}' </etc/redhat-release | tee /tmp/a.txt)
Note also:
cat shouldn't be used when it isn't needed: giving awk a direct handle on the input file saves an extra process, allows a direct read from the file rather than from a FIFO, and follows a practice that yields much larger efficiency gains with programs like sort, shuf, tail, or wc -c, which can use more efficient algorithms when reading from a file.
The modern (and standard-compliant, since 1991) syntax for command substitution is $(...). It nests better than the ancient backtick syntax it replaces, and use of backslashes within is less confusing.
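To illustrate the nesting point (the path /tmp/a/b is an arbitrary example): with $(...) the inner substitution needs no escaping, while backticks force you to escape the inner pair:
parent=$(basename "$(dirname /tmp/a/b)")
parent=`basename \`dirname /tmp/a/b\``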

No Output When Using tail -f With Multiple Grep Commands In Mac OS-X [duplicate]

This question already has an answer here:
Read from a endless pipe bash [duplicate]
(1 answer)
Closed 7 years ago.
I'm trying to execute the following command from the Mac OS X terminal:
$ tail -f FILE_PATH | grep "DESIRED_STRING" | grep -v "EXCLUDED_STRING"
Unfortunately I do not get any results in return.
But, when using cat instead of tail -f:
$ cat FILE_PATH | grep "DESIRED_STRING" | grep -v "EXCLUDED_STRING"
I get the expected result. Unfortunately, this workaround is no good for me, as I need to tail the file in realtime.
grep buffers its output when writing to a pipe. Since tail -f never completes, neither does the first grep, and nothing reaches the second grep until the first has accumulated enough output to fill its buffer. With cat, the command eventually completes, allowing both greps to complete and print whatever output they have accumulated (whether or not their buffers were filled).
Adding --line-buffered to the grep commands changes how grep buffers its output, allowing you to see the output as each line is complete.
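Applied to the original command, only the first grep strictly needs the flag (the second writes to the terminal, which is line-buffered anyway), but adding it to both is harmless:
tail -f FILE_PATH | grep --line-buffered "DESIRED_STRING" | grep -v "EXCLUDED_STRING"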

Redirecting lines into different files under a for loop in shell

I want to send certain lines to two different files from a shell script. What is the syntax for this?
Example:
A for loop prints 6 lines, and I want the first two lines to be appended to the 1st file, and the last 4 lines to be appended to the other file.
There is no way to route the lines directly as part of the loop's redirection. One option would be to redirect everything to a file and then copy the desired sections of the log to other files.
for i in {1..6}; do
    echo "$i"
done > log
head -n 2 log >> logfile1 # Appends the first two lines to logfile1
tail -n 4 log >> logfile2 # Appends the last four lines to logfile2
Answer
If you're using BASH you can use tee to send the same input to both head -n2 and tail -n4 at the same time using a combination of process substitution and a pipe:
$ for i in {1..6}; do echo $i; done | tee >(head -n2 >first2.txt) | tail -n4 >last4.txt
$ cat first2.txt
1
2
$ cat last4.txt
3
4
5
6
Explanation
By default tee takes its STDIN and copies it to the file(s) specified as arguments, in addition to its STDOUT. Since process substitution returns a /dev/fd path to a file descriptor (run echo >(true) to see an example), tee is able to write to that path like any other regular file.
Here's what the tee command looks like after substitution:
tee /dev/fd/xx | tail -n4 >last4.txt
Or more visually:
tee ---+---> tail -n4 >last4.txt   (via the pipe)
       |
       +---> /dev/fd/xx            (the >(head -n2 >first2.txt) substitution)
So the output gets copied both to the head process (whose output is redirected to first2.txt) and to tee's STDOUT, which is piped to the tail process.
Note that process substitution is a BASH-ism, so if you're using a different shell or concerned about POSIX compliance it might not be available.
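If you can't rely on bash, one portable sketch (reusing the first2.txt and last4.txt names from above) is to let awk route the lines instead:
for i in 1 2 3 4 5 6; do echo "$i"; done |
    awk 'NR <= 2 { print > "first2.txt"; next } { print > "last4.txt" }'
In awk, > "file" truncates the file on the first write and appends on subsequent writes to the same name, so no cleanup is needed beforehand.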
