Is it possible to parse standard error without writing out to a file first? [duplicate] - bash

This question already has answers here:
How can I pipe stderr, and not stdout?
(11 answers)
Closed 9 years ago.
In a simple case, let's say we have some output on standard error:
$ ls /fake/file
ls: /fake/file: No such file or directory
QUESTION: Is it possible to parse out "/fake/file" from the standard error without having to write it out to a file first? For example:
$ ls /fake/file 2> tmp.file; sed 's/.* \(.*\):.*/\1/' tmp.file
/fake/file

Something like this?
ls /fake/file 2>&1 | awk -F: '{print $2}'
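Note that 2>&1 merges stderr into the same pipe as stdout. If you want to parse only stderr (the subject of the linked duplicate), one variation is to point stdout elsewhere first, for example:
ls /fake/file 2>&1 >/dev/null | awk -F: '{print $2}'
The order matters: 2>&1 duplicates stderr onto the current stdout (the pipe) before >/dev/null re-points stdout, so only the error text reaches awk.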

Either of the following should also fetch you the filename (field positions depend on your ls error wording; these two assume a GNU-style message such as "ls: cannot access /fake/file: No such file or directory"):
ls /fake/file 2>&1 | awk -F: '{print $2}' | awk '{print $3}'
or
ls /fake/file 2>&1 | awk '{print $4}' | awk -F: '{print $1}'
or
ls /fake/file 2>&1 | sed 's/.* \(.*\):.*/\1/'
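If the goal is to capture the parsed path for later use, any of these can be wrapped in command substitution (the variable name badpath here is just illustrative):
badpath=$(ls /fake/file 2>&1 | sed 's/.* \(.*\):.*/\1/')
echo "$badpath"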

Related

How to store output as variable [duplicate]

This question already has answers here:
How do I set a variable to the output of a command in Bash?
(15 answers)
Closed 3 years ago.
I'm looking to store the hash of my most recently downloaded file in my downloads folder as a variable.
So far, this is what I have:
md5sum $(ls -t | head -n1) | awk '{print $1}'
Output:
user@ci-lux-soryan:~/Downloads$ md5sum $(ls -t | head -n1) | awk '{print $1}'
c1924742187128cc9cb2ec04ecbd1ca6
I have tried storing it as a variable like so, but it doesn't work:
VTHash=$(md5sum $(ls -t | head -n1) | awk '{print $1}')
Any ideas where I am going wrong?
As @Cyrus outlined, parsing ls has its own pitfalls, so it is better to avoid it altogether rather than invite unexpected corner cases. The following should meet the requirement:
VTHash="$(find -type f -mtime 0 | tail -n 1 | xargs md5sum | awk '{ print $1 }')"

Multiple outputs in a single-line shell command with pipes only

For example:
ls -l -d */ | wc -l | awk '{print $1}' | tee /dev/tty | ls -l
This shell command prints the results of both wc and ls -l on a single line, but it uses tee.
Is it possible to use one shell command line to achieve multiple outputs without using "&&", "||", ">", ">>", "<", ";", "&", tee, or a temp file?
When you want the output of date and ls -rtl | head -1 on one line, you can use
echo "$(date): $(ls -rtl | head -1)"
Yes, you can write to multiple files with awk, which is not on the list of things you appear not to like:
echo hi | awk '{print > "a.txt"; print > "b.txt"}'
Then check a.txt and b.txt.
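The same idea covers the original tee /dev/tty use case, since awk can write straight to /dev/tty while still passing the line down the pipe (a sketch; assumes the command runs on a terminal):
ls -l -d */ | wc -l | awk '{print > "/dev/tty"; print}' | ls -l
This reproduces the behaviour of the tee version with none of the forbidden operators.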

How to feed xargs to a piped grep for a piped cat command

Command 1:
(Generates a grep pattern with unique PIDs for a particular date/time, read from runtime.log)
cat runtime.log | grep -e '2018/09/13 14:50' | awk -F'[ ]' '{print $4}' | awk -F'PID=' '{print $2}' | sort -u | xargs -I % echo '2018/09/13 14:50.*PID='%
The output of the above command is (it's a custom grep pattern):
2018/09/13 14:50.*PID=13109
2018/09/13 14:50.*PID=14575
2018/09/13 14:50.*PID=15741
Command 2:
(Reads runtime.log and fetches the appropriate lines based on the grep pattern; ideally the grep pattern should come from Command 1)
cat runtime.log | grep '2018/09/13 14:50.*PID=13109'
The question is: how to combine Command 1 and Command 2?
The combined version below doesn't give the expected output (the produced output had lines with dates other than '2018/09/13 14:50'):
cat runtime.log | grep -e '2018/09/13 14:50' | awk -F'[ ]' '{print $4}' | awk -F'PID=' '{print $2}' | sort -u | xargs -I % echo '2018/09/13 14:50.*PID='% | cat runtime.log xargs grep
grep has an option -f. From man grep:
-f FILE, --file=FILE
Obtain patterns from FILE, one per line. The empty file contains zero patterns, and therefore matches nothing. (-f is specified by POSIX.)
So you could use
cat runtime.log | grep -e '2018/09/13 14:50' | awk -F'[ ]' '{print $4}' | awk -F'PID=' '{print $2}' | sort -u | xargs -I % echo '2018/09/13 14:50.*PID='% > a_temp_file
cat runtime.log | grep -f a_temp_file
The shell also has a syntax that avoids having to create the temporary file: <(). From man bash:
Process Substitution
Process substitution is supported on systems that support named pipes
(FIFOs) or the /dev/fd method of naming open files. It takes the form
of <(list) or >(list). The process list is run with its input or
output connected to a FIFO or some file in /dev/fd. The name of this
file is passed as an argument to the current command as the result of
the expansion. If the >(list) form is used, writing to the file will
provide input for list. If the <(list) form is used, the file passed
as an argument should be read to obtain the output of list.
So you can combine it to:
cat runtime.log | grep -f <(cat runtime.log | grep -e '2018/09/13 14:50' | awk -F'[ ]' '{print $4}' | awk -F'PID=' '{print $2}' | sort -u | xargs -I % echo '2018/09/13 14:50.*PID='%)
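As an aside, the cat calls are unnecessary, since grep can read the files itself; a slightly tidier equivalent is:
grep -f <(grep -e '2018/09/13 14:50' runtime.log | awk -F'[ ]' '{print $4}' | awk -F'PID=' '{print $2}' | sort -u | xargs -I % echo '2018/09/13 14:50.*PID='%) runtime.log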

Get first argument of wc -l myFile.txt [duplicate]

This question already has answers here:
How to get "wc -l" to print just the number of lines without file name?
(10 answers)
Closed 7 years ago.
I'm counting the number of lines in a big file using
wc -l myFile.txt
Result is
110 myFile.txt
But I want only the number
110
How can I do that?
(I want the number of lines as an input argument in a bash script)
There are lots of ways to do this. Here are two:
wc -l myFile.txt | cut -f1 -d' '
wc -l < myFile.txt
Cut is an old Unix tool to
print selected parts of lines from each FILE to standard output.
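Since the goal is to use the count inside a bash script, either form can be captured with command substitution (note that on some systems wc pads the number with leading spaces; arithmetic expansion normalizes it):
lines=$(wc -l < myFile.txt)
echo "myFile.txt has $((lines)) lines"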
You can use cat and pipe it to wc -l:
cat myFile.txt | wc -l
Or, if you insist that wc -l be the first command, you can use awk:
wc -l myFile.txt | awk '{print $1}'
You can try
wc -l file | awk '{print $1}'

Is there a better way to write awk with pipes?

I'm on Mac OS X with a terminal and zsh. I use this command:
awk '/download/ {print $2}' | awk 'NR==1' | awk -F"//" '{print $2}'
Is there a better way to write all the awk with only one awk?
This should be what you're after:
$ awk '/download/{split($2,a,"//");print a[2];exit}' file
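For instance, given a hypothetical file with lines like these, it prints the part after // for the first matching line and exits (the exit replaces the NR==1 filter):
$ cat file
download https://example.com/pkg.tar.gz
download https://other.host/second
$ awk '/download/{split($2,a,"//");print a[2];exit}' file
example.com/pkg.tar.gz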
