redirect to /dev/null: No such file or directory - bash

I would like to know if somebody could help me with this error:
wc: Files/Unused-CMA_host.txt: No such file or directory
The file doesn't exist, so I want to redirect the output to /dev/null.
I tried with > /dev/null 2>&1, which works in other cases, but not here:
wc -l Files/Unused-CMA_host.txt | awk '{print $1}' > /dev/null 2>&1
Does somebody know why?
Thanks.

Redirections apply to individual components of a pipeline, not to the pipeline as a whole. In your example, you only redirect awk's standard output. Redirecting the standard error and standard output of the entire pipeline would require a command group such as
{ wc -l Files/Unused-CMA_host.txt | awk '{print $1}' ; } > /dev/null 2>&1
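A subshell behaves the same way here, at the cost of an extra process (an equivalent sketch):
( wc -l Files/Unused-CMA_host.txt | awk '{print $1}' ) > /dev/null 2>&1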
However, if the file doesn't exist, there won't be any standard output. Perhaps you want to redirect standard error to suppress the error message? Then you can simply use
wc -l Files/Unused-CMA_host.txt 2> /dev/null | awk '{print $1}'
In the case of a non-existent file, awk will simply read from an empty stream and do nothing.
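If the underlying goal is a line count that falls back to 0 when the file is missing (an assumption about your intent), testing for the file first avoids the error entirely; a minimal sketch (wc -l < file prints the bare number, so awk isn't needed):
if [ -f Files/Unused-CMA_host.txt ]; then
    wc -l < Files/Unused-CMA_host.txt
else
    echo 0
fi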

Related

shell: send grep output to stderr and leave stdout intact

I have a program that outputs to stdout (actually it outputs to stderr, but I can easily redirect that to stdout with 2>&1 or the like).
I would like to run grep on the output of the program and redirect all matches to stderr while leaving the unmatched lines on stdout (alternatively, I'd be happy with getting all lines, not just the unmatched ones, on stdout),
e.g.
$ myprogram() {
cat <<EOF
one line
a line with an error
another line
EOF
}
$ myprogram | greptostderr error >/dev/null
a line with an error
$ myprogram | greptostderr error 2>/dev/null
one line
another line
$
A trivial solution would be:
myprogram | tee logfile
grep error logfile 1>&2
rm logfile
However, I would rather get the matching lines on stderr when they occur, not when the program exits...
Eventually, I found this, which gave me a hint for a POSIX solution like so:
greptostderr() {
    while read LINE; do
        echo $LINE
        echo $LINE | grep -- "$@" 1>&2
    done
}
For whatever reasons, this does not output anything (probably a buffering problem).
A somewhat ugly solution that seems to work goes like this:
greptostderr() {
    while read LINE; do
        echo $LINE
        echo $LINE | grep -- "$@" | tee /dev/stderr >/dev/null
    done
}
Are there any better ways to implement this?
Ideally I'm looking for a POSIX shell solution, but bash is fine as well...
I would use awk instead of grep, which gives you more flexibility in handling both matched and unmatched lines.
myprogram | awk -v p=error '{ print > ($0 ~ p ? "/dev/stderr" : "/dev/stdout")}'
Every line will be printed; the result of $0 ~ p determines whether the line is printed to standard error or standard output. (You may need to adjust the output file names based on your file system.)
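If you do want a pure POSIX shell function in the spirit of your loop, matching with case avoids spawning a grep per line; a sketch that assumes the pattern is a fixed string rather than a regular expression:
greptostderr() {
    while IFS= read -r line; do
        printf '%s\n' "$line"                       # every line goes to stdout
        case $line in
            *"$1"*) printf '%s\n' "$line" >&2 ;;    # matching lines also go to stderr
        esac
    done
}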

tee bash command with redirection

I have the following file:
file1.txt
geek
for
geeks
I am using the tee command to perform two operations on the output. My question is about the redirection character after the first tee. I want to get the first column of file1.txt and write it to file2.txt. When I run the following command, I don't receive an error, but it does not give me the first column:
wc -l file1.txt |tee awk '{print $1}' - > file2.txt | sed 's/4/6/g' > file3.txt
However, the following command works as expected. What is the > doing here?
wc -l file1.txt |tee >(awk '{print $1}' - > file2.txt) | sed 's/4/6/g' > file3.txt
tee awk '{print $1}' - > file2.txt
does:
executes tee with 3 arguments: awk, '{print $1}' and -.
tee will create a file named awk, another file named '{print $1}' and yet another file named -.
tee will duplicate its input to those 3 files, and its output will be redirected to file2.txt.
Consequently | sed will receive no input, because the output of tee is redirected to the file.
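You can watch this happen in a scratch directory (a quick demonstration of the point above; mktemp is assumed to be available):
cd "$(mktemp -d)"
echo test | tee awk '{print $1}' - > file2.txt
ls -1    # four files now exist: -, awk, '{print $1}', and file2.txt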
tee >(awk '{print $1}' - > file2.txt)
does:
>(...)
runs awk with two arguments, '{print $1}' and -:
  '{print $1}' is interpreted as a script
  - is interpreted as stdin (and could be omitted)
  the output of awk is redirected to file2.txt
Then bash creates a fifo or a /dev/fd/something file whose other side is connected to the stdin of the awk process, and the >(awk ...) is substituted with the filename of that file, most probably /dev/fd/something.
tee >(...)
executes tee with one argument, like tee /dev/fd/something. The /dev/fd/something is connected to the awk process on the other side, so tee writes to /dev/fd/something and awk reads the data from stdin on the other side.
The output of tee is piped on to sed.
What is the > doing here?
The first occurrence is used to introduce a process substitution. The second occurrence is used to redirect output of awk command to a file named file2.txt. The third occurrence is used to redirect the output of sed command to file named file3.txt.
Here, process substitution is used to capture output that would normally go to a file and feed it to a process instead. The Bash syntax for writing to a process is >(command).
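A compact way to see all three redirections at once (an illustrative sketch; the file names are invented):
echo 'one two' | tee >(awk '{print $1}' > first-field.txt) > copy.txt
# copy.txt now holds "one two"; first-field.txt holds "one"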

Multiple output in a single-line shell command with pipe only

For example:
ls -l -d */ | wc -l | awk '{print $1}' | tee /dev/tty | ls -l
This shell command prints the result of wc and ls -l on a single line, but tee is used.
Is it possible to use one shell command line to achieve multiple outputs without using "&&", "||", ">", ">>", "<", ";", "&", tee, or a temp file?
When you want the output of date and ls -rtl | head -1 on one line, you can use
echo "$(date): $(ls -rtl | head -1)"
Yes, you can achieve writing to multiple files with awk, which is not in the list of things you appear not to like:
echo hi | awk '{print > "a.txt"; print > "b.txt"}'
Then check a.txt and b.txt.
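awk can also stand in for the tee /dev/tty step of your original pipeline (a sketch; it assumes /dev/tty is writable, as in the question):
ls -l -d */ | wc -l | awk '{print $1 > "/dev/tty"; print $1}' | ls -l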

Redirection operator working in shell prompt but not in script

I have a file called out1.csv which contains tabular data.
When I run the command in the terminal it works:
cat out1.csv | grep -v ^$ | grep -v ^- > out2.csv
It reads the file, skips blank lines and lines starting with -, and redirects the output to out2.csv.
But when I put the same command in a script it does not work.
I have even tried echoing:
echo " `cat out1.csv | grep -v ^$ | grep -v ^- > out2.csv` " > out2.csv
I have also tried to specify full paths of the files. But no luck.
According to debug mode, the command in the script runs, but the output is not redirected to the file.
What am I missing?
The issue wasn't with the script but with the SQL script that this script was calling before this command. Both commands are actually correct.
You're redirecting twice
The command in backticks writes to file and prints nothing.
You take that nothing and write it to the file, overwriting what was there before.
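You can see the same effect in isolation (a quick demonstration with a throwaway file f):
echo "$(echo hello > f)" > f
cat f    # prints one blank line: the inner echo wrote to f first, then the outer redirection truncated f and the outer echo wrote its now-empty argument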
One way to do it in the script is the same as you do in the console (the better way):
cat out1.csv | grep -v ^$ | grep -v ^- > out2.csv   # no need to echo it or wrap the command in backticks
The other way, as you are trying, is:
echo " `cat out1.csv | grep -v ^$ | grep -v ^- > out2.csv` "   # don't redirect the output again to out2.csv

tail -f, awk and output to file >

I am attempting to filter a log file and am running into issues. What I have so far is the following, which does not work:
tail -f /var/log/squid/accesscustom.log | awk '/username/;/user-name/ {print $1; fflush("")}' | awk '!x[$0]++' > /var/log/squid/accesscustom-filtered.log
The goal is to take a file that contains
ipaddress1 username
ipaddress7
ipaddress2 user-name
ipaddress1 username
ipaddress5
ipaddress3 username
ipaddress4 user-name
and save to accesscustom-filtered.log
ipaddress1
ipaddress2
ipaddress3
ipaddress4
It works without the output to accesscustom-filtered.log, but something in the > isn't working right and the file ends up empty.
Edit: Changed the original example to be correct
Use tee:
tail -f /var/log/squid/accesscustom.log | awk '/username/;/user-name/ {print $1}' | tee /var/log/squid/accesscustom-filtered.log
See also: Writing “tail -f” output to another file and Turn off buffering in pipe
Note: awk doesn't buffer like grep in the superuser example, so you shouldn't need to do anything special with your awk command. (more info)
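Alternatively, the two awk filters can be collapsed into one program that flushes after every line, so tee isn't needed at all (a sketch; fflush() is supported by gawk, mawk, and busybox awk, though it isn't required by older POSIX):
tail -f /var/log/squid/accesscustom.log | awk '(/username/ || /user-name/) && !seen[$1]++ { print $1; fflush() }' > /var/log/squid/accesscustom-filtered.log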
