Redirecting piped command output into a file in bash

I'm trying to do the following:
ping some.server.com | grep -Po '(?<=\=)[0-9].\.[0-9]' >> file.dat
i.e. I run a command (ping), grep part of its output, and redirect the result of grep into a file to inspect later. The pipeline itself works (the part before >>), but nothing gets written into the file.
How do I do this correctly?

Use grep's --line-buffered option. By default, grep buffers its output in blocks when writing to a pipe or file, so nothing reaches file.dat until the buffer fills; with line buffering, each match is flushed as soon as it is printed:
ping some.server.com | grep --line-buffered -Po '(?<=\=)[0-9].\.[0-9]' >> file.dat
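If your grep lacks --line-buffered, the stdbuf utility from GNU coreutils can impose line buffering from the outside. A minimal sketch, assuming stdbuf is installed and that grep does not override its own buffering:
ping some.server.com | stdbuf -oL grep -Po '(?<=\=)[0-9].\.[0-9]' >> file.dat   # -oL = line-buffer stdout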

Related

Filtering command output and printing it to a file?

I am currently launching this bash line to filter the output of a process:
command -option | grep -A 1 --color 'string1\|string2'
Instead of printing the filtered output on the console, how can I write it to a file?
I tried command -option | grep -A 1 'string1\|string2' >> test.txt, but nothing was written to the file.
I also tried adding the extended-regex option, command -option | grep -E -A 1 'string1|string2' >> test.txt, but I still got an empty file.
The issue was with buffering: like most tools, grep buffers whole blocks of output when writing to something other than a terminal. Forcing it to buffer line by line solves the problem:
command -option | grep --line-buffered -A 1 'string1\|string2' >> test.txt
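The effect is easy to demonstrate. A hypothetical example (the file name demo.txt and the timings are made up):
{ echo one; sleep 5; echo two; } | grep --line-buffered o > demo.txt &
sleep 1; cat demo.txt   # already shows "one"; with plain grep the file would still be empty here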

Redirection operator working in shell prompt but not in script

I have a file called out1.csv which contains tabular data.
When I run the command in the terminal it works:
cat out1.csv | grep -v ^$ | grep -v ^- > out2.csv
It reads the file, filters out blank lines and lines starting with -, and redirects the output to out2.csv.
But when I put the same command in a script it does not work.
I have even tried echoing:
echo " `cat out1.csv | grep -v ^$ | grep -v ^- > out2.csv` " > out2.csv
I have also tried to specify full paths of the files. But no luck.
Running the script in debug mode shows that the command executes, but its output is never redirected to the file.
What am I missing?
It turned out the issue wasn't with this script at all but with a SQL script it called before this command. Both commands above are actually correct.
You're redirecting twice.
The command in backticks writes to the file and prints nothing; the echo then takes that nothing and writes it to the same file, overwriting what the backticked command had just produced.
One way to do it in the script is exactly what you do in the console:
cat out1.csv | grep -v ^$ | grep -v ^- > out2.csv   # no need to echo it or put the command in backticks
The other way, keeping the structure you were trying, is to drop the inner redirection and keep only the outer one:
echo "`cat out1.csv | grep -v ^$ | grep -v ^-`" > out2.csv   # don't redirect inside the backticks as well
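To see why the original attempt produced an empty file: command substitution captures only what the inner command writes to stdout, and an inner redirection consumes that output first. A minimal sketch (hypothetical file name):
result=`echo hello > /tmp/demo.txt`   # /tmp/demo.txt gets "hello"
echo "[$result]"                      # prints "[]" -- the substitution captured nothing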

How does this sed command work?

I came across the following sed command, which I found in https://github.com/shama/grunt-hub:
ps -ef | sed -n '/grunt/{/grep/!p;}'
Could someone explain to me how the sed part works? What's the purpose of {/grep/!p;}?
Thanks for your attention!
Compare the output of the following two commands:
ps -ef | sed -n '/grunt/p'
ps -ef | sed -n '/grunt/{/grep/!p;}'
You will notice the latter does not print one additional line: the entry for the filtering command itself, whose command line matches grunt but also contains grep. This is equivalent to:
ps -ef | grep grunt | grep -v grep
In other words: print all the lines containing grunt, but not the lines that also contain grep.
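For reference, a commented breakdown of the sed script, plus an alternative that sidesteps the self-match problem entirely (pgrep comes from the procps package; assuming it is installed):
ps -ef | sed -n '/grunt/{/grep/!p;}'
# -n            suppress automatic printing
# /grunt/{...}  for lines matching "grunt", run the grouped commands
# /grep/!p      within the group, print only if the line does NOT match "grep"
pgrep -af grunt   # alternative: list PID and full command line of matching processes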

Trouble with piping through sed

I am having trouble piping through sed. Once I have piped output to sed, I cannot pipe the output of sed elsewhere.
wget -r -nv http://127.0.0.1:3000/test.html
Outputs:
2010-03-12 04:41:48 URL:http://127.0.0.1:3000/test.html [99/99] -> "127.0.0.1:3000/test.html" [1]
2010-03-12 04:41:48 URL:http://127.0.0.1:3000/robots.txt [83/83] -> "127.0.0.1:3000/robots.txt" [1]
2010-03-12 04:41:48 URL:http://127.0.0.1:3000/shop [22818/22818] -> "127.0.0.1:3000/shop.29" [1]
I pipe the output through sed to get a clean list of URLs:
wget -r -nv http://127.0.0.1:3000/test.html 2>&1 | grep --line-buffered -v ERROR | sed 's/^.*URL:\([^ ]*\).*/\1/g'
Outputs:
http://127.0.0.1:3000/test.html
http://127.0.0.1:3000/robots.txt
http://127.0.0.1:3000/shop
I would like to then dump the output to file, so I do this:
wget -r -nv http://127.0.0.1:3000/test.html 2>&1 | grep --line-buffered -v ERROR | sed 's/^.*URL:\([^ ]*\).*/\1/g' > /tmp/DUMP_FILE
I interrupt the process after a few seconds and check the file, yet it is empty.
Interesting, the following yields no output (same as above, but piping sed output through cat):
wget -r -nv http://127.0.0.1:3000/test.html 2>&1 | grep --line-buffered -v ERROR | sed 's/^.*URL:\([^ ]*\).*/\1/g' | cat
Why can I not pipe the output of sed to another program like cat?
When sed is writing to another process or to a file, it buffers its output in blocks rather than flushing after each line.
Try adding the --unbuffered option (short form -u, GNU sed) to sed.
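Applied to your pipeline, that is simply your own command with the flag added (GNU sed):
wget -r -nv http://127.0.0.1:3000/test.html 2>&1 | grep --line-buffered -v ERROR | sed -u 's/^.*URL:\([^ ]*\).*/\1/g' > /tmp/DUMP_FILE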
You can also use awk. Since the URL appears in field 3, you can use $3, and you can remove the grep as well, since awk can do the filtering itself:
awk '!/ERROR/ {sub("URL:","",$3); print $3}' file
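Note that awk block-buffers too when writing to a pipe or file, so on the live wget stream you would want an explicit flush. A sketch, assuming gawk (fflush() with no argument flushes standard output):
wget -r -nv http://127.0.0.1:3000/test.html 2>&1 | awk '!/ERROR/ {sub("URL:","",$3); print $3; fflush()}' > /tmp/DUMP_FILE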

How do you pipe input through grep to another utility?

I am using tail -f to follow a log file as it's updated; next I pipe the output of that to grep to show only the lines containing a search term ("org.springframework" in this case); finally, I pipe the output from grep to a third command, cut:
tail -f logfile | grep org.springframework | cut -c 25-
The cut command would remove the first 25 characters of each line for me if it could get the input from grep! (It works as expected if I eliminate 'grep' from the chain.)
I'm using cygwin with bash.
Actual results: when I add the second pipe to connect to the cut command, the whole thing hangs, as if it's waiting for input (in case you were wondering).
Assuming GNU grep, add --line-buffered to your command line, e.g.
tail -f logfile | grep --line-buffered org.springframework | cut -c 25-
Edit:
I see grep buffering isn't the only problem here, as cut has no line-buffering option of its own.
You might want to try replacing it with something whose buffering you can control, such as sed (here -u disables buffering, and the substitution strips the first 24 characters of each matching line, like cut -c 25-):
tail -f logfile | sed -u -n -e '/org\.springframework/ s/^.\{0,24\}//p'
or awk (substr($0, 25) keeps everything from character 25 onward, and fflush flushes the output after each line):
tail -f logfile | awk '/org\.springframework/ {print substr($0, 25); fflush("")}'
On my system, about 8K was buffered before I got any output. This sequence worked to follow the file immediately:
tail -f logfile | while read line; do echo "$line" | grep 'org.springframework' | cut -c 25-; done
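Another option, assuming the stdbuf utility from GNU coreutils is available, is to keep the original pipeline and force cut itself to line-buffer its output:
tail -f logfile | grep --line-buffered org.springframework | stdbuf -oL cut -c 25-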
What you have should work fine -- that's the whole idea of pipelines. One thing to check: in the version of cut I have (GNU coreutils 6.10), the range syntax takes a trailing minus sign, as in cut -c 25-; a plus sign is not accepted.
You're also searching for different patterns in your two examples, in case that's relevant.
