grep "String" war_err.txt > list_of_wlan_common_war.txt | cat
This command works on the command line, but to include it in a script I have to add | cat. Can someone explain what | cat does?
The | (pipe) symbol connects the output of one process to the input of another. So | cat should print the output of the previous command, because cat reads its input and writes it back out.
However, in your case it does nothing: you are redirecting the standard output of the grep command to a text file, so nothing is left to flow through the pipe.
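You can see the difference with tee, which writes its input to a file and to stdout at the same time (out.txt is just a placeholder name here):
# stdout is already redirected to the file, so cat receives nothing:
grep "String" war_err.txt > out.txt | cat
# tee saves the matches to the file AND passes them on to cat:
grep "String" war_err.txt | tee out.txt | cat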
I want to use tail in my custom pipe command.
For example, I want to execute this command:
$ ls -1 | tail -n 1 | awk '{print "last file is "$1}'
last file is test.txt
And I want to make it short by making my own custom script. It looks like this:
$ ls -1 | myscript
last file is test.txt
I know myscript can read the input from "ls -1" with code like this:
while read line; do
echo last file is $line
done
But I don't know how to use "tail -n 1" in the custom pipe command code above.
Is there a way to use a piped command inside another script that is itself used in a pipe?
Or do I have to implement what "tail -n 1" does myself?
I hope bash has some solution for this.
Try putting just this in myscript
tail -n 1 | awk '{print "last file is "$1}'
This works because the first command (tail) consumes the stdin of your script. In general, scripts behave as though you had typed their contents as-is into the terminal.
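So a minimal myscript could contain just the pipeline (the shebang line is an assumption about your setup):
#!/bin/bash
# this script's stdin is the pipe from "ls -1";
# tail keeps the last line and awk formats it
tail -n 1 | awk '{print "last file is "$1}'
Make it executable with chmod +x myscript, and ls -1 | myscript then behaves like the original pipeline.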
I have a file called out1.csv which contains tabular data.
When I run the command in the terminal it works:
cat out1.csv | grep -v ^$ | grep -v ^- > out2.csv
It reads the file, filters out blank lines and lines starting with -, and redirects the output to out2.csv.
But when I put the same command in a script it does not work.
I have even tried echoing:
echo " `cat out1.csv | grep -v ^$ | grep -v ^- > out2.csv` " > out2.csv
I have also tried to specify full paths of the files. But no luck.
In the script the command runs (as seen in debug mode), but the output is not redirected to the file.
What am I missing?
It turned out the issue wasn't with this script but with the SQL script it calls before this command. Both commands are actually correct.
You're redirecting twice
The command in backticks writes to the file and prints nothing.
You take that nothing and write it to the file, overwriting what was there before.
One way to do it in the script is the same as in the console (the better way):
cat out1.csv | grep -v ^$ | grep -v ^- > out2.csv # no need to echo it or wrap the command in backticks
The other way, closer to what you were trying, is:
echo " `cat out1.csv | grep -v ^$ | grep -v ^- > out2.csv` " # don't redirect the output to out2.csv a second time
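You can reproduce the truncation with a throwaway file (demo.txt is a made-up name):
# the backticks run first and write "hello" to demo.txt,
# then the outer redirection truncates the file before echo writes only spaces:
echo " `echo hello > demo.txt` " > demo.txt
cat demo.txt   # shows a blank-looking line, not "hello"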
I'm trying to do the following:
ping some.server.com | grep -Po '(?<=\=)[0-9].\.[0-9]' >> file.dat
i.e. I run a command (ping), grep part of its output, and redirect grep's result into a file to inspect later. While the command itself works (the part before '>>'), nothing gets written to the file.
How do I do this correctly?
Use the --line-buffered option. When grep's stdout is not a terminal, grep block-buffers its output, so nothing reaches the file until the buffer fills or ping exits; --line-buffered makes grep flush after every line.
ping some.server.com | grep --line-buffered -Po '(?<=\=)[0-9].\.[0-9]' >> file.dat
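If you hit the same problem with a filter that has no line-buffering flag of its own, stdbuf from GNU coreutils can force line buffering externally; a sketch with the same command:
# stdbuf -oL makes the filter flush its output after each line
ping some.server.com | stdbuf -oL grep -Po '(?<=\=)[0-9].\.[0-9]' >> file.dat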
I have to run many Python scripts which differ in just one parameter. I name them runv1.py, runv2.py, ..., runv20.py. I have the original script, say runv1.py. Then I make all the copies I need with
cat runv1.py | tee runv{2..20..1}.py
So I have runv1.py, ..., runv20.py, but the parameter is still v=1 in all of them.
Q: How can I replace the v parameter so it is read from the file name, e.g. so that in runv4.py, v=4? Is there a one-line shell command or combination of commands for this? Thank you!
PS: Editing each file directly is not practical when there are too many files.
The for loop below should serve your purpose, I think:
for i in `ls | grep "runv[0-9][0-9]*\.py"`
do
  l=`echo $i | tr -d '[a-z.]'`    # strip letters and dots, leaving the number
  sed -i 's/v=1/v='"$l"'/' "$i"   # set the v parameter to match the filename
done
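A variant that avoids parsing ls output, assuming (my assumption, based on the question) that each script contains a line starting with v=:
for f in runv[0-9]*.py
do
  n=${f//[!0-9]/}               # keep only the digits from the filename
  sed -i "s/^v=.*/v=$n/" "$f"   # set the parameter to match the filename
done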
The command below instead extracts the parameter from the filename and passes it to each script as an argument:
ls | grep "runv[0-9][0-9]*\.py" | tr -d '[a-z.]' | awk '{print "./runv"$0".py "$0}' | xargs -n 2 sh
At the end, instead of sh, you can use python, bash, or ksh.
If I run a command that takes a long time or produces a lot of output, I often want to process that output in some way, but I don't want to re-run the command. For example, I might run
$ command
$ command | grep foo
$ command | grep foo | sort | uniq
But if command takes a long time, this is tedious to re-run. Is there a way to have bash (or any other shell) save the output of the last command, similar to the Python REPL's _? I am aware of tee, but I would rather have my shell do this automatically without having to use tee all the time.
I am also aware I could store the output of a command, but again, I would like my shell to do this automatically, so I don't have to think about storing the command and I can just use my shell normally, and process the previous output when I want to.
You can store the output in a variable:
output=$(command)
echo "$output" | grep foo
echo "$output" | grep foo | sort | uniq
(Quote "$output", otherwise the shell collapses its line breaks.)
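If you want the shell to handle this semi-automatically, one sketch is a small wrapper function; the name keep and the cache file path are made up here:
keep() {
  # run the given command, print its output, and keep a copy for later
  "$@" | tee ~/.last_output
}
# usage:
keep command
grep foo ~/.last_output | sort | uniq
This still uses tee under the hood, but you only type it once, and the saved copy stays around for later processing.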