A newbie to shell programming here.
I have this code so far:
prog inputfile outputfile1
sort -rn outputfile1 | cut -f1-2 > outputfile2
My question: is there a way to pipe the output file directly from the first command to the second to get outputfile2, i.e. skipping the need to create outputfile1? prog is a custom program that takes the input file and output file names as parameters.
The closest thing I have found is process substitution in the shell, e.g.
sort <(ls dir)
But it's not really helpful in this case, as I want to pipe the output file only, not stdout.
Thanks for your help!
If I understand you correctly, you want the opposite form of process substitution:
prog inputfile >(sort -rn | cut -f1-2 >outputfile)
Depending on how prog handles its output-file argument, you may also be able to use
prog inputfile /dev/stdout | sort -rn | cut -f1-2 >outputfile
or even
prog inputfile - | sort -rn | cut -f1-2 >outputfile
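If you want to try the >(...) form without prog itself, here is a minimal sketch where myprog is a hypothetical stand-in for any program that writes to the filename given as its second argument:

# hypothetical stand-in for prog: copies $1 into the output file named by $2
myprog() { cat "$1" > "$2"; }
myprog inputfile >(sort -rn | cut -f1-2 > outputfile2)

Note that the >(...) pipeline runs asynchronously, so outputfile2 may finish being written slightly after myprog itself returns.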
I have a grep command that finds the files that need a value replaced. Then I have a Perl one-liner that needs to be executed on each file to replace a variable found in that file.
How can I pipe the results of my grep command to the Perl one-liner?
grep -Irc "/env/file1/" /env/scripts/ | cut -d':' -f1 | sort | uniq
/env/scripts/config/MainDocument.pl
/env/scripts/config/MainDocument.pl2
/env/scripts/config/MainDocument.pl2.bak
perl -p -i.bak -e 's{/env/file1/}{/env/file2/}g' /env/scripts/config/MainDocument.pl
Thanks for your help.
Use the $(...) command-substitution syntax in bash:
perl -p -i.bak -e 's{/env/file1/}{/env/file2/}g' $(grep -Irc "/env/file1/" /env/scripts/ | cut -d':' -f1 | sort | uniq)
I'd drop the Perl one-liner and use xargs and sed instead.
grep -Irc "/env/file1/" /env/scripts/ | cut -d':' -f1 | sort | uniq | xargs sed -i.bak 's:/env/file1/:/env/file2/:g'
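The same idea works for either the Perl or the sed version. A more defensive sketch (assuming GNU grep and xargs): grep -l lists the matching file names directly, so the cut/sort/uniq steps go away, and the NUL separators keep unusual file names from being split:

grep -IrlZ "/env/file1/" /env/scripts/ | xargs -0 perl -p -i.bak -e 's{/env/file1/}{/env/file2/}g'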
I have a question about a bash script. Let's say there is a file which contains lines, each line with a path to a file and a date; the problem is how to find the most frequent path.
Thanks in advance.
Here's a suggestion
$ cut -d' ' -f1 file.txt | sort | uniq -c | sort -rn | head -n1
# \____________________/   \__/   \_____/   \______/   \______/
#    select the file       sort    print    sort on   print top
#         column          files   counts     count      result
Example use:
$ cat file.txt
/home/admin/fileA jan:17:13:46:27:2015
/home/admin/fileB jan:17:13:46:27:2015
/home/admin/fileC jan:17:13:46:27:2015
/home/admin/fileA jan:17:13:46:27:2015
/home/admin/fileA jan:17:13:46:27:2015
$ cut -d' ' -f1 file.txt | sort | uniq -c | sort -rn | head -n1
3 /home/admin/fileA
You can strip the leading count (the 3 here) from the final result with another cut.
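For instance, swapping awk in for that extra step (a sketch, assuming, as in this example, that the paths contain no spaces; awk's field splitting also copes with the leading spaces uniq -c pads the count with):

$ cut -d' ' -f1 file.txt | sort | uniq -c | sort -rn | head -n1 | awk '{print $2}'
/home/admin/fileA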
Reverse each line, cut off the beginning (the date, now at the front), reverse again, then sort and count unique lines:
cat file.txt | rev | cut -b 22- | rev | sort | uniq -c
If you're absolutely sure you won't have whitespace in your paths, you can avoid rev altogether:
cat file.txt | cut -d " " -f 1 | sort | uniq -c
If the output is too long to inspect visually, aioobe's suggestion of following this with sort -rn | head -n1 will serve you well.
It's worth noting, as aioobe mentioned, that many Unix commands optionally take a file argument. By using it, you can avoid the extra cat command at the beginning, supplying the file directly to the next command:
cat file.txt | rev | ... vs rev file.txt | ...
While I personally find the first option both easier to remember and understand, the second is preferred by many (most?) people, as it saves system resources (specifically, the memory and references used by an additional process) and can have better performance in some specific use cases. Wikipedia's cat article discusses this in detail.
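Putting those two notes together (drop the extra cat, add aioobe's sort -rn | head -n1 step), the whitespace-safe version of the full pipeline becomes:

rev file.txt | cut -b 22- | rev | sort | uniq -c | sort -rn | head -n1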
I have a list of files starting with the word "output", and I want to sum up the total number of rows in all the files.
Here's my strategy:
for f in `find outpu*`;do wc -l $f | awk '{x+=$1}END{print $1}' ; done
If, before piping over, there were a way to do something like >> into a temporary variable and then run the awk command on it afterwards, I could accomplish this goal.
Any tips?
Use this to see the per-file details and the sum:
wc -l output*
and this to see only the sum:
wc -l output* | tail -n1 | awk '{print $1}'
Here is some stuff for fun, check it out:
grep -c . out* | cut -d':' -f2- | paste -sd+ | bc
all lines, including empty ones:
grep -c '' out* | cut -d':' -f2- | paste -sd+ | bc
You can play with the grep pattern to put conditions on which lines in the files get counted.
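For example, to sum only the lines that contain some marker (ERROR here is just a placeholder pattern):

grep -c 'ERROR' out* | cut -d':' -f2- | paste -sd+ | bc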
Watch out: this find command will only find things in your current directory, and only if there is at least one file matching outpu* (the shell expands the glob before find ever runs).
One way of doing it:
awk 'END{print NR}' $(find outpu*)
Provided that there isn't an insane number of matching filenames that overflows your shell's maximum command-line length.
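One way around that limit (a sketch, assuming the files sit in the current directory): let find hand the names to cat in batches and count the concatenated output once:

find . -maxdepth 1 -name 'output*' -exec cat {} + | wc -l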
Is there a shell script that runs on a Mac to generate a word list from a text file, listing the unique words? Even better if it could sort by frequency...
Sorry, forgot to mention: yeah, I'd prefer a bash one as I'm using a Mac now...
Oh, my file is in French... (basically I'm reading a novel and learning French, so I'm trying to generate a word list to help myself). Hope this is not a problem?
If I understood you correctly, you need something like this:
cat <filename> | sed -e 's/ /\n/g' | sort | uniq -c
This command will do the job:
cat file.txt | tr "\"' " '\n' | sort -u
Reportedly sort -u won't work on some Macintosh machines; in that case use sort | uniq -c instead (thanks to Hank Gay):
cat file.txt | tr "\"' " '\n' | sort | uniq -c
Just answering my own question to jot down the final version I'm using:
tr -cs "[:alpha:]" "\n" < FileIn.txt | sort | uniq -c | awk '{print $2","$1}' >> FileOut.csv
some notes:
tr can be used directly to do replacement.
Since I'm interested in creating a word list for my French vocabulary, I used [:alpha:] (see the note on accented letters below).
awk is used to insert a comma, so that the output is a CSV file, which is easier for me to upload...
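One more note: since the text is French, accented letters matter. A variant to consider (just a sketch, assuming a UTF-8 locale and a grep whose [[:alpha:]] class is multibyte-aware, which tr often is not; the file names are the same as above):

grep -oE '[[:alpha:]]+' FileIn.txt | sort | uniq -c | sort -rn | awk '{print $2","$1}' >> FileOut.csv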
Thanks again to everyone helping me.
Sorry I didn't make it clear at the beginning that I'm using a Mac and expect a bash script.
Cheers.
I am using 'tail -f' to follow a log file as it's updated; next I pipe the output of that to grep to show only the lines containing a search term ("org.springframework" in this case); finally, I'd like to pipe the output from grep to a third command, 'cut':
tail -f logfile | grep org.springframework | cut -c 25-
The cut command would remove the first 25 characters of each line for me if it could get the input from grep! (It works as expected if I eliminate 'grep' from the chain.)
I'm using cygwin with bash.
Actual result: when I add the second pipe to connect to the 'cut' command, it just hangs, as if it's waiting for input (in case you were wondering).
Assuming GNU grep, add --line-buffered to your command line, e.g.:
tail -f logfile | grep --line-buffered org.springframework | cut -c 25-
Edit:
I see grep's buffering isn't the only problem here, as cut doesn't offer line-wise buffering.
You might want to try replacing it with something you can control, such as sed:
tail -f logfile | sed -u -n -e '/org\.springframework/ s/^.\{24\}//p'
or awk
tail -f logfile | awk '/org\.springframework/ {print substr($0, 25); fflush("")}'
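If your coreutils provides stdbuf, another option (a sketch; I haven't verified it under Cygwin) is to keep grep line-buffered as before and force cut's stdout into line-buffered mode as well:

tail -f logfile | grep --line-buffered org.springframework | stdbuf -oL cut -c 25-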
On my system, about 8K was buffered before I got any output. This sequence worked to follow the file immediately:
tail -f logfile | while read -r line ; do echo "$line" | grep 'org.springframework' | cut -c 25- ; done
What you have should work fine -- that's the whole idea of pipelines. The only thing I'd note is that, with the version of cut I have (GNU coreutils 6.10), cut -c 25- keeps everything from character 25 onward, i.e. it removes the first 24 characters, not 25.
You're also searching for different patterns in your two examples, in case that's relevant.