Pipes & xargs => top [closed] - macOS

I am trying to use pipes and xargs to start top with a particular pid, but I cannot get it to work and I don't know why:
ps aux|grep ProgramName|awk '{print $2}'|head -n1|xargs top -pid
I get the correct pid printed to screen if I stop after head -n1, and manually adding that pid to the command top -pid XXX also works, but running the whole line as one command just does not return the top screen.
What am I doing wrong here?
EDIT: yes, "-pid" is indeed correct (further checking of the remote shell revealed it is actually a macOS-based system, not a Linux one).

What am I doing wrong here?
Several things:
You are using grep and awk in the same pipeline. Since awk does pattern matching, there is no reason to use grep as a separate process.
You are using awk and head in the same pipeline. Since awk can control the number of items it prints, there is no need to use head.
Your grep command will find both the indicated program, and the grep program.
You are using xargs to provide a single command line argument. Either backticks or $() is a better choice.
top takes a -p switch, not a -pid switch. (At least on my computer.)
Adding it all up, try:
$ top -p $(ps aux | awk '/ProgramName/ && ! /awk/ { print $2; exit; }')
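The EDIT on the question notes that the remote box is actually macOS, where top really does take -pid rather than -p, so the same pipeline there would be (a sketch, same assumptions as above):
top -pid "$(ps aux | awk '/ProgramName/ && !/awk/ { print $2; exit }')"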

Your problem is
the arg to top should be "-p" not "-pid"
xargs is for running non-interactive programs
Try this:
top -p "$(pgrep ProgramName | head -n 1)"
or
top -p "$(pgrep --oldest ProgramName)"
or
top -p "$(pgrep --newest ProgramName)"
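Note that --oldest and --newest are long options from the Linux (procps) pgrep; on macOS/BSD the short equivalents are -o and -n, and top wants -pid there, so a macOS-flavoured sketch would be:
top -pid "$(pgrep -n ProgramName)"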

Related

How do I interpret this bash/awk syntax? [closed]

I am making a sincere effort to master bash.
What would a line like the one below mean?
ps -ef | awk '/ora_pmon_/ && !/awk/'
Thank you.
It means the following.
ps -ef | awk '/ora_pmon_/ && !/awk/'
You are first getting the output of ps -ef which will have information of all processes running. Then by using a <pipe> (|) we send this output to the standard input of the awk command.
awk will check for lines, basically process names, having the string ora_pmon in them AND NOT the string awk. The latter is to exclude the process of this command which we do not want in output.
The correct way to do what you want though is just:
ps -ef | awk '/[o]ra_pmon_/'
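The [o] works because the awk process's own command line contains the literal characters [o]ra_pmon_, which that regex does not match, so awk no longer finds itself. As a side note (not from the original answer), if all you need are the matching PIDs, pgrep can match against the full command line directly:
pgrep -f ora_pmon_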

Bash: Grep Line Numbers to Correspond to AWK NR [closed]

I suspect I am going about this the long way, but please bear with me; I am new to Bash, grep, and awk ...
The summary of my problem is that line numbers in grep do not correspond to the actual line numbers in a file. From what I gather, empty lines are discarded in the line numbering. I would prefer not to iterate through every line in a file to ensure 100% coverage.
What I am trying to do is grab a segment of lines from a file and process them using grep and awk
The grep call gets a list of line numbers since there could be more than one instance of a 'starting position' in a file:
startLnPOSs=($(cat "$c" | grep -e '^[^#]' | grep --label=root -e '^[[:space:]]start-pattern' -n -T | awk '{print $1}'))
Then using awk I iterate from a starting point until an 'end' token is encountered.
declarations=($(cat "$c" | awk "$startLnPos;/end-pattern/{exit}" ))
To me this looks a bit like an XY problem, as you are showing us what you are doing to solve a problem but not actually outlining the problem itself.
So on a guess I am thinking you want to return all the items between the start/end patterns to your array (which may also be erroneous, but again we do not know the overall picture).
So what you could do is:
declarations=($(awk '/start-pattern/,/end-pattern/' "$c"))
Or with sed (exactly the same):
declarations=($(sed -n '/start-pattern/,/end-pattern/p' "$c"))
Depending if you want those actual lines included or not the commands may need to be altered a little.
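For example, a small sketch (using the same placeholder patterns) that keeps only the lines strictly between the two markers:
declarations=($(awk '/end-pattern/{f=0} f; /start-pattern/{f=1}' "$c"))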
Was this the kind of thing you were looking to do??

shell command deal with only one column [closed]

I'm really curious: is there any tool that can assist shell text-processing programs by cutting one column out, handing it to a text-processing program, and then pasting it back in?
For example, I have a file:
3f27,tom,17
6d44,jack,19
139a,jerry,7
I want to change field 2, remove all aeiou.
I know there are many ways to work around this problem. But why don't we tackle it head-on?
I want a tool, like:
deal-only -d"," -f2 sed 's/[aeiou]//g'
This is cleaner and more powerful.
So, does anybody know of such a tool, or a similar solution?
If not, I want to create one.
As I said above, I know sed or awk can handle the problem above well.
But when you meet a complex problem, sed or awk cannot save you.
deal-only -d"," -f2 ./ip2country.rb
Here, I want to modify column 2 from ip to country.
Using awk:
# script.awk
BEGIN { FS="," }
{print $1 "," gensub("[aeiou]+", "", "g", $2) "," $3}
Then:
awk -f script.awk < data.txt
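Note that gensub is a GNU awk (gawk) extension. A portable sketch that achieves the same thing with plain gsub, reusing the data.txt name from above, would be:
awk -F, -v OFS=, '{ gsub(/[aeiou]/, "", $2) } 1' data.txt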
You may use the coprocess feature of bash (see e.g. here):
#!/bin/bash
coproc stdbuf -oL sed 's/[aeiou]//g'
while IFS="," read a b c ; do
    echo "${b}" >&${COPROC[1]}
    read -u ${COPROC[0]} b2
    echo "${a},${b2},${c}"
done
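A hypothetical invocation, assuming the script above is saved as filter.sh and the sample rows are in data.txt:
./filter.sh < data.txt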
Some random notes:
this is not POSIX
the standard output of the process which filters the column data *MUST* be line-buffered/unbuffered (this is the stdbuf -oL part -- see section "Buffering" in the above-mentioned document)
(AFAIK) the same effect can be achieved by spawning a background process and i/o redirection
(AFAIK) two named pipes linked to single external "resource-heavy" process input/output should work as well
I am not 100% sure if this is the best way, but it does work for me
Good luck!

So I'm trying to make a background process that 'espeak's specific log events

I'm relatively new to Linux - please forgive me if the solution is simple/obvious...
I'm trying to set up a background script that monitors a log file for certain keyword patterns with awk and tail, and then uses espeak to give a simplified spoken notification when these keywords appear in the log file (which is written by sysklogd).
The concept is derived from this guide
This is a horrible example of what I'm trying to do:
#!/bin/bash
tail -f -n1 /var/log/example_main | awk '/example sshd/&&/session opened for user/{system("espeak \"Opening SSH session\"")}'
tail -f -n1 /var/log/example_main | awk '/example sshd/&&/session closed/{system("espeak \"Session closed. Goodbye.\"")}'
tail -f -n1 /var/log/example_main | awk '/example sshd/&&/authentication failure/{system("espeak \"Warning: Authentication Faliure\"")}'
tail -f -n1 /var/log/example_main | awk '/example sshd/&&/authentication failure/{system("espeak \"Authentication Failure. I have denied access.\"")}'
The first tail command by itself works perfectly; it monitors the defined log file for 'example sshd' and 'session opened for user', then uses espeak to say 'Opening SSH session'. As you would expect given the above excerpt, the bash script will not run multiple tails simultaneously (or at least it stops after this first tail command).
I guess I have a few questions:
How should I set out this script?
What is the best way to constantly run this script in the background - e.g. init?
Are there any tutorials/documentation somewhere that could help me out?
Is there already something like this available that I could use?
Thanks, any help would be greatly appreciated - sorry for the long post.
Personally, I would attempt to set each of these up as an individual cron job. This would allow you to run it at a specific time and at specified intervals.
For example, you could type crontab -e
Then inside, have each of these tail commands listed as such:
5 * * * * tail -f -n1 /var/log/example_main | awk '/example sshd/&&/session opened for user/{system("espeak \"Opening SSH session\"")}'
That would run that one command at 5 minutes after the hour, every hour.
This was a decent guide I found: HowTo: Add Jobs To cron
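As a further sketch (not from the original answers), the blocking in the question's script can also be avoided by folding all of the patterns into a single awk program fed by one tail, and running that one script in the background:
#!/bin/bash
# One tail feeds one awk program, so no pipeline blocks the next.
tail -f -n1 /var/log/example_main | awk '
  /example sshd/ && /session opened for user/ { system("espeak \"Opening SSH session\"") }
  /example sshd/ && /session closed/          { system("espeak \"Session closed. Goodbye.\"") }
  /example sshd/ && /authentication failure/  { system("espeak \"Warning: Authentication Failure\"") }'
Started with something like nohup ./log-speaker.sh & (the script name is hypothetical), it keeps running after you log out.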

What is your latest useful Perl one-liner (or a pipe involving Perl)? [closed]

The one-liner should:
solve a real-world problem
not be extensively cryptic (should be easy to understand and reproduce)
be worth the time it takes to write it (should not be too clever)
I'm looking for practical tips and tricks (complementary examples for perldoc perlrun).
Please see my slides for "A Field Guide To The Perl Command Line Options."
Squid log files. They're great, aren't they? Except by default they have seconds-from-the-epoch as the time field. Here's a one-liner that reads from a squid log file and converts the time into a human readable date:
perl -pe's/([\d.]+)/localtime $1/e;' access.log
With a small tweak, you can make it only display lines with a keyword you're interested in. The following watches for stackoverflow.com accesses and prints only those lines, with a human readable date. To make it more useful, I'm giving it the output of tail -f, so I can see accesses in real time:
tail -f access.log | perl -ne's/([\d.]+)/localtime $1/e,print if /stackoverflow\.com/'
The problem: A media player does not automatically load subtitles because their names differ from those of the corresponding video files.
Solution: Rename all *.srt (files with subtitles) to match the *.avi (files with video).
perl -e'while(<*.avi>) { s/avi$/srt/; rename <*.srt>, $_ }'
CAVEAT: Sorting order of original video and subtitle filenames should be the same.
Here is a more verbose version of the above one-liner:
my @avi = glob('*.avi');
my @srt = glob('*.srt');
for my $i (0..$#avi)
{
my $video_filename = $avi[$i];
$video_filename =~ s/avi$/srt/; # 'movie1.avi' -> 'movie1.srt'
my $subtitle_filename = $srt[$i]; # 'film1.srt'
rename($subtitle_filename, $video_filename); # 'film1.srt' -> 'movie1.srt'
}
The common idiom of using find ... -exec rm {} \; to delete a set of files somewhere in a directory tree is not particularly efficient in that it executes the rm command once for each file found. One of my habits, born from the days when computers weren't quite as fast (dagnabbit!), is to replace many calls to rm with one call to perl:
find . -name '*.whatever' | perl -lne unlink
The perl part of the command line reads the list of files emitted* by find, one per line, trims the newline off, and deletes the file using perl's built-in unlink() function, which takes $_ as its argument if no explicit argument is supplied. ($_ is set to each line of input thanks to the -n flag.) (*These days, most find commands do -print by default, so I can leave that part out.)
I like this idiom not only because of the efficiency (possibly less important these days) but also because it has fewer chorded/awkward keys than typing the traditional -exec rm {} \; sequence. It also avoids quoting issues caused by file names with spaces, quotes, etc., of which I have many. (A more robust version might use find's -print0 option and then ask perl to read null-delimited records instead of lines, but I'm usually pretty confident that my file names do not contain embedded newlines.)
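That more robust version might look like this (a sketch: -print0 emits null-delimited names, -0 makes perl read null-delimited records, and -l strips the delimiter before unlink sees the name):
find . -name '*.whatever' -print0 | perl -0lne unlink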
You may not think of this as Perl, but I use ack religiously (it's a smart grep replacement written in Perl) and that lets me edit, for example, all of my Perl tests which access a particular part of our API:
vim $(ack --perl -l 'api/v1/episode' t)
As a side note, if you use vim, you can run all of the tests in your editor's buffers.
For something with more obvious (if simple) Perl, I needed to know how many test programs used our test fixtures in the t/lib/TestPM directory (I've cut down the command for clarity).
ack $(ls t/lib/TestPM/|awk -F'.' '{print $1}'|xargs perl -e 'print join "|" => @ARGV') aggtests/ t -l
Note how the "join" turns the results into a regex to feed to ack.
All one-liners from the answers collected in one place:
perl -pe's/([\d.]+)/localtime $1/e;' access.log
ack $(ls t/lib/TestPM/|awk -F'.' '{print $1}'|xargs perl -e 'print join "|" => @ARGV') aggtests/ t -l
perl -e'while(<*.avi>) { s/avi$/srt/; rename <*.srt>, $_ }'
find . -name '*.whatever' | perl -lne unlink
tail -F /var/log/squid/access.log | perl -ane 'BEGIN{$|++} $F[6] =~ m{\Qrad.live.com/ADSAdClient31.dll} && printf "%02d:%02d:%02d %15s %9d\n", sub{reverse @_[0..2]}->(localtime $F[0]), @F[2,4]'
export PATH=$(perl -F: -ane'print join q/:/, grep { !$c{$_}++ } @F'<<<$PATH)
alias e2d="perl -le 'print scalar localtime \$ARGV[0]'"
perl -ple '$_=eval'
perl -00 -ne 'print sort split /^/'
perl -pe'1while+s/\t/" "x(8-pos()%8)/e'
tail -f log | perl -ne '$s=time() unless $s; $n=time(); $d=$n-$s; if ($d>=2) { print qq($. lines in last $d secs, rate ),$./$d,qq(\n); $. =0; $s=$n; }'
perl -MFile::Spec -e 'print join(qq(\n),File::Spec->path).qq(\n)'
See corresponding answers for their descriptions.
The Perl one-liner I use the most is the Perl calculator
perl -ple '$_=eval'
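For example (a hypothetical invocation):
echo "2**10 + 1" | perl -ple '$_=eval'
prints 1025.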
One of the biggest bandwidth hogs at $work is downloading web advertising, so I'm looking at the low-hanging fruit waiting to be picked. I've got rid of Google ads; now I have Microsoft in my sights. So I run a tail on the log file and pick out the lines of interest:
tail -F /var/log/squid/access.log | \
perl -ane 'BEGIN{$|++} $F[6] =~ m{\Qrad.live.com/ADSAdClient31.dll}
&& printf "%02d:%02d:%02d %15s %9d\n",
sub{reverse @_[0..2]}->(localtime $F[0]), @F[2,4]'
What the Perl pipe does is begin by setting autoflush to true, so that anything that is acted upon is printed out immediately. Otherwise the output is chunked up and one receives a batch of lines only when the output buffer fills. The -a switch splits each input line on white space and saves the results in the array @F (functionality inspired by awk's capacity to split input records into its $1, $2, $3... variables).
It checks whether the 7th field in the line contains the URI we seek (using \Q to save us the pain of escaping uninteresting metacharacters). If a match is found, it pretty-prints the time, the source IP and the number of bytes returned from the remote site.
The time is obtained by taking the epoch time in the first field and using localtime to break it down into its components (hour, minute, second, day, month, year). It takes a slice of the first three elements returned (second, minute and hour) and reverses the order to get hour, minute and second. This is passed as a three-element list, along with a slice of the third (IP address) and fifth (size) fields from the original @F array. These five arguments are passed to printf, which formats the results.
@dr_pepper
Remove literal duplicates in $PATH:
$ export PATH=$(perl -F: -ane'print join q/:/, grep { !$c{$_}++ } @F'<<<$PATH)
Print unique clean paths from the %PATH% environment variable (it doesn't touch ../ and the like; replace File::Spec->rel2abs with Cwd::realpath if that is desirable). It is not a one-liner, to be more portable:
#!/usr/bin/perl -w
use File::Spec;
$, = "\n";
print grep { !$count{$_}++ }
map { File::Spec->rel2abs($_) }
File::Spec->path;
I use this quite frequently to quickly convert epoch times to a useful datestamp.
perl -l -e 'print scalar(localtime($ARGV[0]))'
Make an alias in your shell:
alias e2d="perl -le 'print scalar localtime \$ARGV[0]'"
Then pass an epoch number to the alias:
e2d 1219174516
Many programs and utilities on Unix/Linux use epoch values to represent time, so this has proved invaluable for me.
Remove duplicates in path variable:
set path=(`echo $path | perl -e 'foreach(split(/ /,<>)){print $_," " unless $s{$_}++;}'`)
Remove MS-DOS line-endings.
perl -p -i -e 's/\r\n$/\n/' htdocs/*.asp
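If you want to keep the originals around, -i accepts a backup suffix (the .bak name here is my own choice):
perl -p -i.bak -e 's/\r\n$/\n/' htdocs/*.asp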
Extracting Stack Overflow reputation without having to open a web page:
perl -nle "print ' Stack Overflow ' . $1 . ' (no change)' if /\s{20,99}([0-9,]{3,6})<\/div>/;" "SO.html" >> SOscores.txt
This assumes the user page has already been downloaded to file SO.html. I use wget for this purpose. The notation here is for Windows command line; it would be slightly different for Linux or Mac OS X. The output is appended to a text file.
I use it in a BAT script to automate sampling of reputation on the four sites in the family:
Stack Overflow, Server Fault, Super User and Meta Stack Overflow.
In response to Ovid's Vim/ack combination:
I too am often searching for something and then want to open the matching files in Vim, so I made myself a little shortcut some time ago (works in Z shell only, I think):
function vimify-eval; {
    if [[ ! -z "$BUFFER" ]]; then
        if [[ $BUFFER = 'ack'* ]]; then
            BUFFER="$BUFFER -l"
        fi
        BUFFER="vim \$($BUFFER)"
        zle accept-line
    fi
}
zle -N vim-eval-widget vimify-eval
bindkey '^P' vim-eval-widget
It works like this: I search for something using ack, like ack some-pattern. I look at the results, and if I like them, I press arrow-up to get the ack line again and then press Ctrl + P. What happens then is that Z shell appends a "-l" (list filenames only) if the command starts with "ack". Then it puts "$(...)" around the command and "vim" in front of it. Then the whole thing is executed.
I often need to see a readable version of the PATH while shell scripting. The following one-liners print every path entry on its own line.
Over time this one-liner has evolved through several phases:
Unix (version 1):
perl -e 'print join("\n",split(":",$ENV{"PATH"}))."\n"'
Windows (version 2):
perl -e "print join(qq(\n),split(';',$ENV{'PATH'})).qq(\n)"
Both Unix/Windows (using q/qq tip from @j-f-sebastian) (version 3):
perl -MFile::Spec -e 'print join(qq(\n), File::Spec->path).qq(\n)' # Unix
perl -MFile::Spec -e "print join(qq(\n), File::Spec->path).qq(\n)" # Windows
One of the most recent one-liners that got a place in my ~/bin:
perl -ne '$s=time() unless $s; $n=time(); $d=$n-$s; if ($d>=2) { print "$. lines in last $d secs, rate ",$./$d,"\n"; $. =0; $s=$n; }'
You would use it against a tail of a log file, and it will print the rate at which lines are being output.
Want to know how many hits per second you are getting on your webservers? tail -f log | this_script.
Get human-readable output from du, sorted by size:
perl -e '%h=map{/.\s/;7x(ord$&&10)+$`,$_}`du -h`;print@h{sort%h}'
Filters a stream of white-space separated stanzas (name/value pair lists),
sorting each stanza individually:
perl -00 -ne 'print sort split /^/'
Network administrators have the tendency to misconfigure "subnet address" as "host address" especially while using Cisco ASDM auto-suggest. This straightforward one-liner scans the configuration files for any such configuration errors.
incorrect usage: permit host 10.1.1.0
correct usage: permit 10.1.1.0 255.255.255.0
perl -ne "print if /host ([\w\-\.]+){3}\.0 /" *.conf
This was tested and used on Windows, please suggest if it should be modified in any way for correct usage.
Expand all tabs to spaces: perl -pe'1while+s/\t/" "x(8-pos()%8)/e'
Of course, this could be done with :set et, :ret in Vim.
I have a list of tags with which I identify portions of text. The master list is of the format:
text description {tag_label}
It's important that the {tag_label} are not duplicated. So there's this nice simple script:
perl -ne '($c) = $_ =~ /({.*?})/; print $c,"\n" ' $1 | sort | uniq -c | sort -d
I know that I could do the whole lot in shell or perl, but this was the first thing that came to mind.
Often I have had to convert tabular data into configuration files. For example, network cabling vendors provide the patching record in Excel format and we have to use that information to create configuration files. That is,
Interface, Connect to, Vlan
Gi1/0/1, Desktop, 1286
Gi1/0/2, IP Phone, 1317
should become:
interface Gi1/0/1
description Desktop
switchport access vlan 1286
and so on. The same task reappears in several forms in various administration tasks where tabular data needs to be prepended with its field names and transposed to a flat structure. I have seen some DBAs waste a lot of time preparing their SQL statements from Excel sheets. It can be achieved using this simple one-liner: just save the tabular data in CSV format using your favourite spreadsheet tool and run it. The field names in the header row get prepended to the individual cell values, so you may have to edit it to match your requirements.
perl -F, -lane "if ($.==1) {@keys = @F} else{print @keys[$_].$F[$_] foreach(0..$#F)} "
The caveat is that none of the field names or values should contain any commas. Perhaps this can be further elaborated to catch such exceptions in a one-liner; please improve this if possible.
Here is one that I find handy when dealing with a collection of compressed log files:
open STATFILE, "zcat $logFile|" or die "Can't open zcat of $logFile" ;
At some time I found that anything I would want to do with Perl that is short enough to be done on the command line with 'perl -e' can be done better, easier and faster with normal Z shell features without the hassle of quoting. E.g. the example above could be done like this:
srt=(*.srt); for foo in *.avi; mv $srt[1] ${foo:r}.srt && srt=($srt[2,-1])
