( ping -n apple.com | ack -o --flush '((?<=icmp_seq=)[0-9]+|(?<=time=)[0-9]+[.][0-9]+)' | paste - - ) > ~/Desktop/ping_ouput.txt
Doesn't seem to be working for some reason, whereas if I take away the
| paste - -
section, it works just fine. I need to join every other line together with a tab instead of newline. Any ideas are appreciated.
Update:
By default, ping produces output indefinitely, until terminated, so your pipeline will keep producing output, the output file will keep growing, and your command will never finish by itself.
Thus, you need to limit the number of pings performed; e.g.:
using the -c count option to stop after the specified number of pings.
alternatively, on BSD-like systems, you can use -o to stop after the first successful ping.
alternatively, you can stop after a specified timeout in seconds, regardless of how many packets were received:
Linux: -w timeout
BSD-like systems: -t timeout
Incidentally, in order to redirect a pipeline to a file, there's no need to enclose it in (...) as a whole, which needlessly creates a(nother) subshell.
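Putting those two points together, here is a hedged sketch of the original pipeline with a count added (my own variant, keeping the OP's ack pattern and output path; -c works with both the Linux and BSD/macOS ping):
# stop after 10 pings so the pipeline can finish and flush its output on its own
ping -c 10 -n apple.com |
  ack -o --flush '((?<=icmp_seq=)[0-9]+|(?<=time=)[0-9]+[.][0-9]+)' |
  paste - - > ~/Desktop/ping_ouput.txt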
The following was written (a) before the specific symptom was known and (b) before the OP revealed in a comment that their platform is OSX. The commands below show alternatives to the OP's extraction command; -c 2 has been added to limit ping to 2 attempts.
tivn's answer has identified at least one potential problem in your command: the assumption that time values always have a decimal point, which does not hold on Linux platforms; on BSD/OSX platforms, however, there are always 3 decimal places.
You can bypass this issue as well as the need to merge consecutive lines as follows (I'm using 127.0.0.1 instead of apple.com for easier testing):
Using either GNU sed (Linux) or BSD sed (BSD-like systems, including OSX):
ping -c 2 -n 127.0.0.1 |
sed -E '1d; s/^.* icmp_seq=([^ ]+).* time=([^ ]+).*$/\1'$'\t''\2/'
Alternative solutions, using awk:
ping -c 2 -n 127.0.0.1 | awk -F'[ =]' -v OFS='\t' 'NR>1 { print $6, $10 }'
If you'd rather not rely on field indices, here's an alternative (still relies on field icmp_seq to come before time, and for both to be preceded by a space):
ping -c 2 -n 127.0.0.1 | awk -F' (icmp_seq|time)=' -v OFS='\t' '
NR>1 { sub(" .+$", "", $2); sub(" .+$", "", $3)
print $2, $3 }'
You don't provide the sample input you got from ping, so it is not clear what your problem is. My guess is that the time part of the ping output may not always contain a dot (.), for example: 64 bytes from x.x.x.x: icmp_seq=1 ttl=63 time=137 ms. You may want to try the following command:
( ping -n yahoo.com \
| ack -o --flush '((?<=icmp_seq=)\d+|(?<=time=)[\d.]+)' \
| stdbuf -o0 paste - - \
) > ~/Desktop/ping_ouput.txt
Edit: after a comment from the OP, the issue may actually come from the buffered paste command. Please try adding stdbuf -o0 in front of the paste command, as shown above.
There are many tens, maybe a hundred or more, previous questions here that seem "identical" to this one, but after an extensive search I found NOTHING that even came close to working - though I did learn quite a lot - and so I decided to just RTFM and figure this out on my own.
The Problem
I wanted to search the output of a ps auxwww command to find processes of interest, and the issue was that I couldn't simply use cut to extract the exact data I wanted. ps, it turns out, tries to columnate the output, adding extra spaces or tabs that get in the way of using cut to get the correct data.
So, since I'm not a master at bash, I did a search... The answers I found all focused either on variables - a "backup strategy" from my point of view that itself didn't solve the whole problem - or on trimming only leading or trailing space, or all "whitespace" including newlines. NOPE, that won't work for cut! And neither will removing trailing newlines and so forth.
So, restated, the question is: how do we efficiently squeeze the whitespace down to a single space between other characters, without eliminating newlines?
Below, I will give my answer, but I welcome others to give theirs - who knows, maybe someone has a better answer?!
Answer:
At least MY answer - please leave your own, too! - was to do this:
ps auxwww | grep <program> | tr -s [:blank:] | cut -d ' ' -f <field_of_interest>
This worked great!
Obviously, there are many ways to adapt this to other needs.
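For instance, a hedged adaptation (my own illustration, not from the original post) that pulls just the PID column, which is field 2 of ps aux output; the bracketed pattern is a common trick to keep grep from matching its own process:
# print the PIDs of all firefox processes
ps auxwww | grep '[f]irefox' | tr -s '[:blank:]' | cut -d ' ' -f 2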
As an alternative to all of the pipes and grep with cut, you could simply use awk. The benefit of using awk with its default field separator (FS), which breaks on whitespace, is that it treats any run of whitespace between fields as a single separator.
So using awk does away with needing tr -s to "squeeze" whitespace to define fields. Further, awk gives far greater control over field matching using regular expressions, rather than having to rely on grep of a full line and cut to locate a pre-determined field number. (Though to some extent you will still have to tell awk which field of the ps output you are interested in.)
Using bash, you can also eliminate the pipe | by using process substitution to send the output of ps auxwww to awk on stdin using redirection, e.g. awk ... < <(ps auxwww) for a single tidy command line.
To get your "program" and "field_of_interest" into awk you have two options. You can initialize awk variables using the -v var=value option (there can be multiple -v options given), or you can use the BEGIN rule to initialize the variables. The only difference is that with -v you can provide a shell variable for value and no whitespace is allowed around the = sign, while within BEGIN any whitespace is ignored.
So in your case a couple of examples to get the virtual memory size for firefox processes, you could use:
awk -v prog="firefox" -v fnum="5" '
$11 ~ prog {print $fnum}
' < <(ps auxwww)
(above if you had myprog=firefox as a shell variable, you could use -v prog="$myprog" to initialize the prog variable for awk)
or using the BEGIN rule, you could do:
awk 'BEGIN {prog = "firefox"; fnum = "5"}
$11 ~ prog {print $fnum }
' < <(ps auxwww)
Each command above locates the COMMAND field from ps (field 11), checks whether it contains firefox, and if so outputs field no. 5, the virtual memory size used by each process.
Both work fine as one-liners as well, e.g.
awk -v prog="firefox" -v fnum="5" '$11 ~ prog {print $fnum}' < <(ps auxwww)
Don't get me wrong, the pipeline is perfectly fine, it will just be slow. For short commands with limited output there won't be much difference, but when the output is large, awk will provide orders of magnitude improvement over having to tr and grep and cut reading over the same records three times.
The reason is that the pipes, and the commands on each side of them, require separate processes to be spawned by the shell, so minimizing their use improves the efficiency of what your script is doing. If the data is small, as are the processes, there isn't much of a difference. However, if you are reading a 3G file three times over, that is a difference of orders of magnitude: hours versus minutes or seconds.
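If you want to gauge the difference yourself, here is a rough, hypothetical benchmark sketch (big.log and somepattern are placeholders of my own; both commands discard their output so only processing time is measured):
time grep 'somepattern' big.log | tr -s '[:blank:]' | cut -d ' ' -f 5 > /dev/null
time awk '/somepattern/ { print $5 }' big.log > /dev/null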
I had to use single quotes on CentOS Linux to get tr working as described above:
ps -o ppid= $$ | tr -d '[:space:]'
You can reduce the number of pipes using this Perl one-liner, which uses Perl regexes instead of a separate grep process. This combines grep, tr and cut in a single command, with an easy way to manipulate the output (@F is the array of fields, 0-indexed):
Examples:
# Start an example process to provide the input for `ps` in the next commands:
/Applications/Emacs.app/Contents/MacOS/Emacs-x86_64-10_14 --geometry 109x65 /tmp/foo &
# Print single space-delimited output of `ps` for all emacs processes:
ps auxwww | perl -lane 'print "#F" if $F[10] =~ /emacs/i'
# Prints:
# bar 72144 0.0 0.5 4610272 82320 s006 SN 11:15AM 0:01.31 /Applications/Emacs.app/Contents/MacOS/Emacs-x86_64-10_14 --geometry 109x65 /tmp/foo
# Print emacs PID and file name opened with emacs:
ps auxwww | perl -lane 'print join "\t", @F[1, -1] if $F[10] =~ /emacs/i'
# Prints:
# 72144 /tmp/foo
The Perl one-liner uses these command line flags:
-e : Tells Perl to look for code in-line, instead of in a file.
-n : Loop over the input one line at a time, assigning it to $_ by default.
-l : Strip the input line separator ("\n" on *NIX by default) before executing the code in-line, and append it when printing.
-a : Split $_ into array @F on whitespace or on the regex specified in the -F option.
SEE ALSO:
perldoc perlrun: how to execute the Perl interpreter: command line switches
perldoc perlre: Perl regular expressions (regexes)
I have a bash variable which has the following content:
SSH exit status 255 for i-12hfhf578568tn
i-12hdfghf578568tn is able to connect
i-13456tg is not able to connect
SSH exit status 255 for 1.2.3.4
I want to search the string starting with i- and then extract only that instance id. So, for the above input, I want to have output like below:
i-12hfhf578568tn
i-12hdfghf578568tn
i-13456tg
I am open to use grep, awk, sed.
I am trying to achieve this with the following command, but it gives me the whole line:
grep -oE 'i-.*'<<<$variable
Any help?
You can just change your grep command to:
grep -oP 'i-[^\s]*' <<<$variable
Tested on your input:
$ cat test
SSH exit status 255 for i-12hfhf578568tn
i-12hdfghf578568tn is able to connect
i-13456tg is not able to connect
SSH exit status 255 for 1.2.3.4
$ var=`cat test`
$ grep -oP 'i-[^\s]*' <<<$var
i-12hfhf578568tn
i-12hdfghf578568tn
i-13456tg
grep is exactly what you need for this task; sed would be more suitable if you had to reformat the input, and awk would be a good choice if you had to reformat a string or do some computation on fields in the rows and columns.
Explanation:
-P is to use perl regex
i-[^\s]* is a regex that matches a literal i- followed by 0 to N non-space characters. You could change the * to a + if you want to require at least 1 character after the -, or you could use the {min,max} syntax to impose a range.
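For instance, a hypothetical variant of my own that uses the {min,max} syntax to require at least 5 characters after the dash:
grep -oP 'i-[^\s]{5,}' <<<$var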
Let me know if there is something unclear.
Bonus:
Following the comment from Sundeep, you can use one of the improved versions of the regex I proposed (the first one uses PCRE and the second one a POSIX character class):
grep -oP 'i-\S*' <<<$var
or
grep -o 'i-[^[:blank:]]*' <<<$var
You could also use the following (I tested it with GNU awk):
echo "$var" | awk -v RS='[ |\n]' '/^i-/'
You can also use this code (tested on Unix):
echo $test | grep -o "i-[0-z]*"
Here,
-o # Prints only the matching part of the lines
i-[0-z]* # This regular expression matches the alphabetical and numerical characters following 'i-' (note that the 0-z range also includes a few punctuation characters).
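If you want to restrict the match strictly to letters and digits, a hedged alternative of my own (not part of the original answer) would be:
echo "$test" | grep -o 'i-[[:alnum:]]*'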
I need to resolve a host name to an IP address in a shell script. The code must work at least in Cygwin, Ubuntu and OpenWrt(busybox).
It can be assumed that each host will have only one IP address.
Example:
input
google.com
output
216.58.209.46
EDIT:
nslookup may seem like a good solution, but its output is quite unpredictable and difficult to filter. Here is the result of the command on my computer (Cygwin):
>nslookup google.com
Unauthorized answer:
Serwer: UnKnown
Address: fdc9:d7b9:6c62::1
Name: google.com
Addresses: 2a00:1450:401b:800::200e
216.58.209.78
I've no experience with OpenWRT or Busybox, but the following one-liner should work with a base installation of Cygwin or Ubuntu:
ipaddress=$(LC_ALL=C nslookup $host 2>/dev/null | sed -nr '/Name/,+1s|Address(es)?: *||p')
The above works with both the Ubuntu and Windows version of nslookup. However, it only works when the DNS server replies with one IP (v4 or v6) address; if more than one address is returned the first one will be used.
Explanation
LC_ALL=C nslookup sets the LC_ALL environment variable when running the nslookup command so that the command ignores the current system locale and prints its output in the command's default language (English).
The 2>/dev/null avoids having warnings from the Windows version of nslookup about non-authoritative servers being printed.
The sed command looks for the line containing Name and then prints the following line after stripping the phrase Addresses: when there's more than one IP (v4 or 6) address -- or Address: when only one address is returned by the name server.
The -n option means lines aren't printed unless there's a p command, while the -r option means extended regular expressions are used (GNU sed is the default for Cygwin and Ubuntu).
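A minimal usage sketch of the above (assuming a host shell variable holds the name to resolve; google.com is just an example):
host=google.com
ipaddress=$(LC_ALL=C nslookup "$host" 2>/dev/null | sed -nr '/Name/,+1s|Address(es)?: *||p')
echo "$ipaddress"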
If you want something available out-of-the-box on almost any modern UNIX, use Python:
pylookup() {
    python -c 'import socket, sys; print(socket.gethostbyname(sys.argv[1]))' "$@" 2>/dev/null
}
address=$(pylookup google.com)
With respect to special-purpose tools, dig is far easier to work with than nslookup, and its short mode emits only literal answers -- in this case, IP addresses. To take only the first address, if more than one is found:
# this is a bash-specific idiom
read -r address < <(dig +short google.com | grep -E '^[0-9.]+$')
If you need to work with POSIX sh, or broken versions of bash (such as Git Bash, built with mingw, where process substitution doesn't work), then you might instead use:
address=$(dig +short google.com | grep -E '^[0-9.]+$' | head -n 1)
dig is available for Cygwin in the bind-utils package; as BIND is the most widely used DNS server on UNIX, bind-utils (built from the same codebase) is available for almost all Unix-family operating systems as well.
Here's my variation that steals from earlier answers:
nslookup blueboard 2> /dev/null | awk '/Address/{a=$3}END{print a}'
This depends on nslookup returning matching lines that look like:
Address 1: 192.168.1.100 blueboard
...and only returns the last address.
Caveats: this doesn't handle non-matching hostnames at all.
TL;DR: Option 2 is my preferred choice for an IPv4 address. Adjust the regex to get IPv6, and/or the awk part to get both. A slight edit to option 2's suggested use is given in the EDIT below.
Well, a terribly late answer, but I think I'll share my solution here, especially because the accepted answer didn't work for me on OpenWRT (no Python in the minimal setup) and the other answer errors out with "no address found after comma".
Option 1 (gives the last address from the last entry sent by the name server):
nslookup example.com 2>/dev/null | tail -2 | tail -1 | awk '{print $3}'
Pretty simple and straightforward; it doesn't really need an explanation of the piped commands.
In my tests this always gave the IPv4 address (because the IPv4 entry was always the last line, at least in my tests). However, I read about the unexpected ordering behavior of nslookup, so I had to find a way to make sure I get IPv4 even if the order is reversed - thanks, regex.
Option 2 (makes sure you get IPv4):
nslookup example.com 2>/dev/null | sed 's/[^0-9. ]//g' | tail -n 1 | awk -F " " '{print $2}'
Explanation:
nslookup example.com 2>/dev/null - look up given host and ignore STDERR (2>/dev/null)
sed 's/[^0-9. ]//g' - keep only digits, dots, and spaces to isolate the IPv4 address (this is sed's s, i.e. substitute, command)
tail -n 1 - get last 1 line (alt, tail -1)
awk -F " " '{print $2} - Captures and prints the second part of line using " " as a field separator
EDIT: A slight modification based on a comment to make it actually more generalized:
nslookup example.com 2>/dev/null | printf "%s" "$(sed 's/[^0-9. ]//g')" | tail -n 1 | printf "%s" "$(awk -F " " '{print $1}')"
In the above edit, I'm using printf command substitution to take care of any unwanted trailing newlines.
I've seen the same question asked for Linux and Windows, but not Mac (terminal). Can anyone tell me how to get the current processor utilization in %? An example output would be 40%. Thanks.
This works on a Mac (includes the %):
ps -A -o %cpu | awk '{s+=$1} END {print s "%"}'
To break this down a bit:
ps is the process status tool. Most *nix like operating systems support it. There are a few flags we want to pass to it:
-A means all processes, not just the ones running as you.
-o lets us specify the output we want. In this case, all we want is the %cpu column of ps's output.
This will get us a list of every process's CPU usage, like
0.0
1.3
27.0
0.0
We now need to add up this list to get a final number, so we pipe ps's output to awk. awk is a pretty powerful tool for parsing and operating on text. We just simply add up the numbers, then print out the result, and add a "%" on the end.
Adding up all those CPU % can give a number > 100% (probably multiple cores).
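If you would rather have that figure on a 0-100% scale across all cores, a hedged variant of the same ps/awk approach (assuming macOS reports the core count via sysctl -n hw.ncpu):
# divide the summed per-process CPU usage by the number of cores
ps -A -o %cpu | awk -v n="$(sysctl -n hw.ncpu)" '{s+=$1} END {print s/n "%"}'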
Here's a simpler method, although it comes with some problems:
top -l 2 | grep -E "^CPU"
This gives 2 samples, the first of which is nonsense (because it calculates CPU load between samples).
Also, you need to use RegEx like (\d+\.\d*)% or some string functions to extract values, and add "user" and "sys" values to get the total.
(From How to get CPU utilisation, RAM utilisation in MAC from commandline)
Building on previous answers from @Jon R. and @Rounak D, the following line prints the sum of the user and system values, with the percent sign appended. I have tested this value and I like that it roughly tracks with the percentages shown in the macOS Activity Monitor.
top -l 2 | grep -E "^CPU" | tail -1 | awk '{ print $3 + $5"%" }'
You can then capture that value in a variable in script like this:
cpu_percent=$(top -l 2 | grep -E "^CPU" | tail -1 | awk '{ print $3 + $5"%" }')
PS: You might also be interested in the output of uptime, which shows system load.
Building upon @Jon R's answer, we can pick up the user CPU utilization through some simple pattern matching:
top -l 1 | grep -E "^CPU" | grep -Eo '[^[:space:]]+%' | head -1
And if you want to get rid of the last % symbol as well,
top -l 1 | grep -E "^CPU" | grep -Eo '[^[:space:]]+%' | head -1 | sed s/\%/\/
top -F -R -o cpu
-F Do not calculate statistics on shared libraries, also known as frameworks.
-R Do not traverse and report the memory object map for each process.
-o cpu Order by CPU usage
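If you just want a single, non-interactive snapshot sorted by CPU usage, something like this should work (a sketch of my own, borrowing the -l sampling flag used in the other answers here; the head count is arbitrary):
top -F -R -o cpu -l 1 | head -n 20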
You can do this.
printf "$(ps axo %cpu | awk '{ sum+=$1 } END { printf "%.1f\n", sum }' | tail -n 1),"
I am trying to get some numbers from tcpdump inside a shell script and print that number.
Here is my script
while true
do
{
b=`tcpdump -n -i eth1 | awk -F'[, ]' '{print $10}'`
echo $b
}
done
When I execute this script, I get this
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
Is there anything special I need to do to capture tcpdump output inside a shell script?
By default, tcpdump runs forever (or until it's interrupted by Control-C or something similar). The
b=`tcpdump ...`
construct runs until tcpdump exits... which is never ... and then puts its output into $b. If you want to capture the output from a single packet, you can use tcpdump -c1 ... (or -c5 to capture groups of 5, or similar). Alternately, you could let it run forever but capture its output one line at a time with a while read loop (although you need to use tcpdump -l to prevent excessive buffering):
tcpdump -l -n -i eth1 | awk -F'[, ]' '{print $10}' | while read b; do
echo $b
done
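Alternatively, here is a hedged sketch of the single-packet approach mentioned above, keeping the OP's loop and field 10 (2>/dev/null hides the "listening on ..." notices, which tcpdump writes to stderr):
while true; do
    b=$(tcpdump -c1 -l -n -i eth1 2>/dev/null | awk -F'[, ]' '{print $10}')
    echo "$b"
done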
I'm not entirely sure what your script is supposed to do, but I see some other issues. First, unless your version of tcpdump is much more consistent than mine, printing the 10th comma-delimited field of each packet will not get you anything meaningful. Here's some sample output from my computer:
00:05:02:ac:54:1e
1282820004:1282820094
90
73487384:73487474
1187212630:1187212720
90
90
host
2120673524
Second, what's the point of capturing the output into a variable, then printing it? Why not just run the command and let it output directly? Finally, echo $b may garble the output due to word splitting and wildcard expansion (for example, if $b happened to be "*", it would print a list of files in the current directory). For this reason, you should double-quote variables when you use them (in this case, echo "$b").
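To illustrate that last quoting point (with a hypothetical value):
b='*'
echo $b      # unquoted: the shell expands the *, printing the files in the current directory
echo "$b"    # quoted: prints a literal *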
It's been a long time since this question was asked, but the simplest way to accomplish what the script was intending would be to record a pcap containing only the packets you're interested in seeing; as an example, to write a pcap file consisting only of packets where the ACK flag is set and the acknowledgment number is between 19000 and 20000:
tcpdump -c 2500 -i any 'tcp[8:4] >= 19000 && tcp[8:4] <= 20000 && tcp[13] & 16 != 0' -U -w ./TCP_ACKs.pcap