Format string in Ruby

I have a string and a variable, and I want to use this variable inside the string so that it takes the value before the string is built:
current_process_id = 222
ruby_command = %q(ps -x | awk '{if($5~"ruby" && $1!= %d ){printf("Killing ruby process: %s \n",$1);}};')
puts ruby_command
I tried:
current_process_id = 222
ruby_command = %q(ps -x | awk '{if($5~"ruby" && $1!= %d ){printf("Killing ruby process: %s \n",$1);}};') % [current_process_id]
puts ruby_command
But this gives an error:
main.rb:2:in `%': too few arguments (ArgumentError)
Then I tried:
awk_check = %q(ps -x | awk '{if) + "(" + %q($5~"ruby" && $1!=)
print_and_kill = %q({printf("Killing ruby process: %s \n",$1);{system("kill -9 "$1)};}};')
ruby_process_command = awk_check + current_process_id.to_s + ")" + print_and_kill
puts ruby_process_command
This works fine for me, but the way I did it is not clean.
I'm looking for a cleaner way to do it.

In your ruby_command variable you have declared two format directives (%d and %s), whereas you pass only one value, [current_process_id]. You need to pass the second value for %s as well.
Change your code to:
current_process_id = 222
ruby_command = %q(ps -x | awk '{if($5~"ruby" && $1!= %d ){printf("Killing ruby process: %s \n",$1);}};') % [current_process_id,current_process_id.to_s]
puts ruby_command
Output:
ruby_command
=> "ps -x | awk '{if($5~\"ruby\" && $1!= 222 ){printf(\"Killing ruby process: 222 \\n\",$1);}};'"
If you don't want to substitute a value there and just want a literal "%s" in the result, you can escape it with %%:
ruby_command = %Q(ps -x | awk '{if($5~"ruby" && $1!= %d ){printf("Killing ruby process: %%s \n",$1);}};') % [current_process_id]
Output:
ruby_command
=> "ps -x | awk '{if($5~\"ruby\" && $1!= 222 ){printf(\"Killing ruby process: %s \n\",$1);}};'"

Related

awk - piping string to external gfortran-built executable in unix

I have the following awk script, "create_grid.awk", which pipes a command to an external exe (iri_win.exe/iri_unix.exe, built with gfortran) and reads back the response via getline (creating a grid of values with two nested for loops). Whereas everything works like a charm on Windows, things go wrong in the Unix environment.
#!/bin/awk -f
BEGIN {
    is_windows = 0;
    if (index(tolower(ENVIRON["OS"]), "windows") > 0) {
        is_windows = 1;
    }
    exit
}
{
}
END {
    mystring=""
    #myvar=""
    #CMD = "cmdline | getline myvar"
    #print "" > "iri_fortran_output_grid.txt"
    for (j = -9 ; j <= -9; j+=2){
        for (i = -2; i <= 2; i++){
            pipe= "\"" j "," i ",0.300\n2017,823,0,12.750\n20200" "\""
            #pipe= "\"49,12,0.300\n2017,823,1,12.750\n20200 \""
            #pipe= "\"49,12,0.300\n2017,823,1,12.750\n20200\""
            if (is_windows)
                cmdline = "echo -e " pipe " | ./iri_win.exe"
            else
                cmdline = "echo -e " pipe " | ./iri_unix.exe"
            print cmdline
            if ( (cmdline | getline myvar) > 0 ) {
                #close(cmdline | getline myvar)
                print "latitude " j " , longitude " i " done, TEC: " myvar " ;"
            }
            else{
                #close(cmdline | getline myvar)
                print "error in latitude " j " , longitude " i
            }
            close(cmdline)
            #close(getline)
            #cmdline | getline myvar
            #myvar = $0
            mystring = mystring myvar " "
        }
        # print one line into result file
        sub(/[ \t]+$/, "", mystring)
        sub(/^[ \t]+/, "", mystring)
        print mystring > "iri_fortran_output_grid.txt"
        print "longitude " j " done"
        mystring=""
        fflush("iri_fortran_output_grid.txt")
        fflush(stdout)
        #close("iri_fortran_output_grid.txt")
    }
}
Output of awk script in Unix OS:
loren32@nautilus:~/iri_exe_analysis$ gawk -f ./create_grid.awk
At line 107 of file iri_4_tec_al_2.for (unit = 5, file = 'stdin')
Fortran runtime error: Bad real number in item 1 of list input
error in latitude -9 , longitude 0
echo -e "-9,1,0.300
2017,823,0,12.750
20200" | ./iri_unix.exe
...
Output in Unix when executing only the external exe:
loren32@nautilus:~/iri_exe_analysis$ echo -e "-9,1,0.300\n2017,823,1,12.750\n20200" | ./iri_unix.exe
18.452
As can be seen, when I pipe my string via echo -e "stringcontent" to iri_unix.exe from bash on Unix it works, but the same call from within the awk script fails.
I suspect the quoting works differently on Unix and that some extra, unwanted string data is sent to iri_unix.exe, hence the error message "Bad real number in item 1 of list input".
I wonder what is going wrong and how to correct my awk script to make it work on Unix. On Windows the script works fine.
The workaround is to write
echo "..."
in the Unix environment and leave out the -e option. That did the trick for me (awk hands the command line to /bin/sh, and on many systems that shell's built-in echo already interprets \n and would treat -e as literal text).
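Another way to sidestep echo entirely (a minimal sketch, assuming gawk, which the output above shows is being used): let awk write the payload to the executable itself through gawk's |& coprocess operator, so no shell quoting or echo flags are involved:
#!/usr/bin/gawk -f
# Sketch (gawk only): drive the executable as a coprocess instead of via echo -e
BEGIN {
    cmd = "./iri_unix.exe"                 # executable name taken from the question
    for (j = -9; j <= -9; j += 2) {
        for (i = -2; i <= 2; i++) {
            # write the three input lines directly; awk expands \n itself
            printf("%d,%d,0.300\n2017,823,0,12.750\n20200\n", j, i) |& cmd
            close(cmd, "to")               # send EOF so the program starts reading
            if ((cmd |& getline myvar) > 0)
                print "latitude " j " , longitude " i " done, TEC: " myvar
            else
                print "error in latitude " j " , longitude " i
            close(cmd)                     # restart the coprocess for the next grid point
        }
    }
}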

Combine awk and another command to send report to user

I need some help with a Unix shell script using awk.
I have a file like below:
139341 8.61248 python_dev ntoma2 r 07/17/2017 07:27:43 gpuml@acepd1641.udp.finco.com 1
139342 8.61248 python_val ntoma2 r 07/17/2017 07:27:48 gpuml@acepd1611.udp.finco.com 1
139652 8.61248 python_dev ntoma2 r 07/17/2017 10:55:57 gpuml@acepd1671.udp.finco.com 1
The file is space separated. I need to get the 1st and 4th columns, which are the job id and the user name (ntoma2 in this case), for rows whose 6th column (a date in mm/dd/yyyy format) is older than 7 days. In other words, compare the 6th column with the current date and keep only the rows older than 7 days.
I have the following to get the job id and user name of jobs older than 7 days:
cat filename.txt | awk -v dt="$(date "--date=$(date) -7 day" +%m/%d/%Y)" -F" " '/qw/{ if($6<dt) print $4,":",$1 }' >> ./longRunningJob.$$
I also have another command, below, that gets the email id for a user name (the 4th column above):
/ccore/pbis/bin/enum-members "adsusers" | grep ^UNIX -B3 | grep <User-Name> -B2 | grep UPN | awk '{print $2}'
I need to combine the above 2 commands and send a report to every user, like below:
echo "Hello <User Name>, There is a long running job which is of job-id: <job-id> more than 7days, so please kill the job or let us know if we can help. Thank you!" | mailx -s "Long Running Job"
NOTE: if a user name is repeated, all of that user's jobs should go in one email.
I am not sure how I can combine these two and send the email to each user; can someone please help me?
Thank you in advance!!
Vasu
You can certainly do this in awk -- easier in gawk because of date support.
Just to give you an outline of how to do this, I wrote this in Ruby:
$ cat file
139341 8.61248 python_dev ntoma2 r 07/10/2017 07:27:43 gpuml@acepd1641.udp.finco.com 1
139342 8.61248 python_val ntoma2 r 07/09/2017 07:27:48 gpuml@acepd1611.udp.finco.com 1
139652 8.61248 python_dev ntoma2 r 07/17/2017 10:55:57 gpuml@acepd1671.udp.finco.com 1
$ ruby -lane 'BEGIN{ require "date"
jobs=Hash.new { |h,k| h[k]=[] }
users=Hash.new()
pn=7.0
}
t=DateTime.parse("%s %s" % [$F[5].split("/").rotate(-1).join("-"), $F[6]])
ti_days=(DateTime.now-t).to_f
ts="%d days, %d hours, %d minutes and %d seconds" % [60,60,24]
.reduce([ti_days*86400]) { |m,o| m.unshift(m.shift.divmod(o)).flatten }
users[$F[3]]=$F[7]
jobs[$F[3]] << "Job: %s has been running %s" % [$F[0], ts] if (DateTime.now-t).to_f > pn
END{
jobs.map { |id, v|
w1,w2=["is a","job"]
w1,w2=["are","jobs"] if v.length>1
s="Hello #{id}, There #{w1} long running #{w2} running more than the policy of #{pn.to_i} days. Please kill the #{w2} or let us know if we can help. Thank you!\n\t" << v.join("\n\t")
puts "#{users[id]} \n#{s}"
# s is the formated email address and body. You take it from here...
}
}
' /tmp/file
gpuml@acepd1671.udp.finco.com
Hello ntoma2, There are long running jobs running more than the policy of 7 days. Please kill the jobs or let us know if we can help. Thank you!
Job: 139341 has been running 11 days, 9 hours, 28 minutes and 44 seconds
Job: 139342 has been running 12 days, 9 hours, 28 minutes and 39 seconds
I got a solution, but there is a bug in it; here it is:
#!/bin/bash
{ qstat -u \*; /ccore/pbis/bin/enum-members "adsusers"; } | awk -v dt=$(date "--date=$(date) -7 day" +%m/%d/%Y) '
/^User obj/ {
    F2 = 1
    FS = ":"
    T1 = T2 = ""
    next
}
!F2 {
    if (NR < 3) next
    if ($5 ~ "qw" && $6 < dt) JID[$4] = $1 "," JID[$4]
    next
}
/^UPN/ {
    T1 = $2
}
/^Display/ {
    T2 = $2
}
/^Alias/ {
    gsub (/ /, _, $2)
    EM[$2] = T1
    DN[$2] = T2
}
END {
    for (j in JID) {
        print "echo -e \"Hello " DN[j] " \\n \\nJob(s) with job id(s): " JID[j] " executing more than last 7 days, hence request you to take action, else job(s) will be killed in another 1 day \\n \\n Thank you.\" | mailx -s \"Long running job for user: " DN[j] " (" j ") and Job ID(s): " JID[j] "\" " EM[j]
    }
}
' | sh
The bug in the above code is that the if condition for the date comparison (shown below) is not working as expected. I am really not sure how to compare $6 with the variable dt (both in mm/dd/yyyy format). I think I should use mktime() or something else; can someone please help?
if ($5 ~ "qw" && $6 < dt)
Thank you!!
Vasu
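The comparison fails because $6 and dt are compared as strings, and mm/dd/yyyy strings do not sort in date order. A minimal sketch of a numeric comparison using gawk's mktime() and systime() (gawk only; it keeps the qw state test from the original command and assumes the field layout shown in the sample file):
gawk '
{
    split($6, d, "/")                                # d[1]=mm, d[2]=dd, d[3]=yyyy
    ts = mktime(d[3] " " d[1] " " d[2] " 00 00 00")  # seconds since the epoch
    if ($5 ~ "qw" && ts < systime() - 7*24*3600)     # strictly older than 7 days
        print $4, ":", $1
}' filename.txt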

Using bc in awk

I am trying to use bc in an awk script. In the code below, I am trying to convert a hexadecimal number to binary and store it in a variable.
#!/bin/awk -f
{
binary_vector = $(bc <<< "ibase=16;obase=2;FF")
}
Where do I go wrong?
Not saying it's a good idea but:
$ awk 'BEGIN {
    cmd = "bc <<< \"ibase=16;obase=2;FF\""
    rslt = ((cmd | getline line) > 0 ? line : -1)
    close(cmd)
    print rslt
}'
11111111
Also see http://gnu.org/software/gawk/manual/gawk.html#Bitwise-Functions and http://gnu.org/software/gawk/manual/gawk.html#Nondecimal-Data
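To illustrate what those manual sections offer, here is a bc-free sketch of the same FF-to-binary conversion in pure gawk (strtonum() and the 0x prefix handling are gawk extensions, so treat this as an illustration rather than portable awk):
gawk 'BEGIN {
    n = strtonum("0xFF")        # parse the hex value
    b = ""
    do {                        # peel off the low bit and prepend it
        b = (n % 2) b
        n = int(n / 2)
    } while (n > 0)
    print b                     # prints 11111111
}'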
The following one-liner Awk script should do what you want:
awk -vVAR=$(read -p "Enter number: " -u 0 num; echo $num) \
'BEGIN{system("echo \"ibase=16;obase=2;"VAR"\"|bc");}'
Explanation:
-vVAR passes the variable VAR into Awk.
-vVAR=$(read -p ... ) sets the variable VAR from the shell to the user's input.
system("echo ... |bc") Uses the Awk system built in command to execute the shell commands. Notice how the quoting stops at the variable VAR and then continues just after it, thats so that Awk interprets VAR as an Awk variable and not as part of the string put into the system call.
Update - to use it in an Awk variable:
awk -vVAR=$(read -p "Enter number: " -u 0 num; echo $num) \
'BEGIN{s="echo \"ibase=16;obase=2;"VAR"\"|bc"; s | getline awk_var;\
close(s); print awk_var}'
s | getline awk_var will put the output of the command s into the Awk variable awk_var. Note that the string is built before it is sent to getline; if not (unless you parenthesize the string concatenation), Awk will try to send the pieces (string, VAR, string) to getline separately.
The close(s) closes the pipe. For bc it doesn't matter, and Awk automatically closes pipes upon exit, but if you put this into a more elaborate Awk script it is best to close the pipe explicitly. According to the Awk documentation, some commands, such as mail, will wait on the pipe to close before completing.
http://www.staff.science.uu.nl/~oostr102/docs/nawk/nawk_39.html
From the way you wrote your example, it looks like you want to convert an awk record (line) into an associative array. Here's an executable awk script that does that by running the bc command over the values in a split()-type array:
#!/usr/bin/awk -f
{
    # initialize the a array
    cnt = split($0, a, FS)
    if( convertArrayBase(10, 2, a, cnt) > -1 ) {
        # use the array here
        for(i=1; i<=cnt; i++) {
            print a[i]
        }
    }
}
# Destructively updates input array, converting numbers from ibase to obase
#
# @ibase: ibase value for bc
# @obase: obase value for bc
# @a: a split() type associative array where keys are numeric
# @cnt: size of a ( number of fields )
#
# @return: -1 if there's a getline error, else cnt
#
function convertArrayBase(ibase, obase, a, cnt, i, b, cmd) {
    cmd = sprintf("echo \"ibase=%d;obase=%d", ibase, obase)
    for(i=1; i<=cnt; i++ ) {
        cmd = cmd ";" a[i]
    }
    cmd = cmd "\" | bc"
    i = 0 # reset i
    while( (cmd | getline b) > 0 ) {
        a[++i] = b
    }
    close( cmd )
    return i==cnt ? cnt : -1
}
When used with an input of:
1 2 3
4 s 1234567
this script outputs the following:
1
10
11
100
0
100101101011010000111
The convertArrayBase function operates on split()-type arrays, so you have to initialize the input array (a here) with the full row (as shown) or with a field's subfields (not shown) before calling it. It destructively updates the array.
You could instead call bc directly with some helper files to get similar output. I didn't find that bc supports - (stdin as a file name), so
it's a little more work than I'd like.
Making a start_cmds file like this:
ibase=10;obase=2;
and a quit_cmd like:
;quit
Given an input file (called data.semi) where the data is separated by a ;, like this:
1;2;3
4;s;1234567
you can run bc like:
$ bc -q start_cmds data.semi quit_cmd
1
10
11
100
0
100101101011010000111
which is the same data that the awk script outputs, but calling bc only a single time with all of the inputs. Now, while that data isn't in an awk associative array in a script, the bc output could be fed as stdin to awk and reassembled into an array like:
bc -q start_cmds data.semi quit_cmd | awk 'FNR==NR {a[FNR]=$1; next} END { for( k in a ) print k, a[k] }' -
1 1
2 10
3 11
4 100
5 0
6 100101101011010000111
where the final dash tells awk to treat stdin as an input file and lets you add other files later for processing.

dmesg convert timestamps to human format

I have the following sample of dmesg:
throttled log output.
57458] bar 3: test 2 on bar 8 is available
[ 19.696163] bar 1403: test on bar 1405 is available
[ 19.696167] foo: [ 19.696168] bar 3: test 5 on bar 1405 is available
[ 19.696178] foo: [ 19.696179] bar 1403: test 5 on bar 1405 is available
[ 20.928730] foo: [ 20.928733] bar 1403: test on bar 1408 is available
[ 20.928742] foo: [ 20.928745] bar 3: test on bar 1408 is available
[ 24.878861] foo: [ 25.878861] foo: [ 25.878863] bar 1403: bar 802 is present
I would like to convert all the timestamps in each line to a human-readable format ("%d/%m/%Y %H:%M:%S").
Notes:
This system does not have dmesg -T, nor does it have perl installed.
I would prefer a solution w/ sed or awk, but python is also an option.
I've found a few solutions to this problem, but none quite does what I need, and I don't know how to adapt them to my needs.
awk -F"]" '{"cat /proc/uptime | cut -d \" \" -f 1" | getline st;a=substr( $1,2, length($1) - 1);print strftime("%d/%m/%Y %H:%M:%S",systime()-st+a)" "$0}'
Or
sed -n 's/\]//;s/\[//;s/\([^.]\)\.\([^ ]*\)\(.*\)/\1\n\3/p' | while read first; do read second; first=`date +"%d/%m/%Y %H:%M:%S" --date="#$(($seconds - $base + $first))"`; printf "[%s] %s\n" "$first" "$second"; done
There's also a Python script here, but it outputs some errors which I don't understand at all.
Thanks!
The following code simulates the output of dmesg -T. It's inline awk within shell and can be stored as a standalone script or shell function:
awk -v UPTIME="$( cut -d' ' -f1 /proc/uptime )" '
BEGIN {
    STARTTIME = systime() - UPTIME
}
match($0, /^\[[^\[\]]*\]/) {
    s = substr($0, 2, RLENGTH - 2) + STARTTIME;
    s = strftime("%a %b %d %H:%M:%S %Y", s);
    sub(/^\[[^\[\]]*\]/, "[" s "]", $0);
    print
}
'
It doesn't guarantee the precision that dmesg -T provides, but it brings the results a bit closer.
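As a usage sketch, here it is wrapped in a shell function as suggested (the function name dmesg_t is made up here) and switched to the %d/%m/%Y %H:%M:%S format the question asks for:
# usage: dmesg_t > dmesg_human.log
dmesg_t() {
    dmesg | awk -v UPTIME="$( cut -d' ' -f1 /proc/uptime )" '
    BEGIN {
        STARTTIME = systime() - UPTIME
    }
    match($0, /^\[[^\[\]]*\]/) {
        s = substr($0, 2, RLENGTH - 2) + STARTTIME
        sub(/^\[[^\[\]]*\]/, "[" strftime("%d/%m/%Y %H:%M:%S", s) "]")
        print
    }'
}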
This is a bit touch-and-go, but it should at least give you something to work with:
awk '
{
    # tail will be the part of the line that still requires processing
    tail = $0;
    # Read uptime from /proc/uptime and use it to calculate the system
    # start time
    "cat /proc/uptime | cut -d \" \" -f 1" | getline st;
    starttime = systime() - st;
    # while we find matches
    while((start = match(tail, /\[[^[]*\]/)) != 0) {
        # pick the timestamp from the match
        s = substr(tail, start + 1, RLENGTH - 2);
        # shorten the tail accordingly
        tail = substr(tail, start + RLENGTH);
        # format the time to our preference
        t = strftime("%d/%m/%Y %H:%M:%S", starttime + s);
        # substitute it into the original line. [] are replaced with || so
        # the match is not re-replaced in the next iteration.
        sub(/\[[^[]*\]/, "|" t "|", $0);
    }
    # When all matches have been replaced, print the line.
    print $0
}' foo.txt

Shell ps command under Ubuntu

I have a question regarding shell scripts, and I am trying to be as specific as possible. I have to write a monitoring shell script that writes to a file all the users that have been running a vi command for more than one minute. I don't really have any idea about the approach, except that I should use the ps command. I have something like this:
ps -ewo "%t %u %c %g" | grep '\< vi >'
With this I get the times and the users running a vi command. The problem is that I don't really know how to parse the result of this command. Can anyone help, please? All answers are appreciated. Thanks!
I would use awk:
ps eo user,etime,pid,args --no-heading -C vi | awk '{MIN=int(substr($2,1,2)); printf "minutes=%s pid=%d\n", MIN, $3; }'
Note that you don't have to grep for "vi"; you can use "ps -C procname" instead.
This is what I'd do:
ps fo "etime,user" --no-heading --sort 'uid,-etime' $(pgrep '\<vi\>') |
perl -ne '($min,$sec,$user) = (m/^\s+(\d\d):(\d\d)\s+(\w+)$/mo);
print "$user\t$min:$sec\n" unless ((0+$min)*60+$sec)<60'
Tack on | cut -f1 | uniq or | cut -f1 | uniq -c to get some nicer stats.
Note that, the way this is formulated, it is easy to switch the test to 59 seconds or 3min11s if you wish, by changing <60 to e.g. <191 (for 3m11s).
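Combining those two notes, a sketch of the per-user count variant (the same pipeline as above with the threshold raised to 3m11s, then counted with uniq -c):
ps fo "etime,user" --no-heading --sort 'uid,-etime' $(pgrep '\<vi\>') |
perl -ne '($min,$sec,$user) = (m/^\s+(\d\d):(\d\d)\s+(\w+)$/mo);
print "$user\t$min:$sec\n" unless ((0+$min)*60+$sec)<191' |
cut -f1 | uniq -c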
If you have Ruby (1.9+):
#!/usr/bin/env ruby
while true
  process = "ps eo user,etime,args"
  f = IO.popen(process)               # call the ps command
  f.readlines.each do |ps|
    user, elapsed, command = ps.split
    if command["vi"] && elapsed > "01:00"
      puts "User #{user} running vi for more than 1 minute: #{elapsed}"
    end
  end
  f.close
  sleep 10                            # sleep 10 seconds before monitoring again
end
#!/bin/sh
# -e           :: all processes (including other users')
# -o           :: define output format
# user         :: user name
# etimes       :: time in seconds since the process was started
# pid          :: process id
# comm         :: name of the executable
# --no-headers :: do not print column names
ps -eo user,etimes,pid,comm --no-headers |
awk '
# (...)        :: select only rows that meet the condition in ()
# $4 ~ //      :: 4th field (comm) should match the pattern in //
# (^|\/)vim?$  :: beginning of the line or "/", then "vi",
#                 nothing or "m" (to capture vim), end of the line
# $2 >= 60     :: 2nd field (etimes) >= 60 seconds
($4 ~ /(^|\/)vim?$/ && $2 >= 60){
    # convert the 2nd field (etimes) into minutes
    t = int($2 / 60);
    # pluralize "minute" if more than 1 minute
    s = (t > 1) ? "s" : "";
    # output
    printf "user %s : [%s] (pid=%d) started %d minute%s ago\n", $1, $4, $3, t, s;
}'
