Why would a Cygwin program lose its path prefix when running under gdb?

I just ran into a really strange problem with gdb and a relatively large software build in Cygwin64 on Windows 10, which I cannot reproduce with a minimal example.
So let me first provide a minimal example that works fine (and does not reproduce the error): basically, the code below (test_cygwin.cpp) just wants to create an exclusive-access file in /var/run.
Recall that /var/run is a Unix path and as such does not exist in Windows; instead it is mapped to the Windows filesystem through a directory in the Cygwin installation:
$ cygpath -w /var/run
C:\cygwin64\var\run
Here is test_cygwin.cpp:
// compile with:
// g++ test_cygwin.cpp -g -o test_cygwin.exe
#include <iostream> // cout
#include <string>   // std::string
#include <errno.h>  // errno
#include <string.h> // strerror
#include <fcntl.h>  // open
#include <unistd.h> // close

int main(void) {
  std::string filepath("/var/run/test");
  std::cout << "opts: " << O_RDWR << " | " << O_CREAT << " | " << O_EXCL << " , " << S_IRUSR << " | " << S_IWUSR << " | " << S_IRGRP << " | " << S_IWGRP << " | " << S_IROTH << " | " << S_IWOTH << std::endl;
  int my_fd = open(filepath.c_str(), O_RDWR | O_CREAT | O_EXCL, S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP | S_IROTH | S_IWOTH);
  if (my_fd < 0) {
    std::cout << "Failed to open " << filepath << ": " << strerror(errno) << std::endl;
    return 1;
  }
  std::cout << "Opened " << filepath << std::endl;
  close(my_fd);
  return 0;
}
So, basically, if I run this program and the file does not exist, the file is created; if the file exists, the program reports an error - in Cygwin's bash shell:
user@DESKTOP-COMPUTER /tmp
$ ls -la /var/run/test
ls: cannot access '/var/run/test': No such file or directory
user@DESKTOP-COMPUTER /tmp
$ ./test_cygwin.exe
opts: 2 | 512 | 2048 , 256 | 128 | 32 | 16 | 4 | 2
Opened /var/run/test
user@DESKTOP-COMPUTER /tmp
$ ls -la /var/run/test
-rw-r--r-- 1 user None 0 Oct 8 14:38 /var/run/test
user@DESKTOP-COMPUTER /tmp
$ ./test_cygwin.exe
opts: 2 | 512 | 2048 , 256 | 128 | 32 | 16 | 4 | 2
Failed to open /var/run/test: File exists
user@DESKTOP-COMPUTER /tmp
$ rm /var/run/test && ls -la /var/run/test
ls: cannot access '/var/run/test': No such file or directory
OK, so far so good. The opts are printed out so that I can re-run the open call from within a gdb session; again in Cygwin bash:
$ gdb --args ./test_cygwin.exe
GNU gdb (GDB) (Cygwin 10.2-1) 10.2
...
Reading symbols from ./test_cygwin.exe...
(gdb) b test_cygwin.cpp:13
Breakpoint 1 at 0x1004011f2: file test_cygwin.cpp, line 13.
(gdb) r
Starting program: /tmp/test_cygwin.exe
[New Thread 12044.0x467c]
[New Thread 12044.0x4f8]
[New Thread 12044.0x280]
[New Thread 12044.0x9b4]
opts: 2 | 512 | 2048 , 256 | 128 | 32 | 16 | 4 | 2
Thread 1 "test_cygwin" hit Breakpoint 1, main () at test_cygwin.cpp:13
13 int my_fd = open(filepath.c_str(), O_RDWR | O_CREAT | O_EXCL, S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP | S_IROTH | S_IWOTH);
(gdb) p (int)open("/var/run/test", 2 | 512 | 2048 , 256 | 128 | 32 | 16 | 4 | 2)
[New Thread 12044.0x42a0]
[New Thread 12044.0x2ff8]
$1 = 3
(gdb) p (int)open("/var/run/test", 2 | 512 | 2048 , 256 | 128 | 32 | 16 | 4 | 2)
$2 = -1
(gdb) c
Continuing.
Failed to open /var/run/test: File exists
[Thread 12044.0x1c14 exited with code 1]
[Thread 12044.0x9b4 exited with code 1]
[Thread 12044.0x280 exited with code 1]
[Thread 12044.0x4f8 exited with code 1]
[Thread 12044.0x42a0 exited with code 1]
[Thread 12044.0x2ff8 exited with code 1]
Program terminated with signal SIGHUP, Hangup.
The program no longer exists.
(gdb) quit
So, in the above snippet, the breakpoint halts the program right before it attempts to open the file, and the file is created manually by running p (int)open("/var/run/test", 2 | 512 | 2048 , 256 | 128 | 32 | 16 | 4 | 2) in the gdb shell.
The first invocation of this command in the gdb shell succeeds (3 is returned, as the numeric file descriptor of the newly opened file); the second fails (-1 is returned) - and when the program proceeds, it clearly fails, as the requested file exists already.
So far, so good - all is as expected.
Now, here is the problem I have in my actual build - which I cannot reproduce here:
When I run the program normally (e.g. ./myprogram.exe --arg1=1 ...), the open call succeeds
When I run the program via gdb (that is, gdb --args ./myprogram.exe --arg1=1 ...), the open call always fails, with "No such file or directory"
So, similar to the above, I placed a breakpoint right before that open call in the actual program, and tried the following (making sure I had deleted the file first with rm /var/run/test):
(gdb) p (int)open("/var/run/test", 2 | 512 | 2048 , 256 | 128 | 32 | 16 | 4 | 2)
$27 = -1
(gdb) p (int)open("/var/run/test", 2 | 512 | 2048 , 256 | 128 | 32 | 16 | 4 | 2)
$28 = -1
Yup, so it fails right from the start; however, if I now add the cygdrive prefix:
(gdb) p (int)open("/cygdrive/c/cygwin64/var/run/test", 2 | 512 | 2048 , 256 | 128 | 32 | 16 | 4 | 2)
$29 = 26
(gdb) p (int)open("/cygdrive/c/cygwin64/var/run/test", 2 | 512 | 2048 , 256 | 128 | 32 | 16 | 4 | 2)
$30 = -1
... then it succeeds, from within gdb?!?! And indeed, the Cygwin bash shell also sees this file afterwards:
$ ls -la /var/run/test
-rw-r--r-- 1 user None 0 Oct 8 14:50 /var/run/test
Recalling the error message from the first example: if I want to create a file (/var/run/test) with open, and I get a "No such file or directory" error, I interpret that as the parent directory (here /var/run) not existing, at least from the perspective of the program.
So, in this particular case, when gdb debugs this particular myprogram.exe, the program has somehow "lost" the reference to the Unix root paths (I have confirmed that creating a file in /tmp fails as well in this case); in other words, the program under gdb cannot see these paths (so, for this program, they do not exist) - yet it can still access the same locations, and successfully open/create a file, if the Cygwin installation path is prefixed.
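For what it's worth, the same trick of calling functions from the gdb prompt could be used to double-check that interpretation, i.e. whether the parent directory is visible to the debuggee at all. This is just a sketch, not output from my failing session; access() is plain POSIX, and the literal 0 stands for F_OK, so 0 means "reachable" and -1 means "not reachable":
(gdb) p (int)access("/var/run", 0)
(gdb) p (int)access("/cygdrive/c/cygwin64/var/run", 0)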
The strange thing is that /cygdrive is also a Unix-style path formulation, albeit one specific to Cygwin?! Also, interestingly, even when debugging this program, I can "see" these directories from gdb itself:
(gdb) cd /var/run
Working directory /var/run.
(gdb) pwd
Working directory /var/run.
... so gdb itself still "sees" these directories - even if the debuggee program does not?!
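In case it is relevant, here is how I would compare what the shell side sees - just a sketch of checks with standard Cygwin tools, not proof of the cause; a second, mismatched cygwin1.dll on the PATH is one classic way two processes can end up with different views of the POSIX paths, though I have not confirmed that is what is happening here:
# which cygwin1.dll the debuggee and gdb actually pull in
cygcheck ./myprogram.exe | grep -i cygwin1.dll
cygcheck /usr/bin/gdb.exe | grep -i cygwin1.dll
# what the shell's mount table maps / and /var to
mount
cygpath -w /var/run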
So, my question is: while the minimal example clearly shows that this is not a general issue with gdb in Cygwin - has anyone ever experienced a Cygwin program "losing reference" to (or "not seeing") Unix system paths when running under gdb, while seeing those paths just fine when run directly from the Cygwin bash shell?
If so, does anyone have an explanation for why this happens - and how to rectify it (that is, how to make the open succeed when the program runs under gdb, just as it does when the program runs standalone in the Cygwin bash shell)?
(Just as a clarification note on why I want to know this: the actual program I'm debugging segfaults after the open call, which I'd like to catch in gdb; however, since the open call fails under gdb, the program currently cannot even get to the point where it otherwise segfaults.)

Related

ELF go binaries default byte alignment

I empirically see that Go ELF binaries use 16-byte alignment. For example:
$ wget https://github.com/gardener/gardenctl/releases/download/v0.24.2/gardenctl-linux-amd64
$ readelf -W -s gardenctl-linux-amd64 | grep -E "FUNC" | wc -l
44746
$ readelf -W -s gardenctl-linux-amd64 | grep -E "0[ ]+[0-9]* FUNC" | wc -l
44744
so the vast majority have 0 as their least significant hex digit, i.e. they are 16-byte aligned. Is it always like that in Go binaries?
This depends on the platform. If you have a source repo checked out:
% cd go/src/cmd/link/internal
% grep "funcAlign =" */*.go
amd64/l.go: funcAlign = 32
arm/l.go: funcAlign = 4 // single-instruction alignment
arm64/l.go: funcAlign = 16
mips64/l.go: funcAlign = 8
ppc64/l.go: funcAlign = 16
riscv64/l.go: funcAlign = 8
s390x/l.go: funcAlign = 16
x86/l.go: funcAlign = 16
The alignment for amd64 may go back down to 16 in the future; it has been 32 for a while because of https://github.com/golang/go/issues/35881
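If you want to sanity-check the alignment distribution for a particular binary yourself, a rough sketch along these lines works (it assumes GNU awk for strtonum and the readelf -W -s column layout used in the question):
readelf -W -s gardenctl-linux-amd64 |
  awk '$4 == "FUNC" && $2 !~ /^0+$/ {
           n++
           addr = strtonum("0x" $2)        # symbol value column, as hex
           if (addr % 16 == 0) a16++
           if (addr % 32 == 0) a32++
       }
       END { printf "FUNC symbols: %d, 16-byte aligned: %d, 32-byte aligned: %d\n", n, a16, a32 }'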

Debugging why SPI Master is Reading Arbitrary Values

I have an SPI bus between a MAX V device and an AM335x processor.
The MAX V device has an SPI setup to repeatedly send a STD_LOGIC_VECTOR defined as "x0100".
This seems to work fine. The output on a scope is repeatedly the same value.
In Linux, I seem to get either shifted data or some random data, using spi-tools from https://github.com/cpb-/spi-tools
When these tools are used, I get the following:
# spi-config -d /dev/spidev0.0 -m 1 -s 10000000
# spi-pipe -d /dev/spidev0.0 -b 2 -n 1 < /dev/urandom | hexdump
0000000 0202
0000002
# spi-pipe -d /dev/spidev0.0 -b 2 -n 1 < /dev/urandom | hexdump
0000000 0a0a
0000002
# spi-pipe -d /dev/spidev0.0 -b 2 -n 1 < /dev/urandom | hexdump
0000000 2a2a
0000002
# spi-pipe -d /dev/spidev0.0 -b 2 -n 1 < /dev/urandom | hexdump
0000000 aaaa
0000002
# spi-pipe -d /dev/spidev0.0 -b 2 -n 1 < /dev/urandom | hexdump
0000000 aaaa
0000002
You can see how the device is configured there. On the scope, the MISO pin is clearly outputting "00000010 00000000" for every 16 clock cycles on SCLK. What is happening here? How can I repeatedly get the correct value from the device?
For clarity, here are the relevant parts of the device tree and the kernel configuration.
Kernel
CONFIG_SPI=y
CONFIG_SPI_MASTER=y
CONFIG_SPI_GPIO=y
CONFIG_SPI_BITBANG=y
CONFIG_SPI_OMAP24XX=y
CONFIG_SPI_TI_QSPI=y
CONFIG_SPI_SPIDEV=y
CONFIG_REGMAP_SPI=y
CONFIG_MTD_SPI_NOR=y
CONFIG_SPI_CADENCE_QUADSPI=y
Device Tree
&spi1 {
    /* spi1 bus is connected to the CPLD only on CS0 */
    status = "okay";
    pinctrl-names = "default";
    pinctrl-0 = <&spi1_pins>;
    ti,pindir-d0-out-d1-in;

    cpld_spidev: cpld_spidev@0 {
        status = "okay";
        compatible = "linux,spidev";
        spi-max-frequency = <1000000>;
        reg = <0>;
    };
};
Also here is a screengrab of the waveforms produced.
Really, the end goal is an app that reports the version held in the STD_LOGIC_VECTOR on the MAX V device; so 0100 is intended to be version 1.00.
Use the uboot_overlay in /boot/uEnv.txt called BB-SPIDEV0-00A0.dtbo.
If you need any more info, please ask. Oh, and there is a fellow, Dr. Molloy, who produced a book a while back.
chp08/spi/ is the location of the file you will need to test the SPI device.
The command is simply spidev_test.
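A minimal invocation could look like the sketch below (flag names as in the mainline kernel's tools/spi/spidev_test.c; check --help on your copy, since older versions differ). -H selects CPHA, i.e. mode 1 as in the spi-config call above, and the speed stays at the 1 MHz declared in the device tree node:
spidev_test -D /dev/spidev0.0 -s 1000000 -b 8 -H -v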

Can't save files using FTP on GeneXus

I am trying to save a file into a library inside an iSeries database using GxFtpPut on GeneXus 10 V3 with .NET, but when sending the file GeneXus tries to send it to a Windows directory instead of sending it to the library - which works when using the ftp command from cmd.
I've already tried changing the route it uses, to no avail, and tried to find another way of sending the file through GeneXus.
For example, when using cmd I just do this:
put C:\FILES\Filename.txt Library/Filename
and it sends the file into the library just fine;
but when doing this in GeneXus:
Call("GxFtpPut", &FileDirectory , 'Library/'+&FileName,'B' )
it does not work, and tries to find a directory with that name among the Windows files of the server.
I just want to be able to send it to the server library without issue.
IBM i has two distinct name formats depending on the file system you are trying to use. NAMEFMT 0 is the library/filename format, and is likely unknown to PC FTP clients. NAMEFMT 1 is the typical hierarchical directory path used by non-IBM i computers, and also works with IBM i if you want to put a file anywhere in the IFS (Integrated File System).
Fun fact: the native library file system is also accessible from the IFS, but to address it you need to use a format that might be a little unfamiliar: /QSYS.lib/library.lib/filename.file/membername.mbr. You may be able to drop the member name.
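For example, the put from the question would look roughly like this in that format (a hedged illustration using the question's own names; as noted, the member part may be optional):
put C:\FILES\Filename.txt /QSYS.lib/Library.lib/Filename.file/Filename.mbr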
To change name format, you can issue the SITE sub-command on your remote host like this:
QUOTE SITE NAMEFMT 0 -- This sets name format 0 (library/filename)
QUOTE SITE NAMEFMT 1 -- This sets name format 1 (directory path)
I did some testing with a plain Windows FTP client. The test file on the PC was a text file created in Notepad++. It turns out that we start out in NAMEFMT 0 unless it is changed. It looks like GeneXus only supports a limited set of commands. So here is the limited FTP script that works:
ascii
put test.txt mylib/testpf
I can now pull up testpf on the greenscreen utilities and read it. I can also read testpf in my GUI SQL client. The ASCII text has been converted properly to EBCDIC.
|TESTPF |
|--------------------------------------------------------------------------------|
| |
|// ------------------------------------ |
|// Sweep |
|// |
|// Performs the sweep logic |
|// ------------------------------------ |
|dcl-proc Sweep; |
| |
| |
| exec sql |
| update atty a |
| set ymglsb = (select ymglsb from glaty |
| where atty = a.atty) |
| where atty in (select atty from glaty where atty = a.atty); |
|// where ymglsb in (select ymglsb from glaty where atty = a.atty); |
| if %subst(sqlstate: 1: 2) < '00' or |
| %subst(sqlstate: 1: 2) > '02'; |
| exec sql get diagnostics condition 1 |
| :message = message_text; |
| SendSqlMsg('02: ' + message); |
| endif; |
| |
| exec sql |
| update atty a |
| set ymglsb = '000' |
| where not exists (select * from glaty where atty = a.atty); |
| if %subst(sqlstate: 1: 2) < '00' or |
| %subst(sqlstate: 1: 2) > '02'; |
| exec sql get diagnostics condition 1 |
| :message = message_text; |
| SendSqlMsg('03: ' + message); |
| endif; |
| |
|end-proc; |
However, if I try to transfer in binary mode, the resulting data in the file looks like this:
|TESTPF |
|--------------------------------------------------------------------------------|
|ëÏÁÁø&ÁÊÃ?Ê_ËÈÇÁËÏÁÁø% |
|?ÅÑÄÀÄ%øÊ?ÄëÏÁÁøÁÌÁÄËÉ% |
|ÍøÀ/ÈÁ/ÈÈ`/ËÁÈ`_Å%ËÂËÁ%ÁÄÈ`_Å%ËÂÃÊ?_Å%/È` |
|ÏÇÁÊÁ/ÈÈ`//ÈÈ`ÏÇÁÊÁ/ÈÈ`Ñ>ËÁ%ÁÄÈ/ÈÈ`ÃÊ?_Å%/È`ÏÇÁÊÁ/ÈÈ |
|`//ÈÈ`ÏÇÁÊÁ`_Å%ËÂÑ>ËÁ%ÁÄÈ`_Å%ËÂÃÊ?_Å%/È`ÏÇÁÊÁ/ÈÈ`//ÈÈ |
|`ÑöËÍÂËÈËÉ%ËÈ/ÈÁ?ʶËÍÂËÈËÉ%ËÈ/ÈÁ |
|ÁÌÁÄËÉ%ÅÁÈÀÑ/Å>?ËÈÑÄËÄ?>ÀÑÈÑ?>_ÁËË/ÅÁ_ÁËË/ÅÁ¬ÈÁÌÈ |
|ëÁ>ÀëÉ%(ËÅ_ÁËË/ÅÁÁ>ÀÑÃÁÌÁÄËÉ%ÍøÀ/ÈÁ/ÈÈ`/ |
|ËÁÈ`_Å%ËÂÏÇÁÊÁ>?ÈÁÌÑËÈËËÁ%ÁÄÈÃÊ?_Å%/È`ÏÇÁÊÁ/ÈÈ`// |
|ÈÈ`ÑöËÍÂËÈËÉ%ËÈ/ÈÁ?ʶËÍÂËÈËÉ%ËÈ/ÈÁ |
|ÁÌÁÄËÉ%ÅÁÈÀÑ/Å>?ËÈÑÄËÄ?>ÀÑÈÑ?>_ÁËË/ÅÁ_ÁËË/ÅÁ¬ÈÁÌÈ |
|ëÁ>ÀëÉ%(ËÅ_ÁËË/ÅÁÁ>ÀÑÃÁ>ÀøÊ?Ä |
This has not been converted, because we told the IBM i FTP server not to convert to EBCDIC since the transfer is binary.
So try ASCII mode and use the library/filename format. The target file does not need to pre-exist.
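If you want to test the whole transfer outside GeneXus first, a scripted session along these lines should do it (a sketch; host name and credentials are placeholders):
ftp -n my.ibmi.host <<'EOF'
user MYUSER MYPASS
quote site namefmt 0
ascii
put test.txt mylib/testpf
quit
EOF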

Mac OSX get USB vendor id and product id

I'm a newbie in the Mac OS X world and I have to write a script which gives me the vendor ID and product ID of a connected USB device. I have done this for Windows and Linux, but for Mac I have no idea where to start.
I have seen this post but the link with the example is not working. Do you guys have any advice about where I can start from or where I can find some examples?
In particular, which language should I use?
You tagged your question with bash, so I'll answer it as if you're asking how to do this in bash, rather than asking what language to use (which would make the question off-topic for StackOverflow).
You can parse existing data from system_profiler using built-in tools. For example, here's a dump of vendor:product pairs, with "Location ID" and manufacturer...
#!/bin/bash
shopt -s extglob
while IFS=: read key value; do
    key="${key##+( )}"
    value="${value##+( )}"
    case "$key" in
        "Product ID")
            p="${value% *}"
            ;;
        "Vendor ID")
            v="${value%% *}"
            ;;
        "Manufacturer")
            m="${value}"
            ;;
        "Location ID")
            l="${value}"
            printf "%s:%s %s (%s)\n" "$v" "$p" "$l" "$m"
            ;;
    esac
done < <( system_profiler SPUSBDataType )
This relies on the fact that Location ID is the last item listed for each USB device, which I haven't verified conclusively. (It just appears that way for me.)
If you want something that (1) is easier to read and (2) doesn't depend on bash and is therefore more portable (not an issue though; all Macs come with bash), you might want to consider doing your heavy lifting in awk instead of pure bash:
#!/bin/sh
system_profiler SPUSBDataType \
| awk '
/Product ID:/{p=$3}
/Vendor ID:/{v=$3}
/Manufacturer:/{sub(/.*: /,""); m=$0}
/Location ID:/{sub(/.*: /,""); printf("%s:%s %s (%s)\n", v, p, $0, m);}
'
Or even avoid wrapping this in shell entirely with:
#!/usr/bin/awk -f
BEGIN {
    while ("system_profiler SPUSBDataType" | getline) {
        if (/Product ID:/) {p=$3}
        if (/Vendor ID:/) {v=$3}
        if (/Manufacturer:/) {sub(/.*: /,""); m=$0}
        if (/Location ID:/) {sub(/.*: /,""); printf("%s:%s %s (%s)\n", v, p, $0, m)}
    }
}
Note that you can also get output from system_profiler in XML format:
$ system_profiler -xml SPUSBDataType
You'll need an XML parser to handle that output, though. And you'll find that it's a lot of work to parse XML in native bash.
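That said, if you do go the XML route, one sketch that avoids hand-parsing in bash is to lean on xmllint (which ships with macOS). This assumes the plist really uses vendor_id / product_id string keys, which you should verify on your system:
system_profiler -xml SPUSBDataType \
  | xmllint --xpath '//key[text()="vendor_id" or text()="product_id"]/following-sibling::string[1]/text()' -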
Depending on what you want to do with the information, you can just look it up in System Information.
Click the Apple menu at top left of screen, About this Mac, More Info, System Report and select Hardware at top left, then USB.
You could maybe write Applescript to do that, but if you are going to go on and interact with the device in some way, this may not be the best approach.
You can use the system_profiler command, like this:
system_profiler -detailLevel full
and parse the output from that. Or you can add the -xml option to the system_profiler command and parse the XML pretty easily with awk/grep or the XML module in Perl.
Example extract:
| | | | +-o FaceTime HD Camera (Built-in)#0 <class IOUSBInterface, id 0x1000002b2, registered, matched, active, busy 0 (26 ms), retain 7>
| | | | | {
| | | | | "IOCFPlugInTypes" = {"2d9786c6-9ef3-11d4-ad51-000a27052861"="IOUSBFamily.kext/Contents/PlugIns/IOUSBLib.bundle"}
| | | | | "bcdDevice" = 0x755
| | | | | "IOUserClientClass" = "IOUSBInterfaceUserClientV3"
| | | | | "idProduct" = 0x850b
| | | | | "bConfigurationValue" = 0x1
| | | | | "bInterfaceSubClass" = 0x1
| | | | | "locationID" = 0xfffffffffa200000
| | | | | "USB Interface Name" = "FaceTime HD Camera (Built-in)"
| | | | | "idVendor" = 0x5ac
Regarding the path to the USB device, I have no idea how you would do that simply on a Mac. I might be tempted to run:
find /dev -type b -o -type c
before inserting the USB device, and saving the output. Then have your user insert the device and run the same command again to see what device special files have been added as a result of plugging in your device. Maybe crude, maybe effective - just an idea.
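For instance, a crude sketch of that diff idea (comm is POSIX; the temporary file name is arbitrary):
find /dev -type b -o -type c | sort > /tmp/dev.before
# ... plug the USB device in, then show only the newly appeared device nodes:
find /dev -type b -o -type c | sort | comm -13 /tmp/dev.before -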

Nested getline in AWK script

Please let me know if we can use nested getline within AWK scripts like:
while ( ("tail -f log" |& getline var0) > 0) {
while ( ("ls" | getline ) > 0) {
}
close("ls")
while ( ("date" | getline ) > 0) {
}
close("date")
}
close("tail -f log")
To what depth can we nest getline calls, and will there be any loss of output data at any level of the nesting? What should we make sure of when implementing this style?
UPDATE:
Requirement: Provide real-time statistical data and errors by probing the QA box and the webserver / services logs and system status. The report would be generated in the following format:
Local Date And Time | Category| Component | Condition
Assumption: an AWK script would execute faster than a shell script, with the added advantage of its built-in parsing and other functionality.
Implementation: The main command loop is command0="tail -f -n 0 -s 5 ...........". This command starts an infinite loop extracting the data appended to the service / webserver logs of the QA box. Note the -f, -s and -n options, which make it dump all data appended to the logs, sleep for 5 seconds after each iteration, and start without printing any existing content of the logs.
After each iteration, capture and check the system time, and execute the various OS resource commands at 10-second intervals (5 seconds of sleep between iterations and 4 seconds after processing the tail output - assuming that processing the tail output takes roughly 1 second, hence 10 seconds in all).
The various commands I have used for extracting OS resources are:
I. command1="vmstat | nl | tr -s '\\t '"
II. command2="sar -W 0"
III. command3="top -b -n 1 | nl | tr -s '\\t '"
IV. command4="ls -1 /tmp | grep EXIT"
Search for the respective command in the script and go through its while loop to see how that command's output is processed. Note that I have used the 'nl' command for development / coding ease.
Ultimately, the presence of a /tmp/EXIT file on the box will make the script exit, after removing that file from the box.
Below is my script - I have added comments as much as possible for self explanatory:
# Usage - awk -f script.awk
BEGIN {
    command0="tail -f -n 0 -s 5 /x/web/webserver/*/logs/error_log /x/web/webserver/service/*/logs/log"
    command1="vmstat | nl | tr -s '\\t '"
    command2="sar -W 0"
    command3="top -b -n 1 | nl | tr -s '\\t '"
    command4="ls -1 /tmp | grep EXIT"
    format = "%a %b %e %H:%M:%S %Z %Y"
    split("", details)
    split("", fields)
    split("", data)
    split("", values)
    start_time=0

    printf "\n>%s:\n\n", command0   # dummy print for debugging the command being executed
    while ( (command0 |& getline var0) > 0) {   # get the command output
        if (start_time == 0)   # block to reset the start_time variable
        {
            start_time = systime() + 4
        }
        if (var0 ~ /==>.*<==/) {   # block to extract the file name from the tail output - printed in '==>FileName<==' format
            gsub(/[=><]/, "", var0)
            len = split(var0, name, "/")
            if(len == 7) {file = name[5]} else {file = name[6]}
        }
        if (len == 7 && var0 ~ /[Ee]rror|[Ee]xception|ORA|[Ff]atal/) {   # extract the log error statements
            print strftime(format,systime()) " | Error Log | " file " | Error :" var0
        }
        if(systime() >= start_time)   # check if the current system time is greater than start_time as computed above
        {
            start_time = 0   # reset the start_time variable and now execute the system resource commands
            printf "\n>%s:\n\n", command1
            while ( (command1 |& getline) > 0) {   # process output of first command
                if($1 <= 1)
                    continue   # not needed for processing, skip this one
                if($1 == 2)   # capture the field names and skip to next line
                {
                    for (i = 1; i <= NF; i++){fields[$i] = i;}
                    continue
                }
                if ($1 == 3)   # store the command data output in the data array
                    split($0, data);
                print strftime(format,systime()) " | System Resource | System | Time spent running non-kernel code :" data[fields["us"]]
                print strftime(format,systime()) " | System Resource | System | Time spent running kernel code :" data[fields["sy"]]
                print strftime(format,systime()) " | System Resource | System | Amount of memory swapped in from disk :" data[fields["si"]]
                print strftime(format,systime()) " | System Resource | System | Amount of memory swapped to disk :" data[fields["so"]]
            }
            close(command1)

            printf "\n>%s:\n\n", command2   # start processing second command
            while ( (command2 |& getline) > 0) {
                if ($4 ~ /[0-9]+[\.][0-9]+/)   # check whether the 4th positional value is of "int.intint" format
                {
                    if( $4 > 0.0)   # dummy check for now, to print if pages are being swapped
                        print strftime(format,systime()) " | System Resource | Disk | Page rate is > 0.0 reads/second: " $4
                }
            }
            close(command2)

            printf "\n>%s:\n\n", command3   # start processing command number 3
            while ( (command3 |& getline ) > 0) {
                if($1 == 1 && $0 ~ /load average:/)   # get the load average from the output if this is the first line
                {
                    split($0, arr, ",")
                    print strftime(format,systime())" | System Resource | System |" arr[4]
                }
                if($1 > 7 && $1 <= 12)   # print the top 5 processes that are consuming most of the CPU time
                {
                    f=split($0, arr, " ")
                    if(f == 13)
                        print strftime(format,systime())" | System Resource | System | CPU% "arr[10]" Process No: "arr[1] - 7" Name: "arr[13]
                }
            }
            close(command3)

            printf "\n>%s:\n\n", command4   # process command number 4 to check for the presence of the exit file
            while ( (command4 |& getline var4) > 0) {
                system("rm -rf /tmp/EXIT")
                exit 0   # if the file is there, remove it and exit this script
            }
            close(command4)
        }
    }
    close(command0)
}
Output -:
>tail -f -n 0 -s 5 /x/web/webserver/*/logs/error_log /x/web/webserver/service/*/logs/log:
>vmstat | nl | tr -s '\t ':
Sun Dec 16 23:05:12 PST 2012 | System Resource | System | Time spent running non-kernel code :9
Sun Dec 16 23:05:12 PST 2012 | System Resource | System | Time spent running kernel code :9
Sun Dec 16 23:05:12 PST 2012 | System Resource | System | Amount of memory swapped in from disk :0
Sun Dec 16 23:05:12 PST 2012 | System Resource | System | Amount of memory swapped to disk :2
>sar -W 0:
Sun Dec 16 23:05:12 PST 2012 | System Resource | Disk | Page rate is > 0.0 reads/second: 3.89
>top -b -n 1 | nl | tr -s '\t ':
Sun Dec 16 23:05:13 PST 2012 | System Resource | System | load average: 3.63
Sun Dec 16 23:05:13 PST 2012 | System Resource | System | CPU% 12.0 Process No: 1 Name: occworker
Sun Dec 16 23:05:13 PST 2012 | System Resource | System | CPU% 10.3 Process No: 2 Name: occworker
Sun Dec 16 23:05:13 PST 2012 | System Resource | System | CPU% 6.9 Process No: 3 Name: caldaemon
Sun Dec 16 23:05:13 PST 2012 | System Resource | System | CPU% 6.9 Process No: 4 Name: occmux
Sun Dec 16 23:05:13 PST 2012 | System Resource | System | CPU% 6.9 Process No: 5 Name: top
>ls -1 /tmp | grep EXIT:
This is your second post that I can recall about using getline this way. I mentioned last time that it was the wrong approach but it looks like you didn't believe me so let me try one more time.
Your question of "how do I use awk to execute commands with getline to read their output?" is like asking "how do I use a drill to cut glass?". You could get an answer telling you to tape over the part of the glass where you'll be drilling to avoid fracturing it and that WOULD answer your question but the more useful answer would probably be - don't do that, use a glass cutter.
Using awk as a shell from which to call commands is 100% the wrong approach. Simply use the right tool for the right job. If you need to parse a text file, use awk. If you need to manipulate files or processes or invoke commands, use shell (or your OS equivalent).
Finally, please read http://awk.freeshell.org/AllAboutGetline and don't even think about using getline until you fully understand all the caveats.
EDIT: here's a shell script to do what your posted awk script does:
tail -f log |
while IFS= read -r var0; do
    ls
    date
done
Look simpler? Not saying it makes sense to do that, but if you did want to do it, THAT's the way to implement it, not in awk.
EDIT: here's how to write the first part of your awk script in shell (bash in this case). I ran out of enthusiasm for translating the rest of it for you, but I think this shows you how to do the rest yourself:
format = "%a %b %e %H:%M:%S %Z %Y"
start_time=0
tail -f -n 0 -s 5 /x/web/webserver/*/logs/error_log /x/web/webserver/service/*/logs/log |
while IFS= read -r line; do
systime=$(date +"%s")
#block to reset the start_time variable
if ((start_time == 0)); then
start_time=(( systime + 4 ))
fi
#block to extract the file name from the tail output - outputted in '==>FileName<==' format
case $var0 in
"==>"*"<==" )
path="${var0%% <==}"
path="${path##==> }"
name=( ${path//\// } )
len="${#name[#]}"
if ((len == 7)); then
file=name[4]
else
file=name[5]
fi
;;
esac
if ((len == 7)); then
case $var0 in
[Ee]rror|[Ee]xception|ORA|[Ff]atal ) #extract the logs error statements
printf "%s | Error Log | %s | Error :%s\n" "$(date +"$format")" "$file" "$var0"
;;
esac
fi
#check if curernt system time is greater than start_time as computed above
if (( systime >= start_time )); then
start_time=0 #reset the start_time variable and now execute the system resource command
....
Note that this would execute slightly faster than your awk script but that absolutely does not matter at all since your tail is taking 5 second breaks between iterations.
Also note that all I'm doing above is translating your awk script into shell, it doesn't necessarily mean it'd be the best way to write this tool from scratch.
