How can I get DEBUG messages in the HAWQ log? - hawq

Are there any GUCs or commands with which I can get debug messages into the HAWQ log? Currently I can only see ERROR or FATAL messages, but no DEBUG messages. How can I print these DEBUG messages to the log file?

You can set the log_min_messages level in postgresql.conf in the HAWQ master data directory. The log level can be one of the following values, in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
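For example, to make the change persistent you would put a line like this in postgresql.conf on the master (debug3 is an arbitrary choice; pick the detail level you need):
log_min_messages = debug3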

You need to restart the cluster after changing postgresql.conf. If you only want to log the debug info within a single session, however, you can set the log_min_messages GUC in that psql session instead.
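A minimal session-level sketch (DEBUG3 is an arbitrary choice; it affects only the current connection):
psql
SET log_min_messages = DEBUG3;
-- run your query here; its DEBUG output now lands in the log file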

Different components of Apache HAWQ support different levels of debug messages.
The overall supported levels are as below. You may refer to https://github.com/apache/incubator-hawq/blob/master/src/include/utils/elog.h for details.
/* Error level codes */
Level      Value
----------------
DEBUG5      10
DEBUG4      11
DEBUG3      12
DEBUG2      13
DEBUG1      14
LOG         15
COMMERROR   16
INFO        17
NOTICE      18
WARNING     19
ERROR       20
FATAL       21
PANIC       22
To get the DEBUG messages you want, first check which debug levels the component you care about supports. Then, before running your query, use one of the settings below (a concrete sketch follows the list):
either the persistent, cluster-wide GUC ("hawq config -c log_min_messages -v DEBUG_LEVEL" and then "hawq restart cluster -a"),
or session-level debugging ("set log_min_messages = DEBUG_LEVEL").
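For example, the persistent variant with DEBUG_LEVEL filled in (DEBUG3 is an arbitrary choice):
hawq config -c log_min_messages -v DEBUG3
hawq restart cluster -a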
If you still don't find enough log information even at the highest debug level, you can add logging statements to the Apache HAWQ source code yourself.

The DEBUG you refer to may have two meanings. One is the DEBUG log level in the HAWQ code, which ztao1987 has answered; the other is when you debug using gdb/lldb and wonder where the output of your print function goes.
The answer: it is in the master/segment log too, since stdout has been redirected to the log file by HAWQ. For example, when you want to print a TupleTableSlot in lldb, just type "expr print_slot(yourslot)" and tail -f your.log; the slot info will be printed on the screen.
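A rough sketch of that flow (the PID and the log path are placeholders; the actual pg_log location depends on your data directory layout):
lldb -p 12345                      # attach to the backend process you are debugging
(lldb) expr print_slot(yourslot)   # prints via stdout, which HAWQ redirects to the log
# in another shell:
tail -f /data/hawq/master/pg_log/hawq-2017-01-05_000000.csv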

Related

Save output of NetworkManager monitor command to text file

How can I save the output of the NetworkManager command that listens for and prints the current activity?
The command is nmcli monitor (https://developer.gnome.org/NetworkManager/stable/nmcli.html).
Logging Messages
I will show how to modify the logging levels used by NetworkManager.
NetworkManager supports changing the logging levels on the fly and allows fine control over what is logged.
First, check the current configuration with the command below:
root@hostname ~: nmcli general logging
As a result you will be presented with the current configuration:
LEVEL  DOMAINS
INFO   PLATFORM,RFKILL,ETHER,WIFI,BT,MB,DHCP4,DHCP6,PPP,IP4,IP6,AUTOIP4,DNS,VPN,SHARING,SUPPLICANT,AGENTS,SETTINGS,SUSPEND,CORE,DEVICE,OLPC,INFINIBAND,FIREWALL,ADSL,BOND,VLAN,BRIDGE,TEAM,CONCHECK,DCB,DISPATCH,AUDIT,SYSTEMD
It is possible to change the level either globally or for each domain separately. The command to achieve this is:
nmcli general logging [level <level>] [domains <domains>]
The <level> is the desired log level; here are some examples:
ERR: will log only critical errors
WARN: will log warning messages
INFO: will log various informational messages
DEBUG: enables verbose logging for debugging purposes
The <domains> value is the category of messages that shall be logged with the given severity: WIFI will include only WiFi related messages, IP4 will include only IPv4 related messages, and so on. I can't cover every setting here, but this is how it works in general; a concrete example follows.
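For example, to turn on verbose logging for just the Wi-Fi domain (the level and domain here are arbitrary choices):
nmcli general logging level DEBUG domains WIFI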
Other things you may want to try:
Systemd:
journalctl -u NetworkManager > tomyfile.txt
Use debug mode in general:
sudo /usr/lib/NetworkManager/debug-helper.py --nm debug > tomyfile.txt
I was able to log the output of nmcli monitor to a file using:
nmcli monitor >> log.txt

libvirt image based provisioning using logical volumes

Are there known issues with image-based provisioning using logical volumes in libvirt? I am getting this error while trying to do so:
Unable to save
Failed to create a compute kvm2 (Libvirt) instance test3.xxx.local: Call to virNetworkCreateXML failed:
internal error: Child process (/usr/sbin/lvcreate --name test3.xxx.local-disk1 -L 1K --type snapshot --virtualsize 10485760K -s /vm-images-pool/images-vol/template_minimal) unexpected exit status 3:
2017-01-05 00:42:08.133+0000: 12330: debug : virFileClose:102 : Closed fd 29
2017-01-05 00:42:08.133+0000: 12330: debug : virFileClose:102 : Closed fd 31
2017-01-05 00:42:08.133+0000: 12330: debug : virFileClose:102 : Closed fd 27
Volume group name expected (no slash) Run `lvcreate --help' for more information
This link from Red Hat flags it as a known issue:
https://access.redhat.com/solutions/1995053
That doc is dated October 20, 2015. Not sure if anything changed after that to support LVs.
I tried to satisfy the requirement in that doc by creating a pool based on dir like this:
Setup:
Storage pool vm-images-pool-dir of type dir
Storage pool vm-images-pool of type logical
template_minimal is the image template.
[root@kvm2 libvirt]# virsh vol-list vm-images-pool-dir
Name               Path
------------------------------------------------------------
template_minimal   /vm-images-pool/images-vol/template_minimal
vm-images-pool storage pool is of type VG with one volume:
images-vol vm-images-pool -wi-ao---- 249.00g
images-vol is mounted under /vm-images-pool/images-vol/
Any insight is appreciated.
Thanks,
TG
=======================================
more details.
Daniel, Thanks. I am a bit confused. I couldn't put the actual commands earlier since I had cleaned them up. I recreated the setup. Here are the commands I used:
virsh pool-define-as vm-images-pool logical --source-dev /dev/mapper/mpathd
virsh pool-build vm-images-pool
virsh pool-start vm-images-pool
virsh vol-create-as vm-images-pool images-vol --capacity 249G
virsh pool-define-as vm-images-pool-dir dir - - - - /vm-images-pool/images-vol/
virsh pool-build vm-images-pool-dir
virsh pool-start vm-images-pool-dir
[root@kvm2 ~]# virsh vol-list vm-images-pool-dir
Name               Path
------------------------------------------------------------
lost+found         /vm-images-pool/images-vol/lost+found
template_minimal   /vm-images-pool/images-vol/template_minimal
=======================================
/vm-images-pool/images-vol/template_minimal is the path used for the template image
==================================
More tests: I mounted the logical volume at a mount point matching the directory-based storage pool:
[root@kvm2 ~]# df -h /vm-images-pool-dir/images-vol
Filesystem                                Size  Used Avail Use% Mounted on
/dev/mapper/vm--images--pool-images--vol  245G  1.2G  232G   1% /vm-images-pool-dir/images-vol
[root@kvm2 ~]# virsh vol-list vm-images-pool-dir
Name               Path
------------------------------------------------------------
lost+found         /vm-images-pool-dir/images-vol/lost+found
template_minimal   /vm-images-pool-dir/images-vol/template_minimal
[root@kvm2 ~]#
I used /vm-images-pool-dir/images-vol/template_minimal as the template path. Same result:
Unable to save
Failed to create a compute kvm2 (Libvirt) instance test3.xxx.local: Call to virNetworkCreateXML failed:
internal error: Child process (/usr/sbin/lvcreate --name test3.xxx.local-disk1 -L 1K --type snapshot --virtualsize 10485760K -s /vm-images-pool-dir/images-vol/template_minimal) unexpected exit status 3:
2017-01-05 16:45:10.694+0000: 40712: debug : virFileClose:102 : Closed fd 27
2017-01-05 16:45:10.694+0000: 40712: debug : virFileClose:102 : Closed fd 29
2017-01-05 16:45:10.694+0000: 40712: debug : virFileClose:102 : Closed fd 24
Volume group name expected (no slash) Run `lvcreate --help' for more information.
The source of the image is "/vm-images-pool-dir/images-vol/template_minimal" and the guest's target backend is a 10G LV on another storage pool called "virtual-machines".
I don't understand what the lvcreate command is trying to do; shouldn't it at least use "virtual-machines" as the target VG? The tool I am using is Satellite 6.2. I am thinking it's something silly that I am overlooking. Not sure where :)
Thanks
TG
Based on the paths in that command, it seems you wanted to create a new file-based volume in /vm-images-pool/images-vol/, i.e. your "vm-images-pool-dir" pool. The fact that you are seeing an error from lvcreate, though, suggests that you mistakenly specified "vm-images-pool" to libvirt as the pool to use, causing it to try to create a logical volume instead. You don't show the actual command / API you are running, but check that you've given the right pool name to it.
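One quick way to sanity-check which pool is which (standard virsh subcommands; the pool names are the ones from the setup above):
virsh pool-dumpxml vm-images-pool-dir | head -1   # should print <pool type='dir'>
virsh pool-dumpxml vm-images-pool | head -1       # should print <pool type='logical'>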
I know the question was asked long ago, but I just hit the same problem and found the answer. I couldn't find the exact virsh command you are using that leads to this error, but in my case I used the following XML file with virsh vol-create libvirtVG logical.xml:
<volume>
<name>vol02</name>
<capacity unit='KiB'>2097152</capacity>
<allocation unit='KiB'>0</allocation>
<backingStore>
<path>/dev/libvirtVG/sles15sp1</path>
</backingStore>
</volume>
To get rid of the error I had to set the allocation to the value of the capacity. You can also see that virt-manager does this for you automatically:
https://github.com/virt-manager/virt-manager/blob/master/virtinst/storage.py#L646
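Concretely, the fixed XML from above would look like this (only the allocation line changes):
<volume>
<name>vol02</name>
<capacity unit='KiB'>2097152</capacity>
<allocation unit='KiB'>2097152</allocation>
<backingStore>
<path>/dev/libvirtVG/sles15sp1</path>
</backingStore>
</volume>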
The equivalent using the virsh vol-create-as command would be:
virsh vol-create-as libvirtVG vol02 2048MiB --allocation 2048MiB \
--backing-vol /dev/libvirtVG/sles15sp1

dynamic debug statements of the kernel: in which file?

I have enabled the CONFIG_DYNAMIC_DEBUG option in the kernel, after which we get a control file in the debug/dynamic_debug directory.
After we enable some debug statements in the control file, where will these log statements be printed, i.e. in which log file?
You can check the kernel log level with cat /proc/sys/kernel/printk. The default is 4. The log levels are defined here: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/linux/kern_levels.h?id=refs/tags/v4.8-rc8#n7. As a test you can set it to the highest level to make sure that everything is logged: echo "7" > /proc/sys/kernel/printk.
You can also run cat /proc/kmsg while the dynamic debug statements are firing. /proc/kmsg holds kernel messages until they are picked up by dmesg or something else.
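Putting it together, a sketch (the module name is a placeholder; dynamic debug output goes through printk like any other kernel message):
echo 'module mymodule +p' > /sys/kernel/debug/dynamic_debug/control   # enable pr_debug() calls in one module
echo 7 > /proc/sys/kernel/printk                                      # stop filtering debug-level messages
dmesg -w                                                              # follow the kernel log live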

Stop squashing syslog debug messages on OS X

I'm using syslog to capture logging from an application. I've configured syslog to write my applications logs to an application-specific file in asl.conf:
# Redirect foo to /var/log/foo.log
? [= Sender foo] file /var/log/foo.log
This works, but repeated messages are culled. For example:
Jun 21 17:22:03 hostname.domain foo[346] <Debug>: This is a message!
Jun 21 17:22:03 --- last message repeated 3 times ---
How can I disable the squashing of repeated messages?
If you restart the syslogd daemon with the -dup_delay option set to 0, this will stop happening; the squashing exists to prevent the log filling up with redundant messages.
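A sketch of how that restart might look (assumes an OS X release where the syslogd launchd job is editable; newer releases with System Integrity Protection may not allow this):
sudo launchctl unload /System/Library/LaunchDaemons/com.apple.syslogd.plist
# add "-dup_delay" and "0" to the ProgramArguments array in that plist, then:
sudo launchctl load /System/Library/LaunchDaemons/com.apple.syslogd.plist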
More details on syslogd are described here.

How can I get the value of "%d" variables in dynamic_debug info of Linux kernel?

I enabled CONFIG_DYNAMIC_DEBUG=y in my custom-built Linux kernel and, following the dynamic_debug documentation shipped with the kernel source code, ran the following command to enable output of the debug information in the bluetooth subsystem:
echo -n 'file net/bluetooth/bnep/core.c line 722 +p' > /sys/kernel/debug/dynamic_debug/control
which means the debug statement at line 722 of the file net/bluetooth/bnep/core.c will be logged.
After the bnep.ko module is loaded, I checked the output of /sys/kernel/debug/dynamic_debug/control, and the debug statement is enabled there.
But the entries mostly look like:
> net/bluetooth/bnep/core.c:422 [bnep]bnep_tx_frame - "skb %p dev %p type %d\012"
I really want to know the actual values behind %p and %d, but I don't know how to get them.
Thank you very much!
You have enabled that debug statement; that is what reading /sys/kernel/debug/dynamic_debug/control tells you.
From now on, that debug message will be sent to the normal kernel log, which you can view with dmesg and/or with your syslog daemon (which will normally log to /var/log/messages, /var/log/everything/, or some similar path).
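So the expanded message, with real values in place of %p and %d, shows up in the kernel log; for example (the grep pattern is just the function name from the control entry above):
dmesg | grep bnep_tx_frame   # past messages
dmesg -w | grep bnep         # follow new ones live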
