I have a perf.data file generated on another system that I am trying to read on my own machine using perf report -i [file]. Doing this generates the following error:
file uses a more recent and unsupported ABI (bytes extra)
How can I work around this issue and open the profiling data file?
For my work purposes, I have to build a plugin for Zabbix Agent2 that has to run on Windows systems. For simplicity I chose to go with a loadable plugin, but more on that later.
I followed the guide at https://www.zabbix.com/documentation/guidelines/en/plugins/loadable_plugins to build the example myip plugin on my Windows laptop, and after linking it through an additional config file in the C:\Program Files\Zabbix Agent 2\zabbix_agent2.d\plugins.d directory, it works.
It works meaning that I can query the new item both with:
> zabbix_get -s localhost -k myip
> zabbix_agent2 -c zabbix_agent2.conf -t myip
The only issue is that the zabbix_agent2 process CPU usage skyrockets once I link my loadable plugin and restart the agent.
I am talking about a usage of about 17-20% on my 8-core/1.99 GHz i7 laptop. That is just insane.
If I unlink the loadable plugin from the agent2, the agent CPU usage comes back to normal (less than 1%).
After looking at the agent log file (zabbix_agent2.log), it is full of lines like:
2022/11/10 09:22:16.817154 failed to read response for plugin Myip, failed to read type header, The handle is invalid.
When I run zabbix_agent2 -v -c zabbix_agent2.conf -t myip with the -v (verbose) option, I see the same error in the output multiple times before the actual value of the metric.
I suspect the high CPU usage is caused by this error happening and being logged too frequently.
After reading the source code of plugin-support at https://git.zabbix.com/projects/AP/repos/plugin-support/browse/plugin/comms/connection.go (line 42), I identified the line of code producing the error.
As there are builtin plugins for the agent2 too, I tried to recompile the agent2 on Windows, but I got an even worse error before I had even added my plugin.
I actually wrote a post on that as well: https://www.zabbix.com/forum/zabbix-troubleshooting-and-problems/454191-error-during-zabbix-agent2-compilation-from-sources ("Error during Windows Zabbix Agent2 compilation from sources")
Resources:
Go: dl.google.com/go/go1.19.3.windows-amd64.msi
Zabbix Agent2: https://cdn.zabbix.com/zabbix/binaries/stable/6.2/6.2.4/zabbix_agent2-6.2.4-windows-amd64-openssl.msi
I built the same plugin for the agent2 on Linux, both as a loadable and as a builtin plugin, and it works flawlessly, so I think it really is a Windows issue.
When copying a large file (say 8GB) the destination file shows as 8GB immediately, before the file has finished copying.
This is causing problems with a third party software that is polling the file waiting for the full size to be copied.
Is there a way to change this behavior so the file system reports the file size incrementally as it is being copied?
Additional info:
The Cygwin "cp" command seems to solve the problem, but I need a native Windows solution.
This behavior can be seen both when using the cmd prompt "copy" command and when using File Explorer to copy files.
Windows 10 Version 1909 (OS Build 18363.535)
I have tried to take a pprof dump of a part of our server code that we are trying to optimize. I'm not using net/http/pprof, rather relying on runtime/pprof. My whole setup works correctly and I'm able to use the pprof dump on the server machine.
However, when I scp the pprof dump to my local machine (in order to use the web command, since I don't want to install Graphviz on my servers), pprof starts showing me this warning:
Local symbolization failed for contacts: open /home/deploy/amigo/bin/contacts: no such file or directory
Some binary filenames not available. Symbolization may be incomplete.
Try setting PPROF_BINARY_PATH to the search path for local binaries.
File: contacts
Type: cpu
Time: May 8, 2019 at 3:27pm (IST)
Duration: 1.06s, Total samples = 1.07s (101.25%)
Entering interactive mode (type "help" for commands, "o" for options)
I copied the original binary from the server and tried setting PPROF_BINARY_PATH, but it still doesn't help. What am I doing wrong here?
My goal is to port this driver to the current Linux kernel.
Things I have done so far:
1) Downloaded the source code of the current kernel version.
2) Downloaded dev_parallel.c, Makefile, and Kconfig to rework the code.
3) Using the "make" command, I was able to compile the driver with no errors.
4) Using the "make modules" command, I was able to generate a .o file.
5) Using the "make modules_install" command, I was able to get the .ko file.
6) Using the "modprobe" command, I was able to successfully load the module without any kernel panics.
But I see that there is a DTS file for this driver, located here. I know that DTS files are compiled to DTB files, which the kernel reads at boot time so that it can automatically load the module.
But is it necessary to have this DTS file, or will the modprobe command alone do the job for me?
The driver which I am talking about is for an Electronic paper display (EPD).
So if I connect the EPD and then do modprobe to load the driver, will it work, or do I need the DTS file to make it work correctly?
It is not strictly necessary for a driver to use a DTS file, but for things like defining pins and setting configuration, the driver should get its parameters from the DTS file, so that users do not have to modify the driver and recompile it.
It seems that your example doesn't get any parameters from the DTS file; on the other hand, it hardcodes some pin definitions, so you need to take care of those.
If you want to force it to read parameters from the DTS file, you have to rewrite the driver. You can use this for the driver and this for GPIO. Then you must include the new driver in your current DTS file and recompile it.
For the driver compilation, you can create a kernel module. You can use this tutorial for the basics.
I am new to GWAS analysis and I've been trying to run the PLINK tutorial sample datasets (HapMap, 80K loci) in gPLINK to do some exclusions. I am currently working on Mac OS X 10.10. I've applied the threshold settings (high missing rate, low MAF, etc.) to my file "hapmap1.ped" and prepared to execute the command through gPLINK; however, it keeps giving me the error prompt "can not execute command locally".
Is there something wrong with my library or directory settings?
gPLINK runs in two modes, a remote mode and a local one. It seems you are running the local one. Please check that you are specifying the correct path to where PLINK is installed when configuring gPLINK. For more details, refer to the gPLINK configuration documentation.