I share my .gdbinit script (via NFS) across machines running different versions of gcc. I would like some gdb commands to be executed if the code I am debugging has been compiled with a specific compiler version. Can gdb do that?
I came up with this:
define hook-run
python
from subprocess import Popen, PIPE
from re import search
# grab the executable filename from gdb
# this is probably not general enough --
# there might be several objfiles around
objfilename = gdb.objfiles()[0].filename
# run readelf on the .comment section, which records the compiler version
process = Popen(['readelf', '-p', '.comment', objfilename], stdout=PIPE)
output = process.communicate()[0].decode()  # decode: the output is bytes under Python 3
# match the version number with a regex
regex = r'GCC: \(GNU\) ([\d.]+)'
match = search(regex, output)
if match:
    compiler_version = match.group(1)
    gdb.execute('set $compiler_version="' + str(compiler_version) + '"')
gdb.execute('init-if-undefined $compiler_version="None"')
# do what you want with the python compiler_version variable and/or
# with the $compiler_version convenience variable
# I use it to load version-specific pretty-printers
end
end
It is good enough for my purpose, although it is probably not general enough.
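For instance, here is a minimal sketch of how the convenience variable could later be used to load version-specific pretty-printers (the printer file names and the version split are hypothetical):
define hookpost-run
python
# read the convenience variable set by hook-run above
ver = str(gdb.parse_and_eval('$compiler_version')).strip('"')
# pick a pretty-printer script per compiler version (paths are made up)
if ver.startswith('4.'):
    gdb.execute('source ~/gdb/printers-gcc4.py')
elif ver != 'None':
    gdb.execute('source ~/gdb/printers-gcc-later.py')
end
end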
I am able to symbolicate a symbol address with the following lldb command:
image lookup --address $SYMBOL_ADDRESS
But while writing a shell script to parse it, I am not able to find a way to store the output of the above command in a variable or file.
First off, if your script's job is mostly about driving lldb and you happen to know Python, you will be much happier using the lldb module in Python, where you can drive the debugger directly, than having lldb produce text output which you then parse in the shell script.
The lldb Python module provides APIs like SBTarget.ResolveSymbolContextForAddress, which runs the same lookup as image lookup --address but returns the result as a Python lldb.SBSymbolContext object, which you can query for module/file/line etc. using APIs on the object. So getting bits of info out of this result will be easier with the lldb APIs.
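A rough sketch of that API route, run inside lldb's script interpreter where lldb.debugger is available (the address is the one from the example output below; the calls are from the lldb Python bindings):
import lldb
target = lldb.debugger.GetSelectedTarget()
# resolve the raw load address into an SBAddress within the target
addr = target.ResolveLoadAddress(0x100003f6f)
# ask for everything: module, compile unit, function, line entry, symbol
ctx = target.ResolveSymbolContextForAddress(addr, lldb.eSymbolContextEverything)
print(ctx.GetSymbol().GetName())         # enclosing symbol name, e.g. "main"
print(ctx.GetLineEntry().GetFileSpec())  # source file, if debug info is present
print(ctx.GetLineEntry().GetLine())      # line number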
But if you have to use a shell script, then the easiest thing is probably to write the command output to a file and read that back into the shell script. lldb doesn't have generic support for tee-ing command output into a log file yet, but the lldb Python module allows you to run command-line commands and programmatically capture the output.
So you can do it easily from lldb's Python script interpreter:
(lldb) script
Python Interactive Interpreter. To exit, type 'quit()', 'exit()' or Ctrl-D.
>>> result = lldb.SBCommandReturnObject()
>>> lldb.debugger.GetCommandInterpreter().HandleCommand("image lookup -va $pc", result)
2
>>> fh = open("/tmp/out.txt", "w")
>>> fh.write(result.GetOutput())
>>> fh.close()
>>> quit
(lldb) plat shell cat /tmp/out.txt
Address: foo[0x0000000100003f6f] (foo.__TEXT.__text + 15)
Summary: foo`main + 15 at foo.c:6:3
Module: file = "/tmp/foo", arch = "x86_64"
CompileUnit: id = {0x00000000}, file = "/tmp/foo.c", language = "c99"
Function: id = {0x7fffffff00000032}, name = "main", range = [0x0000000100003f60-0x0000000100003f8a)
FuncType: id = {0x7fffffff00000032}, byte-size = 0, decl = foo.c:4, compiler_type = "int (void)"
Blocks: id = {0x7fffffff00000032}, range = [0x100003f60-0x100003f8a)
LineEntry: [0x0000000100003f6f-0x0000000100003f82): /tmp/foo.c:6:3
Symbol: id = {0x00000005}, range = [0x0000000100003f60-0x0000000100003f8a), name="main"
You can also write a lldb command in Python that wraps this bit of business, which would make it easier to use. Details on that are here:
https://lldb.llvm.org/use/python-reference.html#create-a-new-lldb-command-using-a-python-function
You could even do a hybrid approach, and make all the lldb work you want to do a custom Python command. That would allow you to use the lldb Python APIs to get the info you need and write it out in whatever format is convenient for you, and it would simplify the lldb invocation in your shell script and make it easier to recover the information lldb provided...
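As a rough sketch of that hybrid approach (the file name, the command name save_lookup, and the argument handling are all made up for illustration):
# save_lookup.py -- hypothetical helper command; load it with:
#   (lldb) command script import /path/to/save_lookup.py
import lldb

def save_lookup(debugger, command, result, internal_dict):
    """Usage: save_lookup <address> <output-file>
    Runs 'image lookup --address <address>' and writes the output to a file."""
    addr, path = command.split()
    ret = lldb.SBCommandReturnObject()
    debugger.GetCommandInterpreter().HandleCommand(
        "image lookup --address %s" % addr, ret)
    with open(path, "w") as fh:
        fh.write(ret.GetOutput())

def __lldb_init_module(debugger, internal_dict):
    # register the Python function above as the lldb command 'save_lookup'
    debugger.HandleCommand(
        "command script add -f save_lookup.save_lookup save_lookup")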
We have two versions of ActivePerl, 5.6 and 5.24. We have web services that have to be executed on ActivePerl 5.24 (to support TLS 1.2), and they need to be invoked from ActivePerl 5.6. We are using the Windows operating system.
Steps followed:
The caller code, which runs under 5.6, invokes the 5.24 version using the system/require command.
Problem:
How can I call the Perl 5.24 function (for example: webservicecall(arg1) { return "xyz" }) from the Perl 5.6 script through the system command, require, or something else?
Also, how do I get the return value of the Perl 5.24 function?
Note:
It's a temporary workaround to have two Perl versions; we plan to upgrade to a newer version.
Here Perl version 5.6 is installed in "C:\Perl\bin\perl\" and Perl version 5.24 is installed in "D:\Perl\bin\perl\".
"**p5_6.pl**"
print "Hello Perl5_6\n";
system('D:\Perl\bin\perl D:\sample_program\p5.24.pl');
print $OUTFILE;
$retval = Mul(25, 10);
print ("Return value is $retval\n" );
"**p5_24.pl**"
print "Hello Perl5_24\n";
our $OUTFILE = "Hello test";
sub Mul($$)
{
my($a, $b) = @_;
my $c = $a * $b;
return($c);
}
I have written a sample program to show in detail how I call Perl 5.24 from the Perl 5.6 script. During execution I didn't get the expected output. How do I get the "return $c" value and the "our $OUTFILE" value of p5_24.pl in the p5_6.pl script?
Note: The above is a sample program; based on this I will modify the actual program to use serialized data.
Place the code for the function that needs v5.24 in a wrapper script, written just so that it runs that function (and prints its result). Actually, I'd recommend writing a module with that function and then loading that module in the wrapper script.
Then run that script under the wanted (5.24) interpreter, by invoking it via its full path. (You may need to be careful to make sure that all libraries and environment are right.) Do this in a way that allows you to pick up its output. That can be anything from backticks (qx) to pipe-open or, better, to good modules. There is a range of modules for this, like IPC::System::Simple, Capture::Tiny, IPC::Run3, or IPC::Run. Which to use would depend on how much you need out of that call.
You can't call a function in another running program; you have to somehow run it under the other interpreter.
Also, variables (like $OUTFILE) defined in one program cannot be seen in another one. You can print them from the v5.24 program, along with that function result, and then parse that whole output in the v5.6 program. Then the two programs would need a little "protocol" -- to either obey an order in which things are printed, or to have prints labeled in some way.
Much better, write a module with functions and variables that need be shared. Then the v5.24 program can load the module and import the function it needs and run it, while the v5.6 program can load the same module but only to pick up that variable (and also run the v5.24 program).
Here is a sketch of all this. The package file SharedBetweenPerls.pm
package SharedBetweenPerls;
use warnings;
use strict;
use Exporter qw(import);
our @EXPORT_OK = qw(Mul export_vars);
my $OUTFILE = 'test_filename';
sub Mul { return $_[0] * $_[1] }
sub export_vars { return $OUTFILE }
1;
and then the v5.24 program (used below as program_for_5.24.pl) can do
use warnings;
use strict;
# Require this to be run by at least v5.24.0
use v5.24;
# Add path to where the module is, relative to where this script is
# In our demo it's the script's directory ($RealBin)
use FindBin qw($RealBin);
use lib $RealBin;
use SharedBetweenPerls qw(Mul);
my ($v1, $v2) = @ARGV;
print Mul($v1, $v2);
while the v5.6 program can do
use warnings;
use strict;
use feature 'say';
use FindBin qw($RealBin);
use lib $RealBin;
use SharedBetweenPerls qw(export_vars);
my $outfile = export_vars(); #--> 'test_filename'
# Replace "path-to-perl..." with an actual path to a perl
my $from_5_24 = qx(path-to-perl-5.24 program_for_5.24.pl 25 10); #--> 250
say "Got variable: $outfile, and return from function: $from_5.24";
where $outfile has the string test_filename while the $from_5_24 variable is 250.†
This is tested to work as it stands if both programs, and the module, are in the same directory, with names as in this example. (And with path-to-perl-5.24 replaced with the actual path to your v5.24 executable.) If they are at different places you need to adjust paths, probably the package name and the use lib line. See lib pragma.
Please note that there are better ways to run an external program --- see the recommended modules above. All this is a crude demo since many details depend on what exactly you do.
Finally, the programs can also connect via a socket and exchange all they need but that is a bit more complex and may not be needed.
† The question's been edited, and we now have D:\Perl\bin\perl for path-to-perl-5.24 and D:\sample_program\p5.24.pl for program_for_5.24.pl.
Note that with the p5.24.pl program at such a location you'd have to come up with a suitable place for the module; the package name would then need to include (part of) that path, and the module would have to be loaded under that name. See for example this post.
A crude demo without a module (originally posted)
As a very crude sketch, in your program that runs under v5.6 you could do
my $from_5_24 = qx(path-to-perl-5.24 program_for_5.24.pl 25 10);
where the program_for_5.24.pl then could be something like
use warnings;
use strict;
sub Mul { return $_[0] * $_[1] }
my ($v1, $v2) = @ARGV;
print Mul($v1, $v2);
and the variable $from_5_24 ends up being 250 in my test.
You cannot directly call a Perl function running with another Perl version. You would need to create a program which explicitly invokes the function. The input and output need to be explicitly serialized in order to be transported between these two programs.
Serializing could be done with Data::Dumper, Storable, or similar. If lower performance is acceptable, you could invoke the program which provides the function with system and share the serialized data via temporary files or pipes. Or you could create some client-server architecture and share the serialized data over sockets. The latter is faster since it skips the repeated startup and teardown of the other process and instead keeps it running.
I use GDB to understand how CPython executes the test.py source file, and I want to stop CPython when it starts executing the opcode I am interested in.
OS: Ubuntu 18.04.2 LTS
Debugger: GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git
The first problem: many of CPython's own .py files are executed before my test.py gets its turn, so I can't simply break at _PyEval_EvalFrameDefault -- there are many such calls, and I need to distinguish my file from the others.
The second problem: I can't set a condition like "when the filename equals test.py", because the filename is not a simple C string; it is a CPython Unicode object, so the standard GDB string functions can't be used to compare it.
At the moment I use the following trick to break at the needed line of the test.py source:
For example, I have the source file:
x = ['a', 'b', 'c']
# I want to set the breakpoint at this line.
for e in x:
print(e)
I add the binary left shift operator to the code:
x = ['a', 'b', 'c']
# Added for breakpoint
a = 12
b = 2 << a
for e in x:
print(e)
And then, track the BINARY_LSHIFT opcode execution in the Python/ceval.c file by this GDB command:
break ceval.c:1327
I chose the BINARY_LSHIFT opcode because it is rarely used: it occurs only once across all the other .py modules executed before my test.py, so I can reach the needed part of the .py file quickly.
I am looking for a more straightforward way of doing the same, so my questions are:
Can I catch the moment test.py starts executing? I should mention that the test.py filename appears at different stages: parsing, compilation, execution. So it would also be good to be able to break CPython execution at any of those stages.
Can I specify the line of test.py where I want to break? That is easy for .c files, but not for .py files.
My idea would be to use a C extension to make it possible to set C breakpoints in a Python script (similar to pdb.set_trace() or breakpoint() since Python 3.7), which I will call cbreakpoint.
Consider the following python-script:
#example.py
from cbreakpoint import cbreakpoint
cbreakpoint(breakpoint_id=1)
print("hello")
cbreakpoint(breakpoint_id=2)
It could be used as follows in gdb:
>>> gdb --args python example.py
[gdb] b cbreakpoint
[gdb] run
Now the debugger would stop at cbreakpoint(breakpoint_id=1) and cbreakpoint(breakpoint_id=2).
Here is a proof of concept, written in Cython to avoid the boilerplate code that would otherwise be needed:
#cbreakpoint.pyx
cdef extern from *:
    """
    long long last_breakpoint_id = -1;
    void cbreakpoint(long long breakpoint_id){
        last_breakpoint_id = breakpoint_id;
    }
    """
    void c_cbreakpoint "cbreakpoint"(long long breakpoint_id)

def cbreakpoint(breakpoint_id = 0):
    c_cbreakpoint(breakpoint_id)
which can be built in place via:
cythonize -i cbreakpoint.pyx
If Cython isn't installed, I have uploaded a version which doesn't depend on Cython (too much code for this post) on github.
It is also possible to break conditionally, given the breakpoint_id, i.e.:
>>> gdb --args python example.py
[gdb] break src/cbreakpoint.c:595 if breakpoint_id == 2
[gdb] run
will break only after hello was printed - at the cbreakpoint with id=2 (while the cbreakpoint with id=1 will be skipped). Depending on the Cython version the line number can vary, but it can be found once gdb stops at cbreakpoint.
It is also possible to do something similar without any additional modules (a small sketch follows these steps):
add breakpoint() or import pdb; pdb.set_trace() instead of cbreakpoint
gdb --args python example.py + run
When pdb interrupts the program, hit Ctrl+C in order to interrupt in gdb.
Activate breakpoints in gdb.
continue in gdb and then in pdb (i.e. c+enter twice).
A small problem is that after this the breakpoints might also be hit while still in pdb, so the first method is a little more robust.
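For reference, a minimal sketch of that module-free variant, reusing the layout of example.py above with pdb standing in for cbreakpoint:
#example_pdb.py
import pdb

pdb.set_trace()   # first stop: hit Ctrl+C here to drop back into gdb
print("hello")
pdb.set_trace()   # second stop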
I'm trying to set up a set of functions to be skipped by gdb (not stepped into) with commands like:
skip myfunction
But if I place them in ~/.gdbinit instead of just typing them at the gdb prompt in the terminal, I get the error:
No function found named myfunction.
Ignore function pending future shared library load? (y or [n]) [answered N; input not from terminal]
So I need GDB to get the answer Y. I've tried what was suggested for breakpoints, as well as the set confirm off suggested in a comment to this question, but these don't help with the skip command.
How can I set skip in a .gdbinit script, answering Y about future library load?
You can use Python to wait for the execution to start, which is equivalent to pending on:
import gdb

to_skip = []

def try_pending_skips(evt=None):
    for skip in list(to_skip):  # make a copy for safe remove
        try:
            # test if the function (aka symbol is defined)
            symb, _ = gdb.lookup_symbol(skip)
            if not symb:
                continue
        except gdb.error:
            # no frame ?
            continue
        # yes, we can skip it
        gdb.execute("skip {}".format(skip))
        to_skip.remove(skip)

    if not to_skip:
        # no more functions to skip
        try:
            gdb.events.new_objfile.disconnect(try_pending_skips)  # event fired when the binary is loaded
        except ValueError:
            pass  # was not connected

class cmd_pending_skip(gdb.Command):
    self = None

    def __init__(self):
        gdb.Command.__init__(self, "pending_skip", gdb.COMMAND_OBSCURE)

    def invoke(self, args, from_tty):
        global to_skip

        if not args:
            if not to_skip:
                print("No pending skip.")
            else:
                print("Pending skips:")
                for skip in to_skip:
                    print("\t{}".format(skip))
            return

        new_skips = args.split()
        to_skip += new_skips

        for skip in new_skips:
            print("Pending skip for function '{}' registered.".format(skip))

        try:
            gdb.events.new_objfile.disconnect(try_pending_skips)
        except ValueError:
            pass  # was not connected

        # new_objfile event fired when the binary and libraries are loaded in memory
        gdb.events.new_objfile.connect(try_pending_skips)

        # try right away, just in case
        try_pending_skips()

cmd_pending_skip()
Save this code into a Python file pending_skip.py (or surround it with python ... end in your .gdbinit), then:
source pending_skip.py
pending_skip fct1
pending_skip fct2 fct3
pending_skip # to list pending skips
Documentation references:
GDB Python TOC
Basic Python
Events in Python
Symbols in Python
This feature has been proposed here:
https://sourceware.org/ml/gdb-prs/2015-q2/msg00417.html
https://sourceware.org/bugzilla/show_bug.cgi?id=18531
So far, there's been no activity on that issue for 6 months though. As of writing this, the feature is not included in GDB 7.10.
I'm trying to use parallel computing from the IPython parallel library. But I have little knowledge about it, and I find the docs difficult to read for someone who knows nothing about parallel computing.
Funnily enough, all the tutorials I found just reuse the example from the docs, with the same explanation, which from my point of view is useless.
Basically what I'd like to do is run a few scripts in the background so they execute at the same time. In bash it would be something like:
for my_file in $(cat list_file); do
    python pgm.py "$my_file" &
done
But the bash interpreter of the IPython notebook doesn't handle background mode.
It seemed the solution was to use the parallel library from IPython.
I tried :
from IPython.parallel import Client
rc = Client()
rc.block = True
dview = rc[:2] # I take only 2 engines
But then I'm stuck. I don't know how to run the same script or program twice (or more) at the same time.
Thanks.
One year later, I eventually managed to get what I wanted.
1) Create a function that does what you want to run on the different CPUs. Here it just calls a script from bash with the ! magic IPython command. I guess it would also work with the call() function.
def my_func(my_file):
    !python pgm.py {my_file}
Don't forget the {} when using !
Note also that the path to my_file should be absolute, since the cluster engines run where you started the notebook (when doing jupyter notebook or ipython notebook), which is not necessarily where you are.
2) Start your IPython notebook cluster with the number of CPUs you want.
Wait a couple of seconds, then execute the following cell:
from IPython import parallel
rc = parallel.Client()
view = rc.load_balanced_view()
3) Get the list of files you want to process:
files = list_of_files
4) Asynchronously map your function over all your files on the view of the engines you just created (not sure of the wording).
r = view.map_async(my_func, files)
While it's running you can do something else in the notebook (it runs in the "background"!). You can also call r.wait_interactive(), which interactively reports the number of files processed, the time spent so far, and the number of files left. This will prevent you from running other cells (but you can interrupt it).
And if you have more files than engines, no worries: they will be processed as soon as an engine finishes with one file.
Hope this helps others!
This tutorial might be of some help:
http://nbviewer.ipython.org/github/minrk/IPython-parallel-tutorial/blob/master/Index.ipynb
Note also that I still have IPython 2.3.1; I don't know if it has changed since Jupyter.
Edit: It still works with Jupyter; see here for differences and potential issues you may encounter.
Note that if you use external libraries in your function, you need to import them on the different engines with:
%px import numpy as np
or
%%px
import numpy as np
import pandas as pd
The same goes for variables and other functions: you need to push them to the engines' namespace:
rc[:].push(dict(
    foo=foo,
    bar=bar))
If you're trying to execute some external scripts in parallel, you don't need to use IPython's parallel functionality. Replicating bash's parallel execution can be achieved with the subprocess module as follows:
import subprocess
procs = []
for i in range(10):
    procs.append(subprocess.Popen(['ls', '/Users/shad/tmp/'], stdout=subprocess.PIPE))

results = []
for proc in procs:
    stdout, _ = proc.communicate()
    results.append(stdout)
Be aware that if a subprocess generates a lot of output, it may block. If you print the output (results) you get:
print results
['file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n']
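Adapted to the original loop over a list of files (assuming files holds the file names and that pgm.py takes one file as its argument), a sketch could look like this:
import subprocess

files = ['a.txt', 'b.txt', 'c.txt']  # stand-in for the contents of list_file

# launch one process per file, like the bash loop with '&'
procs = [subprocess.Popen(['python', 'pgm.py', f], stdout=subprocess.PIPE)
         for f in files]

# wait for each one and collect its output
results = [p.communicate()[0] for p in procs]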