The optimal way to set a breakpoint in Python source code while debugging CPython with GDB

I use GDB to understand how CPython executes the test.py source file, and I want to stop CPython when it begins executing the opcode I am interested in.
OS: Ubuntu 18.04.2 LTS
Debugger: GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git
The first problem: many of CPython's own .py files are executed before my test.py gets its turn, so I can't simply break at _PyEval_EvalFrameDefault - it is entered many times, and I need to distinguish my file from all the others.
The second problem: I can't set a condition like "when the filename equals test.py", because the filename is not a plain C string; it is a CPython Unicode object, so the standard GDB string functions can't be used to compare it.
At the moment I use the following trick to break execution at the desired line of the test.py source:
For example, I have the source file:
x = ['a', 'b', 'c']
# I want to set the breakpoint at this line.
for e in x:
    print(e)
I add a binary left-shift operation to the code:
x = ['a', 'b', 'c']
# Added for breakpoint
a = 12
b = 2 << a
for e in x:
    print(e)
Then I track the execution of the BINARY_LSHIFT opcode in the Python/ceval.c file with this GDB command:
break ceval.c:1327
I chose the BINARY_LSHIFT opcode because it is rarely used, so I can reach the relevant part of the .py file quickly - it occurs only once in all the .py modules executed before my test.py.
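As an aside, once execution has stopped there, CPython's bundled gdb helpers can show the Python-level position. A minimal sketch, assuming gdb is started from the CPython source tree (py-bt and py-list are commands provided by Tools/gdb/libpython.py):
# inside gdb's Python interpreter (the "python" command, or a sourced file)
import gdb

gdb.execute("source Tools/gdb/libpython.py")  # CPython's bundled gdb helpers
gdb.execute("py-bt")    # Python-level backtrace at the current stop
gdb.execute("py-list")  # Python source lines around the current position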
I am looking for a more straightforward way of doing the same, so here are my questions:
Can I catch the moment test.py starts executing? I should mention that the test.py filename appears at several stages: parsing, compilation, and execution. So it would also be good to be able to break CPython execution at any of these stages.
Can I specify the line of test.py where I want to break? That is easy for .c files, but not for .py files.

My idea would be to use a C extension to make setting C breakpoints possible in a Python script (similar to pdb.set_trace() or breakpoint() since Python 3.7), which I will call cbreakpoint.
Consider the following python-script:
#example.py
from cbreakpoint import cbreakpoint
cbreakpoint(breakpoint_id=1)
print("hello")
cbreakpoint(breakpoint_id=2)
It could be used as follows in gdb:
>>> gdb --args python example.py
[gdb] b cbreakpoint
[gdb] run
Now the debugger stops at cbreakpoint(breakpoint_id=1) and cbreakpoint(breakpoint_id=2).
Here is a proof of concept, written in Cython to avoid the boilerplate code that would otherwise be needed:
#cbreakpoint.pyx
cdef extern from *:
    """
    long long last_breakpoint_id = -1;
    void cbreakpoint(long long breakpoint_id){
        last_breakpoint_id = breakpoint_id;
    }
    """
    void c_cbreakpoint "cbreakpoint"(long long breakpoint_id)

def cbreakpoint(breakpoint_id = 0):
    c_cbreakpoint(breakpoint_id)
which can be built in place via:
cythonize -i cbreakpoint.pyx
If Cython isn't installed, I have uploaded a version on GitHub which doesn't depend on Cython (too much code for this post).
It is also possible to break conditionally on the breakpoint_id, i.e.:
>>> gdb --args python example.py
[gdb] break src/cbreakpoint.c:595 if breakpoint_id == 2
[gdb] run
will break only after hello has been printed - at the cbreakpoint with id=2 (the cbreakpoint with id=1 will be skipped). Depending on the Cython version the line number can vary, but it can be found once gdb stops at cbreakpoint.
One could also do something similar without any additional modules:
Add breakpoint() or import pdb; pdb.set_trace() instead of cbreakpoint (see the sketch after this list).
gdb --args python example.py + run
When pdb interrupts the program, hit Ctrl+C in order to interrupt in gdb.
Activate breakpoints in gdb.
continue in gdb and then in pdb (i.e. c+enter twice).
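For reference, the no-extension variant of example.py might look like this (a sketch; pdb.set_trace() works on any version, breakpoint() needs Python 3.7+):
#example_pdb.py -- no-extension variant of example.py
import pdb

pdb.set_trace()   # or breakpoint() on Python 3.7+
print("hello")
pdb.set_trace()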
A small problem is that afterwards the breakpoints might be hit while still in pdb, so the first method is a little more robust.
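Yet another option, which needs no changes to the Python script at all, is to decide inside a gdb Python breakpoint whether the frame being entered belongs to test.py. This is only a sketch under explicit assumptions: a CPython 3.6/3.7 build, where the frame argument of _PyEval_EvalFrameDefault is named f and co_filename is a compact ASCII unicode object whose bytes sit right behind the PyASCIIObject header:
# pyfilebreak.py -- source this inside gdb
import gdb

class PyFileBreakpoint(gdb.Breakpoint):
    """Stop in _PyEval_EvalFrameDefault only for frames from one file."""

    def __init__(self, filename):
        super(PyFileBreakpoint, self).__init__("_PyEval_EvalFrameDefault")
        self.filename = filename

    def stop(self):
        try:
            f = gdb.selected_frame().read_var("f")    # PyFrameObject *
            co_filename = f["f_code"]["co_filename"]  # PyObject * (a str)
            # compact ASCII: the bytes follow the PyASCIIObject struct
            ascii_ptr = co_filename.cast(
                gdb.lookup_type("PyASCIIObject").pointer())
            data = (ascii_ptr + 1).cast(gdb.lookup_type("char").pointer())
            return data.string().endswith(self.filename)
        except gdb.error:
            return False  # layout not as assumed here; don't stop

PyFileBreakpoint("test.py")
Since it fires on the very first frame executed from test.py, it also catches the moment the file starts executing (the first question above).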

Related

Debugging with GDB - seeing code around a given breakpoint

I am trying to debug a C++ program using GDB. I have set 15 breakpoints. Most of the breakpoints are in different files. After the first 5 breakpoints, it became difficult to remember what line of code any given breakpoint refers to.
I struggle quite a bit simply trying to recall what a given breakpoint refers to. I find this quite distracting. I was wondering if there is a way to tell gdb to display code around a certain breakpoint.
Something like this: (gdb) code 3 would show 30 lines of code around breakpoint 3. Is this possible today? Could you please show me how?
I run gdb in tui mode, and I also keep emacs open to edit my source files.
You can use gdb within emacs.
In emacs, type M-x gdb, after entering the name of the executable, type M-x gdb-many-windows. It brings up an IDE-like interface, with access to debugger, locals, source, input/output, stack frame and breakpoints.
You can find a reference and snapshot here.
I don't think you can do it exactly like this in gdb as such, but it can be scripted in gdb python.
This crude script should help:
import gdb

class Listbreak (gdb.Command):
    """ listbreak n: lists code around breakpoint n """

    def __init__ (self):
        super(Listbreak, self).__init__ ("listbreak", gdb.COMMAND_DATA)

    def invoke (self, arg, from_tty):
        printed = False
        # arg is the whole argument string, e.g. "2"
        for bp in gdb.breakpoints():
            if bp.number == int(arg):
                printed = True
                print("Code around breakpoint " + arg + " (" + bp.location + "):")
                gdb.execute("list " + bp.location)
        if not printed:
            print("No such breakpoint")

Listbreak()
Copy this to listbreak.py, source it in gdb (source listbreak.py), then use it like this:
listbreak 2

How to set skipping of uninteresting functions while stepping from gdbinit script?

I'm trying to set up a set of functions that gdb should skip when stepping, using commands like:
skip myfunction
But if I place them in ~/.gdbinit instead of typing them at the gdb prompt in the terminal, I get the error:
No function found named myfunction.
Ignore function pending future shared library load? (y or [n]) [answered N; input not from terminal]
So I need GDB to answer Y to that question. I've tried what was suggested for breakpoints, as well as the set confirm off suggested in a comment to this question, but these don't help with the skip command.
How can I set skip in a .gdbinit script, answering Y about the future library load?
You can use Python to wait for the execution to start, which is equivalent to pending on:
import gdb

to_skip = []

def try_pending_skips(evt=None):
    for skip in list(to_skip): # make a copy for safe removal
        try:
            # test if the function (aka symbol) is defined
            symb, _ = gdb.lookup_symbol(skip)
            if not symb:
                continue
        except gdb.error:
            # no frame?
            continue
        # yes, we can skip it
        gdb.execute("skip {}".format(skip))
        to_skip.remove(skip)

    if not to_skip:
        # no more functions to skip
        try:
            gdb.events.new_objfile.disconnect(try_pending_skips) # event fired when the binary is loaded
        except ValueError:
            pass # was not connected

class cmd_pending_skip(gdb.Command):
    self = None

    def __init__ (self):
        gdb.Command.__init__(self, "pending_skip", gdb.COMMAND_OBSCURE)

    def invoke (self, args, from_tty):
        global to_skip

        if not args:
            if not to_skip:
                print("No pending skip.")
            else:
                print("Pending skips:")
                for skip in to_skip:
                    print("\t{}".format(skip))
            return

        new_skips = args.split()
        to_skip += new_skips

        for skip in new_skips:
            print("Pending skip for function '{}' registered.".format(skip))

        try:
            gdb.events.new_objfile.disconnect(try_pending_skips)
        except ValueError:
            pass # was not connected

        # new_objfile event fired when the binary and libraries are loaded in memory
        gdb.events.new_objfile.connect(try_pending_skips)

        # try right away, just in case
        try_pending_skips()

cmd_pending_skip()
Save this code into a Python file pending_skip.py (or surround it with python ... end in your .gdbinit), then:
source pending_skip.py
pending_skip fct1
pending_skip fct2 fct3
pending_skip # to list pending skips
Documentation references:
GDB Python TOC
Basic Python
Events in Python
Symbols in Python
This feature has been proposed here:
https://sourceware.org/ml/gdb-prs/2015-q2/msg00417.html
https://sourceware.org/bugzilla/show_bug.cgi?id=18531
So far, there has been no activity on that issue for 6 months though. As of this writing, the feature is not included in GDB 7.10.

Determine compiler name/version from gdb

I share my .gdbinit script (via NFS) across machines running different versions of gcc. I would like some gdb commands to be executed if the code I am debugging has been compiled with a specific compiler version. Can gdb do that?
I came up with this:
define hook-run
python
from subprocess import Popen, PIPE
from re import search

# grab the executable filename from gdb
# this is probably not general enough --
# there might be several objfiles around
objfilename = gdb.objfiles()[0].filename

# run readelf
process = Popen(['readelf', '-p', '.comment', objfilename], stdout=PIPE)
output = process.communicate()[0].decode()  # decode bytes in case gdb embeds Python 3

# match the version number with a regex
regex = r'GCC: \(GNU\) ([\d.]+)'
match = search(regex, output)
if match:
    compiler_version = match.group(1)
    gdb.execute('set $compiler_version="' + str(compiler_version) + '"')
gdb.execute('init-if-undefined $compiler_version="None"')

# do what you want with the python compiler_version variable and/or
# with the $compiler_version convenience variable
# I use it to load version-specific pretty-printers
end
end
It is good enough for my purpose, although it is probably not general enough.
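To sketch that last step (this part is not in the original hook, and the pretty-printer path is purely hypothetical), the convenience variable can be read back from gdb's Python like this:
# sketch: consume the $compiler_version set by hook-run above
import gdb

ver = str(gdb.parse_and_eval('$compiler_version')).strip('"')
if ver and ver != "None":
    major = ver.split('.')[0]
    # hypothetical per-version pretty-printer file
    gdb.execute('source ~/gdb/printers-gcc-{}.py'.format(major))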

ipython notebook: how to parallelize external script

I'm trying to use parallel computing from the IPython parallel library, but I have little knowledge about it, and I find the docs difficult to read for someone who knows nothing about parallel computing.
Funnily, all the tutorials I found just reuse the example from the docs, with the same explanation, which from my point of view is useless.
Basically, what I'd like to do is run a few scripts in the background so that they execute at the same time. In bash it would be something like:
for my_file in $(cat list_file); do
    python pgm.py my_file &
done
But the bash interpreter of the IPython notebook doesn't handle background mode.
It seemed the solution was to use the parallel library from IPython.
I tried:
from IPython.parallel import Client
rc = Client()
rc.block = True
dview = rc[:2] # I take only 2 engines
But then I'm stuck. I don't know how to run the same script or program twice (or more) at the same time.
Thanks.
One year later, I eventually managed to get what I wanted.
1) Create a function that does what you want to run on the different CPUs. Here it just calls a script from bash with the ! IPython magic command. I guess it would also work with the call() function.
def my_func(my_file):
    !python pgm.py {my_file}
Don't forget the {} when using !
Note also that the path to my_file should be absolute, since the cluster engines run where you started the notebook (when doing jupyter notebook or ipython notebook), which is not necessarily where you are.
2) Start your IPython notebook cluster with the number of CPUs you want.
Wait a couple of seconds, then execute the following cell:
from IPython import parallel
rc = parallel.Client()
view = rc.load_balanced_view()
3) Get a list of the files you want to process:
files = list_of_files
4) Asynchronously map your function over all your files on the view of the engines you just created (not sure of the wording):
r = view.map_async(my_func, files)
While it's running you can do something else in the notebook (it runs in the "background"!). You can also call r.wait_interactive(), which interactively reports the number of files processed, the time spent so far, and the number of files left. This will prevent you from running other cells (but you can interrupt it).
And if you have more files than engines, no worries: they will be processed as soon as an engine finishes with one file.
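Putting the four steps together, a minimal end-to-end version might look like this (pgm.py and the file paths are placeholders from the question; subprocess.call stands in for the ! magic so the function also works outside IPython):
from IPython import parallel

def my_func(my_file):
    # imports must live inside the function: it executes on the engines
    import subprocess
    return subprocess.call(['python', 'pgm.py', my_file])

rc = parallel.Client()         # assumes the cluster is already started
view = rc.load_balanced_view()

files = ['/abs/path/file1', '/abs/path/file2']  # absolute paths, see above
r = view.map_async(my_func, files)
r.wait_interactive()           # progress report; interruptible
print(r.get())                 # one return code per file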
Hope this will help others!
This tutorial might be of some help:
http://nbviewer.ipython.org/github/minrk/IPython-parallel-tutorial/blob/master/Index.ipynb
Note also that I still use IPython 2.3.1; I don't know if this has changed since Jupyter.
Edit: It still works with Jupyter; see here for the differences and potential issues you may encounter.
Note that if you use external libraries in your function, you need to import them on the different engines with:
%px import numpy as np
or
%%px
import numpy as np
import pandas as pd
The same goes for variables and other functions: you need to push them to the engine namespace:
rc[:].push(dict(
    foo=foo,
    bar=bar))
If you're trying to execute some external scripts in parallel, you don't need IPython's parallel functionality. Bash-style parallel execution can be replicated with the subprocess module as follows:
import subprocess

procs = []
for i in range(10):
    procs.append(subprocess.Popen(['ls', '/Users/shad/tmp/'],
                                  stdout=subprocess.PIPE))

results = []
for proc in procs:
    stdout, _ = proc.communicate()
    results.append(stdout)
Be wary that if your subprocesses generate a lot of output, they can block on a full pipe buffer. If you print the output (results) you get:
print results
['file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n']
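If the amount of output is a concern, one workaround (a sketch, not from the original answer; concurrent.futures needs Python 3.2+) is to give each process its own worker thread, so every pipe is drained as it fills:
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_and_read(cmd):
    # each worker owns one process and drains its pipe immediately,
    # so no child can stall on a full pipe buffer
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    return proc.communicate()[0]

with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(run_and_read, [['ls', '/Users/shad/tmp/']] * 10))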

Uncaught Throw generated by JLink or UseFrontEnd

This example routine generates two Throw::nocatch warning messages in the kernel window. Can they be handled somehow?
The example consists of this code in a file "test.m" created in C:\Temp:
Needs["JLink`"];
$FrontEndLaunchCommand = "Mathematica.exe";
UseFrontEnd[NotebookWrite[CreateDocument[], "Testing"]];
Then these commands pasted and run at the Windows Command Prompt:
PATH = C:\Program Files\Wolfram Research\Mathematica\8.0\;%PATH%
start MathKernel -noprompt -initfile "C:\Temp\test.m"
Addendum
The reason for using UseFrontEnd as opposed to UsingFrontEnd is that an interactive front end may be required to preserve output and messages from notebooks that are usually run interactively. For example, with C:\Temp\test.m modified like so:
Needs["JLink`"];
$FrontEndLaunchCommand="Mathematica.exe";
UseFrontEnd[
    nb = NotebookOpen["C:\\Temp\\run.nb"];
    SelectionMove[nb, Next, Cell];
    SelectionEvaluate[nb];
];
Pause[10];
CloseFrontEnd[];
and a notebook C:\Temp\run.nb created with a single cell containing:
x1 = 0;
While[x1 < 1000000,
    If[Mod[x1, 100000] == 0,
        Print["x1=" <> ToString[x1]]];
    x1++];
NotebookSave[EvaluationNotebook[]];
NotebookClose[EvaluationNotebook[]];
this code, launched from a Windows Command Prompt, will run interactively and save its output. This is not possible to achieve using UsingFrontEnd or MathKernel -script "C:\Temp\test.m".
During the initialization, the kernel code is in a mode which prevents aborts.
Throw/Catch are implemented with Abort, therefore they do not work during initialization.
A simple example that shows the problem is to put this in your test.m file:
Catch[Throw[test]];
Similarly, functions like TimeConstrained, MemoryConstrained, Break, the Trace family, Abort and those that depend upon it (like certain data paclets) will have problems like this during initialization.
A possible solution to your problem might be to consider the -script option:
math.exe -script test.m
Also, note that in version 8 there is a documented function called UsingFrontEnd, which does what UseFrontEnd did, but is auto-configured, so this:
Needs["JLink`"];
UsingFrontEnd[NotebookWrite[CreateDocument[], "Testing"]];
should be all you need in your test.m file.
See also: Mathematica Scripts
Addendum
One possible solution that uses -script and UsingFrontEnd is the 'run.m' script included below. This does require setting up a 'Test' kernel in the kernel configuration options (basically a clone of the 'Local' kernel settings).
The script includes two utility functions, NotebookEvaluatingQ and NotebookPauseForEvaluation, which help the script to wait for the client notebook to finish evaluating before saving it. The upside of this approach is that all the evaluation control code is in the 'run.m' script, so the client notebook does not need to have a NotebookSave[EvaluationNotebook[]] statement at the end.
NotebookPauseForEvaluation[nb_] := Module[{}, While[NotebookEvaluatingQ[nb], Pause[.25]]]

NotebookEvaluatingQ[nb_] := Module[{},
    SelectionMove[nb, All, Notebook];
    Or @@ Map["Evaluating" /. # &, Developer`CellInformation[nb]]
]
UsingFrontEnd[
    nb = NotebookOpen["c:\\users\\arnoudb\\run.nb"];
    SetOptions[nb, Evaluator -> "Test"];
    SelectionMove[nb, All, Notebook];
    SelectionEvaluate[nb];
    NotebookPauseForEvaluation[nb];
    NotebookSave[nb];
]
I hope this is useful in some way to you. It could use a few more improvements, like resetting the notebook's kernel to its original one and closing the notebook after saving it, but this code should work for this particular purpose.
On a side note, I tried one other approach, using this:
UsingFrontEnd[ NotebookEvaluate[ "c:\\users\\arnoudb\\run.nb", InsertResults->True ] ]
But this kicks the kernel terminal session into a dialog mode, which seems like a bug to me (I'll check into this and get it reported if it is a valid issue).
