With an enum declared and instantiated like this:
enum test_enumeration
{
    test1 = 4,
    test2 = 10,
    test3,
};
test_enumeration test_enum;
I can call
(gdb) ptype test_enum
to get the output
type = enum test_enumeration {test1 = 4, test2 = 10, test3}
This gives me the numerical value of test1 and test2 but NOT test3.
If I call
(gdb) print (int)test3
GDB prints out the value 11.
However I want to be able to get something like this:
type = enum test_enumeration {test1 = 4, test2 = 10, test3 = 11}
by printing out the entire type definition via test_enum.
Unfortunately
(gdb) ptype (int)test_enum
returns the type as int rather than the enumerator values.
Is there a way to print out the enum constants like this, or an option that can be set so their numerical values are always displayed?
GDB V10.1
This will print out all the elements of an enum type. The executable needs to have been compiled with debuginfo.
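(For reference, a typical compile line that produces debug info is shown below; the source file name enu.c is only an assumption, chosen to match the enu executable used later.)
$ gcc -g -o enu enu.c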
$ cat print-enum.py
import gdb

class PrintEnumCmd(gdb.Command):
    """print all elements of the given enum type"""

    def __init__(self):
        super(PrintEnumCmd, self).__init__("print-enum", gdb.COMMAND_DATA,
                                           gdb.COMPLETE_EXPRESSION)

    def invoke(self, argstr, from_tty):
        typename = argstr
        if not typename or typename.isspace():
            raise gdb.GdbError("Usage: print-enum type")
        try:
            t = gdb.lookup_type(typename)
        except gdb.error:
            # Retry with the "enum" tag prefix, e.g. "enum test_enumeration".
            typename = "enum " + typename
            try:
                t = gdb.lookup_type(typename)
            except gdb.error:
                raise gdb.GdbError("type " + typename + " not found")
        if t.code != gdb.TYPE_CODE_ENUM:
            raise gdb.GdbError("type " + typename + " is not an enum")
        # Each field of an enum type carries its name and numeric value.
        for f in t.fields():
            print(f.name, "=", f.enumval)

PrintEnumCmd()
$ gdb enu
Reading symbols from enu...done.
(gdb) source print-enum.py
(gdb) print-enum
Usage: print-enum type
(gdb) print-enum test<tab>
test1 test2 test3 test_enumeration
(gdb) print-enum test_enumeration
test1 = 4
test2 = 10
test3 = 11
This is not currently possible in GDB. The code that decides if the = VAL part should be printed is here:
https://sourceware.org/git/?p=binutils-gdb.git;a=blob;f=gdb/c-typeprint.c;h=0502d31eff9605e7e2e430c8ad72908792c1b475;hb=HEAD#l1607
The value is only printed if it is not 1 more than the value of the previous enum entry.
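For illustration only, here is a rough Python sketch of that rule, written against the same gdb module as the print-enum script above; format_enum is a hypothetical helper name, and this mirrors the behaviour described above rather than GDB's actual C implementation:
import gdb

def format_enum(t):
    # An enumerator's value is shown only when it differs from the value
    # expected next (0 for the first enumerator, previous value + 1 after),
    # which is why test3 appears without "= 11" in the question's output.
    parts = []
    expected = 0
    for f in t.fields():
        if f.enumval == expected:
            parts.append(f.name)  # value implied, so it is omitted
        else:
            parts.append("%s = %d" % (f.name, f.enumval))
        expected = f.enumval + 1
    return "enum %s {%s}" % (t.tag, ", ".join(parts))

# format_enum(gdb.lookup_type("enum test_enumeration"))
# -> 'enum test_enumeration {test1 = 4, test2 = 10, test3}'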
Related
I'm trying to use Cython to call an external C library from Python. The code looks like this:
header.h
typedef enum {
    A = 0,
    B = 1,
    C = 2,
    D = 3
} my_enum_type;
cheader.pxd
cdef extern from "header.h":
    ctypedef enum my_enum_type:
        A = 1,
        B = 2,
        C = 3,
        D = 4
code.pyx
cimport cheader

cdef cheader.my_enum_type test = A

if test == A:
    print("OK")
else:
    print("NOK")
With the code above I get a compiler error: undeclared name not builtin: A
With the code below there are no compilation errors.
cimport cheader

cdef cheader.my_enum_type test = cheader.my_enum_type.A

if test == cheader.my_enum_type.A:
    print("OK")
else:
    print("NOK")
If the enum is defined in the pyx file, then the code below can be used:
ctypedef enum my_enum_type:
    A = 1,
    B = 2,
    C = 3,
    D = 4

cdef my_enum_type test = A

if test == A:
    print("OK")
else:
    print("NOK")
Is it possible to declare the enum in the pxd file so that it's not necessary to reference the imported module every time the enum is used?
What is the proper way to declare an enum in a pxd file and use it in pyx files?
I have a C struct that looks something like this:
struct room {
    void *reset_first;
};
I have a Golang struct that looks something like this:
type reset struct {
    command string
}
Thanks to the joys of cgo, I can do the following from Go:
func someStuff(a *C.room, b *reset) {
    a.reset_first = unsafe.Pointer(b)
}
and all is well
Then I attempt to retrieve it later:
func otherStuff(a *C.room) {
    var r = (*reset)(a.reset_first)
    fmt.Println(r.command)
}
and I end up with a segfault:
Thread 12 "test" received signal SIGSEGV, Segmentation fault.
316 fmt.Println(r.command)
but some of the values I get from inspection with gdb surprise me:
(gdb) info args
a = 0x7fffd5bc2d48
(gdb) info locals
r = 0x5
(gdb) print a
$1 = (struct main._Ctype_struct_room *) 0x7fffd5bc2d48
(gdb) print b
$2 = (struct main.reset *) 0x5
(gdb) print *a
$3 = {_type = 2, _ = "\000\000\000", reset_first = 0xc422224a80}
How does r end up as 0x5 not 0xc422224a80?
I'm not surprised that trying to dereference 5 results in a segfault, but I'm baffled as to where the 5 came from!
I would like to know if it is possible to get the value of a C++ enum item in Xcode.
In Visual Studio you just have to hover over the item and you get a tooltip with its value, but Xcode does not do the same.
I also tried to print the value in lldb console without success.
For instance with this simple enum:
enum Params {
    eP1,
    eP2,
    eP3,
    eP4,
    eP5,
};
I tried different ways like p eP1 or p Param::eP1.
I also tried with an enum class with the same result.
At present, you have to use EnumName::enumElement, but that is working for me:
> cat foo.cpp
#include <stdio.h>
enum Params
{
    eP1,
    eP2,
    eP3,
    eP4
};
int main()
{
    enum Params elem = eP1;
    printf ("%d\n", elem);
    return 0;
}
> lldb a.out
(lldb) target create "a.out"
Current executable set to 'a.out' (x86_64).
(lldb) b s -p printf
Breakpoint 1: where = a.out`main + 29 at foo.cpp:14, address = 0x0000000100000f6d
(lldb) run
Process 26752 launched: '/private/tmp/a.out' (x86_64)
Process 26752 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
frame #0: 0x0000000100000f6d a.out`main at foo.cpp:14
11 int main()
12 {
13 enum Params elem = eP1;
-> 14 printf ("%d\n", elem);
^
15 return 0;
16 }
Target 0: (a.out) stopped.
(lldb) expr Params::eP1
(int) $0 = 0
If you still can't get this to work, can you post a more complete example where it fails?
The problem for lldb, BTW, is that the debug information is organized into the full debug information and then a name->info accelerator table. lldb depends on the accelerator tables for lookup (otherwise it would have to go looking through all the debug info which can get pretty slow for big apps). The accelerator tables at present only have the name of the enum, not the element names.
Given the following:
use std::fmt::Debug;
#[derive(Debug)]
enum A<T: Debug> {
    X,
    Y(T),
}
#[derive(Debug)]
struct B;
type C = A<B>;
// use A<B> as C; // Does not compile
I can use it as:
fn main() {
    let val0 = A::X::<B>;
    let val1 = A::Y::<B>(B);
    println!("{:?}\t{:?}", val0, val1);
}
But for more than one generic parameter (or if A, B, etc. were much longer names), to alias it I tried the following, but it doesn't compile:
fn main() {
    let val0 = C::X;
    let val1 = C::Y(B);
    println!("{:?}\t{:?}", val0, val1);
}
with errors:
src/main.rs:656:16: 656:20 error: no associated item named `X` found for type `A<B>` in the current scope
src/main.rs:656 let val0 = C::X;
^~~~
src/main.rs:657:16: 657:20 error: no associated item named `Y` found for type `A<B>` in the current scope
src/main.rs:657 let val1 = C::Y(B);
As also noted, I am unable to use use to solve the problem. Is there a way around it (because typing the whole thing seems cumbersome)?
rustc --version
rustc 1.9.0 (e4e8b6668 2016-05-18)
Is there a way around it (because typing the whole thing seems to be cumbersome)?
You can specify C as the type of the variable so you can use A::X or A::Y without explicitly specifying the type parameter:
let val0: C = A::X;
let val1: C = A::Y(B);
I would like to have a formatter for the built-in string type of the Nim language, but somehow I fail at providing it. Nim compiles to C, and the C representation of the string type is shown here:
#if defined(__GNUC__) || defined(__clang__) || defined(_MSC_VER)
# define SEQ_DECL_SIZE /* empty is correct! */
#else
# define SEQ_DECL_SIZE 1000000
#endif
typedef char NIM_CHAR;
typedef long long int NI64;
typedef NI64 NI;
struct TGenericSeq {NI len; NI reserved; };
struct NimStringDesc {TGenericSeq Sup; NIM_CHAR data[SEQ_DECL_SIZE]; };
and here is the output of what I have tried in the lldb session:
(lldb) frame variable *longstring
(NimStringDesc) *longstring = {
  Sup = (len = 9, reserved = 15)
  data = {}
}
(lldb) frame variable longstring->data
(NIM_CHAR []) longstring->data = {}
(lldb) type summary add --summary-string "${&var[0]%s}" "NIM_CHAR []"
(lldb) frame variable longstring->data
(NIM_CHAR []) longstring->data = {}
(lldb) type summary add --summary-string "${var%s}" "NIM_CHAR *"
(lldb) frame variable longstring->data
(NIM_CHAR []) longstring->data = {}
(lldb) frame variable &longstring->data[0]
(NIM_CHAR *) &[0] = 0x00007ffff7f3a060 "9 - 3 - 2"
(lldb) frame variable *longstring
(lldb) type summary add --summary-string "${var.data%s}" "NimStringDesc"
(lldb) frame variable *longstring
(NimStringDesc) *longstring = NIM_CHAR [] # 0x7ffff7f3a060
(lldb) type summary add --summary-string "${&var.data[0]%s}" "NimStringDesc"
(lldb) frame variable *longstring
(NimStringDesc) *longstring = {
  Sup = (len = 9, reserved = 15)
  data = {}
}
(lldb)
I simply can't manage to get the output to be just data interpreted as a '\0'-terminated C string.
The summary string syntax you've tried is (by design) not as syntax-rich as C.
And since you're using a zero-sized array, I don't think we have any magic provision to treat that as a pointer-to-string. You might want to file a bug about it, but in this case, it's arguable whether it would help you. Since your string is length-encoded, it doesn't really need to be zero-terminated, and zero termination is the only hint LLDB would understand out of the box to know when to stop reading out of a pointer to characters.
In your case, you're going to have to resort to Python formatters.
The things you need are:
the memory location of the string buffer
the length of the string buffer
a process to read memory out of
This is a very small Python snippet that does it - I'll give you enhancement suggestions as well, but let's start with the basics:
def NimStringSummary(valobj, stuff):
    l = valobj.GetChildMemberWithName('Sup').GetChildMemberWithName('len').GetValueAsUnsigned(0)
    s = valobj.GetChildMemberWithName('data').AddressOf()
    return '"%s"' % valobj.process.ReadMemory(s.GetValueAsUnsigned(0), l, lldb.SBError())
As you can see, first of all it reads the value of the length field;
then it reads the address-of the data buffer; then it uses the process that the value comes from to read the string content, and returns it in quotes.
Now, this is a proof of concept. If you used it in production, you'd quickly run into a few issues:
What if your string buffer hasn't been initialized yet, and it claims the size of the buffer is 20 gigabytes? You're going to have to limit the size of the data you're willing to read. For string-like types it has built-in knowledge of (char*, std::string, Swift.String, ...), LLDB prints out the truncated buffer followed by ..., e.g.
(const char*) buffer = "myBufferIsVeryLong"...
What if the pointer to the data is invalid? You should check that s.GetValueAsUnsigned(0) isn't actually zero - if it is you might want to print an error message like "null buffer".
Also, here I just passed an SBError that I then ignore - it would be better to pass one and then check it.
All in all, you'd end up with something like:
import lldb
import os

def NimStringSummary(valobj, stuff):
    l = valobj.GetChildMemberWithName('Sup').GetChildMemberWithName('len').GetValueAsUnsigned(0)
    if l == 0: return '""'
    if l > 1024: l = 1024
    s = valobj.GetChildMemberWithName('data').AddressOf()
    addr = s.GetValueAsUnsigned(0)
    if addr == 0: return '<null buffer>'
    err = lldb.SBError()
    buf = valobj.process.ReadMemory(addr, l, err)
    if err.Fail(): return '<error: %s>' % str(err)
    return '"%s"' % buf

def __lldb_init_module(debugger, internal_dict):
    debugger.HandleCommand("type summary add NimStringDesc -F %s.NimStringSummary"
                           % os.path.splitext(os.path.basename(__file__))[0])
The one extra trick is the __lldb_init_module function - this function is automatically called by LLDB whenever you 'command script import' a Python file. This allows you to add a 'command script import <path to your script>' line to your ~/.lldbinit file and automatically have the formatter picked up in all debug sessions.
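For example, if the script above were saved as nimstring.py (a file name chosen here purely for illustration), the ~/.lldbinit line could be:
command script import ~/lldb/nimstring.py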
Hope this helps!