ARM Linux: How to get current_pt_regs - linux-kernel

I'm new to Linux and have been reading some of the kernel source. I came across the macro current_pt_regs in arch/arm/include/asm/ptrace.h, defined as below:
#define current_pt_regs(void) ({ (struct pt_regs *)            \
    ((current_stack_pointer | (THREAD_SIZE - 1)) - 7) - 1;     \
})
In my understanding, current_stack_pointer | (THREAD_SIZE - 1) gives the top address of the stack. What confuses me is: why the - 7?
Can someone explain that?
Thanks and regards,
kan.
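
For what it's worth, the arithmetic seems to work out like this: because the kernel stack is THREAD_SIZE-aligned, the OR sets all the low bits and yields the address of the last byte of the stack; subtracting 7 then lands eight bytes below the stack top (the ARM kernel appears to reserve eight bytes at the very top of the kernel stack), and the final - 1 on the struct pt_regs pointer subtracts a whole sizeof(struct pt_regs). Below is a minimal user-space sketch of the same address arithmetic; the THREAD_SIZE value, the example stack pointer and the pt_regs stand-in are assumptions for illustration, not the kernel's definitions.

#include <stdint.h>
#include <stdio.h>

#define THREAD_SIZE 8192u                 /* assumed 8 KiB, power of two */

struct pt_regs { uint32_t uregs[18]; };   /* stand-in for the ARM register frame */

int main(void)
{
    uintptr_t sp = 0xc7a81e30;            /* made-up address inside a kernel stack */

    uintptr_t last_byte = sp | (THREAD_SIZE - 1);            /* stack_base + THREAD_SIZE - 1 */
    uintptr_t below_top = last_byte - 7;                     /* stack_base + THREAD_SIZE - 8 */
    struct pt_regs *regs = (struct pt_regs *)below_top - 1;  /* minus sizeof(struct pt_regs) */

    printf("last byte of stack: %#lx\n", (unsigned long)last_byte);
    printf("top minus 8 bytes : %#lx\n", (unsigned long)below_top);
    printf("pt_regs starts at : %#lx\n", (unsigned long)regs);
    return 0;
}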

Related

Modifying BOOST_PP_SEQ

I have a sequence: (a)(1)(b)(2)(c)(3) and I want to expand it to a_b_c. What are my options? I tried BOOST_PP_SEQ_FOLD_LEFT and BOOST_PP_SEQ_FILTER, but without success; what I got was a1b2c3 ...
Any help?
You could do something like this (I can't guarantee it's the best approach):
#define DO_CONCAT_WITH_UNDERSCORE(__,STATE,ELEM) BOOST_PP_CAT(STATE,BOOST_PP_CAT(_,ELEM))
#define DO_FILTER_POSITION(_,__,INDEX,ELEM) BOOST_PP_IF(BOOST_PP_MOD(INDEX,2),BOOST_PP_EMPTY(),(ELEM)) // if the index is odd, add nothing; if it's even, add the current elem in parens
#define TAKE_EVEN_POSITIONS(SEQ) BOOST_PP_SEQ_FOR_EACH_I(DO_FILTER_POSITION,_,SEQ)
#define TAKE_EVEN_POSITIONS_AND_CONCAT_HELPER(FILTERED_SEQ) BOOST_PP_SEQ_FOLD_LEFT(DO_CONCAT_WITH_UNDERSCORE,BOOST_PP_SEQ_HEAD(FILTERED_SEQ),BOOST_PP_SEQ_TAIL(FILTERED_SEQ))
#define TAKE_EVEN_POSITIONS_AND_CONCAT(SEQ) TAKE_EVEN_POSITIONS_AND_CONCAT_HELPER(TAKE_EVEN_POSITIONS(SEQ))
Running on Wandbox
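
To sanity-check the expansion with the sequence from the question (MY_SEQ and a_b_c are names invented for this example, and the macros above are assumed to be in scope):

#include <boost/preprocessor.hpp>

// ... DO_CONCAT_WITH_UNDERSCORE, DO_FILTER_POSITION, TAKE_EVEN_POSITIONS,
//     TAKE_EVEN_POSITIONS_AND_CONCAT_HELPER and TAKE_EVEN_POSITIONS_AND_CONCAT as above ...

#define MY_SEQ (a)(1)(b)(2)(c)(3)

int a_b_c = 42;                                      // the identifier the expansion should name
int check = TAKE_EVEN_POSITIONS_AND_CONCAT(MY_SEQ);  // expands to a_b_c, so check == 42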

lldb - How to display float with decimals using "type format add"

I have a variable of type float. Xcode displays it using scientific notation (i.e. 3.37626e+07). I'm trying to get it to display using dot notation (i.e. 33762616.00).
I've tried every format provided by lldb, but none displays the float using decimals. I read other posts and watched WWDC 2012 session 415 (as suggested here), but I must be too close to the forest to see the trees. Any help would be greatly appreciated!
Try adding a custom data formatter in your ~/.lldbinit file for type float. e.g.
Process 13204 stopped
* thread #1: tid = 0xb6f8d, 0x0000000100000f33 a.out`main + 35 at a.c:5, stop reason = step over
    #0: 0x0000000100000f33 a.out`main + 35 at a.c:5
   2   int main ()
   3   {
   4       float f = 33762616.0;
-> 5       printf ("%f\n", f);
   6   }
(lldb) p f
(float) $0 = 3.37626e+07
(lldb) type summ add -v -o "return '%f' % valobj.GetData().GetFloat(lldb.SBError(), 0)" float
(lldb) p f
(float) $1 = 33762616.000000
(lldb)
The default set of formatters provided by lldb can't do this, but dropping into Python allows you a lot of flexibility.

Pretty Printing a tree data structure in Ruby

I am working on building a compiler, and within that I generate a tree that represents the source program that is passed in. I want to display this in a tree-like fashion so I can show the structure of the program to anyone interested.
Right now I just have the tree printing on a single line like this:
ProgramNode -> 'Math' BlockNode -> DeclarationNode -> ConstantDeclarationNode -> const ConstantListNode -> [m := 7, ConstantANode -> [n := StringLiteralNode -> ""TEST"" ]] ;
What I would like is something like this:
                ProgramNode
                /         \
           'Math'      BlockNode
                           |
                    DeclarationNode
                           |
                ConstantDeclarationNode ----------------------------
                   /            \                                  |
                 const   ConstantListNode                          |
                           /   |   \    \                          |
                           m  :=   7   ConstantANode               |
                                         /   |   \                 |
                                         n  :=  StringLiteralNode  |
                                                    /   |   \      |
                                                    " TEST "       ;
I haven't really worked with trees in Ruby; how are they usually represented?
Any help would be appreciated.
This kind of pretty printing requires quite a bit of math. Besides, it's unclear what should happen if the tree grows too wide for the console window. I don't know of any existing libraries that'll do this. I personally use awesome_print.
tree = {'ConstantDeclarationNode' => ['const',
                                      'ConstantListNode' => ['m', ':=', '7']]}
require 'awesome_print'
ap tree
# >> {
# >>     "ConstantDeclarationNode" => [
# >>         [0] "const",
# >>         [1] {
# >>             "ConstantListNode" => [
# >>                 [0] "m",
# >>                 [1] ":=",
# >>                 [2] "7"
# >>             ]
# >>         }
# >>     ]
# >> }
It has tons of options, check it out!
You need to check out the Graph gem. It is amazing and remarkably simple to work with. You can choose the direction of your tree and the shape of the nodes, as well as colors and so much more. I first found out about it at Rubyconf last year and was blown away.
It is as simple as:
digraph do
edge "Programnode", "Blocknode"
edge "Programnode", "Math"
edge "Blocknode", "DeclarationNode"
end
Obviously you would want to programmatically enter the edges :)
Here is a link to a pdf of the talk which will give more information on it:
There is also a video of the talk on Confreaks if you are interested.
Cheers,
Sean

Interpreting valgrind error Invalid write of size 4

I was recently using valgrind to track down some bugs in a program I am working on, and one of the errors I got was:
==6866== Invalid write of size 4
==6866== at 0x40C9E2: superneuron::read(_IO_FILE*) (superneuron.cc:414)
The offending line (414) reads
amplitudes__[points_read] = 0x0;
and amplitudes__ is defined earlier as
uint32_t * amplitudes__ = (uint32_t* ) amplitudes;
Now, obviously a uint32_t is 4 bytes long, so that explains the write size, but could someone tell me why the write is invalid?
points_read is most likely out of bounds: you're writing past (or before) the memory you allocated for amplitudes.
A typical mistake that new programmers make, and that leads to this warning, is:
struct a *many_a;
many_a = malloc(sizeof *many_a * size + 1);
and then try to read or write to the memory at location 'size':
many_a[size] = ...;
Here the allocation should be:
many_a = malloc(sizeof *many_a * (size + 1));
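
For comparison, a minimal standalone sketch that produces the same kind of report for a uint32_t write (the variable names mirror the question but are invented here, as is the element count):

#include <stdint.h>
#include <stdlib.h>

int main(void)
{
    size_t n_points = 16;                                          /* made-up size */
    uint32_t *amplitudes = malloc(n_points * sizeof *amplitudes);
    if (!amplitudes)
        return 1;

    size_t points_read = n_points;       /* one past the last valid index */
    amplitudes[points_read] = 0x0;       /* valgrind: Invalid write of size 4 */

    /* Fix: keep the index in bounds, e.g. check points_read < n_points,
     * or allocate n_points + 1 elements if the extra slot is intended. */
    free(amplitudes);
    return 0;
}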

__sync_fetch_and_and atomic gives wrong result in single threaded program with Clang

I'm having a problem with __sync_fetch_and_and incorrectly performing. I wrote the following code to illustrate it:
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    bool equal;
    int64_t mask = 0x01234567BEEFDEAD;
    int64_t orig = 0xDEADBEEF01234567;
    int64_t test1, test2;
    test1 = test2 = orig;
    equal = (test1 == test2);
    printf("Before anding\n");
    printf("test1:\t0x%016llX\n", test1);
    printf("test2:\t0x%016llX\n", test2);
    printf("equal:\t%d\n", equal);

    // Try anding
    test1 &= mask;
    __sync_fetch_and_and(&test2, mask);
    equal = (test1 == test2);
    printf("After anding\n");
    printf("test1:\t0x%016llX\n", test1);
    printf("test2:\t0x%016llX\n", test2);
    printf("equal:\t%d\n", equal);
    return 0;
}
The output from this is:
Before anding
test1: 0xDEADBEEF01234567
test2: 0xDEADBEEF01234567
equal: 1
After anding
test1: 0x0021046700234425
test2: 0xDFAFFFEFBFEFDFEF
equal: 0
...which is obviously not correct. I've tried __sync_and_and_fetch instead, but that doesn't fix it. Or-ing with __sync_fetch_and_or works correctly. I'm using Xcode 4.2.1 and compiling with the default compiler, Apple LLVM Compiler 3.0 (Clang). When I switch to GCC 4.2, it works correctly.
This certainly seems like a compiler bug, but I'm not sure if I'm somehow doing this wrong on Clang. Are there some differences in Clang that I'm not accounting for, or is this indeed a bug?
EDIT: I haven't tried the latest release of Clang (3.0) because I'm stuck using Xcode for now.
Yes, it's a bug; it's been fixed in newer versions of clang. As a workaround, you can add "-no-integrated-as" to your compiler flags.
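
For reference, __sync_fetch_and_and is supposed to atomically AND the mask into the target (returning the old value), so with a working compiler both variables should end up equal to the plain &= result and the second block of output should read:
After anding
test1: 0x0021046700234425
test2: 0x0021046700234425
equal: 1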
