__stop_SECTION contains wrong value after compilation - gcc

While developing code for an STM32 MCU, I'm placing some structures into a custom memory section:
struct MyStruct structName;
__attribute__((__section__("my_section"))) __attribute__((__used__)) const struct MyStruct *const mystruct1 = &structName;
The linker generated additional symbols, __start_my_section_ and __stop_my_section_.
After compilation, while debugging the MCU, I see that __start_my_section_ contains a valid value, e.g. 0x20000468, but __stop_my_section_ is incorrect, equal to 0.
In the .map file, __stop_my_section_ contains a valid value as well:
0x20000470 __stop_my_section_ = .
What's wrong? I haven't added anything to the linker script; the linker automatically created that section and placed it in .bss.
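For reference, consuming such a section from C usually looks like the sketch below (hypothetical iteration code, using the symbol names as reported above; this is context for the setup, not the fix itself):

extern const struct MyStruct *const __start_my_section_;
extern const struct MyStruct *const __stop_my_section_;

static void walk_my_section(void)
{
    /* walk every registered pointer between the linker-generated symbols */
    for (const struct MyStruct *const *p = &__start_my_section_;
         p < &__stop_my_section_; ++p) {
        /* *p is one const struct MyStruct * placed into "my_section" */
    }
}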


Rust WINAPI outputting HMODULE value

I am trying to write a simple Rust console application that calls into WINAPI.
I will omit the obvious use and extern crate parts of my code. Here it is:
fn win32_string(value: &str) -> Vec<u16> {
    OsStr::new(value).encode_wide().chain(once(0)).collect()
}

fn main() {
    println!("===== Rust Windows experiment #1 =====");
    let module_name = win32_string("ntdll.dll");
    let h_instance: HMODULE;
    unsafe {
        h_instance = GetModuleHandleW(module_name.as_ptr());
    }
    println!("Value of h_instance: {:#?}", h_instance);
}
I am building it against the target triple:
[build]
target = "i686-pc-windows-msvc"
As you can see, I am targeting a 32-bit build. Now the output of my program is the following:
===== Rust Windows experiment #1 =====
Value of h_instance: 0x00007ffb61c40000
It is showing a 64-bit address. How can this happen? Am I printing the HMODULE value to the console incorrectly, or what else am I doing wrong?
With help from rodrigo: the [build] table is supposed to go into the .cargo\config file instead of Cargo.toml in order for the compiler to pick it up. :) After changing that, the issue is now fixed. :)
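For reference, a minimal .cargo\config along the lines described above (the [build] table is exactly the one from the question):

# .cargo\config (not Cargo.toml)
[build]
target = "i686-pc-windows-msvc"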

How to call a C function that takes a C struct that contains pointers

From a Go program on a Raspberry Pi I'm trying to call a function (a MATLAB function converted to a C function). The input to the function is a pointer to a struct, and the struct contains a pointer to double (data), a pointer to int (size), and two ints (allocatedSize, numDimensions). I have tried several ways, but nothing has worked; when the code does compile, it usually throws panic: runtime error: cgo argument has Go pointer to Go pointer when I run the program.
sumArray.c
/*sumArray.C*/
/* Include files */
#include "sumArray.h"
/* Function Definitions */
double sumArray(const emxArray_real_T *A1)
{
    double S1;
    int vlen;
    int k;
    vlen = A1->size[0];
    if (A1->size[0] == 0) {
        S1 = 0.0;
    } else {
        S1 = A1->data[0];
        for (k = 2; k <= vlen; k++) {
            S1 += A1->data[k - 1];
        }
    }
    return S1;
}
sumArray.h
#ifndef SUMARRAY_H
#define SUMARRAY_H
/* Include files */
#include <stddef.h>
#include <stdlib.h>
#include "sumArray_types.h"
/* Function Declarations */
extern double sumArray(const emxArray_real_T *A1);
#endif
sumArray_types.h
#ifndef SUMARRAY_TYPES_H
#define SUMARRAY_TYPES_H
/* Include files */
/* Type Definitions */
#ifndef struct_emxArray_real_T
#define struct_emxArray_real_T
struct emxArray_real_T
{
    double *data;
    int *size;
    int allocatedSize;
    int numDimensions;
};
#endif /*struct_emxArray_real_T*/
#ifndef typedef_emxArray_real_T
#define typedef_emxArray_real_T
typedef struct emxArray_real_T emxArray_real_T;
#endif /*typedef_emxArray_real_T*/
#endif
/* End of code generation (sumArray_types.h) */
main.go
package main

// #cgo CFLAGS: -g -Wall
// #include <stdlib.h>
// #include "sumArray.h"
import "C"

import (
    "fmt"
)

func main() {
    a1 := [4]C.double{1, 1, 1, 1}
    a2 := [1]C.int{4}
    cstruct := C.emxArray_real_T{data: &a1[0], size: &a2[0]}
    cstructArr := [1]C.emxArray_real_T{cstruct}
    y := C.sumArray(&cstructArr[0])
    fmt.Print(float64(y))
}
With this example I get panic: runtime error: cgo argument has Go pointer to Go pointer when I run the program.
I do not know how to make it work, or whether it is possible at all. I hope someone can help me or give me some direction on how to solve this.
Too much for a comment, so here's the answer.
First, the original text:
A direct solution is to use C.malloc(4 * C.sizeof_double) to allocate the array of doubles. Note that you have to make sure to call C.free() on it when done. The same applies to the second array of a single int.
Now, your comment on Mattanis' remark, which was, reformatted a bit:
thanks for giving some pointers. I tried with
a1 := [4]C.double{1, 1, 1, 1}
sizeA1 := C.malloc(4 * C.sizeof_double)
cstruct := C.emxArray_real_T{
    data: &a1[0],
    size: (*C.int)(sizeA1),
}
y := C.sumArray(&cstruct)
defer C.free(sizeA1)
but it gave me the same answer as before, cgo argument has Go pointer to Go pointer, when I tried to run the program
You still seem to miss the crucial point. When you're using cgo, there are two disjoint "memory views":
"The Go memory" is everything allocated by the Go runtime powering your running process—on behalf of that process. This memory (most of the time, barring weird tricks) is known to the GC—which is a part of the runtime.
"The C memory" is memory allocated by the C code—typically by calling the libc's malloc()/realloc().
Now imagine a not-so-far-fetched scenario:
- Your program runs; the C "side" gets initialized, spawns its own thread (or threads), and holds handles on them.
- Your Go "side" already uses multiple threads to run your goroutines.
- You allocate some Go memory in your Go code and pass it to the C side.
- The C side passes the address of that memory to one or more of its own threads.
- Your program continues to chug away, and so do the C-side threads, in parallel with your Go code.
As a result you have a reasonably classical scenario in which you get a super-simple setup for unsynchronized parallel memory access, which is a sure recipe for disaster on today's multi-core, multi-socket hardware.
Also consider that Go is a considerably higher-level programming language than C; at the bare minimum, it has automatic garbage collection, and notice that nothing in the Go spec specifies how exactly the GC must be implemented.
This means a particular implementation of Go (including the reference one, in the future) is free to let its GC move arbitrary objects in memory¹, which means updating every pointer pointing into the memory block at its original location so that it points to the same place in the block's new location after the move.
With these considerations in mind, the Go devs postulated that in order to keep cgo-using programs future-proof², it is forbidden to pass to C any memory blocks which contain pointers to other Go memory blocks.
It's okay to pass Go memory blocks which contain pointers to C memory, though.
Going back to the example from your second comment,
you still allocate the array of 4 doubles, a1, in the Go memory.
Then the statement cstruct := C.emxArray_real_T{...} again allocates an instance of C.emxArray_real_T in the Go memory, so after you initialize its data field with a pointer to Go memory (&a1[0]) and then pass its address to the C side, the runtime performs its dynamic checks before actually calling into the C side and crashes your program.
¹ This is typical behaviour for the so-called "generational" garbage collectors, for one example.
² That is, you recompile your program with a future version of the Go compiler of the same "major" release, and the program continues to work, unmodified.
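To make that concrete, here is a minimal sketch of the suggested approach, assuming the sumArray.c / sumArray.h files from the question sit next to main.go: both the data array and the size array are allocated with C.malloc, so the struct handed to C contains no pointers into Go memory.

package main

// #cgo CFLAGS: -g -Wall
// #include <stdlib.h>
// #include "sumArray.h"
import "C"

import (
    "fmt"
)

func main() {
    // Allocate the data array in C memory and fill it from Go.
    data := C.malloc(4 * C.sizeof_double)
    defer C.free(data)
    cdata := (*[4]C.double)(data)
    cdata[0], cdata[1], cdata[2], cdata[3] = 1, 1, 1, 1

    // Allocate the size array (a single int) in C memory as well.
    size := C.malloc(C.sizeof_int)
    defer C.free(size)
    *(*C.int)(size) = 4

    // The struct itself may live in Go memory: it now only holds C pointers.
    cstruct := C.emxArray_real_T{
        data: (*C.double)(data),
        size: (*C.int)(size),
    }
    fmt.Println(float64(C.sumArray(&cstruct)))
}

Since cstruct contains only pointers into C memory, the cgo runtime checks pass when &cstruct is handed to the C side.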

Linux kernel module - accessing memory mapping

I'm running into some odd issue on kernel module load that I suspect has to do with linking and loading. How do I programmatically figure out the address of each section after they are loaded in memory (from inside the module itself)? Like where .bss / .data / .text and so on are.
From reading this article
https://lwn.net/Articles/90913/
it is sort of in the direction of what I'm looking for.
You can see the sections' start addresses like this from userspace (root permissions needed):
sudo cat /sys/module/<modulename>/sections/.text
I have browsed how sysfs retrieves these addresses, and I found the following:
There is a section-attributes field in struct module:
/* Section attributes */
struct module_sect_attrs *sect_attrs;
sect_attrs is a bunch of attribute structs:
struct module_sect_attrs {
    struct attribute_group grp;
    unsigned int nsections;
    struct module_sect_attr attrs[0];
};
where module_sect_attr is the thing you are looking for:
struct module_sect_attr {
    struct module_attribute mattr;
    char *name;
    unsigned long address;
};
From the module's code, the THIS_MODULE macro is actually a pointer to the struct module object. Its module_init and module_core fields point to the memory regions where all module sections are loaded.
As I understand it, the section layout is inaccessible from the module code (struct load_info is dropped after the module is loaded into memory). But with the module's file at hand you can easily deduce the sections' addresses after load:
module_init:
- init sections with code (.init.text)
- init sections with readonly data
- init sections with writable data
module_core:
- sections with code (.text)
- sections with readonly data
- sections with writable data
If several sections fall into one category, they are placed in the same order as in the module's file.
Within the module's code you can also print the address of any of its symbols, and then calculate the start of the section containing that symbol.
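For example, a tiny hypothetical module along those lines (%px needs a reasonably recent kernel; older kernels can use %p, which may hash the address):

#include <linux/module.h>
#include <linux/kernel.h>

static int probe_sym;   /* uninitialized, non-const: ends up in the module's .bss */

static int __init secprobe_init(void)
{
    pr_info("probe_sym loaded at %px\n", &probe_sym);
    return 0;
}

static void __exit secprobe_exit(void)
{
}

module_init(secprobe_init);
module_exit(secprobe_exit);
MODULE_LICENSE("GPL");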
While this question is five years old, I thought I would contribute my two cents. I was able to access the module's sections in a somewhat hack-y way, inspired by Alex Hoppus' answer. I don't advocate doing things this way unless you are writing the kernel module to debug things or understand the kernel, etc.
Anyway, I copy the following two structs into my module to help resolve incomplete types.
struct module_sect_attr {
    struct module_attribute mattr;
    char *name;
    unsigned long address;
};

struct module_sect_attrs {
    struct attribute_group grp;
    unsigned int nsections;
    struct module_sect_attr attrs[0];
};
Then, in my module initialization function, I do the following to get the section addresses.
unsigned long text = 0;
unsigned int nsections = 0;
unsigned int i;
struct module_sect_attr *sect_attr;

nsections = THIS_MODULE->sect_attrs->nsections;
sect_attr = THIS_MODULE->sect_attrs->attrs;

for (i = 0; i < nsections; i++) {
    if (strcmp((sect_attr + i)->name, ".text") == 0)
        text = (sect_attr + i)->address;
}
Finally, it should be noted that if you are looking for the address of .rodata, .bss, or .data you will need to define constant global variables, uninitialized global variables, or regular global variables, respectively, if you don't want those sections to be omitted.
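For example (hypothetical globals, one per section):

static const int lives_in_rodata = 42;  /* .rodata: constant, initialized     */
static int lives_in_bss;                /* .bss:    uninitialized             */
static int lives_in_data = 1;           /* .data:   initialized and writable  */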

risk of compile warning

I have mainly two kinds of compile warning:
1. implicit declaration of function
In a.c there is char *foo(char *ptr1, char *ptr2); in b.c some functions use this foo function without any declaration. It seems the compiler treats foo's return value as an integer, and I can even pass fewer or more arguments than the foo function declaration specifies.
2. enumerated type mixed with another type
My target chip is ARM11. It seems that even if I don't fix these two kinds of compile warning, my program runs without any issues, but I believe there must be some risk behind them. Can anyone give me a good example of how these two kinds of compile warning can cause unexpected issues?
Also, if these two warnings carry real risk, why does the C compiler only warn about them instead of treating them as errors directly? Is there any story behind that?
Implicit declaration. E.g. you have the function float foo(float a), which isn't declared when you call it. The implicit rules create an auto-declaration with the following signature: int foo(double) (since a float argument is promoted to double). So the value you pass will be converted to double, but foo expects float. The same goes for the return value: the calling code expects an int, but a float is returned. The values would be a complete mess.
Enum mixed with another type. An enumerated type has a list of values it can take. If you try to assign a plain numeric value to it, there is a chance that it isn't one of the listed values; if your code later expects only the specified range and presumes nothing else can be there, it can misbehave.
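A small sketch of the enum case (hypothetical names; the assignment marked below is the kind of spot an "enumerated type mixed with another type" warning points at):

#include <stdio.h>

enum color { RED, GREEN, BLUE };    /* valid values: 0, 1, 2 */

static const char *color_name(enum color c)
{
    switch (c) {
    case RED:   return "red";
    case GREEN: return "green";
    case BLUE:  return "blue";
    }
    return "unknown";
}

int main(void)
{
    int raw = 7;            /* e.g. read from a register or a file */
    enum color c = raw;     /* int mixed with enum: 7 is not a listed value */
    printf("%s\n", color_name(c));  /* prints "unknown"; code assuming only
                                       RED/GREEN/BLUE silently misbehaves */
    return 0;
}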
A simple example of the implicit-declaration case:
File: warn.c
#include <stdio.h>

double foo(double x)
{
    return myid(x);
}

int
main (void)
{
    double x = 1.0;
    fprintf (stderr, "%lg == %lg\n", x, foo (x));
    return 0;
}
File: foo.c
double
myid (double x)
{
    return x;
}
Compile and run:
$ gcc warn.c foo.c -Wall
warn.c: In function ‘foo’:
warn.c:5: warning: implicit declaration of function ‘myid’
$ ./a.out
1 == 0
The old C standard (C90) had this strange "default int" rule, and for compatibility it is still supported even in the latest compilers.
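For completeness, the usual fix for the implicit-declaration warning is to give the compiler a real declaration before the call, e.g. in a small shared header (hypothetical file name):

/* foo.h (hypothetical) */
#ifndef FOO_H
#define FOO_H

double myid(double x);  /* now warn.c sees the real parameter and return types */

#endif

With #include "foo.h" added to warn.c, the warning goes away and the program prints 1 == 1.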

Clang does not allow static_cast to parent class with template, while g++ and icc allow

I am testing my C++11 code to see whether all recent major compilers support the features I used, and the following shortened code
#include <valarray>
struct T
{
    double vv[3];
};
class V : public std::valarray<T>
{
public:
    auto begin() -> decltype(std::begin(static_cast<std::valarray<T>>(*this)))
    {
        return std::begin(static_cast<std::valarray<T>>(*this));
    }
};
int main(void)
{
}
would compile with g++ 4.8.1 (from the Debian sid repository) and Intel C++ Compiler 13.1.1 20130313, but not with Clang 3.3-2 (from the Debian sid repository).
The given error is:
test.cpp:11:73: error: no viable conversion from 'V' to 'std::valarray<T>'
auto begin()->decltype(std::begin(static_cast<std::valarray<T>>(*this)))
^~~~~
However, code like this
namespace std
{
    auto begin(V& vv) -> decltype(std::begin(static_cast<V::parent_t>(vv)))
    {
        return std::begin(static_cast<V::parent_t>(vv));
    }
}
compiles with all three compilers.
My question is: is the code itself allowed by the language standard and Clang just fails to compile it, or is it only supported as a g++/icc extension? Or is it undefined behaviour?
The code is very dangerous and needs to be fixed even for GCC and ICC.
You're doing a static_cast to a value type, not a reference or pointer. That creates a new temporary valarray object, so the const overload of begin gets called (probably not what you intended), and the iterator returned by begin() refers to the temporary, which goes out of scope immediately; the returned iterator is therefore invalid, and dereferencing it is undefined behaviour.
The code will compile like this:
auto begin() -> decltype(std::begin(std::declval<std::valarray<T>&>()))
{
    /* cast to reference type: std::valarray<T>& */
    return std::begin(static_cast<std::valarray<T>&>(*this));
}
The decltype doesn't need to cast this; it just needs to know the result type of calling std::begin on a valarray<T>, so it doesn't matter that the type is incomplete, because you don't need a cast.
In the body of the function the type is considered complete anyway, so the cast is valid.
