gcc and explicit 256B boundary alignment on ARM

I'm using the gcc toolchain for ARM and I need the linker to place a structure on a 256B boundary in memory. I've tried the aligned attribute:
volatile struct ohci_hcca hcca __attribute__ ((aligned (256)));
but with no luck:
nm out.elf | grep hcca
20011dc0 B hcca
I remember using this in the past for fields that had to be on a 512B boundary, so I know it's possible, but I seem to be missing something this time.
Thanks for any guidance.
EDIT:
The mystery has been solved. The BSP library is composed of tens of drivers, and one of their header files contained this:
#if defined __ICCARM__ || defined __CC_ARM || defined __GNUC__
//#pragma data_alignment=8 /* IAR */
#pragma pack(1) /* IAR */
#define __attribute__(...) /* IAR */
#endif /* IAR */
For __GNUC__ it doesn't make any sense, unless the author of this chunk is a member of a guerrilla force against GCC users. Ruined my entire evening :)
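For anyone hitting the same thing: a minimal defensive sketch (assuming you cannot edit the offending BSP header, and that struct ohci_hcca is declared in your OHCI headers) is to undo the clobbering define before the declaration:
/* The BSP header defined __attribute__(...) away entirely; remove that
   macro so GCC's real __attribute__ keyword works again. */
#undef __attribute__
volatile struct ohci_hcca hcca __attribute__ ((aligned (256)));
After that, nm should report an address whose last two hex digits are 00.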

Related

Is it possible to set a breakpoint at OpenGL functions?

Sometimes I need to find out which part of the code call a certain OpenGL function, so I try this:
b glEnableVertexAttribArray
----------------------------
Breakpoint 3 at 0x7ffff0326c80 (2 locations)
But it doesn't work. Is there any way to make this work?
I'm using gdb on Ubuntu 18.04; my GPU is a GeForce GTX 1050 Ti.
If you look at your GL/glew.h header, you will see that it contains lines similar to the following:
#define GLEW_GET_FUN(x) x
#define glCopyTexSubImage3D GLEW_GET_FUN(__glewCopyTexSubImage3D)
#define glDrawRangeElements GLEW_GET_FUN(__glewDrawRangeElements)
#define glTexImage3D GLEW_GET_FUN(__glewTexImage3D)
#define glTexSubImage3D GLEW_GET_FUN(__glewTexSubImage3D)
When you call glewInit, these __glew* variables are populated with pointers extracted from your OpenGL implementation.
In your case, you should set a breakpoint on the contents of such a pointer: *__glewEnableVertexAttribArray.
For GLAD you will have to put a breakpoint on *glad_glEnableVertexAttribArray. Note the * in both cases: that tells your debugger to dereference the pointer and put the breakpoint at the correct location.
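For example, a gdb session along these lines (a sketch; "myapp" is a stand-in for your program, and glewInit() must have run before the pointer holds a valid address):
$ gdb ./myapp
(gdb) start
(gdb) next          # repeat until glewInit() has returned, so __glewEnableVertexAttribArray is populated
(gdb) break *__glewEnableVertexAttribArray
(gdb) continue
(gdb) backtrace     # when the breakpoint hits, this shows which code called glEnableVertexAttribArray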

Warnings with a new Xcode project using FMDB, SqliteCipher and CocoaPods

I have installed CocoaPods and loaded the workspace as instructed.
I am getting warnings that I do not understand. Here is an example:
Pods-CipherDatabaseSync-SQLCipher
sqlite.c
/Users/admin/Code/CipherDatabaseSync/Pods/SQLCipher/sqlite3.c:24035:13: Ambiguous expansion of macro 'MAX'
I have looked around for a couple of hours and I am stumped. Can someone please point me toward something that will provide some insight?
Thanks.
In the sqlite3.c file it looks as if MIN and MAX are defined in two different places. The first time is at line 214:
/* Macros for min/max. */
#ifndef MIN
#define MIN(a,b) (((a)<(b))?(a):(b))
#endif /* MIN */
#ifndef MAX
#define MAX(a,b) (((a)>(b))?(a):(b))
#endif /* MAX */
Then a second time at line 8519:
/*
** Macros to compute minimum and maximum of two numbers.
*/
#define MIN(A,B) ((A)<(B)?(A):(B))
#define MAX(A,B) ((A)>(B)?(A):(B))
I commented out the second set of definitions, cleaned and rebuilt the project, and all of the warnings went away.
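A less destructive variant of the same fix (a sketch) is to guard the second pair with #ifndef, matching how the first pair at line 214 is already written:
/* around line 8519 of sqlite3.c */
#ifndef MIN
#define MIN(A,B) ((A)<(B)?(A):(B))
#endif /* MIN */
#ifndef MAX
#define MAX(A,B) ((A)>(B)?(A):(B))
#endif /* MAX */
That way the file still builds standalone even if the first definitions are ever removed.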
Remove the MAX and MIN macro definitions from the sqlite3.c file, since they are already defined in system headers.
Solution:
Open the Xcode project's Build Settings and add "-Wno-ambiguous-macro" to "Other C Flags".

__attribute__() macro and its effect on Visual Studio 2010 based projects

I have some legacy code with conditional preprocessing (#ifdef and #else) in which I found uses of the __attribute__ macro. A quick search told me it is specific to GNU compilers. I have to build this legacy code in Visual Studio 2010 with the MSVC10 compiler, and it complains everywhere it sees __attribute__((unused)), even though the uses appear to be protected by #ifndef and #ifdefs. An example:
#ifdef __tone_z__
static const char *mcr_inf
#else
static char *mcr_inf
#endif
#ifndef _WINDOWS
__attribute__(( unused )) /* this is causing all the problems!! */
#endif
= "#(#) bla bla copyright bla";
#ifdef __tone_z__
static const char *mcr_info_2a353_id[2]
#else
static char *mcr_info_2a353_id[2]
#endif
__attribute__(( unused )) = { "my long copyright info!" } ;
I am really struggling to understand whether this is very poorly planned code or just my misunderstanding. How do I avoid the compiler and linker errors caused by this __attribute__() directive? I have started to get C2061 errors (unknown identifier). I have all the necessary header files; nothing is missing, except perhaps a GNU compiler (which I don't want!!).
Also, the line endings seem to get mangled when I take the code to Windows (UNIX EOL versus Windows EOL). How can I use this code without modifying the body? I can define _WINDOWS in my property sheet, but I cannot automatically adjust the EOL recognition.
Any help is appreciated!
Thanks.
My best guess is that _WINDOWS is, in fact, not defined by your compiler, and so the use of __attribute__ is not protected.
In my opinion, the best way to protect against attributes is to define a macro like this:
#define __attribute__(A) /* do nothing */
That should simply remove all __attribute__ instances from the code.
In fact, most code that has been written to be portable has this:
#ifdef __GNUC__
#define ATTR_UNUSED __attribute__((unused))
#else
#define ATTR_UNUSED
#endif
static TONE_Z_CONST char *integ_func ATTR_UNUSED = "my copyright info";
(Other __tone_z__ conditional removed for clarity only.)
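A slightly fuller sketch of the same pattern (the header name attr.h is hypothetical), which also makes the non-GCC branch explicit:
/* attr.h -- hypothetical portability header */
#ifndef ATTR_H
#define ATTR_H
#if defined(__GNUC__)
#  define ATTR_UNUSED __attribute__((unused))
#else
#  define ATTR_UNUSED /* MSVC and other compilers: expand to nothing */
#endif
#endif /* ATTR_H */
/* usage */
static const char *mcr_inf ATTR_UNUSED = "#(#) bla bla copyright bla";
This compiles unchanged under both GCC (where the attribute suppresses the unused-variable warning) and MSVC (where it simply disappears).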

Which one is the real copy_from_user()?

I am reading the Linux kernel code for copy_from_user, which is architecture-dependent, and I am focusing on the x86 architecture.
But I got two pieces of implementation for it.
One is in arch/x86/lib/usercopy_32.c, while the other is in include/asm-generic/uaccess.h.
Which one will finally be compiled into the kernel? I guess the former is the real one, but I am not sure. What is stranger is that the former has the function name _copy_from_user instead of copy_from_user.
I always have this kind of confusion when reading the kernel code: due to conditional compilation, the same function may have multiple implementations, and I cannot determine which one will be used. Is there any tool that, given a compiled kernel and a function of interest, tells you the corresponding binary code so that you can disassemble it? It would be even better if it could also tell you the source code that the binary corresponds to.
Generally, if there is a module present in the architecture-specific subdirectory, that is the one being used. Otherwise, the generic one is used.
For the modules given, the .c is the correct one. Rarely is there any executable code in a .h. I have 2.6.27.8's uaccess.h handy:
#ifndef _ASM_GENERIC_UACCESS_H_
#define _ASM_GENERIC_UACCESS_H_
/*
* This macro should be used instead of __get_user() when accessing
* values at locations that are not known to be aligned.
*/
#define __get_user_unaligned(x, ptr) \
({ \
__typeof__ (*(ptr)) __x; \
__copy_from_user(&__x, (ptr), sizeof(*(ptr))) ? -EFAULT : 0; \
(x) = __x; \
})
/*
* This macro should be used instead of __put_user() when accessing
* values at locations that are not known to be aligned.
*/
#define __put_user_unaligned(x, ptr) \
({ \
__typeof__ (*(ptr)) __x = (x); \
__copy_to_user((ptr), &__x, sizeof(*(ptr))) ? -EFAULT : 0; \
})
#endif /* _ASM_GENERIC_UACCESS_H */
Look at that carefully. These are macro wrappers to call the underlying __copy_from_user() and __copy_to_user() functions, which are implemented differently on each architecture.
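As for the tooling question: if the kernel was built with debug info (CONFIG_DEBUG_INFO), gdb on the vmlinux image can answer both halves of it (a sketch, commands only):
$ gdb vmlinux
(gdb) info line _copy_from_user      # reports the source file and line the symbol was compiled from
(gdb) disassemble _copy_from_user    # dumps the corresponding machine code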

Does VB6 have a #pragma pack equivalent?

I am developing a TCP/IP client that has to deal with a proprietary binary protocol. I was considering using user-defined types (UDTs) to represent the protocol headers, and using CopyMemory to shuffle data between the UDT and a byte array. However, it appears that VB6 adds padding bytes to align user-defined types. Is there any way to force VB6 not to pad UDTs, similar to the #pragma pack directive available in many C/C++ compilers? Perhaps a special switch passed to the compiler?
No.
Your best bet is to write the low level code in C or C++ (where you do have #pragma pack), then expose the interface via COM.
There is no way to force VB6 not to pad UDTs, but you can approach it from the other side.
According to Q194609, Visual Basic uses 4-byte alignment and Visual C++ uses 8-byte alignment by default.
When using VB6 to call out to a C DLL, I used the MS "pshpack4.h" header file to handle the alignment, because various compilers do this in different ways, as shown in this (rather edited) example:
// this is in a header file called vbstruct.h
...
# define VBSTRING char
# define VBFIXEDSTRING char
# define VBDATE double
# define VBSINGLE float
# ifdef _WIN32
# define VBLONG long
# define VBINT short
# else // and this was for 16bit code not 64bit!!!!
# define VBLONG long
# define VBINT int
# endif
...
# include "pshpack4.h"
...
typedef struct VbComputerNameStruct
{
VBLONG sName;
VBSTRING ComputerName[VB_COMPUTERNAME_LENGTH];
} VbComputerNameType;
typedef struct VbNetwareLoginInfoStruct
{
VBLONG ObjectId;
VBINT ObjectType;
VBSTRING ObjectName[48];
} VbNetwareLoginInfoType;
...
# include "poppack.h"
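For what it's worth, the pshpack4.h/poppack.h pair is just a portable wrapper around the #pragma pack push/pop idiom, so the first struct above is equivalent to this sketch (the value of VB_COMPUTERNAME_LENGTH is assumed here for illustration):
#pragma pack(push, 4)                /* what #include "pshpack4.h" does */
typedef struct VbComputerNameStruct
{
    long sName;
    char ComputerName[16];           /* assuming VB_COMPUTERNAME_LENGTH == 16 */
} VbComputerNameType;
#pragma pack(pop)                    /* what #include "poppack.h" does */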
