ARM GCC compiler "buggy" conversion - gcc

Problem
I am working on reducing the flash footprint of firmware for an STM32F051. It turns out that conversions between the float and int types consume a lot of flash.
Digging into this, I found that the conversion to int takes around 200 bytes of flash memory, while the conversion to unsigned int takes around 1500 bytes!
Since int and unsigned int differ only in the interpretation of the sign bit, this behaviour is a great mystery to me.
Note: the two-stage conversion float -> int -> unsigned int also consumes only around 200 bytes.
Questions
This leads me to the following questions:
1) What is the mechanism of the float to unsigned int conversion? Why does it need so much memory, when the float -> int -> unsigned int conversion needs so little? Is this connected with the IEEE 754 standard?
2) Are any problems to be expected if the conversion float -> int -> unsigned int is used instead of a direct float -> unsigned int?
3) Are there any ways to wrap the float -> unsigned int conversion while keeping the memory footprint low?
Note: A similar question has already been asked here (Trying to understand how the casting/conversion is done by compiler, e.g., when cast from float to int), but there is still no clear answer, and my question is specifically about memory usage.
Technical data
Compiler: arm-none-eabi-gcc (gcc version 4.9.3 20141119 (release))
MCU: STM32F051
MCU core: 32-bit Arm Cortex-M0
Code example
float -> int (~200 bytes of flash)
int main() {
    volatile float f;
    volatile int i;
    i = f;
    return 0;
}
float -> unsigned int (~1500 bytes! of flash)
int main() {
    volatile float f;
    volatile unsigned int ui;
    ui = f;
    return 0;
}
float ->int-> unsigned int (~200 bytes of flash)
int main() {
    volatile float f;
    volatile int i;
    volatile unsigned int ui;
    i = f;
    ui = i;
    return 0;
}

There is no fundamental reason why the conversion from float to unsigned int should be larger than the conversion from float to signed int; in practice, the float to unsigned int conversion can even be made smaller than the float to signed int conversion.
I did some investigation using the GNU Arm Embedded Toolchain (version 7-2018-q2), and as far as I can see the size problem is due to a flaw in the gcc runtime library. For some reason this library does not provide a specialized version of the __aeabi_f2uiz function for Armv6-M; instead it falls back on a much larger general version.
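Until the library is fixed, one workaround is the one hinted at in the question itself: route the conversion through the signed path. A minimal sketch (the helper name is mine, and it is only well-defined while the value fits in the non-negative range of int):

#include <limits.h>

/* Hypothetical wrapper: converts via the small __aeabi_f2iz path, exactly
 * like the two-stage cast in the question. Only well-defined for values
 * in 0 ... INT_MAX; larger or negative inputs need extra handling. */
static inline unsigned int float_to_uint(float f)
{
    return (unsigned int)(int)f;   /* float -> int -> unsigned int */
}

This keeps the ~200-byte signed conversion in the image at the cost of a reduced input range, so it is only appropriate when the application can guarantee that range.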

Related

How do I allocate memory with cudaHostAlloc() for a c++ vector? [duplicate]

I am in the process of implementing multithreading through an NVIDIA GeForce GT 650M GPU for a simulation I have created. To make sure everything works properly, I have created some side code to test it. At one point I need to update a vector of variables (they can all be updated separately).
Here is the gist of it:
__device__
int doComplexMath(float x, float y)
{
    return x + y;
}
// Kernel function to add the elements of two arrays
__global__
void add(int n, float *x, float *y, vector<complex<long double> > *z)
{
    int index = blockIdx.x * blockDim.x + threadIdx.x;
    int stride = blockDim.x * gridDim.x;
    for (int i = index; i < n; i += stride)
        z[i] = doComplexMath(*x, *y);
}
int main(void)
{
    int iGAMAf = 1 << 10;
    float *x, *y;
    vector<complex<long double> > VEL(iGAMAf, zero);
    // Allocate Unified Memory – accessible from CPU or GPU
    cudaMallocManaged(&x, sizeof(float));
    cudaMallocManaged(&y, sizeof(float));
    cudaMallocManaged(&VEL, iGAMAf * sizeof(vector<complex<long double> >));
    // initialize x and y on the host
    *x = 1.0f;
    *y = 2.0f;
    // Run kernel on 1M elements on the GPU
    int blockSize = 256;
    int numBlocks = (iGAMAf + blockSize - 1) / blockSize;
    add<<<numBlocks, blockSize>>>(iGAMAf, x, y, *VEL);
    // Wait for GPU to finish before accessing on host
    cudaDeviceSynchronize();
    return 0;
}
I am trying to allocate unified memory (memory accessible from the GPU and CPU). When compiling using nvcc, I get the following error:
error: no instance of overloaded function "cudaMallocManaged" matches the argument list
argument types are: (std::__1::vector<std::__1::complex<long double>, std::__1::allocator<std::__1::complex<long double>>> *, unsigned long)
How can I overload the function properly in CUDA to use this type with multithreading?
It isn't possible to do what you are trying to do.
To allocate a vector using managed memory you would have to write your own allocator type, one that satisfies the standard allocator requirements (so that it can be used through std::allocator_traits) and calls cudaMallocManaged under the hood. You can then instantiate a std::vector with that allocator class.
Also note that your CUDA kernel code is broken in that you can't use std::vector in device code.
Note that although the question has managed memory in view, the same approach applies to other types of CUDA allocation, such as pinned allocation.
As another alternative, suggested here, you could consider using a Thrust host vector in lieu of std::vector and pair it with a custom allocator. A worked example is here for the case of a pinned allocator (cudaMallocHost/cudaHostAlloc).
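To illustrate the allocator route, here is a rough sketch (the class name and alias are mine, not from any library; cudaMallocManaged and cudaFree are the real runtime calls). I use complex<double> elements for illustration, since the storage type only needs to be constructible on the host:

#include <cuda_runtime.h>
#include <complex>
#include <cstddef>
#include <new>
#include <vector>

// Minimal managed-memory allocator sketch: every allocation goes through
// cudaMallocManaged, every deallocation through cudaFree.
template <class T>
struct managed_allocator {
    using value_type = T;
    managed_allocator() = default;
    template <class U> managed_allocator(const managed_allocator<U>&) {}
    T* allocate(std::size_t n) {
        void* p = nullptr;
        if (cudaMallocManaged(&p, n * sizeof(T)) != cudaSuccess)
            throw std::bad_alloc();
        return static_cast<T*>(p);
    }
    void deallocate(T* p, std::size_t) { cudaFree(p); }
};
template <class T, class U>
bool operator==(const managed_allocator<T>&, const managed_allocator<U>&) { return true; }
template <class T, class U>
bool operator!=(const managed_allocator<T>&, const managed_allocator<U>&) { return false; }

// A host-side vector whose element storage lives in unified memory.
using managed_vector =
    std::vector<std::complex<double>, managed_allocator<std::complex<double>>>;

The vector object itself still lives in ordinary host memory; only the element buffer it manages is unified. That is why a kernel should receive a raw pointer (for example vel.data()) and a device-usable element type, never the std::vector itself, as noted above.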

c alignment of pointers

I'm wondering if it's possible to hint to gcc that a pointer points to an aligned boundary. If I have a function:
#include <stdint.h>

uint64_t var;

void foo(void *pBuf) {
    uint64_t *pAligned = pBuf;
    pAligned = (uint64_t *)(((uintptr_t)pAligned + 7) & ~(uintptr_t)7);
    var = *pAligned; // I want this to be an aligned 64-bit access
}
And I know that pBuf is 64 bit aligned, is there any way to tell gcc that pAligned points to a 64 bit boundary? If I do:
uint64_t *pAligned __attribute__((aligned(16)));
I believe that means that the address of the pointer itself is aligned, but it doesn't tell the compiler that what it points to is aligned, so the compiler would likely emit an unaligned fetch here. This could slow things down if I'm looping through a large array.
There are several ways to inform GCC about alignment.
Firstly, you can attach the aligned attribute to the pointee rather than to the pointer:
int foo() {
    int __attribute__((aligned(16))) *p;
    return (unsigned long long)p & 3;
}
Or you can use a (relatively new) builtin:
int bar(int *p) {
    int *pa = __builtin_assume_aligned(p, 16);
    return (unsigned long long)pa & 3;
}
Both variants optimize to return 0 due to alignment.
Unfortunately the following does not seem to work:
typedef int __attribute__((aligned(16))) *aligned_ptr;
int baz(aligned_ptr p) {
    return (unsigned long long)p & 3;
}
and this one does not either
typedef int aligned_int __attribute__((aligned (16)));
int braz(aligned_int *p) {
    return (unsigned long long)p & 3;
}
even though the docs suggest the opposite.
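Applied to the foo() from the question, a minimal sketch with the builtin might look like this (assuming, as I do here, that the caller really passes an 8-byte-aligned buffer; the builtin asserts that to the optimizer rather than checking it):

#include <stdint.h>

uint64_t var;

void foo(void *pBuf) {
    /* Promise GCC that pBuf is already 8-byte aligned, so no rounding
     * is needed and the load below can be a plain aligned access. */
    uint64_t *pAligned = (uint64_t *)__builtin_assume_aligned(pBuf, 8);
    var = *pAligned;
}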

long double subnormals/denormals get truncated to 0 [-Woverflow]

In the IEEE 754 standard, the minimum strictly positive (subnormal) value of the quadruple-precision floating-point format is 2^-16493 ≈ 10^-4965. Why does GCC reject anything lower than 10^-4949? I'm looking for an explanation of the different things that could be going on underneath which determine the limit to be 10^-4949 rather than 10^-4965.
#include <stdio.h>

void prt_ldbl(long double decker) {
    unsigned char *desmond = (unsigned char *) &decker;
    int i;
    for (i = 0; i < sizeof (decker); i++) {
        printf("%02X ", desmond[i]);
    }
    printf("\n");
}

int main()
{
    long double x = 1e-4955L;
    prt_ldbl(x);
}
I'm using GNU GCC version 4.8.1 online - not sure which architecture it's running on (which I realize may be the culprit). Please feel free to post your findings from different architectures.
Your long double type may not be(*) quadruple-precision. It may simply be the 387 80-bit extended-double format. This format has the same number of exponent bits as quad-precision, but many fewer significand bits, so the minimum value representable in it sounds about right (2^-16445).
(*) Your long double is likely not to be quad-precision, because no processor implements quad-precision in hardware. The compiler can always implement quad-precision in software, but it is much more likely to map long double to double-precision, to extended-double or to double-double.
The smallest 80-bit long double is around 2^(-16382 - 63) ≈ 10^-4951, not 2^-16493. So the compiler is entirely correct; your number is smaller than the smallest representable subnormal.
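One way to check what your implementation's long double actually is (a small sketch, assuming a C11-era <float.h> that provides LDBL_TRUE_MIN):

#include <float.h>
#include <stdio.h>

int main(void)
{
    /* For the x87 80-bit format LDBL_MANT_DIG is 64 and the smallest
     * subnormal is about 3.65e-4951; for true quad precision it would
     * be 113 and about 6.48e-4966. */
    printf("sizeof(long double) = %u, LDBL_MANT_DIG = %d\n",
           (unsigned)sizeof(long double), LDBL_MANT_DIG);
    printf("LDBL_MIN      = %Lg\n", LDBL_MIN);
#ifdef LDBL_TRUE_MIN
    printf("LDBL_TRUE_MIN = %Lg\n", LDBL_TRUE_MIN);
#endif
    return 0;
}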

ARM GCC unaligned access

If TStruct is packed, this code ends with Str.D == 0x00223344 (not 0x11223344). Why? ARM GCC 4.7.
#include <string.h>

typedef struct {
    unsigned char B;
    unsigned int D;
} __attribute__ ((packed)) TStruct;

volatile TStruct Str;

int main(void) {
    memset((void *)&Str, 0, sizeof(Str));
    Str.D = 0x11223344;
    if (Str.D != 0x11223344) {
        return 1;
    }
    return 0;
}
I guess your problem has nothing to do with unaligned access, but with the structure definition. int is not necessarily 32 bits long: according to the C standard, int is at least 16 bits long, and char is at least 8 bits long.
My guess is that your compiler arranges TStruct so that it looks like this:
struct {
    unsigned char B : 8;
    unsigned int D : 24;
} ...;
When you assign 0x11223344 to Str.D, then according to the C standard the compiler only has to make sure that at least 16 bits (0x3344) are written to Str.D. You didn't specify that Str.D is 32 bits long, only that it is at least 16 bits long.
Your compiler may also arrange the struct like this:
struct {
    unsigned char B : 16;
    unsigned int D : 16;
} ...;
B is at least 8 bits long and D is at least 16 bits long, so everything is still conforming.
What you probably want to do is this:
#include <stdint.h>

typedef struct {
    uint8_t B;
    uint32_t D;
} __attribute__((packed)) TStruct;
That way you can ensure that the 32-bit value 0x11223344 is properly written to Str.D. It is a good idea to use fixed-size types in packed structs.
As for unaligned access to a member inside a struct, the compiler should take care of it: since it knows the structure definition, when you access Str.D it should generate whatever unaligned accesses or byte-wise operations are required.
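As a quick sanity check of the layout (a throwaway sketch, not part of the original code), printing the struct size and the offset of D confirms that D now occupies a full 32 bits at the unaligned offset 1:

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint8_t B;
    uint32_t D;
} __attribute__((packed)) TStruct;

int main(void) {
    /* Expected with packing and fixed-width members: size 5, D at offset 1. */
    printf("sizeof(TStruct) = %u, offsetof(TStruct, D) = %u\n",
           (unsigned)sizeof(TStruct), (unsigned)offsetof(TStruct, D));
    return 0;
}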

Cocoa: NSUInteger vs unsigned int When the Range is Very Small?

I have an unsigned int variable that can only hold values from 0 to 30. What should I use: unsigned int or NSUInteger? (for both 32-bit and 64-bit)
I’d go with either NSUInteger (as the idiomatic general unsigned integer type in Cocoa) or uint8_t (if size matters). If I expected to be using 0–30 values in several places for the same type of data, I’d typedef it to describe what it represents.
Running this:
int sizeLong = sizeof(unsigned long);
int sizeInt = sizeof(unsigned int);
NSLog(@"%d, %d", sizeLong, sizeInt);
on 64-bit gives:
8, 4
on 32-bit gives:
4, 4
So yes, on 64-bit, unsigned long (NSUInteger) takes twice as much memory as NSUInteger does on 32-bit.
It makes very little difference in your case; there is no right or wrong answer. I might use an NSUInteger, just to match the Cocoa APIs.
NSUInteger is defined like this:
#if __LP64__ || TARGET_OS_EMBEDDED || TARGET_OS_IPHONE || TARGET_OS_WIN32 || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
When you really do want an unsigned type, choosing between unsigned int and NSUInteger makes little practical difference for a range this small. The same applies to int and NSInteger:
#if __LP64__ || (TARGET_OS_EMBEDDED && !TARGET_OS_IPHONE) || TARGET_OS_WIN32 || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
Personally, I have just been bitten by this choice. I went down the NSUInteger route and have just spent HOURS looking into an obscure bug.
I had code that picked a random number and returned an NSUInteger. The code relied on the number overflowing. However, I did not anticipate that the size of the type differs between 32-bit and 64-bit systems. The rest of my code assumed (incorrectly) that the number would be at most 32 bits wide. The result was that the code worked perfectly on 32-bit devices, but on the iPhone 5S it all fell apart.
There is nothing wrong with using NSUInteger; however, it's worth remembering that its range is significantly larger on 64-bit, so factor that variability into any maths you do with the number.
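If the maths genuinely depends on wraparound, one way to avoid this trap is to pin the width explicitly with a fixed-size type instead of NSUInteger. A small sketch (the function and constants are illustrative, not from the original code):

#include <stdint.h>

// Hypothetical random-step function that relies on wraparound.
// Because uint32_t is exactly 32 bits on every platform, the arithmetic
// wraps modulo 2^32 on 32-bit and 64-bit devices alike.
static uint32_t next_random(uint32_t state)
{
    return state * 1664525u + 1013904223u;  // classic LCG constants
}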
