Cocoa: NSUInteger vs unsigned int When the Range is Very Small? - cocoa

I have an unsigned int variable and it can only have the values of 0 -> 30. What should I use: unsigned int or NSUInteger? (for both 32 and 64 bit)

I’d go with either NSUInteger (as the idiomatic general unsigned integer type in Cocoa) or uint8_t (if size matters). If I expected to be using 0–30 values in several places for the same type of data, I’d typedef it to describe what it represents.
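For example, a minimal sketch of that typedef approach (the type name here is made up purely for illustration):
#include <stdint.h>

/* Hypothetical name: a value that only ever holds 0-30. The typedef
 * documents the meaning; uint8_t keeps it to one byte if storage
 * actually matters, otherwise NSUInteger works just as well. */
typedef uint8_t MYDialPosition;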

Running this:
int sizeLong = sizeof(unsigned long);
int sizeInt = sizeof(unsigned int);
NSLog(@"%d, %d", sizeLong, sizeInt);
on 64-bit gives:
8, 4
and on 32-bit gives:
4, 4
So yes, on 64-bit, unsigned long (which is what NSUInteger is typedef'd to there) takes twice as much memory as NSUInteger does on 32-bit.

It makes very little difference in your case; there is no right or wrong. I might use an NSUInteger, just to match the Cocoa APIs.
NSUInteger is defined like this:
#if __LP64__ || TARGET_OS_EMBEDDED || TARGET_OS_IPHONE || TARGET_OS_WIN32 || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif

When you really do want an unsigned type, choosing between unsigned int and NSUInteger comes down to the same size question: on 32-bit they are the same type, while under LP64 NSUInteger becomes unsigned long. The same applies to int and NSInteger:
#if __LP64__ || (TARGET_OS_EMBEDDED && !TARGET_OS_IPHONE) || TARGET_OS_WIN32 || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif

Personally, I have just been bitten by this choice. I went down the NSUInteger route and have just spent HOURS looking into an obscure bug.
I had code that picked a random number and returned an NSUInteger. The code relied on the number overflowing. However, I did not anticipate that the size of the type varies between 32-bit and 64-bit systems. The rest of my code assumed (incorrectly) that the number would be at most 32 bits wide. The result was that the code worked perfectly on 32-bit devices, but on the iPhone 5S it all fell apart.
There is nothing wrong with using NSUInteger, but it is worth remembering that the range is significantly larger on 64-bit, so factor that difference into any maths you do with the number.
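As a rough illustration of the bug described above (a made-up C sketch, not the original code): the same unsigned arithmetic wraps on a 32-bit-wide type but not on a 64-bit-wide one, which is exactly how NSUInteger behaves across 32-bit and 64-bit targets.
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t narrow = UINT32_MAX;   /* same width as NSUInteger on 32-bit targets */
    uint64_t wide   = UINT32_MAX;   /* same width as NSUInteger on 64-bit targets */

    narrow += 1;                    /* wraps around to 0 */
    wide   += 1;                    /* becomes 4294967296, no wrap */

    printf("%u\n", narrow);                        /* prints 0 */
    printf("%llu\n", (unsigned long long)wide);    /* prints 4294967296 */
    return 0;
}
Code that relies on the first behaviour silently changes meaning when the type widens.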

Related

ARM GCC compiler "buggy" conversion

Problem
I am working on flash memory optimization for an STM32F051. It turns out that conversion between float and int types consumes a lot of flash.
Digging into this, it turned out that the conversion to int takes around 200 bytes of flash memory, while the conversion to unsigned int takes around 1500 bytes!
It's known that int and unsigned int differ only in the interpretation of the 'sign' bit, so this behavior is a great mystery to me.
Note: Performing the two-stage conversion float -> int -> unsigned int also consumes only around 200 bytes.
Questions
Analyzing this, I have the following questions:
1) What is the mechanism of the float to unsigned int conversion? Why does it take so much memory, when at the same time the float->int->unsigned int conversion takes so little? Maybe it's connected with the IEEE 754 standard?
2) Are there any problems to be expected when the float->int->unsigned int conversion is used instead of a direct float->unsigned int conversion?
3) Are there any ways to wrap the float -> unsigned int conversion while keeping the memory footprint low?
Note: A similar question has already been asked here (Trying to understand how the casting/conversion is done by compiler, e.g., when cast from float to int), but there is still no clear answer, and my question is specifically about the memory usage.
Technical data
Compiler: ARM-NONE-EABI-GCC (gcc version 4.9.3 20141119 (release))
MCU: STM32F051
MCU's core: 32 bit ARM CORTEX-M0
Code example
float -> int (~200 bytes of flash)
int main() {
    volatile float f;
    volatile int i;
    i = f;
    return 0;
}
float -> unsigned int (~1500 bytes! of flash)
int main() {
    volatile float f;
    volatile unsigned int ui;
    ui = f;
    return 0;
}
float ->int-> unsigned int (~200 bytes of flash)
int main() {
    volatile float f;
    volatile int i;
    volatile unsigned int ui;
    i = f;
    ui = i;
    return 0;
}
There is no fundamental reason why the conversion from float to unsigned int should be larger than the conversion from float to signed int; in practice the float to unsigned int conversion can even be made smaller than the float to signed int conversion.
I did some investigation using the GNU Arm Embedded Toolchain (Version 7-2018-q2), and
as far as I can see the size problem is due to a flaw in the gcc runtime library. For some reason this library does not provide a specialized version of the __aeabi_f2uiz function for Armv6-M; instead it falls back on a much larger generic version.
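If the goal is simply to keep the footprint down and the values involved are known to be small, one possible workaround is to wrap the two-stage conversion from the question in a helper. This is only a sketch under the assumption that the value always fits in the non-negative int range; outside that range the float-to-int step is undefined behaviour, so it is not a general replacement for a direct float->unsigned int conversion.
/* Hypothetical helper: convert via int so that only the small signed
 * conversion routine is pulled in, then reinterpret as unsigned.
 * Only valid when f is in the range 0 .. INT_MAX. */
static inline unsigned int float_to_uint_small(float f)
{
    int i = (int)f;
    return (unsigned int)i;
}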

Repeating values in a random bytes generator in C++

I have made a random bytes generator for the initialization vector of a CBC-mode AES implementation:
#include <iostream>
#include <random>
#include <climits>
#include <algorithm>
#include <functional>
#include <stdio.h>
using bytes_randomizer = std::independent_bits_engine<std::default_random_engine, CHAR_BIT, uint8_t>;
int main()
{
    bytes_randomizer br;
    char x[3];
    uint8_t data[100];
    std::generate(std::begin(data), std::end(data), std::ref(br));
    for(int i = 0; i < 100; i++)
    {
        sprintf(x, "%x", data[i]);
        std::cout << x << "\n";
    }
}
But the problem is that it gives the same sequence over and over again. I found a suggestion on Stack Overflow to use srand(), but that seems to work only for rand().
Any solutions to this? Also, is there a better way to generate a nonce for an unpredictable initialization vector?
Error C2338: invalid template argument for independent_bits_engine: N4659 29.6.1.1 [rand.req.genl]/1f requires one of unsigned short, unsigned int, unsigned long, or unsigned long long
Error C2338 note: char, signed char, unsigned char, int8_t, and uint8_t are not allowed
You can't use uint8_t in independent_bits_engine, at least on Visual Studio 2017. I don't know where and how you managed to compile it.
As DeiDei's answer suggests, seeding the engine is an important part of getting random values. The same goes for rand():
srand(time(nullptr)); is required to get random values out of rand().
You can use:
using bytes_randomizer = std::independent_bits_engine<std::default_random_engine, CHAR_BIT, unsigned long>;
std::random_device rd;
bytes_randomizer br(rd());
Some example output:
25
94
bd
6d
6c
a4
You need to seed the engine; otherwise a default seed will be used, which gives you the same sequence every time. This is the same as with srand and rand.
Try:
std::random_device rd;
bytes_randomizer br(rd());

C alignment of pointers

I'm wondering if it's possible to hint to gcc that a pointer points to an aligned boundary. If I have a function:
#include <stdint.h>

uint64_t var;

void foo ( void * pBuf ) {
    uint64_t *pAligned = (uint64_t *)(((uintptr_t)pBuf + 7) & ~(uintptr_t)0x7);
    var = *pAligned; // I want this to be an aligned 64 bit access
}
And I know that pBuf is 64 bit aligned, is there any way to tell gcc that pAligned points to a 64 bit boundary? If I do:
uint64_t *pAligned __attribute__((aligned(16)));
I believe that means that the address of the pointer itself is aligned, but it doesn't tell the compiler that what it points to is aligned, and therefore the compiler would likely emit an unaligned fetch here. This could slow things down if I'm looping through a large array.
There are several ways to inform GCC about alignment.
Firstly, you can attach the aligned attribute to the pointee, rather than the pointer:
int foo() {
    int __attribute__((aligned(16))) *p;
    return (unsigned long long)p & 3;
}
Or you can use a (relatively new) builtin:
int bar(int *p) {
    int *pa = __builtin_assume_aligned(p, 16);
    return (unsigned long long)pa & 3;
}
Both variants optimize to return 0 due to alignment.
Unfortunately the following does not seem to work:
typedef int __attribute__((aligned(16))) *aligned_ptr;
int baz(aligned_ptr p) {
    return (unsigned long long)p & 3;
}
and this one does not either:
typedef int aligned_int __attribute__((aligned (16)));
int braz(aligned_int *p) {
    return (unsigned long long)p & 3;
}
even though the docs suggest otherwise.
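For the original use case of looping over a large, known-aligned buffer, a sketch of how the builtin could be applied is below (the function and the summing loop are made up for illustration):
#include <stddef.h>
#include <stdint.h>

uint64_t sum_aligned(const void *pBuf, size_t count)
{
    /* Promise the compiler that pBuf is 8-byte aligned, so it can emit
     * aligned 64-bit loads inside the loop. */
    const uint64_t *p = __builtin_assume_aligned(pBuf, 8);
    uint64_t acc = 0;
    for (size_t i = 0; i < count; i++)
        acc += p[i];
    return acc;
}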

long double subnormals/denormals get truncated to 0 [-Woverflow]

In the IEEE 754 standard, the minimum strictly positive (subnormal) value is 2^−16493 ≈ 10^−4965 using the quadruple-precision floating-point format. Why does GCC reject anything lower than 10^−4949? I'm looking for an explanation of the different things that could be going on underneath which determine the limit to be 10^−4949 rather than 10^−4965.
#include <stdio.h>

void prt_ldbl(long double decker) {
    unsigned char * desmond = (unsigned char *) & decker;
    int i;
    for (i = 0; i < sizeof (decker); i++) {
        printf ("%02X ", desmond[i]);
    }
    printf ("\n");
}

int main()
{
    long double x = 1e-4955L;
    prt_ldbl(x);
}
I'm using GNU GCC version 4.8.1 online - not sure which architecture it's running on (which I realize may be the culprit). Please feel free to post your findings from different architectures.
Your long double type may not be(*) quadruple-precision. It may simply be the 387 80-bit extended-double format. This format has the same number of bits for the exponent as quad-precision, but many fewer significand bits, so the minimum value representable in it sounds about right (2^−16445).
(*) Your long double is likely not to be quad-precision, because no processor implements quad-precision in hardware. The compiler can always implement quad-precision in software, but it is much more likely to map long double to double-precision, to extended-double or to double-double.
The smallest 80-bit long double is around 2^(−16382 − 63) ≈ 10^−4951, not 2^−16493. So the compiler is entirely correct; your number is smaller than the smallest subnormal.
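One quick way to see what long double actually is on a given platform is to print the limits from <float.h>. This sketch assumes a C11 library providing LDBL_TRUE_MIN; older headers may only offer LDBL_MIN, the smallest normal value.
#include <float.h>
#include <stdio.h>

int main(void)
{
    printf("sizeof(long double) = %zu\n", sizeof(long double));
    printf("LDBL_MIN (smallest normal)         = %Lg\n", LDBL_MIN);
    printf("LDBL_TRUE_MIN (smallest subnormal) = %Lg\n", LDBL_TRUE_MIN);
    return 0;
}
On x86 this typically reports a 10- or 16-byte type with 80-bit extended-precision limits, not IEEE quadruple precision.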

ARM GCC unaligned access

If TStruct is packed, then this code ends up with Str.D == 0x00223344 (not 0x11223344). Why? ARM GCC 4.7.
#include <string.h>

typedef struct {
    unsigned char B;
    unsigned int D;
} __attribute__ ((packed)) TStruct;

volatile TStruct Str;

int main( void) {
    memset((void *)&Str, 0, sizeof(Str));
    Str.D = 0x11223344;
    if(Str.D != 0x11223344) {
        return 1;
    }
    return 0;
}
I guess your problem has nothing to do with unaligned access, but with the structure definition. int is not necessarily 32 bits long; according to the C standard, int is at least 16 bits long and char is at least 8 bits long.
My guess is that your compiler optimizes TStruct so that it looks like this:
struct {
    unsigned char B : 8;
    unsigned int D : 24;
} ...;
When you assign 0x11223344 to Str.D, then according to the C standard the compiler only has to make sure that at least 16 bits (0x3344) are written to Str.D. You didn't specify that Str.D is 32 bits long, only that it is at least 16 bits long.
Your compiler may also arrange the struct like this:
struct {
    unsigned char B : 16;
    unsigned int D : 16;
} ...;
B is at least 8 bits long, and D is at least 16 bits long, all ok.
Probably what you want to do is:
#include <stdint.h>

typedef struct {
    uint8_t B;
    uint32_t D;
} __attribute__((packed)) TStruct;
That way you can ensure that the 32-bit value 0x11223344 is properly written to Str.D. It is a good idea to use size-constrained types for packed structs.
As for unaligned access to a member inside a struct, the compiler should take care of it: if the compiler knows the structure definition, then when you access Str.D it will handle any unaligned access and the necessary bit/byte operations.
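A minimal self-contained version of the suggested fix, using the same test as in the question, might look like this; with uint32_t for D, the full 0x11223344 is stored even though the member is unaligned inside the packed struct:
#include <stdint.h>
#include <string.h>

typedef struct {
    uint8_t  B;
    uint32_t D;   /* fixed 32-bit width, unaligned inside the packed struct */
} __attribute__((packed)) TStruct;

volatile TStruct Str;

int main(void)
{
    memset((void *)&Str, 0, sizeof(Str));
    Str.D = 0x11223344;
    return (Str.D == 0x11223344) ? 0 : 1;   /* returns 0 on success */
}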
