I need to store the maximum value of an NSInteger into an NSInteger. What is the correct syntax to do it?
Thanks.
The maximum value of an NSInteger is NSIntegerMax.
The maximum value for an NSInteger is NSIntegerMax.
(from the Foundation Constants Reference)
For 32-bit and 64-bit, there are two conventions:
a) ILP32
b) LP64
The 32-bit runtime uses a convention called ILP32, in which integers, long integers, and pointers are 32-bit quantities. The 64-bit runtime uses the LP64 convention; integers are 32-bit quantities, and long integers and pointers are 64-bit quantities. These conventions match the ABI for apps running on OS X (and similarly, the Cocoa Touch conventions match the data types used in Cocoa), making it easy to write interoperable code between the two operating systems.
Table 1-1 lists all of the integer types commonly used in Objective-C code. Each entry includes the size of the data type and its expected alignment in memory. The highlighted table entries indicate places where the LP64 convention differs from the ILP32 convention. These size differences indicate places where your code’s behavior changes when compiled for the 64-bit runtime. The compiler defines the LP64 macro when compiling for the 64-bit runtime.
For 64-bit, the maximum value of NSInteger is LONG_MAX: 9223372036854775807.
It took me a little while to realise why I was getting a different value from NSIntegerMax when using NSUInteger!
And the maximum for an NSUInteger is NSUIntegerMax
(also from http://developer.apple.com/library/ios/#documentation/cocoa/reference/foundation/Miscellaneous/Foundation_Constants/Reference/reference.html)
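As a minimal sketch (assuming an Objective-C source file that imports Foundation; the variable names are illustrative), assigning those constants looks like this:

#import <Foundation/Foundation.h>
#include <stdio.h>

int main(void)
{
    NSInteger  maxSigned   = NSIntegerMax;  /* LONG_MAX on LP64: 9223372036854775807 */
    NSUInteger maxUnsigned = NSUIntegerMax; /* ULONG_MAX on LP64: 18446744073709551615 */
    printf("%ld %lu\n", (long)maxSigned, (unsigned long)maxUnsigned);
    return 0;
}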
The X11 protocol defines an atom as a 32-bit integer, but on my system the Atom type in Xlib's headers is a typedef for unsigned long, which is a 64-bit integer. The manual for Xlib says that property types have a maximum size of 32 bits. There seems to be some conflict here. I can think of three possible solutions.
1. If Xlib treats properties of type XA_ATOM as a special case, then you can simply pass 32 for 'format' and an array of atoms for 'data'. This seems unclean and hackish, and I highly doubt that this is correct.
2. The manual for Xlib appears to be ancient. Since Atom is 64 bits long on my system, should I pass 64 for the 'format' parameter even though 64 is not listed as an allowed value?
3. Rather than an array of Atoms, should I pass an array of uint32_t values for the 'data' parameter? This seems like it would most likely be the correct solution to me, but this is not what they did in some sources I've looked up that use XChangeProperty, such as SDL.
SDL appears to use solution 1 when setting the _NET_WM_WINDOW_TYPE property, but I suspect that this may be a bug. On systems with little endian byte order (LSB first), this would appear to work if the property has only one element.
Has anyone else encountered this problem? Any help is appreciated.
For the property routines you always want to pass an array of 'long', 'short' or 'char'. This is always true independent of the actual bit width. So, even if your long or atom is 64 bits, it will be translated to 32 bits behind the scenes.
The format is the number of server side bits used, not client side. So, for format 8, you must pass a char array, for format 16, you always use a short array and for format 32 you always use a long array. This is completely independent of the actual lengths of short or long on a given machine. 32 bit values such as Atom or Window always are in a 'long'.
This may seem odd, but it is for a good reason: the C standard does not guarantee that types exist with exactly the same widths as on the server. For instance, a machine might have no native 16-bit type. However, a 'short' is guaranteed to have at least 16 bits and a long is guaranteed to have at least 32 bits. So by making the client API in terms of 'short' and 'long' you can both write portable code and always have room for the full X id in the C type.
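As a hedged sketch of what this means in practice (the function name and the specific property are illustrative, not taken from the question), setting a single-atom property always passes an array of long-sized Atoms with format 32:

#include <X11/Xlib.h>
#include <X11/Xatom.h>

/* Illustrative example: set a window property of type ATOM to one value.
   The data buffer is an array of Atom (a long on the client side); Xlib
   sends only the low 32 bits of each element because format is 32. */
void set_window_type(Display *dpy, Window win, Atom type_value)
{
    Atom property = XInternAtom(dpy, "_NET_WM_WINDOW_TYPE", False);
    Atom values[1] = { type_value };          /* array of long-sized Atoms */

    XChangeProperty(dpy, win, property, XA_ATOM,
                    32,                       /* format: server-side bits per element */
                    PropModeReplace,
                    (unsigned char *)values,  /* Xlib takes an unsigned char pointer */
                    1);                       /* number of elements, not bytes */
}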
I am working on ARM optimizations using NEON intrinsics from C++ code. I understand most of the typing issues, but I am stuck on this one:
The instruction vzip_u8 returns a uint8x8x2_t value (in fact an array of two uint8x8_t). I want to assign the returned value to a plain uint16x8_t. I see no appropriate vreinterpretq intrinsic to achieve that, and simple casts are rejected.
Some definitions to answer clearly...
NEON has 32 registers, 64-bits wide (dual view as 16 registers, 128-bits wide).
The NEON unit can view the same register bank as:
sixteen 128-bit quadword registers, Q0-Q15
thirty-two 64-bit doubleword registers, D0-D31.
uint16x8_t is a type which requires 128-bit storage, thus it needs to be in a quadword register.
ARM NEON Intrinsics has a definition called vector array data type in ARM® C Language Extensions:
... for use in load and store operations, in table-lookup operations, and as the result type of operations that return a pair of vectors.
vzip instruction
... interleaves the elements of two vectors.
vzip Dd, Dm
and has an intrinsic like
uint8x8x2_t vzip_u8 (uint8x8_t, uint8x8_t)
From these we can conclude that uint8x8x2_t is actually a pair of two arbitrarily numbered doubleword registers, because the vzip instruction doesn't have any requirement on the order of its input registers.
Now the answer is...
uint8x8x2_t can contain two non-consecutive doubleword registers, while uint16x8_t is a data structure consisting of two consecutive doubleword registers, of which the first has an even index (D0-D31 -> Q0-Q15).
Because of this you can't cast a vector array data type holding two doubleword registers to a quadword register... easily.
The compiler may be smart enough to assist you, or you can just force the conversion; however, I would check the resulting assembly for correctness as well as performance.
You can construct a 128-bit vector from two 64-bit vectors using the vcombine_* intrinsics. Thus, you can achieve what you want like this:
#include <arm_neon.h>

uint8x16_t f(uint8x8_t a, uint8x8_t b)
{
    uint8x8x2_t tmp = vzip_u8(a, b);                          /* interleave the two 64-bit vectors */
    uint8x16_t result = vcombine_u8(tmp.val[0], tmp.val[1]);  /* join the two halves into one 128-bit vector */
    return result;
}
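If the goal is specifically a uint16x8_t, as asked in the question, the combined vector can then be reinterpreted bit-for-bit; a minimal sketch (the function name is illustrative):

#include <arm_neon.h>

uint16x8_t zip_to_u16(uint8x8_t a, uint8x8_t b)
{
    uint8x8x2_t zipped = vzip_u8(a, b);
    uint8x16_t combined = vcombine_u8(zipped.val[0], zipped.val[1]);
    return vreinterpretq_u16_u8(combined);   /* no data movement, just a type change */
}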
I have found a workaround: given that the val member of the uint8x8x2_t type is an array, it decays to a pointer. Casting and dereferencing the pointer works! (Whereas taking the address of the data raises an "address of temporary" warning.)
uint16x8_t Value = *(uint16x8_t *)vzip_u8(arg0, arg1).val;
It turns out that this compiles and executes as it should (at least in the case I have tried). I haven't looked at the assembly code, so I cannot guarantee it is implemented properly (I mean just keeping the value in a register instead of writing/reading to/from memory).
I was facing the same kind of problem, so I introduced a flexible data type.
I can now therefore define the following:
typedef NeonVectorType<uint8x16_t> uint_128bit_t; //suitable for uint8x16_t, uint8x8x2_t, uint32x4_t, etc.
typedef NeonVectorType<uint8x8_t> uint_64bit_t; //suitable for uint8x8_t, uint32x2_t, etc.
It's a bug in GCC (now fixed) in the 4.5 and 4.6 series.
Bugzilla link: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=48252
Please take the fix from this bug, apply it to the GCC source, and rebuild.
A few months ago I got myself a laptop with an Intel i7-2630QM CPU and 64-bit Windows. While practising my programming skills on this system, I encountered some differences in integer sizes which made me think they were probably due to my new 64-bit system.
Let's take a look at the code.
The C code:
#include <stdio.h>

int main(void)
{
    int num = 20;
    printf("%d %lld\n", num, num);  /* %lld does not match the int argument */
    return 0;
}
The questions:
1.) I remember that before getting this new laptop, when I was still using my old 32-bit system, running this code would print the integer 20 with some random number next to it, due to the %lld specifier.
2.) But this no longer happens on my new laptop; it prints both integers correctly, even if I change the variable num to type short.
3.) Is there some new integer promotion on a 64-bit system that promotes an int to long long when it is used as an argument? Or can a short also be promoted to long long, which is 64-bit, when passed as an argument?
4.) Besides that, I'm quite confused about one thing: on a 16-bit system int is 16-bit, and it is 32-bit on a 32-bit system. So why doesn't it become 64-bit on a 64-bit system?
==================================================================================
Addendum:
1.) I chose "console program (64-bit)" as my project type in the IDE on my new laptop, but "console program" on my old 32-bit PC.
2.) I've checked the size of int under the "console program (64-bit)" project using the sizeof operator and it returns 32 bits, while short still remains 16 bits. The only change is the long type: it is 64 bits, and long long still remains its usual 64-bit size.
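For reference, a check like the one described above could look like this (a minimal sketch, not the asker's actual code):

#include <stdio.h>

int main(void)
{
    printf("short: %zu, int: %zu, long: %zu, long long: %zu\n",
           sizeof(short), sizeof(int), sizeof(long), sizeof(long long));
    return 0;
}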
You are seeing this side-effect because the calling convention is different for x64 code. The function arguments in 32-bit x86 code are passed on the stack. The printf() function will read a word from the stack that isn't part of the activation frame. The odds that it contains a value of 0 are extremely low.
In x64 code, the first 4 arguments for a function are passed through CPU registers, not the stack. The odds that the high word of the 64-bit register is zero by chance are quite good, left there by a previous 64-bit operation that worked with small numbers. But that is certainly not guaranteed.
Trying to reason out the defined behavior of undefined behavior is otherwise not useful, beyond guessing how the language is implemented for the core that's in your machine. There are better resources for that: learning the machine code that's applicable to your compiler is an excellent shortcut, together with a decent debugger that shows you how your C code got translated into machine code. Machine code has no undefined behavior.
I do not have access to a Windows 64-bit compiler right now, but my guess is the following.
Your question is not about integer promotion, but about how parameters are passed from the function caller to the called function. This is beyond the C specification, but it is interesting to know.
In 32-bit, all parameters are divided into 32-bit blocks as all registers can hold 32 bits. So in this case we have the following stack layout:
[ 32-bit format string pointer ][ num as 32-bit ][ num as 32-bit ] junk...
In 64-bit, all parameters are divided into 64-bit blocks as all registers can hold 64 bits. So the stack will contain the following:
[ 64-bit format string pointer ][ num as 64-bit ][ num as 64-bit ] junk...
The upper 32 bits of the 64-bit registers holding 32-bit values are conveniently set to zero.
So when printf is reading a 64-bit number, it will load the equivalent of two 32-bit registers on a 32-bit platform but only one 64-bit register, with high bits cleared, on a 64-bit platform.
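To make the mismatch itself go away regardless of platform, the argument can be converted explicitly so that it really is a long long; a minimal sketch:

#include <stdio.h>

int main(void)
{
    int num = 20;
    /* The cast makes the second argument an actual long long,
       so %lld matches on both 32-bit and 64-bit platforms. */
    printf("%d %lld\n", num, (long long)num);
    return 0;
}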
(1 and 2) As already stated, the behaviour in this situation is undefined, so the compiler is allowed to behave differently for any reason or indeed no reason at all.
(3) The compiler is allowed to define int as 64-bit, in which case no promotion would be necessary because all the variables in question would be the same size. But it almost certainly doesn't.
(4) On most or all 64-bit compilers, int is 32 bits. This is because int has been 32 bits for so long that programmers have come to expect it, and changing it would break existing code. As far as I know this isn't officially part of the standard, but it's one of those de facto standards that are even harder to change. :-)
Everything you are describing is specific to whatever spec your compiler is using and the platform you are on (with the exception that long is guaranteed to be at least the same size as int):
Wikipedia entries:
long long
int
The C99 standard seeks to end this ambiguity by adding specific types: int32_t, uint64_t, etc. There's also a POSIX spec that defines u_int32_t, etc.
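As a hedged illustration of those fixed-width types applied to the snippet from the question (using the matching format macros from <inttypes.h>):

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int32_t num32 = 20;   /* exactly 32 bits on every platform */
    int64_t num64 = 20;   /* exactly 64 bits on every platform */
    printf("%" PRId32 " %" PRId64 "\n", num32, num64);
    return 0;
}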
Edit: I missed the question about printf(), sorry. As #nos points out in the comments on your question, passing something other than a long long to %lld results in undefined behavior. This means there is no rhyme or reason as to what it will do; unicorns spontaneously appearing would not be out of the question.
Oh - and on every compiler and OS I know, int is 32 bits. Changing that has the potential to break things that depend on it being 32 bits.
The Fujitsu microcontroller used is 32-bit, hence enum storage is also 32-bit. But in my project the enum values do not actually exceed 256.
Is there any compiler option to size down the storage for enums?
You could use a bit field to be able to store 256 unique values in 8 words (256 bits / 32 bit words = 8), but then the compiler will no longer be able to enforce that only a single bit is set at a time. But, you could easily write a wrapper function to clear out all the previous bits before setting one. It would probably end up kind of messy, but that's what tends to happen when you start using these kinds of tricks at this level to save memory.
You could use preprocessor macros (#define) to map symbolic names to values. Without knowing what your application is, it's hard to predict whether this is sensible :)
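A minimal sketch of that macro approach combined with an explicitly sized storage type (the names here are purely illustrative):

#include <stdint.h>

/* Illustrative replacement for a small enum: the values still read
   like symbolic names, but the variable occupies a single byte. */
#define STATE_IDLE     0u
#define STATE_RUNNING  1u
#define STATE_ERROR    2u

typedef uint8_t state_t;          /* 8 bits of storage instead of a 32-bit enum */

static state_t current_state = STATE_IDLE;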
I'm trying to port code over to compile using Microchip's C18 compiler for a PIC microcontroller. The code includes enums with large values assigned (>8-bit). They are not working properly: it appears, for example, that 0x02 is treated the same as 0x2002.
How can I force the enumerated values to be referenced as 16-bit values?
In the DirectX headers, every enum has a FORCE_DWORD member with a value of 0xffffffff. I guess that's basically what you want; it forces the compiler to let the enum have at least 32 bits. So try adding a FORCE_WORD with a value of 0xffff.
This won't solve your problem, of course, if that compiler just does not support enums greater than 8 bits.
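A hedged sketch of that trick (the enum and member names are illustrative, not from the question):

/* The 0xFFFF member cannot fit in 8 bits, so the compiler has to give
   the whole enum at least 16 bits of storage. */
typedef enum {
    CMD_RESET      = 0x0002,
    CMD_EXTENDED   = 0x2002,
    CMD_FORCE_WORD = 0xFFFF
} command_t;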
I found the problem.
For future reference, the C18 compiler will NOT promote variables OR constants when performing a math operation, even though the ANSI C standard requires it. This is to increase speed when running on 8-bit processors.
To force ANSI compliance, use the "-Oi" compiler option.
See page 92 of the C18 manual.
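As a hedged illustration of why that matters (the values are chosen only to show the effect), an 8-bit operation can overflow before the result is widened unless promotion happens:

#include <stdio.h>

int main(void)
{
    unsigned char a = 200, b = 2;
    unsigned int r = a * b;   /* with ANSI promotion: 400; if the multiply
                                 stays 8-bit, the result wraps to 144 */
    printf("%u\n", r);
    return 0;
}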