C++11: Adding an int to a wchar_t

I naively added an int to a wchar_t resulting in a Visual Studio 2013 warning.
L'A' + 1 // next letter
warning C4244: 'argument' : conversion from 'int' to 'wchar_t', possible loss of data
So the warning is concerned that a 4-byte int is being implicitly converted to a 2-byte wchar_t. Fair enough.
What is the C++11-standard-safe way of doing this? I'm wondering about cross-platform implications, code-point correctness, and the readability of doing things like L'A' + (wchar_t)1 or L'A' + \U1 or whatever. What are my coding options?
Edit T+2: I presented this question to a hacker's group. Unsurprisingly, no one got it correct. Everyone agreed this is a great interview question when hiring C/C++ Unicode programmers because it's very terse and deserves a meaty conversation.

When you add two integral values together, such that both values can fit within an int, they are added as ints.
If one of them only fits in an unsigned int, they are instead added as unsigned ints.
If those are not big enough, bigger types can be used. It gets complicated, and it has changed between standard revisions if I remember rightly (there were some rough spots).
Now, addition with ints has undefined behavior if it overflows. Addition with unsigned ints is guaranteed to wrap modulo some power of two.
When you convert an int or an unsigned int to a signed type and the value doesn't fit, the result is implementation-defined. If it does fit, it fits.
If you convert an int or unsigned int to an unsigned type, the result is the value congruent to the source modulo some power of two (fixed for the given unsigned type).
Many popular C++ compilers and platforms produce the same bit pattern for the signed conversion as they would for the unsigned conversion interpreted with two's-complement logic, but that is not required by the standard.
So L'A' + 1 involves converting L'A' to an int, adding 1 as an int.
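A minimal compile-time sketch of those promotion rules, assuming a platform where int can represent every wchar_t value (true for Windows' 16-bit wchar_t and for the 32-bit signed wchar_t typical on Linux); on a platform where wchar_t were unsigned and as wide as int, the sum would have type unsigned int instead:

#include <type_traits>

// Both operands are promoted before operator+ is applied, so the sum is an int.
static_assert(std::is_same<decltype(L'A' + 1), int>::value,
              "wchar_t and 1 are promoted to int before the addition");
// Casting an operand first does not change that: the wchar_t is promoted right back.
static_assert(std::is_same<decltype(L'A' + static_cast<wchar_t>(1)), int>::value,
              "casting an operand does not avoid the promotion");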
If we add the missing bit:
wchar_t bob = L'A' + 1;
we can see where the warning occurs. The compiler sees someone converting an int to a wchar_t and warns them. (This makes more sense when the values in question are not compile-time constants.)
If we make it explicit:
wchar_t bob = static_cast<wchar_t>(L'A' + 1);
the warning (probably? hopefully?) goes away. So long as the right-hand side stays within the range of valid wchar_t values, you are golden.
If instead you are doing:
wchar_t bob = static_cast<wchar_t>(L'A' + x);
where x is an int, if wchar_t is signed you could be in trouble (an implementation-defined result if x is large enough!), and if it is unsigned you could still be somewhat surprised.
A nice thing about this static_cast method is that unlike (wchar_t)x or wchar_t(x) casts, it won't work if you accidentally feed pointers into the cast.
Note that casting x or 1 is relatively pointless, except to quiet the compiler, as the values are always (logically) promoted to int before + is applied (or to unsigned int if wchar_t is unsigned and the same size as an int). With int significantly larger than wchar_t this is relatively harmless: if wchar_t is unsigned, the back-conversion is guaranteed to do the same thing as adding in wchar_t modulo its power of two, and if wchar_t is signed, leaving its range gives an implementation-defined result anyhow.
So, cast the result using static_cast. If that doesn't work, use a bitmask to explicitly clear bits you won't care about.
Finally, VS2013 uses two's-complement math for int. So static_cast<wchar_t>(L'A' + x) and static_cast<wchar_t>(L'A' + static_cast<wchar_t>(x)) always produce the same values, and would do so if wchar_t were replaced with unsigned short or signed short.
This is a poor answer: it needs curation and culling. But I'm tired, and it might be illuminating.

Until I see a more elegant answer, which I hope there is, I'll go with this pattern:
(wchar_t)(L'A' + i)
I like this pattern because i can be negative or positive and the expression still evaluates as expected. My original notion of using L'A' + (wchar_t)i is flawed if i is negative and wchar_t is unsigned. I'm assuming here that wchar_t's signedness is implementation-dependent and it could be signed.
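A small sketch of the difference being described, assuming a 16-bit unsigned wchar_t as on Windows; the intermediate values in the comments are for that case. The cast of the whole expression lands on the intended character either way, but casting i alone leaves a surprising intermediate int:

#include <cstdio>

int main() {
    int i = -1;
    int a = L'A' + i;                           // 64 ('@'), as intended
    int b = L'A' + static_cast<wchar_t>(i);     // 65600 with 16-bit unsigned wchar_t: -1 wraps to 65535 first
    wchar_t c = static_cast<wchar_t>(L'A' + i); // 64 after truncation back to wchar_t
    std::printf("%d %d %d\n", a, b, static_cast<int>(c));
    return 0;
}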

Related

Why does unsafe.Sizeof return a uintptr?

As per the documentation (https://golang.org/pkg/unsafe/#Sizeof), unsafe.Sizeof returns the size of the given expression in bytes. The size of any given expression could just as well be denoted by a uint32 or uint64. So why does Go return a uintptr instead? Isn't that confusing? A uintptr is supposed to hold a pointer to some data value, but in this case it is not actually a pointer, it is just a number, right?
There are a lot of good answers in the comments, which boil down to "because that's big enough, yet not too big". I think, though, it might be helpful to view this from a historical perspective, with particular attention to how this all came about in the C programming language.
In very old (pre-standard) C, if you go far back enough in time, there was not even an explicit unsigned integer type. The PDP-11 had:
char, which was 8 bits and signed;
int, which was 16 bits and signed; and
pointers, which were 16 bits and unsigned.
That is:
int i;
int *u;
was how you made two integers, i being signed, and u being unsigned. Setting i to 32767 (0x7fff) and then incrementing it gave you -32768 (0x8000), which gradually increased to -1 (0xffff) and then zero. Setting u to 32767 and then incrementing it gave you 32768, which gradually increased to 65535, and then rolled over to zero.
The lack of distinction between integers and pointers meant that device drivers could read:
struct {
    int csr;
    int blk;
    int bar;
    int bcr;
};
0177440->bcr = count;
0177440->blk = block;
0177440->bar = addr;
0177440->csr = READ | GO;
which might be how one told a device to read some bytes or blocks.
(This is also why struct member names, like st_ino in struct stat, were all prefixed like this: st_ino just meant "some integer offset" and you could use the st_ino member with any pointer, or even with an ordinary variable. The prefix meant you could #include multiple headers without having their struct member names collide.)
All of this turned untenable when C was made to work on 32-bit and other machines. C grew an unsigned integer type, rather than pressing pointers into service as unsigned integers, and Steve Johnson's PCC compiler turned unsigned into a modifier that could be applied to char and short as well as int. A lot of experimentation occurred. Eventually, in 1989, C was first standardized with most of the syntax and semantics that we have now (though new standards have added new types, and many functions, and so on).
Some of the early C pioneers were involved with creating Go, with particular influence from Ken Thompson. There is a quote on the Wikipedia page that is appropriate here:
When the three of us [Thompson, Rob Pike, and Robert Griesemer] got started, it was pure research. The three of us got together and decided that we hated C++. [laughter] ... [Returning to Go,] we started off with the idea that all three of us had to be talked into every feature in the language, so there was no extraneous garbage put into the language for any reason.
As we see from the early days of C, a pointer-as-integer is a suitable unsigned type that can not only hold any pointer, but, if treated as unsigned, can also hold any object size. A pointer-as-integer is not directly usable as a pointer, of course, and with a GC system and concurrency, we need the language itself to have pointers. But we also need to be able to write the runtime support for the language,1 for which we need integer-ized pointers, which also covers all of our needs for object sizes. So one type, built in to the compiler, covers all the requirements. That is as simple as possible, but no simpler.
1I say "we" as if I had anything to do with it. It's just obvious, once you have implemented a few runtime systems.

In Go, when should you use uint vs int?

At first glance, it seems like one might opt for uint when you need an int that you don't want to be negative. However, in practice it seems that int is nearly always preferred.
I see general recommendations like:
"Generally if you are working with integers you should just use the int type."
"uint should generally only be used for doing binary operations"
"Don't use unsigned types to enforce or suggest that a number must be positive. That's not what they're for."
"this is what The Go Programming Language recommends, with the specific example of uints being useful when you want to do bitwise operations"
I also noticed that Go will let you convert a negative int to uint and give some odd results:
x := -5
y := uint(x)
fmt.Println(y)
>> 18446744073709551611
So, my understanding is that I should always use int when dealing with whole numbers, regardless of sign, unless I find myself needing uint, and I'll know it when that's the case (I think???).
My questions:
Is this the right takeaway?
If so, why is this the case?
What's an example of when one should use uint? -- maybe a specific example, as opposed to "when doing binary operations", as I'm not sure I know what that means :)
Also, I'm asking specific to Go's implementation.
This answer is for C but it's relevant here.
Generally if you are working with integers you should just use the int type.
This is recommended generally because most of the code that we "generally" encounter deals with the int type. And it's also not generally required that you choose between using an int and a uint type.
Don't use unsigned types to enforce or suggest that a number must be positive. That's not what they're for.
This is quite subjective. You can very well use an unsigned type to keep your program and data type-safe, and spare yourself from having to deal with the occasional errors caused by a negative integer.
"this is what The Go Programming Language recommends, with the specific example of uints being useful when you want to do bitwise operations"
This looks vague. Please add the source for this; I would like to read up on it.
x := -5
y := uint(x)
fmt.Println(y)
>> 18446744073709551611
This is typical of a number of languages. The logic behind it is that when you convert an int to a uint, the binary representation of the int is simply reinterpreted as a uint. In the end, everything is an abstraction over bits.
For example, take a look at this code and its output (run on a 64-bit, little-endian machine):
package main

import (
    "fmt"
    "unsafe"
)

func main() {
    a := int64(-123)
    u := uint(a) // same bits, reinterpreted as unsigned (uint is 64 bits here)

    // On a little-endian machine the in-memory bytes run from least to most
    // significant, so reverse them before printing.
    byteSliceRev := *(*[8]byte)(unsafe.Pointer(&a))
    byteSliceRevU := *(*[8]byte)(unsafe.Pointer(&u))
    byteSlice, byteSliceU := make([]byte, 8), make([]byte, 8)
    for i := 0; i < 8; i++ {
        byteSlice[i], byteSliceU[i] = byteSliceRev[7-i], byteSliceRevU[7-i]
    }

    fmt.Println(u)
    // 18446744073709551493
    fmt.Printf("%b\n", byteSlice)
    // [11111111 11111111 11111111 11111111 11111111 11111111 11111111 10000101]
    fmt.Printf("%b\n", byteSliceU)
    // [11111111 11111111 11111111 11111111 11111111 11111111 11111111 10000101]
}
The byte representation of the int64 value -123 is the same as that of the uint value 18446744073709551493.
So, my understanding is that I should always use int when dealing with whole numbers, regardless of sign, unless I find myself needing uint, and I'll know it when that's the case (I think???).
But isn't this more or less true of all the code that "we" write?
Is this the right takeaway?
If so, why is this the case?
I hope I have answered these two questions. Feel free to ask me if you still have any doubts.
What's an example of when one should use uint? -- maybe a specific example, as opposed to "when doing binary operations", as I'm not sure I know what that means :)
Imagine a scenario in which you have a table in your database with a lot of entries, each with an integer id that is always positive. If you store those ids as int, the sign bit of every entry is effectively wasted; a uint gives you double the range of positive integers for the same number of bits, so it takes much longer to run out of ids. A similar argument applies when transmitting data, tons of integers to be precise. Storage is cheap now, so people generally ignore this supposedly minor gain.
The other use case is type-safety. A uint can never be negative, so if a part of your code is sensitive to negative numbers, it can prove pretty handy. It's better to get the error before wasting resources on data, only to find out it's impermissible because it's negative.
The image package uses uint, and so does crypto/tls, so when you use those packages you must use uint.
I use it logically at first, but I don't fight over it, and if it becomes an issue I take a practical approach (like asking why len() returns an int).

How can I get the position of the most significant bit in a number?

I know that you can get the least significant set bit in a number by doing x &= -x. This clears out all the other bits except the lowest set one. I'm just wondering now how I can easily get the most significant bit. I can come up with bit-shifting code, but it's probably not ideal in terms of complexity.
#include <climits>
// have the compiler determine how many bits we need to right-shift so that
// only the biggest bit will be left
constexpr unsigned shiftby = CHAR_BIT * sizeof(unsigned) - 1;
// then at runtime: shift it
unsigned result = static_cast<unsigned>(x) >> shiftby;
While this appears complex, at runtime it's literally just a single shift operation. This only works for types int and unsigned.
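For completeness, here is a hedged, portable sketch that returns the position of the most significant set bit, which is what the question literally asks for. C++20's std::bit_width or compiler builtins such as __builtin_clz can do this in one instruction, but the plain loop below assumes nothing beyond standard C++:

#include <cstdint>

// Returns the zero-based position of the highest set bit, or -1 for zero.
int msb_position(std::uint32_t x) {
    int pos = -1;
    while (x != 0) {   // each shift discards one low-order bit
        x >>= 1;
        ++pos;
    }
    return pos;        // e.g. msb_position(40) == 5, since 40 == 0b101000
}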

In Golang, uint16 VS int, which is less cost?

I am using a 64-bit server. My Go program needs integer types.
So, if I use the uint16 and uint32 types in my source code, does that cost more than using the regular int type?
I am considering both computing cost and development cost.
For the vast majority of cases using int makes more sense.
Here are some reasons:
Go doesn't implicitly convert between the numeric types, even when you think it should. If you start using some unsigned type instead of int, you should expect to pepper your code with multiple type conversions, because of other libraries or APIs preferring not to bother with unsigned types, because of untyped constant numerical expressions returning int values, etc.
Unsigned types are more prone to underflowing than signed types, because 0 (an unsigned type's boundary value) is much more of a naturally occurring value in computer programs than, for example, -9223372036854775808.
If you want to use an unsigned type because it restricts the values that you can put in it, keep in mind that when you combine silent underflow and compile time-only constant propagation, you probably aren't getting the bargain you were looking for. For example, while you cannot convert the constant math.MinInt64 to a uint, you can easily convert an int variable with value math.MinInt64 to a uint. And arguably it's not a bad Go style to have an if check whether the value you're trying to assign is valid for your program.
Unless you are experiencing significant memory pressure and your value space is somewhere slightly over what a smaller signed type would offer you, I'd think that using int will be much more efficient even if only because of development cost.
And even then, chances are that either there's a problem somewhere else in your program's memory footprint, or a managed language like Go is not the best fit for your needs.

long double (GCC specific) and __float128

I'm looking for detailed information on long double and __float128 in GCC/x86 (more out of curiosity than because of an actual problem).
Few people will probably ever need these (I've just, for the first time ever, truly needed a double), but I guess it is still worthwhile (and interesting) to know what you have in your toolbox and what it's about.
In that light, please excuse my somewhat open questions:
Could someone explain the implementation rationale and intended usage of these types, also in comparison of each other? For example, are they "embarrassment implementations" because the standard allows for the type, and someone might complain if they're only just the same precision as double, or are they intended as first-class types?
Alternatively, does someone have a good, usable web reference to share? A Google search on "long double" site:gcc.gnu.org/onlinedocs didn't give me much that's truly useful.
Assuming that the common mantra "if you believe that you need double, you probably don't understand floating point" does not apply, i.e. you really need more precision than just float, and one doesn't care whether 8 or 16 bytes of memory are burnt... is it reasonable to expect that one can as well just jump to long double or __float128 instead of double without a significant performance impact?
The "extended precision" feature of Intel CPUs has historically been source of nasty surprises when values were moved between memory and registers. If actually 96 bits are stored, the long double type should eliminate this issue. On the other hand, I understand that the long double type is mutually exclusive with -mfpmath=sse, as there is no such thing as "extended precision" in SSE. __float128, on the other hand, should work just perfectly fine with SSE math (though in absence of quad precision instructions certainly not on a 1:1 instruction base). Am I right in these assumptions?
(3. and 4. can probably be figured out with some work spent on profiling and disassembling, but maybe someone else had the same thought previously and has already done that work.)
Background (this is the TL;DR part):
I initially stumbled over long double because I was looking up DBL_MAX in <float.h>, and incidentally LDBL_MAX is on the next line. "Oh look, GCC actually has 128-bit doubles, not that I need them, but... cool" was my first thought. Surprise, surprise: sizeof(long double) returns 12... wait, you mean 16?
The C and C++ standards unsurprisingly do not give a very concrete definition of the type. C99 (6.2.5 10) says that the numbers of double are a subset of long double whereas C++03 states (3.9.1 8) that long double has at least as much precision as double (which is the same thing, only worded differently). Basically, the standards leave everything to the implementation, in the same manner as with long, int, and short.
Wikipedia says that GCC uses "80-bit extended precision on x86 processors regardless of the physical storage used".
The GCC documentation states, all on the same page, that the size of the type is 96 bits because of the i386 ABI, but that no more than 80 bits of precision are enabled by any option (huh? what?), and also that Pentium and newer processors want them aligned as 128-bit numbers. This is the default under 64 bits and can be manually enabled under 32 bits, resulting in 32 bits of zero padding.
Time to run a test:
#include <stdio.h>
#include <cfloat>

int main()
{
#ifdef USE_FLOAT128
    typedef __float128 long_double_t;
#else
    typedef long double long_double_t;
#endif

    long_double_t ld;

    int* i = (int*) &ld;
    i[0] = i[1] = i[2] = i[3] = 0xdeadbeef;

    for (ld = 0.0000000000000001; ld < LDBL_MAX; ld *= 1.0000001)
        printf("%08x-%08x-%08x-%08x\r", i[0], i[1], i[2], i[3]);

    return 0;
}
The output, when using long double, looks somewhat like this, with the marked digits being constant, and all others eventually changing as the numbers get bigger and bigger:
5636666b-c03ef3e0-00223fd8-deadbeef
                  ^^       ^^^^^^^^
This suggests that it is not an 80-bit number. An 80-bit number has 20 hex digits, but I see 22 hex digits changing, which looks much more like a 96-bit number (24 hex digits). It also isn't a 128-bit number, since 0xdeadbeef isn't touched, which is consistent with sizeof returning 12.
The output for __float128 looks like it's really just a 128-bit number. All bits eventually flip.
Compiling with -m128bit-long-double does not align long double to 128 bits with a 32-bit zero padding, as indicated by the documentation. It doesn't use __float128 either, but it does indeed seem to align to 128 bits, padding with the value 0x7ffdd000 (?!).
Further, LDBL_MAX seems to work as +inf for both long double and __float128. Adding or subtracting a number like 1.0E100 or 1.0E2000 to/from LDBL_MAX results in the same bit pattern.
Up to now, it was my belief that the foo_MAX constants were to hold the largest representable number that is not +inf (apparently that isn't the case?). I'm also not quite sure how an 80-bit number could conceivably act as +inf for a 128 bit value... maybe I'm just too tired at the end of the day and have done something wrong.
Ad 1.
Those types are designed to work with numbers of huge dynamic range. long double is implemented natively in the x87 FPU. The 128-bit double, I suspect, would be implemented in software on modern x86s, as there is no hardware capable of doing those computations directly.
The funny thing is that it's quite common to do many floating point operations in a row and the intermediate results are not actually stored in declared variables but rather stored in FPU registers taking advantage of full precision. That's why comparison:
double x = sin(0); if (x == sin(0)) printf("Equal!");
is not safe and cannot be guaranteed to work (without additional compiler switches).
Ad 3.
There's an impact on speed depending on what precision you use. You can change the precision used by the FPU with:
void
set_fpu (unsigned int mode)
{
    asm ("fldcw %0" : : "m" (*&mode));
}
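A usage sketch for the set_fpu() helper above (x87 only; the control-word values are the commonly quoted ones, with the precision-control field in bits 8 and 9 and all exceptions left masked):

int main() {
    set_fpu(0x27F);  // round x87 intermediates to 53-bit (double) precision
    // ... run the floating-point code under test ...
    set_fpu(0x37F);  // restore 64-bit extended precision (the x87 default)
    return 0;
}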
It will be faster for shorter variables and slower for longer ones. 128-bit doubles will probably be done in software, so they will be much slower.
It's not only about RAM being wasted, it's about cache being wasted. Going from a 64-bit double to an 80-bit long double will waste from 33% of the storage (96-bit/12-byte storage, i.e. 32 extra bits per value) to almost 50% (128-bit/16-byte storage, i.e. 64 extra bits per value) of the memory, including cache.
Ad 4.
On the other hand, I understand that the long double type is mutually exclusive with -mfpmath=sse, as there is no such thing as "extended precision" in SSE. __float128, on the other hand, should work just perfectly fine with SSE math (though in absence of quad precision instructions certainly not on a 1:1 instruction base). Am I right in these assumptions?
The FPU and SSE units are totally separate. You can write code using the FPU at the same time as SSE. The question is what the compiler will generate if you constrain it to use only SSE: will it try to use the FPU anyway? I've done some programming with SSE, and GCC will generate only scalar SISD code on its own; you have to help it use the SIMD versions. __float128 will probably work on every machine, even an 8-bit AVR uC. It's just fiddling with bits, after all.
An 80-bit number in hex representation is actually 20 hex digits. Maybe the bits which are not used are left over from some old operation? On my machine, I compiled your code and only 20 hex digits change in long double mode:
66b4e0d2-ec09c1d5-00007ffe-deadbeef
The 128-bit version has all the bits changing. Looking at the objdump output, it looks as if it is using software emulation; there are almost no FPU instructions.
Further, LDBL_MAX seems to work as +inf for both long double and __float128. Adding or subtracting a number like 1.0E100 or 1.0E2000 to/from LDBL_MAX results in the same bit pattern. Up to now, it was my belief that the foo_MAX constants were to hold the largest representable number that is not +inf (apparently that isn't the case?).
This seems to be strange...
I'm also not quite sure how an 80-bit number could conceivably act as +inf for a 128-bit value... maybe I'm just too tired at the end of the day and have done something wrong.
It's probably being extended. The pattern which is recognized to be +inf in 80-bit is translated to +inf in 128-bit float too.
IEEE-754 defined 32- and 64-bit floating-point representations for the purpose of efficient data storage, and an 80-bit representation for the purpose of efficient computation. The intention was that given float f1,f2; double d1,d2; a statement like d1=f1+f2+d2; would be executed by converting the arguments to 80-bit floating-point values, adding them, and converting the result back to a 64-bit floating-point type. This would offer three advantages compared with performing operations on other floating-point types directly:
While separate code or circuitry would be required for conversions to/from 32-bit types and 64-bit types, it would only be necessary to have one "add" implementation, one "multiply" implementation, one "square root" implementation, etc.
Although in rare cases using an 80-bit computational type could yield results that were very slightly less accurate than using other types directly (worst-case rounding error is 513/1024ulp in cases where computations on other types would yield an error of 511/1024ulp), chained computations using 80-bit types would frequently be more accurate--sometimes much more accurate--than computations using other types.
On a system without an FPU, separating a double into a separate exponent and mantissa before performing computations, normalizing the mantissa, and converting a separate mantissa and exponent back into a double are somewhat time-consuming. If the result of one computation will be used as input to another and discarded, using an unpacked 80-bit type will allow these steps to be omitted.
In order for this approach to floating-point math to be useful, however, it is imperative that it be possible for code to store intermediate results with the same precision as would be used in computation, such that temp = d1+d2; d4=temp+d3; will yield the same result as d4=d1+d2+d3;. From what I can tell, the purpose of long double was to be that type. Unfortunately, even though K&R designed C so that all floating-point values would be passed to variadic methods the same way, ANSI C broke that. In C as originally designed, given the code float v1,v2; ... printf("%12.6f", v1+v2);, the printf method wouldn't have to worry about whether v1+v2 would yield a float or a double, since the result would get coerced to a known type regardless. Further, even if the type of v1 or v2 changed to double, the printf statement wouldn't have to change.
ANSI C, however, requires that code which calls printf must know which arguments are double and which are long double; a lot of code--if not a majority--which uses long double but was written on platforms where it's synonymous with double fails to use the correct format specifiers for long double values. Rather than having long double be an 80-bit type except when passed as a variadic method argument, in which case it would be coerced to 64 bits, many compilers decided to make long double synonymous with double and not offer any means of storing the results of intermediate computations. Since using an extended-precision type for computation is only good if that type is made available to the programmer, many people came to regard extended precision as evil, even though it was only ANSI C's failure to handle variadic arguments sensibly that made it problematic.
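As a small, hedged illustration of the format-specifier point: with printf, a long double argument needs the L length modifier, and passing one under plain %f is a mismatch wherever long double is not synonymous with double:

#include <cstdio>

int main() {
    long double ld = 0.1L;
    std::printf("%Lf\n", ld);                      // correct: L length modifier for long double
    std::printf("%f\n", static_cast<double>(ld));  // also fine: explicitly narrowed first
    // std::printf("%f\n", ld);                    // mismatched where long double != double
    return 0;
}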
PS--The intended purpose of long double would have benefited if there had also been a long float which was defined as the type to which float arguments could be most efficiently promoted; on many machines without floating-point units that would probably be a 48-bit type, but the optimal size could range anywhere from 32 bits (on machines with an FPU that does 32-bit math directly) up to 80 (on machines which use the design envisioned by IEEE-754). Too late now, though.
It boils down to the difference between 4.9999999999999999999 and 5.0.
Although the range is the main difference, it is precision that is important.
These types of data are needed in great-circle calculations or the kind of coordinate mathematics likely to be used with GPS systems.
As the precision is much better than that of a normal double, it means you can typically retain 18 significant digits without losing accuracy in calculations.
Extended precision I believe uses 80 bits (used mostly in maths processors), so 128 bits will be much more accurate.
C99 and C++11 added types float_t and double_t which are aliases for built-in floating-point types. Roughly, float_t is the type of the result of doing arithmetic among values of type float, and double_t is the type of the result of doing arithmetic among values of type double.
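A small sketch showing what these resolve to on a given target; FLT_EVAL_METHOD is 0 where arithmetic is done at the declared precision (typical for x86-64/SSE) and 2 where it is done in long double (typical for 32-bit x87):

#include <cfloat>
#include <cmath>
#include <cstdio>

int main() {
    std::printf("FLT_EVAL_METHOD  = %d\n", FLT_EVAL_METHOD);
    std::printf("sizeof(float_t)  = %zu\n", sizeof(std::float_t));
    std::printf("sizeof(double_t) = %zu\n", sizeof(std::double_t));
    return 0;
}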
