Is there a predefined constant available in VS2010 that specifies the code is being compiled for 64-bit Windows? Currently I check whether a specific type has a length of 4 or 8 bytes, but I wonder if there is a more elegant way to find this out?
Thanks!
Posting this as an answer (more convenient than a comment) for readers:
http://msdn.microsoft.com/en-us/library/b0084kay.aspx
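For instance, a minimal sketch using the compiler-predefined macros (`_WIN64` is defined by the Microsoft compiler for any 64-bit Windows target, and `_WIN32` is defined for both 32- and 64-bit Windows; on other compilers neither macro exists, so the last branch is taken):

```cpp
#include <cstdio>

// Describe the compile target using the Microsoft predefined macros.
// On non-Microsoft compilers neither macro is defined, so the
// fallback branch is taken.
const char* target_description() {
#if defined(_WIN64)
    return "64-bit Windows";   // x64 or ARM64 Windows target
#elif defined(_WIN32)
    return "32-bit Windows";   // note: _WIN32 stays defined on 64-bit Windows too
#else
    return "not Windows";
#endif
}
```

Call it as `std::printf("%s\n", target_description());`. If you specifically want x64 rather than any 64-bit target, the compiler also defines `_M_X64`.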
I made a simple experiment in win7 to test its organization of heaps in memory allocations, using the following code:
char *pointer[50];
for (int i = 0; i < 50; i++) pointer[i] = new char[64];
for (int i = 0; i < 50; i++) printf("0x%X\n", (unsigned)(uintptr_t)pointer[i]);
The output was:
0x572F00
0x572F48
0x572F90
......
Obviously, the spacing between two adjacent pointers is 72 bytes rather than 64 bytes, so there must be some information kept in the first few bytes of every heap chunk. I printed out the values in the 8 extra bytes and found them to be:
71 39 19 36 B3 9F 00 08
Can anyone please tell me how to tell the size of the heap chunk from these values? Thanks!
Just the idea that you want to do this is pretty scary. This is undocumented information, liable to change without notice, liable to vary between debug and non-debug builds, and so on. I strongly suggest you find another way, such as storing the length using your own allocator.
In answer to your question, the information I know is stored is a forward and backward link and some flags. The links are probably stored in a single pointer using an XOR scheme. There is probably a sentinel as well.
If you really have to know the answer to this question, it's very easy to find. Simply compile and run your program in Visual Studio and step into the C run-time library code for new. All the declarations and code are there for you to read. Fully commented, very straightforward stuff.
Please note: this is nothing to do with the Windows 7 API. This is the runtime library associated with the C++ compiler (which I assume is Visual Studio).
There are several memory allocators internal to Windows 7, but that's an entirely different story.
I am looking for an equivalent to drand48 on Windows. To all who do not know, the following is not equivalent:
(double)rand()/RAND_MAX;
Firstly, rand can return RAND_MAX, so the expression above can yield 1.0, whereas drand48 returns values in [0, 1).
Secondly, on Windows RAND_MAX = 32767, which is too short a period for my application.
My purpose is to generate noise for a simulation. It is desirable to use a pseudo-random generator with the same period as drand48.
Firstly, note that you appear to be confusing the resolution with the period. On Windows, rand will return values from 0 to 32767, but this does not mean that the same values will repeat every 32768 calls. So rand should be perfectly adequate unless you need more than 16 bits of resolution. (The resolution and the period are the same in drand48, but not in all pseudorandom number generators.)
If you do not need the exact behaviour of drand48, rand_s would be the simplest option. It has a 32-bit resolution, less than drand48 but enough for most purposes. It generates a cryptographically secure random number, so there is no fixed period. One possible issue is that it will be much slower than drand48.
If you want the same behaviour as drand48, the algorithm is documented and should be easy to re-implement, or you could use the source code from FreeBSD (link to source code browser on http://fxr.watson.org/).
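For example, here is a minimal portable sketch of the documented drand48 recurrence, X(n+1) = (0x5DEECE66D * X(n) + 0xB) mod 2^48, with the new state scaled into [0, 1). This is reconstructed from the documented algorithm, not taken from the FreeBSD source:

```cpp
#include <cstdint>

// 48-bit LCG parameters from the drand48 specification.
struct Drand48 {
    static const uint64_t A = 0x5DEECE66DULL;
    static const uint64_t C = 0xBULL;
    static const uint64_t MASK48 = (1ULL << 48) - 1;

    uint64_t state;

    // srand48(seed): high 32 bits of the state come from the seed,
    // low 16 bits are fixed to 0x330E.
    explicit Drand48(uint32_t seed)
        : state(((uint64_t)seed << 16) | 0x330E) {}

    // Advance the LCG and scale the 48-bit state into [0, 1).
    double next() {
        state = (A * state + C) & MASK48;
        return (double)state / (double)(1ULL << 48);
    }
};
```

This gives the full 2^48 period and 48-bit resolution of drand48, and a given seed reproduces the same sequence on any platform.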
7 years late to the party. Sorry.
The GNU Scientific Library is a good solution to this problem. The library provides several high-quality generator algorithms.
https://www.gnu.org/software/gsl/doc/html/rng.html
It might not be an exact answer to your question, but it is still a solution: use CryptGenRandom (it's from the WinAPI).
If we take a 32-bit CRC, then the data word size will be 2 to the power of 32 (2**32) plus 32 bits for the CRC... or not? Am I missing something?
If I want to write code in Microsoft Visual C++ to implement a 32-bit CRC, which data type should I use? Maybe I am missing the point and talking rubbish.
Basically it is my assignment to implement 32-bit CRC and I am completely at a loss how to go about it.
Sorry if the question is vague. Any help toward implementation, logic, or basic fundamentals will be greatly appreciated.
CRC-32 is basically polynomial division: the message is treated as one long polynomial over GF(2), divided by a fixed 33-bit generator polynomial, and the 32-bit remainder is the CRC.
Recommended introductory reading:
http://en.wikipedia.org/wiki/Cyclic_redundancy_check
http://www.mathpages.com/home/kmath458.htm
http://www.ross.net/crc/download/crc_v3.txt
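To make the division concrete, here is a simple bit-at-a-time sketch of the common reflected CRC-32 (polynomial 0xEDB88320, the variant used by ZIP and PNG). Real implementations usually use a 256-entry lookup table for speed, but this shows the logic:

```cpp
#include <cstdint>
#include <cstddef>

// Bit-at-a-time CRC-32 (reflected form, polynomial 0xEDB88320).
uint32_t crc32(const unsigned char* data, std::size_t len) {
    uint32_t crc = 0xFFFFFFFFu;              // standard initial value
    for (std::size_t i = 0; i < len; ++i) {
        crc ^= data[i];                      // fold in the next byte
        for (int bit = 0; bit < 8; ++bit) {
            // If the low bit is set, "subtract" (XOR) the polynomial.
            if (crc & 1) crc = (crc >> 1) ^ 0xEDB88320u;
            else         crc >>= 1;
        }
    }
    return crc ^ 0xFFFFFFFFu;                // final XOR
}
```

A plain `uint32_t` (i.e. `unsigned int` on Visual C++) is the natural data type. A standard check value: the CRC-32 of the ASCII string "123456789" is 0xCBF43926.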
I'm trying to get this done in a C++ program on Windows, using Visual C++. I only need to support 64-bit targets. I know about hacks that use division or multiplication to get the info, but I'd like to know if there's a faster non-generic way to do this... I would even consider inline assembly, but you can't do that in VS for 64-bit targets.
If code portability is not an issue, you should try _BitScanForward64 and _BitScanReverse64. They're compiler intrinsics and map to a single, efficient assembler instruction (BSF/BSR).
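A sketch of the usage (with a GCC/Clang fallback via __builtin_ctzll so the snippet also compiles outside MSVC):

```cpp
#include <cstdint>
#if defined(_MSC_VER)
#include <intrin.h>
#endif

// Index of the lowest set bit of a non-zero 64-bit value.
unsigned lowest_set_bit(uint64_t v) {
#if defined(_MSC_VER)
    unsigned long index;
    _BitScanForward64(&index, v);         // compiles to a single BSF instruction
    return (unsigned)index;
#else
    return (unsigned)__builtin_ctzll(v);  // GCC/Clang equivalent
#endif
}
```

Note that both the intrinsic and the builtin are undefined for v == 0, so the caller must check for zero first.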
I've been reviewing the year 2038 problem (Unix Millennium Bug).
I read the article about this on Wikipedia, where I read about a solution for this problem.
Now I would like to change the time_t data type to an unsigned 32-bit integer, which will keep it working until 2106. I have Linux kernel 2.6.23 with RTPatch on PowerPC.
Is there any patch available that changes the time_t data type to an unsigned 32-bit integer for PowerPC, or any patch that resolves this bug?
time_t is actually defined in your libc implementation, and not the kernel itself.
The kernel provides various mechanisms for obtaining the current time (in the form of system calls), many of which already support more than 32 bits of precision. The problem is actually your libc implementation (glibc on most desktop Linux distributions), which, after fetching the time from the kernel, returns it to your application as a 32-bit signed integer.
While one could theoretically change the definition of time_t in your libc implementation, in practice it would be fairly complicated: such a change would alter the ABI of libc, in turn requiring every application that uses libc to be recompiled from source.
The easiest solution instead is to upgrade your system to a 64-bit distribution, where time_t is already defined to be a 64-bit data type, avoiding the problem altogether.
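To see the two limits concretely (assuming a system where time_t is already 64 bits, as on current 64-bit distributions, so that neither value overflows):

```cpp
#include <ctime>

// UTC year in which a given time_t instant falls.
int utc_year(std::time_t t) {
    return 1900 + std::gmtime(&t)->tm_year;   // tm_year counts from 1900
}
```

Here utc_year(2147483647) gives 2038, the signed 32-bit rollover (19 January 2038), while utc_year(4294967295LL) gives 2106, the limit of an unsigned 32-bit counter.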
Regarding the 64-bit distribution suggested here, may I note all the issues with implementing that. There are many 32-bit non-PAE computers in the embedded industry, and replacing them with 64-bit machines is going to be a LARGE problem. Everyone is used to desktops that get replaced or upgraded frequently, but Linux O.S. suppliers need to get serious about providing a different option. It's not as if a 32-bit computer is flawed or useless, or will wear out in 16 years. It doesn't take a 64-bit computer to monitor analog inputs, control equipment, and report alarms.