I've got some data files that were written by a 32-bit Visual C++ program with structure alignment set to /Zp2 (just a dump of a big typedef'd structure). Now I want to read those files from a 64-bit program, also in Visual C++ (2017 rather than 2010). If I set the alignment here to /Zp2, for some reason it crashes another library that I use, so the alignment is left at the default and I have put #pragma pack(2) in front of all the structures that I need. This does not seem to work, however: when I try to access structure members, the data is off. Is there something I am missing, or is this not going to be possible?
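For reference, a minimal sketch of the pattern being described, with a made-up FileRecord struct; wrapping the definition in #pragma pack(push, 2) / pack(pop) pins the packing for just that struct, and note that members whose size differs between 32-bit and 64-bit builds (pointers, size_t, time_t) will still shift the offsets even when the packing matches:

    // Hypothetical struct, not the real one from the data files.
    #pragma pack(push, 2)      // match the /Zp2 layout the 32-bit writer used
    struct FileRecord {
        short  id;             // 2 bytes, offset 0
        double value;          // 8 bytes, offset 2 (not padded out to 8)
        int    count;          // 4 bytes, offset 10
        // Avoid pointer-sized members: they are 4 bytes in the 32-bit
        // writer but 8 bytes in the 64-bit reader, so offsets diverge
        // regardless of the packing setting.
    };
    #pragma pack(pop)          // restore the project-default alignment

    static_assert(sizeof(FileRecord) == 14, "layout must match the 32-bit writer");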
I'm not a hardware guy, but I know that a request for a 64-bit version of Visual Studio was declined by Microsoft, which stated that a 64-bit version would not have good performance.
One noticeable difference between the two that I feel is obvious is the code base. Visual Studio began its life in 1997; one would think that means more baggage on the Visual Studio side, fewer opportunities for a very modern application architecture and code, and perhaps parts that were built to perform on 32-bit and for some reason aren't suitable for 64-bit. I don't know.
Visual Studio Code, on the other hand, is a modern Electron app, which means it is pretty much just compiled HTML, CSS, and JavaScript. I'm betting that making a 64-bit version of Visual Studio Code faced few obstacles, and although the performance gain may not be truly noticeable, why not?
P.S.
I would still like to understand which areas may be improved in performance and whether that improvement is negligible to a developer. Any additional info or fun facts you may know would be great; I would like to have as much info as possible, and I will update the question with any hard facts I uncover that are not mentioned.
The existence of 64-bit Visual Studio Code is largely a side-effect of the fact that the Node.js- and Chromium-based runtimes of Electron support both 32- and 64-bit architectures, not a primary design goal for the application. Microsoft developed VS Code with Electron, a framework used to build desktop applications with web technologies.
Because Electron already includes runtimes for both architectures (and for different operating systems), VS Code can provide both versions with little additional effort—Electron abstracts the differences between machines from the JavaScript code.
By contrast, Microsoft distributes much of Visual Studio as compiled binaries that contain machine-specific instructions, and the cost of rewriting and maintaining the source code for 64-bits historically outweighed any benefits. In general, a 64-bit program isn't noticeably faster to the end user than its 32-bit counterpart if it never exceeds the limitations of a 32-bit system. Visual Studio's IDE shell doesn't do much heavy-lifting—the bulk of the expensive processing in a typical workflow is performed by the integrated toolchains (compilers, etc.) which usually support 64-bit systems.
With this in mind, any benefits we may notice from running a 64-bit version of VS Code are similar to those we would see from using a 64-bit web browser. Most significantly, a 64-bit version can address more than 4 GB of memory, which may matter if we need to open a lot of files simultaneously or very large files, or if we use many heavy extensions. So—most important to us developers—the editor won't run out of memory when abused.
While this sounds like an insurance policy worth signing, even if we never hit those memory limits, remember that 64-bit applications generally consume more memory than their 32-bit counterparts. We may want to choose the 32-bit version if we desire a smaller memory footprint. Most developers may never hit that 4 GB wall.
In rare cases, we may need to choose either a 32-bit or 64-bit version if we use an extension that wraps native code like a DLL built for a specific architecture.
Any other consequences, positive or negative, that we experience from using a 64-bit version of VSCode depend on the versions of Electron's underlying runtime components and the operating system they run on. These characteristics change continuously as development progresses. For this reason, it's difficult to state in a general manner that the 32-bit or 64-bit versions outperform the other.
For example, the V8 JavaScript engine historically disabled some optimizations on 64-bit systems that are enabled today. Certain optimizations are only available when the operating system provides facilities for them.
Future 64-bit versions on Windows may take advantage of address space layout randomization for improved security (more bits in the address space increases entropy).
For most users, these nuances really don't matter. Choose the version that matches the architecture of your system, and switch only if you encounter problems. Updates to the editor will continue to bring optimizations for its underlying components. If resource usage is a big concern, you may not want to use a GUI editor in the first place.
I haven't worked much on Windows, but I have worked with x86, x64, and ARM (both 32-bit and 64-bit instruction sets) processors. Based on my experience, before writing code in 64-bit format we asked: do we really need 64-bit instructions? If our operation can be performed within 32 bits, why would we need another 32 bits?
Think of it like this: you have a processor with 64-bit address and data buses and 64-bit registers. Almost all of the instructions in your program require at most 32 bits. What will you do? Well, I think there are two ways:
Create a 64-bit version of your program and run all the 32-bit instructions on your 64-bit processor (wasting 32 bits of your processor in each instruction cycle, and filling the program counter with an address that is 4 bytes ahead). Your application, which could have run in 256 MB of RAM, now requires 512 MB, so other programs and processes in RAM will suffer.
Keep the program in 32-bit format and combine two 32-bit instructions to be pushed into your 64-bit processor for execution.
Obviously, the second approach will run faster with the same resources.
But yes, if your program contains more instructions that really need 64 bits, for example processing 4K video (better on a 64-bit processor with a 64-bit instruction set) or performing floating-point operations with up to 15 decimal digits of precision, then it is better to create a 64-bit program file.
Long story short: try to write compact software and leverage the hardware as much as possible.
From what I have read here, here, and here, I understand that most of the components of VS require only 32-bit instructions.
Hope that explains it.
Thanks
4 years later, in 2021, you now have:
"Microsoft's Visual Studio 2022 is moving to 64-bit" from Mary Jo Foley
It references the official publication "Visual Studio 2022" from Amanda Silver, CVP of Product, Developer Division
Visual Studio 2022 is 64-bit
Visual Studio 2022 will be a 64-bit application, no longer limited to ~4gb of memory in the main devenv.exe process. With a 64-bit Visual Studio on Windows, you can open, edit, run, and debug even the biggest and most complex solutions without running out of memory.
While Visual Studio is going 64-bit, this doesn’t change the types or bitness of the applications you build with Visual Studio. Visual Studio will continue to be a great tool for building 32-bit apps.
I find it really satisfying to watch this video of Visual Studio scaling up to use the additional memory that’s available to a 64-bit process as it opens a solution with 1,600 projects and ~300k files.
Here’s to no more out-of-memory exceptions. 🎉
I'm going to upgrade my VC++ 2008 into VC++ 2010 or 2012. Before upgrading I have some questions:
The largest integer type my compiler supports is __int64. Is __int128 supported in VC++ 2010, 2012, and later?
My current compiler doesn't support something like _WIN64 or _WIN32 to check the Windows platform, but I wonder whether _INTEGRAL_MAX_BITS is another solution: would it be 64 when running on Win32 and 128 otherwise? Is that true?
For all of the compilers that you mention, 64 bit integer types exist for both 32 and 64 bit targets. However, there are no 128 bit integer types. So, _INTEGRAL_MAX_BITS evaluates to 64 for all of the listed compilers, and for both 32 and 64 bit targets.
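As an aside on the second part of the question: all of these compilers predefine _WIN32 for Windows targets and additionally _WIN64 for 64-bit targets, so a platform check doesn't need to lean on _INTEGRAL_MAX_BITS at all. A minimal sketch:

    #include <stdio.h>

    int main()
    {
    #if defined(_WIN64)
        // Defined only for 64-bit Windows targets (_WIN32 is defined here too).
        printf("64-bit target, _INTEGRAL_MAX_BITS = %d\n", _INTEGRAL_MAX_BITS);
    #elif defined(_WIN32)
        // Defined for 32-bit Windows targets.
        printf("32-bit target, _INTEGRAL_MAX_BITS = %d\n", _INTEGRAL_MAX_BITS);
    #endif
        return 0;
    }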
The best you can do is probably to use the SSE2 intrinsic type __m128i, but that depends on the presence of an SSE2 unit in the processor. You don't need to upgrade to be able to use that, though; it's available in VS2008 as well.
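A hedged sketch of what working with __m128i looks like; note that the SSE2 integer add treats the register as two independent 64-bit lanes, so genuine 128-bit arithmetic still has to handle the carry between lanes itself:

    #include <emmintrin.h>   // SSE2 intrinsics, available since VS2008
    #include <stdio.h>

    int main()
    {
        // Two 128-bit values, each written as four 32-bit pieces (high to low).
        __m128i a = _mm_set_epi32(0, 1, 0, 5);   // high lane = 1, low lane = 5
        __m128i b = _mm_set_epi32(0, 2, 0, 7);   // high lane = 2, low lane = 7

        // _mm_add_epi64 adds the two 64-bit lanes independently; it does not
        // propagate a carry from the low lane into the high lane, so this is
        // not a full 128-bit addition by itself.
        __m128i laneSum = _mm_add_epi64(a, b);

        unsigned __int64 lanes[2];
        _mm_storeu_si128((__m128i*)lanes, laneSum);   // store to inspect the lanes
        printf("high lane = %I64u, low lane = %I64u\n", lanes[1], lanes[0]);
        return 0;
    }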
The C++ standard has a dedicated facility for what you want: in short, <limits> provides std::numeric_limits, whose members are a solution to your problem.
EDIT: I'm assuming that you want some kind of standardized support; otherwise you should simply refer to the online docs for your compiler.
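For example, a short sketch of querying integer widths through <limits> instead of compiler-specific macros:

    #include <limits>
    #include <iostream>

    int main()
    {
        // digits is the number of value bits (the sign bit is excluded for
        // signed types), so __int64 / long long reports 63 here.
        std::cout << "value bits in long long: "
                  << std::numeric_limits<long long>::digits << '\n';
        std::cout << "value bits in unsigned long long: "
                  << std::numeric_limits<unsigned long long>::digits << '\n';
        std::cout << "largest long long: "
                  << std::numeric_limits<long long>::max() << '\n';
        return 0;
    }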
I am developing and deploying on 64 bit computers. Unfortunately, due to a bug in the 64 bit JIT compiler that has existed and remains unfixed by Microsoft since the introduction of the .NET 64 bit version, my code scales quadratically and breaks. Here is a link to the documentation on the bug: http://connect.microsoft.com/VisualStudio/feedback/details/508748 The old 32 bit compiler works fine. I need to compile one dll to 32 bits and make sure it runs as 32 bits.
In the workarounds someone has written:
Since Microsoft has not figured out in 3 years how to create a < 400MB image from a 20K XSLT compiled transform script we have been surviving by setting the assemblies that are implementing any XSLT transformation to 32 bit.
How is that done? Thanks!
Note: I need this to compile a regex into an assembly using Regex.CompileToAssembly method.
In Visual Studio, select the Build menu and open the Configuration Manager. Look at the "Platform" column of the Project Contexts, and ensure that the selected platform for the project is x86, and that should ensure compilation to a 32-bit target.
I have an application which has many services and one UI module, all developed in VC++ 6.0. The total size is about 560 KLOC.
It uses multithreading, MFC, and data types like WORD, int, and long.
Now we need to support a 64-bit OS. What changes would we need to make to the product?
By support I mean both running the application on a 64-bit OS and making use of the 64-bit memory.
Edit: I am ruling out migration to VS2005 or anything higher than VC6.0 due to time constraints.
So what changes need to be made?
64-bit Windows includes 32-bit support via WOW64. Any 32-bit application should just continue to work.
(It is only drivers that have to match the bitness of the OS.)
[Note to commenters: plugins, of whatever type, are not separate applications but DLLs used by other applications, and those do need to match the host. In that case you also get the same problem, where 64-bit extensions are incompatible with 32-bit hosts.]
As Richard says, the 32-bit version should continue to work unless you've got a driver or a shell extension or something.
However, if you do need to upgrade the code you're going to have to upgrade the compiler too: I don't think MFC got good 64-bit support until VS2005 or later. I'd suggest you get the 32-bit code building in VS2010 - this will not be trivial - and then start thinking about converting it to 64-bit. You can of course leave the production 32-bit builds in VC6, but then you add a maintenance burden.
You'll probably get most of the way through the conversion by flipping the compiler to 64-bit and turning on full warnings - particularly given the size of your code, it may be impractical to review it all. One thing to watch out for is storing pointers in ints, DWORDs, etc., which may now be too short to hold the pointer - you need DWORD_PTR and friends now - but I think the warnings do catch that.
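A tiny, hypothetical example of the pattern those warnings flag:

    #include <windows.h>

    void remember(void* p)
    {
        // Old 32-bit habit: on a 64-bit build this truncates the pointer,
        // and the compiler flags it (C4311, pointer truncation).
        DWORD     bad  = (DWORD)p;

        // Portable version: DWORD_PTR / ULONG_PTR grow with the pointer size.
        DWORD_PTR good = (DWORD_PTR)p;

        (void)bad;
        (void)good;
    }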
Or, if this is spread across many components, you might get away with migrating only a few of them to 64-bit. Then, unfortunately, you've got data-size issues in the communication between the two versions.
You must convert to a newer compiler; time constraints are pretty much irrelevant. The VC6 compiler simply cannot generate 64-bit code. Every pointer it generates is 32 bits, for starters. If you need to access "64-bit memory", i.e. memory above 0x00000000FFFFFFFF, then 32 bits is simply not enough.
If you're ruling out changing your IDE to one that intrinsically supports 64-bit compiling and debugging, you're making your job unnecessarily more complex. Are you sure it's not worth the hit?
Just for running on a 64bit OS, you won't need to make any changes. That's what WOW64 is for.
If, however, you want to run 64bit natively (i.e., access the 64bit memory space) you will have to compile as 64bit. That means using an IDE that supports 64bit. There is no way around this.
Most programs should have no problem converting to 64bit if they are written with decent coding standards (mainly, no assumptions about the size of a pointer, like int-pointer conversions). You'll get a lot of warnings about things like std::size_t conversions, but they will be fairly meaningless.
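A hypothetical one-liner that triggers one of those warnings:

    #include <vector>

    int count_items(const std::vector<int>& v)
    {
        // On a 64-bit build size() returns a 64-bit std::size_t, so this
        // assignment draws warning C4267, "conversion from 'size_t' to 'int',
        // possible loss of data". It only matters if the container could ever
        // exceed INT_MAX elements; make n a std::size_t (or cast) to silence it.
        int n = v.size();
        return n;
    }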
For fun, I'm working on a compiler for a small language, and I'm targeting the ARM instruction set first due to its ease. Currently, I'm able to compile the code so I have ARM machine code for the body of each method. At this point I need to start tying a few things together:
What format should I persist my machine code to so I can...
Run it in what debugger?
Currently there's no I/O support, etc., so debugging will be heavily keyed to my ability to step through the disassembly and view processor registers/memory.
I'm running Windows and my compiler only runs in Windows, so having some sort of emulator on Windows would be preferable.
Edit: It appears I can use the Visual Studio Windows Mobile 6 emulator. For now, I might be able to simply save the results in a simple binary format and load it into emulator memory via a tiny C++ console application, then jump into it with a function pointer. Later, it appears I would need to support the ELF and PE formats.
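A rough sketch of that tiny loader, assuming the emitted code is a raw dump in a file called body.bin (a made-up name), follows the platform calling convention, and takes no arguments; on an actual ARM target the instruction cache also has to be flushed after the copy:

    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        FILE* f = fopen("body.bin", "rb");        // raw machine code, no header
        if (!f) return 1;
        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        fseek(f, 0, SEEK_SET);

        // Ask the OS for memory that is allowed to contain executable code.
        void* mem = VirtualAlloc(NULL, size, MEM_COMMIT | MEM_RESERVE,
                                 PAGE_EXECUTE_READWRITE);
        fread(mem, 1, size, f);
        fclose(f);

        // Required on ARM after writing instructions to memory.
        FlushInstructionCache(GetCurrentProcess(), mem, size);

        typedef int (*EntryPoint)();
        EntryPoint entry = (EntryPoint)mem;       // jump in via a function pointer
        int result = entry();

        printf("generated code returned %d\n", result);
        VirtualFree(mem, 0, MEM_RELEASE);
        return 0;
    }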
Regarding file formats... the simplest would be:
Motorola S-record
Intel hex file
Those formats can record the binary data and the target address range(s) for the data to be loaded. That's about it.
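To give a sense of how little is involved, here is a hedged sketch of emitting one Intel hex data record (record type 00) for a block of bytes at a 16-bit load address; the helper names are made up:

    #include <cstdio>
    #include <cstdint>
    #include <vector>
    #include <string>

    // Emit one Intel HEX data record ":LLAAAATTDD..DDCC", where the checksum
    // CC is the two's complement of the sum of every byte before it.
    std::string hexDataRecord(uint16_t address, const std::vector<uint8_t>& data)
    {
        char buf[8];
        std::string rec = ":";
        uint8_t sum = 0;

        auto emit = [&](uint8_t byte) {
            std::snprintf(buf, sizeof(buf), "%02X", byte);
            rec += buf;
            sum = static_cast<uint8_t>(sum + byte);
        };

        emit(static_cast<uint8_t>(data.size()));    // LL: byte count
        emit(static_cast<uint8_t>(address >> 8));   // AAAA: address, high byte first
        emit(static_cast<uint8_t>(address & 0xFF));
        emit(0x00);                                 // TT: 00 = data record
        for (uint8_t d : data) emit(d);             // DD: payload bytes
        std::snprintf(buf, sizeof(buf), "%02X",
                      static_cast<uint8_t>(0x100 - sum));  // CC: checksum
        rec += buf;
        return rec;
    }

    int main()
    {
        std::vector<uint8_t> code = { 0x01, 0x10, 0xA0, 0xE3 };  // example bytes
        std::puts(hexDataRecord(0x8000, code).c_str());
        // The end-of-file record ":00000001FF" would follow the data records.
        return 0;
    }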
A more capable format that can hold more information:
ELF
for maximum information, include DWARF debug information
ELF is fairly widely supported, and not too complex. DWARF allows you to record very expressive debug information for debugging of complex language constructs. However, to achieve that expressiveness, it can be a very complex format to write.