Should I provide an x64 build of my application?

Perhaps I'm missing a major point of the x64 platform here, but my perception was that x64 applications only performed better than their x86 versions (on x64 hardware and OS, obviously) when large amounts of memory, large pointers, or other demanding factors were involved.
However, I've started to notice some smaller applications offering x64 versions of their installers in addition to the standard x86 versions. Since x86 runs just fine on 64-bit Windows using WOW64, is there any benefit to me releasing an x64-compiled version of my application? As I see it:
Pros:
Potentially higher performance (though under what conditions?)
Cons:
Additional build to create/support
Potential bugs in x64 target that aren't present in the x86 target
Dependence on x64 versions of vendor/OS DLLs, requiring different install checklist and introducing additional troubleshooting complications
What are some compelling reasons that should cause me to reconsider adding an x64-compiled version of my app?

Another potential reason for compiling and debugging an x64 version is that it may expose hidden bugs in the x86 version, such as improper conversions between 32-bit integers and (now 64-bit) pointers. It also positions you to support x64 in the future.
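As a minimal sketch (my own illustration, not from the original answer) of the sort of conversion bug that only surfaces in the 64-bit build:

    // Hypothetical C++ example: this round-trip works by accident in an
    // x86 build, where pointers are 32 bits, but truncates in an x64 build.
    #include <cstdint>
    #include <cstdio>

    int main() {
        int value = 42;
        int* p = &value;

        // BUG: squeezing a pointer into a 32-bit integer. Fine on x86,
        // but on x64 this loses the upper 32 bits of the address.
        unsigned int stored = (unsigned int)(uintptr_t)p;
        int* q = (int*)(uintptr_t)stored;   // may be garbage on x64

        // Fix: use uintptr_t (or DWORD_PTR on Windows), which is as wide
        // as a pointer on every target.
        uintptr_t ok = (uintptr_t)p;
        int* r = (int*)ok;

        printf("%d\n", *r);  // safe; dereferencing q could crash on x64
        return 0;
    }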
However, unless your application would benefit from 64-bit integers, additional CPU registers, or a larger memory space (beyond ~3.5 GB), or implements a device driver, staying with a 32-bit application is fine. All major operating systems support running 32-bit and 64-bit applications at the same time, so there will not be a major push towards 64-bit-only applications.
By the way, applications based on .NET automatically benefit from being executed on a 64-bit system without any code changes; no additional testing is required.

The potential performance improvement relates mostly to the use of 64-bit integers (around four times as fast in an x64 build on my machine as in an x86 build on the same hardware) and the fact that compilers may assume some CPU features, such as SSE2, to be universally present on CPUs supporting x64; this can lead to more optimized code.
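To give a feel for the 64-bit integer point, here is a small sketch of my own (assuming a simple hot loop): the 64-bit multiply below is a single native instruction in an x64 build, while an x86 build has to synthesize it from several 32-bit operations.

    // FNV-1a hashing: dominated by 64-bit multiplies, so it benefits
    // directly from native 64-bit registers in an x64 build.
    #include <cstdint>
    #include <cstdio>

    int main() {
        const char* s = "example";
        uint64_t h = 1469598103934665603ULL;   // FNV-1a offset basis
        for (const char* p = s; *p; ++p) {
            h ^= (unsigned char)*p;
            h *= 1099511628211ULL;             // FNV-1a prime; one MUL on x64
        }
        printf("%llu\n", (unsigned long long)h);
        return 0;
    }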
For many applications, especially small ones, it isn't too difficult to be 64-bit clean; for larger ones it's a major headache, granted. Compelling reasons are few, usually. But some platforms don't install their 32-bit support in 64-bit versions by default (I think FreeBSD needs to be told explicitly to do so, but I may err on that).

Your program will benefit if it uses a lot of long longs, and WOW64 does of course mean a minor performance hit (very minor, though, because the CPU has a compatibility mode for exactly this reason)...
Windows support for 32-bit programs will degrade in the future (albeit slowly), so in another year or two you may well wonder why you would want to deploy a 32-bit application at all...
Also, a 64-bit build of your application can actually be much more optimized than a 32-bit build, because with 64-bit you are guaranteed to have quite a few CPU features, such as SSE2.
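To illustrate the SSE2 point, a sketch of my own: an intrinsic used without any runtime CPU check, which is only safe because every x64 CPU is guaranteed to implement SSE2. In a generic 32-bit build you would need a runtime check and a fallback path.

    // Unconditional SSE2: adds two pairs of doubles in one instruction.
    #include <emmintrin.h>  // SSE2 intrinsics

    void add2(double* dst, const double* a, const double* b) {
        __m128d va = _mm_loadu_pd(a);            // load a[0], a[1]
        __m128d vb = _mm_loadu_pd(b);            // load b[0], b[1]
        _mm_storeu_pd(dst, _mm_add_pd(va, vb));  // dst[i] = a[i] + b[i]
    }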

Performance: x86_64 is faster than x86 by a small amount (how much also depends on your compiler), for the reasons already stated. It's also easier to support really huge data sets, though of course many applications never need to go there.
Certainly on Linux and OS X, you really should support x86_64; I'm typing this in a 64-bit browser on OS X, and my Linux box in the corner is also almost exclusively 64-bit. 64-bit Windows is a bit more of a question, but that is coming now (with the drivers).

This is often based on human factors rather than objective technical reasoning: 64-bit is the latest and greatest, so it must be better than 32-bit, and if customers want it, the customer is always right. I have spoken with Windows users whose stated goal was that when they view their process list, *32 should not appear next to any of their apps.
Oftentimes this is also fueled by confusion about compatibility: people have 64-bit operating systems and just want to make sure the software will work on their computers. Expecting average users to understand the technical demarcation between 32-bit processes and a 64-bit OS is unrealistic. Unless it is explicitly stated on the packaging, it can be a point of confusion or concern for a customer purchasing new software. You will often see 64-bit mentioned for this reason; typically it really means only 64-bit compatibility.
That said, there are a few 64-bit applications (Flash Player and Google Earth top my list) that can't come soon enough.

Here's one reason: integrating with 64-bit apps, like writing a shell extension. A 64-bit process cannot load a 32-bit DLL directly, so you have to make yours 64-bit as well.
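A sketch of what that failure looks like in practice (the DLL name here is hypothetical): LoadLibrary refuses a module of the wrong bitness with ERROR_BAD_EXE_FORMAT.

    // In a 64-bit process, loading a 32-bit DLL fails at LoadLibrary time.
    #include <windows.h>
    #include <cstdio>

    int main() {
        HMODULE mod = LoadLibraryA("MyShellExt32.dll");  // hypothetical 32-bit DLL
        if (!mod) {
            DWORD err = GetLastError();
            if (err == ERROR_BAD_EXE_FORMAT)  // error 193: bitness mismatch
                printf("Wrong architecture: rebuild the DLL as x64.\n");
            else
                printf("LoadLibrary failed with error %lu\n", err);
            return 1;
        }
        FreeLibrary(mod);
        return 0;
    }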

One factor that hasn't been mentioned yet: WoW64 Is Now an Optional Feature for Server Core. Only an issue if your application needs to run on server systems, of course.
Similarly, Windows PE, Windows RE, etc., do not include Wow64.
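For completeness, a minimal sketch of the standard way a 32-bit process can tell it is running under WOW64 (on editions where WOW64 is absent, a 32-bit binary simply won't launch at all):

    // Detect WOW64 from a 32-bit process. IsWow64Process has been available
    // since Windows XP SP2; code targeting older systems usually resolves it
    // via GetProcAddress instead of linking it directly.
    #include <windows.h>
    #include <cstdio>

    int main() {
        BOOL wow64 = FALSE;
        if (IsWow64Process(GetCurrentProcess(), &wow64) && wow64)
            printf("32-bit process on 64-bit Windows (WOW64 present).\n");
        else
            printf("Native process, or WOW64 not in use.\n");
        return 0;
    }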


What is the quantifiable benefit of 64 bit for Visual Studio Code over 32 bit

I'm not a hardware guy, but I know that a request for a 64-bit version of Visual Studio was declined by Microsoft, which stated that a 64-bit version would not have good performance.
One noticeable difference between the two that I feel is obvious is the code base. Visual Studio began its life in 1997; one would think that means more baggage and fewer opportunities for a very modern application architecture, and that some of it may have been built to perform well on 32-bit and for some reason isn't suitable for 64-bit. I don't know.
Visual Studio Code, on the other hand, is a modern Electron app, which means it is pretty much just HTML, CSS, and JavaScript. I'm betting that producing a 64-bit version of Visual Studio Code had little in the way of obstructions, and although the performance gain may not be truly noticeable, why not?
P.S.
I still would like to understand what areas may be improved in performance and whether that improvement is negligible to a developer. Any additional info or fun facts you may know would be great; I would like to have as much info as possible, and I will update the question with any hard facts I uncover that are not mentioned.
The existence of 64-bit Visual Studio Code is largely a side-effect of the fact that the Node.js- and Chromium-based runtimes of Electron support both 32- and 64-bit architectures, not a primary design goal for the application. Microsoft developed VS Code with Electron, a framework used to build desktop applications with web technologies.
Because Electron already includes runtimes for both architectures (and for different operating systems), VS Code can provide both versions with little additional effort—Electron abstracts the differences between machines from the JavaScript code.
By contrast, Microsoft distributes much of Visual Studio as compiled binaries that contain machine-specific instructions, and the cost of rewriting and maintaining the source code for 64-bit historically outweighed any benefits. In general, a 64-bit program isn't noticeably faster to the end user than its 32-bit counterpart if it never exceeds the limitations of a 32-bit system. Visual Studio's IDE shell doesn't do much heavy lifting; the bulk of the expensive processing in a typical workflow is performed by the integrated toolchains (compilers, etc.), which usually support 64-bit systems.
With this in mind, any benefits we may notice from running a 64-bit version of VS Code are similar to those we would see from using a 64-bit web browser. Most significantly, a 64-bit version can address more than 4 GB of memory, which may matter if we need to open a lot of files simultaneously or very large files, or if we use many heavy extensions. So—most important to us developers—the editor won't run out of memory when abused.
While this sounds like an insurance policy worth signing, even if we never hit those memory limits, remember that 64-bit applications generally consume more memory than their 32-bit counterparts. We may want to choose the 32-bit version if we desire a smaller memory footprint. Most developers may never hit that 4 GB wall.
In rare cases, we may need to choose either a 32-bit or 64-bit version if we use an extension that wraps native code like a DLL built for a specific architecture.
Any other consequences, positive or negative, that we experience from using a 64-bit version of VSCode depend on the versions of Electron's underlying runtime components and the operating system they run on. These characteristics change continuously as development progresses. For this reason, it's difficult to state in a general manner that the 32-bit or 64-bit versions outperform the other.
For example, the V8 JavaScript engine historically disabled some optimizations on 64-bit systems that are enabled today. Certain optimizations are only available when the operating system provides facilities for them.
Future 64-bit versions on Windows may take advantage of address space layout randomization for improved security (more bits in the address space increases entropy).
For most users, these nuances really don't matter. Choose a version that matches the architecture of your system, and switch only if you encounter problems. Updates to the editor will continue to bring optimizations to its underlying components. If resource usage is a big concern, you may not want to use a GUI editor in the first place.
I haven't worked much on Windows, but I have worked with x86, x64, and ARM (both 32-bit and 64-bit) processors. Based on my experience, before writing code for 64-bit we asked: do we really need 64-bit operations? If our work can be done within 32 bits, why would we need another 32?
Think of it like this: you have a processor with 64-bit address and data buses and 64-bit registers, yet almost everything your program does fits in 32 bits. What do you do? Well, I think there are two ways:
Create a 64-bit version of your program and run it on the 64-bit processor, wasting the upper half of the registers on most operations. Pointers and default data sizes grow, so an application that could have run in 256 MB of RAM may now need something closer to 512 MB, and the other programs and processes in RAM suffer for it.
Keep the program 32-bit and let the 64-bit processor execute it in its 32-bit compatibility mode.
Obviously, the second approach does the same work with fewer resources.
But yes, if your program genuinely operates on 64-bit quantities, for example processing 4K video (better on a 64-bit processor) or performing floating-point operations with up to 15 decimal digits of precision, then it is better to create a 64-bit program file.
Long story short: try to write compact software and leverage the hardware as much as possible.
So far, from what I have read Here, Here and Here, I came to understand that most of the components of VS require only 32 bits.
Hope that explains it.
Thanks
4 years later, in 2021, you now have:
"Microsoft's Visual Studio 2022 is moving to 64-bit" from Mary Jo Foley
It references the official publication "Visual Studio 2022" from Amanda Silver, CVP of Product, Developer Division
Visual Studio 2022 is 64-bit
Visual Studio 2022 will be a 64-bit application, no longer limited to ~4 GB of memory in the main devenv.exe process. With a 64-bit Visual Studio on Windows, you can open, edit, run, and debug even the biggest and most complex solutions without running out of memory.
While Visual Studio is going 64-bit, this doesn’t change the types or bitness of the applications you build with Visual Studio. Visual Studio will continue to be a great tool for building 32-bit apps.
I find it really satisfying to watch this video of Visual Studio scaling up to use the additional memory that’s available to a 64-bit process as it opens a solution with 1,600 projects and ~300k files.
Here’s to no more out-of-memory exceptions. 🎉

Are games/programs compiled for multiple architectures?

This might be a bit broad and somewhat stupid, but it is something I've never understood.
I've never dealt with code that needs to be compiled except Java, which I guess falls between two stools, so here goes.
I don't understand how games and programs are compiled. Are they compiled for multiple architectures? Or are they compiled during installation (it does not look that way)?
As far as I've understood, code needs to be compiled based on the local architecture in order to make it work. Meaning that you can't compile something for AMD and "copy" the binaries and execute them on a computer running Intel (or similar).
Is there something I've misunderstood here, or do they use an approach which differs from the example I am presenting?
AMD and Intel are manufacturers. You might be thinking of amd64 (also known as x86_64) versus x86. x86_64 is, as the name suggests, based on x86.
Computers running a 64-bit x86_64 OS can normally run x86 apps, but the reverse is not true. So one possibility is to ship 32-bit x86 games, but that limits the amount of RAM that can be accessed per process. That might be OK for a game, though.
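To make the "compiled per architecture" idea concrete, here is a sketch of my own: the same source file is compiled once per target, and the preprocessor tells you which target this particular binary was built for.

    // One codebase, several builds: shipping x86 and x86_64 versions means
    // running the compiler twice and packaging both sets of binaries.
    #include <cstdio>

    int main() {
    #if defined(_M_X64) || defined(__x86_64__)
        printf("This binary targets x86_64; pointers are %u bytes.\n",
               (unsigned)sizeof(void*));
    #elif defined(_M_IX86) || defined(__i386__)
        printf("This binary targets x86; pointers are %u bytes.\n",
               (unsigned)sizeof(void*));
    #else
        printf("This binary targets some other architecture.\n");
    #endif
        return 0;
    }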
A bigger issue is shipping for different platforms, such as the PlayStation and the (Windows) PC. The PlayStation not only has a completely different CPU architecture (Cell), but also a different operating system.
In this case you can't simply ship one binary for both, and the operating system difference matters as much as the CPU. You have to have two separate versions of the game, sharing a bunch of common code and media files (also known as assets): one version for PC and one for PlayStation.
You could use Java to overcome that problem, in theory... but that only works when a JVM is available for all target platforms. Also, there is now fragmentation in the Java market, with e.g. Android supporting a different API from Java ME. And iPhones and iPads don't support Java at all.
Many games publishers do not in fact use Java. An exception is Mojang.

Supporting a 64-bit OS with an application which currently works on a 32-bit OS

I have an application which has many services and one UI module, all developed in VC++ 6.0. The total size is about 560 KLOC.
It uses multithreading, MFC, and datatypes like WORD, int, and long.
Now we need to support a 64-bit OS. What changes would we need to make to the product?
By support I mean both running the application on a 64-bit OS and making use of the 64-bit memory space.
Edit: I am ruling out migration to VS2005 or anything higher than VC6.0 due to time constraints.
So what changes need to be made?
64-bit Windows includes 32-bit support via WOW64. Any 32-bit application should just continue to work.
(It is only drivers that have to match the bitness of the OS.)
[Note to commenters: plugins, of whatever type, are not separate applications but DLLs used by other applications, and those do need to match the host. There you get the same problem in reverse: 64-bit extensions are incompatible with 32-bit hosts.]
As Richard says, the 32-bit version should continue to work unless you've got a driver or a shell extension or something.
However, if you do need to upgrade the code, you're going to have to upgrade the compiler too: I don't think MFC got good 64-bit support until VS2005 or later. I'd suggest you get the 32-bit code building in VS2010 - this will not be trivial - and then start thinking about converting it to 64-bit. You can of course leave the production 32-bit builds in VC6, but then you add a maintenance burden.
You'll probably get most of the way there by flipping the compiler to 64-bit and turning on full warnings - particularly given the size of your code, it may be impractical to review it all. One thing to watch out for is storing pointers in ints, DWORDs, etc., which may now be too short to hold the pointer - you need DWORD_PTR etc. now - but I think the warnings do catch that.
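A sketch of exactly that pattern (the function name is hypothetical): the 32-bit build compiles the first line cleanly, while the 64-bit compiler warns about the truncation; the pointer-sized typedefs are the fix.

    // Storing a pointer in a 32-bit type: works on x86, truncates on x64.
    #include <windows.h>

    void stash_pointer(void* p) {
        DWORD bad = (DWORD)(DWORD_PTR)p;   // upper 32 bits lost in an x64 build
        DWORD_PTR good = (DWORD_PTR)p;     // always as wide as a pointer
        (void)bad;
        (void)good;
    }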
Or, if this is split across many components, you might get away with migrating only a few of them to 64-bit. Then, unfortunately, you've got data-length issues in the communication between the two halves.
You must convert to a newer compiler. Time constraints are pretty much irrelevant: the VC6 compiler simply cannot generate 64-bit code. Every pointer it generates is 32 bits, for starters. If you need to access "64-bit memory", i.e. memory above 0x00000000FFFFFFFF, then 32 bits is simply not enough.
If you're ruling out changing your IDE to one that intrinsically supports 64-bit compiling and debugging, you're making your job unnecessarily complex. Are you sure it's not worth the hit?
Just for running on a 64-bit OS, you won't need to make any changes. That's what WOW64 is for.
If, however, you want to run natively as 64-bit (i.e., access the 64-bit memory space), you will have to compile as 64-bit. That means using an IDE that supports 64-bit. There is no way around this.
Most programs should have no problem converting to 64bit if they are written with decent coding standards (mainly, no assumptions about the size of a pointer, like int-pointer conversions). You'll get a lot of warnings about things like std::size_t conversions, but they will be fairly meaningless.
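For example (a sketch of the typical warning site, with a hypothetical helper): size_t grows to 64 bits in an x64 build, so narrowing it back to int is exactly what the compiler flags.

    // A classic C4267-style warning site: size_t -> int narrowing.
    #include <vector>

    int count_items(const std::vector<int>& v) {
        // Warns in a 64-bit build without the cast; harmless as long as
        // the container never holds more than INT_MAX elements.
        return (int)v.size();
    }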

Is the 64-bit Windows platform immature? (even when comparing 32-bit binaries with 64-bit binaries running on it)

I compiled a 64-bit binary of ioquake3, and an SDL binary to go along with it, and I noticed that on 64-bit Windows 7, while operation is relatively stable, it doesn't have top-notch performance.
An equivalent binary on 64-bit Debian runs definitely faster, and is perfectly stable.
And I'm thinking: with all the major manufacturers still predominantly shipping 32-bit binaries - the major exception I can think of is Autodesk's AutoCAD - is Windows still immature in its 64-bit libraries?
I would answer no. 64-bit support has been in the operating system since Windows XP was released in a 64-bit edition, although pre-Vista drivers and third-party software were very much experimental. Windows is a fully capable 64-bit operating system.
However, you have to remember that Microsoft's success is built on the fact that Windows runs pretty well on any x86-based processor with any combination of other hardware, thanks to the HAL. When 64-bit XP first came out, 64-bit drivers were scarce until traction was gained. As you've observed, most manufacturers still develop 32-bit applications only for Windows; Visual Studio and Microsoft Office are proof that it is not only third-party vendors. Why? Ease. Take a walk around any PC shop and you'll hear all sorts of praise for the 64-bit CPUs in today's modern laptops, but what you'll actually find is that their OSes ship 32-bit. It's a standard, and it works on 32-bit.
Linux, by contrast, has always been a programmer's platform. Most distributions support at least i386 and x86-64; some support PPC architectures. The kernel can be built for just about every chipset known to man. Why? Linux isn't constrained by needing commercial product stability to survive, and it is often a researcher's platform. Not only that, but even if the core devs had no interest in porting it, you could do it yourself. Much of the rest of the GNU system is written for portability, so it's trivial to compile elsewhere. And not writing your code in a portable manner isn't considered good etiquette.
Take a look at Flash support on Linux - a 64-bit alpha. Even Mozilla provides 32-bit-only builds, although they allow distributors to build official 64-bit versions. Skype is 32-bit only.
Basically, many software developers don't need to support 64-bit yet, or don't see it that way at least. As such, I'd say both operating systems are mature; it is the ecosystem around them that differs.
While Windows is slowly making strides towards 64-bitness, one could easily say that Linux has a massive, perhaps even crushing, advantage due to the wide variety of platforms that it has been made to work on. Issues that Windows developers are only coming across now have been long solved under Linux (although of course there are Linux developers who choose to ignore these solutions; their code tends to be brittle, and sometimes non-portable).
Define "relatively stable"?
All Windows API calls end up in the same 64-bit functions if you are running x64 Windows, so there is no stability difference at all.
You should profile. Most certainly something inside the user-space application is causing the performance degradation, not the kernel.
Are you sure you used the same optimization level when compiling the 64-bit binary? And what compiler did you use?

Why go 64 bit OS? [closed]

On these questions:
Which Vista edition is best for a developer machine?
Vista or XP for Dev Machine
People are recommending 64-bit; can you explain why? Is it just so you can have more than the 3 GB of addressable RAM that 32-bit gives you?
And how does Visual Studio benefit from all this extra RAM?
I went from 64-bit XP back to 32-bit because 90% of the software I was using was only 32-bit anyway, and I had issues with drivers and some software under 64-bit.
Vista, as far as I know, has much better 64-bit support than XP: it is better advertised than 64-bit XP and more popular, so driver and software support should be much better for 64-bit Vista.
The 64-bit switch is in progress right now in the computing industry. You might as well switch. Microsoft made the serious leap to 64-bit already, and many have already followed suit. Those who haven't switched, will soon, most likely.
As for the technical benefits, there aren't many aside from the higher memory limits. 64-bit Vista will certainly let you take advantage of 4 GB+ of RAM if you have it, though.
A number of reasons.
Yes, you're right: it is so you can have more than 3 GB of RAM
More and more systems are going to be 64-bit soon, so it makes sense to develop on what you're going to be running on
Some bugs can only be observed when running in 64-bit mode
"There are some gotchas in terms of p/invoke calls not always working across 32/64, as well as Managed DirectX not working well under 64-bit, but on the whole I think its something people are going to be doing more as time goes by."
This is caused, in .NET, by having the AnyCPU flag set. AnyCPU on an x64 machine will run the process as an x64 process, which then blows up when attempting to call or load a 32-bit DLL. Since those libraries are 32-bit, you need to set the build to x86 to ensure the app will run as an x86 process; on an x64 machine it will then run under WOW64.
Signed drivers. No more "Unknown Device Driver" blue screens; drivers that cause issues are found out and rightly blamed for their crashes.
Signed drivers also mean the drivers are current. Manufacturers that used to get away with updating a driver once every 2-3 years had to get signed and certified, which means the driver is relatively current and had to pass a basic "is this total crap" test at Microsoft.
This "lack of driver support" I've always seen as a boon: it forces manufacturer certification.
More address space. Others have mentioned that this allows more RAM, which is true, but it also affects memory-management performance. It means that 4 GB of RAM plus a graphics card with 512 MB on it can be fully used by the system; on a 32-bit OS the system has to decide, out of the limited addresses, which hardware gets what range, and physical RAM loses out.
Then there is always the possibility of using more than 4 GB of RAM - good for when you run lots of VMs.
x64 Vista loads core OS processes and services during boot into random addresses, giving some exploits a 1-in-256 chance of picking the right memory location, instead of 100% on a 32-bit machine.
No kernel patching. None. Nada. Zilch. It does mean some Sysinternals tools do not work; however, it also means spyware and viruses can't maliciously apply the same techniques as Sysinternals to hide forever, intercept calls, etc. (This is what keeps out some anti-virus software... as well as viruses.)
Another technical benefit, aside from the increased address space, is that 64-bit apps always use DEP, so you are forced to fix those bugs and potential security holes.
64-bit won't be mainstream before most programs are available in 64-bit versions. And who makes programs? Developers, developers, developers!
See my point? If developers don't make the shift, how are 64-bit programs going to become mainstream?
Other than that, there are of course more reasons:
Signed drivers
More memory, as you mentioned
You get the possibility to test your programs on 64-bit (obviously)
It's the future. =)
I switched from 32-bit Vista to 64-bit and haven't looked back. I have only had a problem with one device (a multi-track FireWire mixing board); everything else that worked on 32-bit works on 64. Throw in the ability to add piles of cheap RAM, and I don't see any reason why anyone would stick with 32 if the processor supports it.
If you're really unsure, use Vista's much-improved multi-boot functionality and install 32-bit XP and 64-bit Vista on the same machine on different partitions. I did, but to tell you the truth, I haven't gone back into XP for at least 9 months now.
Another advantage of 64-bit:
All the registers of the microprocessor are 64-bit. This enables high-precision computations, and 64-bit arithmetic can be performed in fewer clock cycles than on a 32-bit microprocessor. In certain cases, such as 64-bit multiplication, it is twice as fast.
XP 64-bit wasn't ready for prime time; there were no drivers for it. With Windows Vista 64-bit this isn't the case. So if you are looking to install Windows Vista, go 64-bit; if you are keeping XP, stay at 32-bit.
Bigger is always best? The RAM thing is the major advantage, along with the increased address space. I guess as long as drivers aren't an issue, then why NOT 64-bit?
People are recommending 64-bit, can you explain why? Is it just so you can have more than the 3 GB of addressable RAM that 32-bit gives you?
This addressable-RAM limit is not a problem for a regular user, but it is pretty critical in DB configuration, scientific computing, etc...
And how does Visual Studio benefit from all this extra RAM?
Does it? If you want to compile faster, you can gain up to 20% in compilation time by compiling directly from a ramdisk partition.
I went from 64 bit XP back to 32 bit due to 90% of the software I was using only being 32 bit anyway and I had issues with drivers and some software with 64 bit.
Switching a regular dev station to 64-bit is probably useless.
Vista x64 has been a very pleasant experience for me. There are a couple of edge cases, but most software and drivers work fine with it at this point. The biggest practical reason I see to use it is that you can load up on RAM (say 6 GB or more) and then dedicate lots of it to virtual machines and other apps that require a lot of memory (like Photoshop). If you are only using Visual Studio and maybe a couple of other apps day to day, then it might not be as beneficial, but I find myself often running 10 to 20 apps at a time (seriously), and the extra RAM is critical.
.NET Rocks had a recent show all about the benefits and pitfalls of going 64-bit from a .NET developer's perspective.
http://www.dotnetrocks.com/default.aspx?showNum=341
There are the obvious benefits of having access to more RAM in windows, as well as the obvious possible downside presented by unavailable drivers (which not only have to be 64-bit, but signed and certified as well).
Other points made are that if you ever need to test anything you are developing under 64-bit, the only way you can do that is on a 64-bit OS; you can always create a VM image to test under 32-bit from a 64-bit OS.
There are some gotchas in terms of P/Invoke calls not always working across 32/64, as well as Managed DirectX not working well under 64-bit, but on the whole I think it's something people are going to be doing more as time goes by.
