I currently provide 32-bit Windows music software. Some of my users are asking for 64-bit support. I plan to port eventually, but porting is a big job, and I have plenty of other important feature requests too. I need to allocate my limited time wisely.
How much market share do 64-bit operating systems hold, and what's the trend?
No better time than now. As the need for more RAM increases, 64-bit Windows versions will become more and more prevalent. Play around a bit with Google Trends and you will see a clear uptick in people looking into it. As explained in "Dude, Where's My 4 Gigabytes of RAM?", the need for the everyday user to move to a 64-bit OS is only going to keep growing.
Edit in response to Jeff's comment
I understand; any team has to balance upgrades and bug fixes by priority. That will always be a difficult balance to strike. The benefits of a 64-bit version will only continue to grow!
Good luck striking the right balance!
Why are they asking for 64-bit support? Does your 32-bit software not work on Win64, or are they assuming they need a special version when in fact they'd be fine with the 32-bit one? In my experience, Win64's support for 32-bit programs is excellent, and that support is likely to remain for the foreseeable future.
If your software doesn't work, and it's not due to a fundamental limitation like half the logic being in a device driver, then making it work as a 32-bit executable may be easier than you think.
(Forgive me if I'm teaching you to suck eggs. 8-)
There are three common situations that are good reasons to port to Win64:
your product includes a driver - in this case to work at all on a Win64 system, at least the driver must be ported.
your product has shell or IE integration - since on a Win64 system the user is likely using the 64-bit version of Explorer and IE, you'll need 64-bit plug-ins to integrate with those. (You should continue to package and install the 32-bit versions so things still work if the user finds themselves in a 32-bit file manager or IE instance.)
your product would noticeably benefit from increased address space - if your product consumes a lot of data (like database or number-crunching apps often do), your application will have far more virtual address space available on a Win64 system and can often use that to advantage (see the sketch at the end of this answer).
Note that there may be other good reasons to port, but these are the common ones. Also note that porting for one of the above reasons does not necessarily mean that everything has to be ported. For example, you may be able to get away with just porting your device driver.
If none of these reasons fit, then it might just be your users wanting something for no good reason - educating them may help. But if it starts impacting sales, you might find yourself in a position where you have to port just to make them happy even if there's no good technical reason (hopefully your customers aren't that unreasonable and will listen to sound technical advice).
But even if you don't port your code to Win64 there's no reason not to test and support your application on Win64 systems.
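On the address-space point, a quick way to see the difference is to keep allocating until allocation fails. This is a rough sketch only; the exact ceiling depends on the OS, the page file, and whether a 32-bit build is linked with /LARGEADDRESSAWARE:

    #include <cstddef>
    #include <cstdio>
    #include <new>
    #include <vector>

    int main() {
        const std::size_t block = 64 * 1024 * 1024; // 64 MB per allocation
        std::vector<char*> blocks;
        unsigned long totalMB = 0;
        for (;;) {
            char* p = new (std::nothrow) char[block];
            if (!p) break; // out of address space (or commit charge)
            blocks.push_back(p);
            totalMB += 64;
        }
        std::printf("Allocated %lu MB before failing\n", totalMB);
        for (std::size_t i = 0; i < blocks.size(); ++i) delete[] blocks[i];
        return 0;
    }

A 32-bit build typically stops somewhere under 2 GB; the same code built as 64-bit will keep going until it exhausts commit rather than address space.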
Music software is a bit vague. If you are developing music encoding/decoding software professionally, then 64-bit is something you should take seriously, since it can have a noticeable impact on encoding/decoding performance.
Otherwise, while 64-bit is getting increasingly popular, your 32-bit app will still run perfectly well, so other features are more important in the meantime. However, you should think about 64-bit porting too, and refactor your code to be more portable as you go forward.
I would agree with others here that now is a great time to start supporting 64-bit operating systems. With Windows 7 just around the corner, you will see a much larger portion of users running 64-bit operating systems. Even if your software is not 100% optimized for 64-bit processors, the port will gain access to the additional registers associated with running 64-bit code and could see a performance increase. Not to mention not running up against the 4 GB wall and all that.
Just keep in mind that your data structures may change in size (pointers double from 4 to 8 bytes) and your application will likely use more memory.
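For instance, a minimal sketch of how a pointer-heavy structure grows under a 64-bit build (exact sizes depend on the compiler's alignment and padding rules):

    #include <cstdio>

    struct Node {
        int value;  // 4 bytes in both 32-bit and 64-bit builds
        Node* next; // 4 bytes on Win32, 8 bytes on Win64
    };

    int main() {
        // Typically prints 8 for a 32-bit build and 16 for a 64-bit one
        // (the int is padded so the pointer stays aligned).
        std::printf("sizeof(Node) = %u\n", (unsigned)sizeof(Node));
        return 0;
    }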
If I'm wrong about any of this, please, someone correct me!
It's not time to port yet, but do be sure to also test your software on 64-bit systems. The WOW64 layer on 64-bit Vista or Windows 7 is good enough and should not cause any trouble.
The main advantage is the larger amount of RAM that can be allocated. If your application uses a lot of RAM and does a lot of caching, then you should port.
x64 PC and OS market share will only rise. It's the future. Best to support the future early on.
At work, my PC is slow. I feel that I could be way more productive if I just weren't waiting for Visual Studio and everything else to respond. My PC isn't bad (dual-core, 3 GB of RAM), but there is a lot of corporate software and whatnot slowing everything down and sometimes locking it up.
Now, some developers have begun getting Windows 7 machines with 8 GB of RAM. Of course, I start salivating at this. However, I was told that I "had to justify" why I should get a new machine.
I can think of a lot of different things, but I am curious as to what everyone else on SO would have to say.
NOTE: Ideally, these reasons should be specifically related to .NET development in Visual Studio on a Windows machine. This isn't a "how can I make my machine faster" question.
I would ask myself, "What am I waiting on?", and then let the answer to that question drive whether or not I felt I could justify it.
For example, right now, I'm dealing with 90 minute compiles of the project I'm working on. Would a faster machine help that? A little. But, more impactful would be sane configuration management. So, I'm pushing that way (to no avail) rather than to the hardware route.
Bring in a chess clock. If you are waiting, start the clock; when you aren't, stop the clock. At the end of the day, total up the time, multiply it by your pay rate, multiply that by 2000 (roughly the number of working hours in a year), and that is a reasonable upper limit on the amount of money the company is squandering on you per year due to a slow machine.
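To make the arithmetic concrete, here is a minimal sketch. The wait time and pay rate are invented for illustration, and it scales a day's waste by roughly 250 working days rather than the rough x2000 shorthand above:

    #include <cstdio>

    int main() {
        // All figures are illustrative assumptions, not measurements.
        const double hoursWaitingPerDay = 0.5;   // what the chess clock recorded
        const double payRatePerHour     = 50.0;  // fully-loaded hourly rate
        const double workDaysPerYear    = 250.0; // ~50 weeks x 5 days

        double dailyWaste  = hoursWaitingPerDay * payRatePerHour;  // $25/day
        double annualWaste = dailyWaste * workDaysPerYear;         // $6250/year

        std::printf("Squandered per year: about $%.0f\n", annualWaste);
        return 0;
    }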
Most useful metric: How much time do you spend reading The Onion (or, these days, StackOverflow)?
This is item #9 on The Joel Test:
9. Do you use the best tools money can buy?
Writing code in a compiled language is one of the last things that still can't be done instantly on a garden variety home computer. If your compilation process takes more than a few seconds, getting the latest and greatest computer is going to save you time. If compiling takes even 15 seconds, programmers will get bored while the compiler runs and switch over to reading The Onion, which will suck them in and kill hours of productivity.
I agree with the "what is holding me up?" approach.
I start by improving workflow: looking at repetitive things I do that can be automated, or that a little helper tool can fix. Helper tools don't take long to write and add a lot of productivity. Purchasing tools is also a good return on your time - a lot of things you could write, you shouldn't bother with; concentrate on your core activity and let the tool makers concentrate on theirs, whether it is help software, screen grabbing, SEO tools, debugging tools, whatever.
If you can't improve things by changing your workflow (and I'd be surprised if you can't), then look at your hardware.
Increase memory if you can. If you're at 3 GB with a 32-bit OS, there's no point going any further.
Add independent disks. One disk for the OS, another for your build drive. That way there is less contention for disk access between the OS and the compiler. It makes a difference.
Better CPU. Only valid if you are doing the work to justify it.
Example: What do I use?
Dual Xeon Quad Core (8 cores, total)
8 GB RAM
Dual Monitors
VMWare virtual machines
What are the benefits?
Dual monitors are great, much better than a single 1920x1200 screen.
Having lots of memory when using virtual machines is great because you can give the VM a realistic amount of memory (2 GB) without killing the host machine.
Having 8 cores means I can run a build and, at the same time, mess about in a VM doing another build or a debug, no problem.
I've had this machine for some time. It's old hat compared to a Core i7 machine, but it's more than fast enough for any developer. Very rarely have I seen all the cores close to maxing out (with that much CPU power you're pretty much going to be held back by I/O - which is why I commented on multiple disks).
For me (working in a totally different environment, where JBoss, Eclipse and Firefox are the main resource sinks), it was simple enough:
"I have 2GBs of RAM available. I'm spending most of my time with 1GB of swap in use: imagine what task switching and application building looks like there. Another 2GB of RAM costs 50 euro. Ignoring the fact that it's frustrating working like this, you do the productivity math."
I could have shown CPU load figures and application build times as well, but it didn't come to that. It took them a month or two, but boy is development a joy since then! Oh, and for performance, it's likely you'd do best with Windows XP, but you probably already know that. ;]
Use a performance monitor to determine the cause.
For me, it's that the antivirus has some kind of critical resource leak that slows down I/O after a few days and requires a reboot - no hardware upgrade will help much with that.
The justification will need hard data to back it. If their business software is causing the problem, then "this is industry standard" obviously doesn't fly anymore. Maybe they'll realize their business software sucks and fix that instead.
We recently changed some of our system requirements for a lightweight application (it is essentially a thin GUI client that connects to a "mainframe" running IBM UniVerse). We didn't change our minimum requirements at all, but changed our recommended requirements to match those of Windows 7 and Vista (since we run on those machines).
Some system requirements are fairly easy to determine (e.g. network card, hard drive space). But CPU and RAM are harder to nail down.
Our current minimum requirements for CPU and RAM both state that you have to meet the minimums for your operating system. That seems fairly reasonable to us, since our app uses only 15 MB of active memory and very little CPU (it's a simple GUI, in this case). No one complains about that.
When it comes to recommended requirements, though, we've run into trouble nailing down specifics, especially nowadays, when saying "minimum 1.6 GHz" (or similar) can mean anything once you start talking about multi-core processors, Atom processors, etc. The thin client is starting to do more intensive work (it now contains an embedded web browser to display more user-friendly HTML pages, for example).
What would be a good way to go about determining recommended values for CPU and RAM?
Do you take the recommended requirements for an OS and add your usage values on top (so do we then say 1 GB for Vista machines)?
Is there a better way to do so?
(Note: this is similar in nature to the server question here, but from an application base instead)
Let's try this from another perspective.
First, test your application on a minimum configuration machine. What bottlenecks if any exist?
Does it cause a lot of disk swapping? If so, you need more RAM (a measurement sketch follows this list).
Is it generally slow when performing regular operations (excluding memory usage)? Then increase the processor requirements.
Does it require disk space beyond the app's footprint, such as for file handling? List that.
Does your app depend on certain instruction sets being on the chip (SSE, Execute Disable Bit, Intel Virtualization, for example)? If so, you have to list which processors will actually work with the app.
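On the RAM question, one way to get hard numbers is to sample your own process's working set on Windows. A minimal sketch using the Win32 PSAPI (link against psapi.lib; error handling omitted):

    #include <windows.h>
    #include <psapi.h>
    #include <cstdio>

    int main() {
        PROCESS_MEMORY_COUNTERS pmc;
        pmc.cb = sizeof(pmc);
        if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
            std::printf("Working set:      %lu KB\n",
                        (unsigned long)(pmc.WorkingSetSize / 1024));
            std::printf("Peak working set: %lu KB\n",
                        (unsigned long)(pmc.PeakWorkingSetSize / 1024));
            std::printf("Page faults:      %lu\n",
                        (unsigned long)pmc.PageFaultCount);
        }
        return 0;
    }

Run the equivalent query against your real process on each candidate configuration at peak load; the peak working set is a reasonable starting point for the RAM figure you publish.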
Typically speaking, if the app works fine on a minimum configuration for the OS, then your "recommended" configuration should be identical to the OS's recommended one.
At the end of the day, you probably need to have a couple of machines on hand to profile. Virtual machines are NOT a good option in this case. By definition, the VM and the host OS will have an impact. Further, just because you can throttle a certain processor down doesn't mean that it is running at a similar level to a processor normally built for that level.
For example, a Dual Core 1.8 GHz processor throttled to only use one core is still a very different beast than a P4 1.8 GHz processor. There are architectural differences as well as L2 and L3 cache changes.
By the same token, a machine with a P4 processor uses a different type of RAM than one with a dual core (DDR vs DDR2). RAM speeds do have an impact.
So, try to stick to the OS recommendations as they've already done the hard part for you.
Come up with some concrete non-functional requirements relating to things like latency of response, throughput, and startup time, and then benchmark them on a few varied machines. Then attempt to extrapolate what hardware will allow a typical user to have an experience that matches your requirements.
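For example, a bare-bones timing harness for one such requirement might look like this (a sketch; doStartupWork() is a hypothetical stand-in for whatever operation you are benchmarking):

    #include <chrono>
    #include <cstdio>

    // Hypothetical stand-in for the operation under test.
    void doStartupWork() { /* load config, connect, build the UI, ... */ }

    int main() {
        using Clock = std::chrono::steady_clock;

        auto begin = Clock::now();
        doStartupWork();
        auto end = Clock::now();

        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - begin);
        std::printf("Startup took %lld ms\n", static_cast<long long>(ms.count()));

        // Compare against the requirement, e.g. "starts in under 2000 ms
        // on recommended hardware", and repeat on each test machine.
        return 0;
    }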
For determining CPU and RAM requirements, you could try using Microsoft Virtual PC, which allows you to adjust the CPU and RAM settings. You can then test a few different setups to see what would be sufficient for a regular user.
As for the recommended requirements, adding them on top of the basic OS requirements would probably be the safe bet.
Microsoft introduced the Windows Experience Index in Vista to solve this exact problem.
UPDATE FOR MORE INFO
It takes the entire system into consideration. Bear in mind that a user may have a minimum-level processor, but if they have a crap video card then a lot of processor time is going to be spent just drawing the windows... If you pick a decent Experience Index number like 3.0, you can be reasonably assured that they will have a good experience with your application. If you require more horsepower, bump up the requirement to 4.0.
One example is the Dell I'm using to type this on. It's a 2-year-old machine but still registers 4.2 on the Experience Index. Most business-class machines should be able to register at least a 3, which should be enough horsepower for the app you described.
Incidentally, my 5 year old laptop registers as a 2.0 and it was mid level at the time I purchased it.
Is there any reason in particular why it's recommended to run memcached on a Linux server? Is it really that bad an idea to run it on a Windows Server box? What about an OS X Server box?
The biggest reason that I read about is TCO. In other words, for each Windows box that we run memcached on, we have to buy a copy of Windows Server, and those costs add up. The thing is that we have several servers with older processors but a lot of RAM - perfect for memcached use. All of these boxes already have Windows Server 2003 installed on them, so there's not really much savings in installing Linux. Are there any other compelling reasons to use Linux?
This question is really "What are the advantages of Linux as a server platform?" I'll give a few of the standard answers:
Easier to administer remotely (no need for RDP, etc.). Everything can be scripted or done via the CLI.
Distributions like the Ubuntu LTS (Long Term Support) versions guarantee security updates for years with zero software cost. Updates can easily be installed via command line, and generally don't require a reboot.
Higher performance. Linux is generally considered to offer "more bang for the buck" on a given piece of hardware. This is generally due to lower resource requirements.
Lower resource requirements. Linux runs just great on 256 MB or less of RAM, and on very small CPUs.
Breadth of available software & utilities.
It's free. (As in beer)
It's free. (As in freedom) This means you can see, change, and file bugs against the code you're running, and talk directly with the developers.
Remember that TCO includes the amount of time that you (the administrator) spend maintaining the machine. Linux has a lower TCO because it's easier to maintain, and you can spend your time doing something other than administering a server...
Almost all of the FAQs and HOWTOs are written from a Linux point of view. Memcached was originally created only for Linux; the ports came later. There is a port to Windows, but it's not yet in the official memcached distribution. Memcached on Windows is still guerrilla style. For example, there is no memcached for x64 Windows.
As for memcached on Mac OS X servers: a niche of a niche of a niche.
There doesn't seem to be any technical disadvantage to running it on Windows. It's mainly a cost thing. If the licenses are just sitting around unused, there's probably no disadvantage at all. I do recall problems on older Windows versions with memory leaks in older Windows APIs, particularly the TCP stuff - but presumably that's all fixed in modern Windows.
If you are deploying memcached you probably have a fairly significant infrastructure (many, many machines already deployed). Even if you are dedicating new machines to memcached, you'll want to run some other software on them for system management, monitoring, hardware support etc. This software may be customised by your team for your infrastructure.
Therefore, your OS platform choice will be guided by what your operations team and hardware vendor will support for use in production.
The cost of a few Windows licences is probably fairly immaterial and you probably have a bulk subscription already - in fact the servers may be ordered with Windows licences already on them.
Having said that, you will definitely want a 64-bit OS if you're running memcached - using a 32-bit OS is not clever and will mean that most of your RAM cannot be used (you'll be limited to around 3G depending on the OS).
I'm assuming that if you're deploying memcached, you'll be doing so on hardware with LOTS of RAM - it's pretty pointless otherwise, after all.
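For what it's worth, the client side looks the same whichever OS the server runs on. A minimal sketch using the libmemcached C API, assuming a server on localhost:11211 (error handling trimmed):

    #include <libmemcached/memcached.h>
    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    int main() {
        memcached_return_t rc;
        memcached_st* memc = memcached_create(NULL);

        // Point the client at a single server instance.
        memcached_server_st* servers =
            memcached_server_list_append(NULL, "localhost", 11211, &rc);
        memcached_server_push(memc, servers);
        memcached_server_list_free(servers);

        // Store a value with no expiry and no flags, then read it back.
        const char* key = "greeting";
        const char* value = "hello from memcached";
        memcached_set(memc, key, strlen(key), value, strlen(value), 0, 0);

        size_t len = 0;
        uint32_t flags = 0;
        char* result = memcached_get(memc, key, strlen(key), &len, &flags, &rc);
        if (result) {
            std::printf("%.*s\n", (int)len, result);
            free(result);
        }

        memcached_free(memc);
        return 0;
    }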
The Free MS Windows replacement operating system ReactOS has just released a new version. They have a large and active development team.
Have you tried your software with it yet?
If so, what is your recommendation?
Is it time to start investigating it as a serious Windows replacement?
Targeting ReactOS specifically is a bit too narrow IMO -- perhaps a better focus is to target compatibility with WINE. Because ReactOS shares so many of its usermode DLLs with WINE, targeting WINE should result in the app running just fine on ReactOS.
Of course, there will always be things that WINE can't emulate well (hence the need for ReactOS). In this way, it seems that if something runs in WINE, it will run in ReactOS, whereas the fact that something runs in ReactOS doesn't mean that it will necessarily run in WINE.
Targeting WINE is well documented, perhaps easier to test, and by definition, should make your app compatible with ReactOS as a matter of course. In this way, you're not only gathering the rather large user base of current WINE users, but you're future-proofing yourself for whenever anyone wants to use your app with ReactOS.
On their homepage, under the Tour, you can see a partial list of office apps, tools and games that already run OK (more or less) on ReactOS. If you subscribe to the newsletter, you'll receive info about many more - for instance, I was quite surprised when I read that most SQL Server 2000 tools actually work on ReactOS! Query Analyzer, OSQL and Books Online work fine; Enterprise Manager and Profiler are buggy; and the DBMS itself won't work at all.
At a former workplace (an all-MS shop) we looked into it seriously as a way to reduce our expenditure on licenses whilst keeping our in-house developed apps. Since it couldn't run MSDE properly, we had to abandon the project - I hope this will be solved in the future so my ex-coworkers can push for it again.
These announcements might as well be on their homepage - I couldn't find them after 5 minutes of searching, though. Probably the easiest way to keep up with all these compatibility issues is to join the newsletter, or look through its archives.
I have been tracking this OS's progress for quite some time. I believe it has all the potential to really bring an OSS operating system to the masses, because it breaks the "chicken and egg" problem: it has applications and drivers from the very beginning (since it aims for full ABI compatibility with MS Windows).
Just wait for their first beta, I won't be surprised if they surpass Linux in popularity really soon after that...
Post edit: found it! Look at the Support Database section; it's the place on the web to check whether a particular piece of hardware or program works on ReactOS.
ReactOS has been under development for a long, long time.
They were in some hot water earlier because some of their code appeared to be a line-by-line disassembly of some NT kernel code; I think they have since replaced all of it.
I wouldn't bother with cross-platform testing until they hit the same market penetration as Linux, which I would wager is never.
Until ReactOS doesn't randomly crash just sitting there within 5 minutes of booting, I won't worry about testing my code on it. Don't get me wrong, I like ReactOS, but it's just not stable enough for any meaningful testing yet!
No, I do not think it is time to start thinking of it as a Windows replacement.
As the site states, it's still in the alpha stages. More importantly, whose Windows replacement? Yours? Your users'? The former is one thing; the latter is categorically a no-go.
As an aside, I'm not really sure who this OS is targeting. It has to be people who rely on Windows software but don't want to pay, because people who simply don't want Windows can use Mac OS or Linux, and the support (community or otherwise) for those choices is good.
Moreover, if you use Linux you already have some amount of Windows software support via Wine.
Back to the people who rely on Windows software but don't want to pay. If they are home users, they can simply pirate it; if they are large business users, they already have support contracts, trained people, and so on. It's hard enough for large businesses to agree to update to new versions of Windows, let alone an open-source replacement.
So I suppose that leaves small businesses who don't want to obtain illegal copies of MS software, can't afford the OS licences, and rely on software that only runs on Windows and has bad or non-existent Wine compatibility.
It is a useful replacement for Windows when it runs your software without crashing. At the moment it is not a general-purpose OS, as it is too unstable (being only alpha), but people have already used ReactOS successfully in anger for specific tasks. As a Windows replacement it has multiple potential uses: sandbox systems, test and development systems, multiple virtual instances, embedded devices, even packaging/bundling legacy apps with their own compatible OS. Driver and application compatibility, freed from Microsoft's policy of planned obsolescence and regular GUI renewal - what's not to like?
Would I expect to see any performance gain by building my native C++ Client and Server into 64 bit code?
What sort of applications benefit from having a 64 bit specific build?
I'd imagine anything that makes extensive use of 64-bit integer arithmetic would benefit, or any application that needs a huge amount of memory (i.e. more than 2 GB), but I'm not sure what else.
Architectural benefits of Intel x64 vs. x86
larger address space
a richer register set
can link against external libraries or load plugins that are 64-bit
Architectural downsides of x64 mode
all pointers (and thus many instructions too) take up 2x the memory, cutting the effective processor cache size in half in the worst case
cannot link against external libraries or load plugins that are 32-bit
In applications I've written, I've sometimes seen big speedups (30%) and sometimes seen big slowdowns (> 2x) when switching to 64-bit. The big speedups have happened in number crunching / video processing applications where I was register-bound.
The only big slowdown I've seen in my own code when converting to 64-bit is from a massive pointer-chasing application where one compiler made some really bad "optimizations". Another compiler generated code where the performance difference was negligible.
Benefit of porting now
Writing 64-bit-compatible code isn't that hard 99% of the time, once you know what to watch out for. Mostly, it boils down to using size_t and ptrdiff_t instead of int when referring to memory addresses (I'm assuming C/C++ code here). It can be a pain to convert a lot of code that wasn't written to be 64-bit-aware.
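For instance, here is the classic trap and its fix (an illustrative sketch; the function names are made up):

    #include <cstddef>  // std::size_t, std::ptrdiff_t

    // Problematic: on Win64, pointers are 64-bit but int stays 32-bit,
    // so this truncates the difference for buffers over 2 GB.
    int bad_offset(const char* begin, const char* end) {
        return (int)(end - begin);
    }

    // 64-bit-aware: ptrdiff_t matches the pointer width on each platform.
    std::ptrdiff_t good_offset(const char* begin, const char* end) {
        return end - begin;
    }

    // Likewise, prefer size_t for sizes and for indices into large arrays.
    std::size_t count_zeros(const unsigned char* data, std::size_t n) {
        std::size_t zeros = 0;
        for (std::size_t i = 0; i < n; ++i)
            if (data[i] == 0) ++zeros;
        return zeros;
    }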
Even if it doesn't make sense to make a 64-bit build for your application (it probably doesn't), it's worth the time to learn what it would take to make the build so that at least all new code and future refactorings will be 64-bit-compatible.
Before working too hard on figuring out whether there is a technical case for the 64-bit build, you must verify that there is a business case. Are your customers asking for such a build? Will it give you a definitive leg up in competition with other vendors? What is the cost for creating such a build and what business costs will be incurred by adding another item to your accounting, sales and marketing processes?
While I recognize that you need to understand the potential for performance improvements before you can get a handle on competitive advantages, I'd strongly suggest that you approach the problem from the big picture perspective. If you are a small or solo business, you owe it to yourself to do the appropriate due diligence. If you work for a larger organization, your superiors will greatly appreciate the effort you put into thinking about these questions (or will consider the whole issue just geeky excess if you seem unprepared to answer them).
With all of that said, my overall technical response would be that the vast majority of user-facing apps will see no advantage from a 64-bit build. Think about it: how much of the performance problems in your current app comes from being processor-bound (or RAM-access bound)? Is there a performance problem in your current app? (If not, you probably shouldn't be asking this question.)
If it is a Client/Server app, my bet is that network latency contributes far more to the performance on the client side (especially if your queries typically return a lot of data). Assuming that this is a database app, how much of your performance profile is due to disk latency times on the server? If you think about the entire constellation of factors that affect performance, you'll get a better handle on whether your specific app would benefit from a 64-bit upgrade and, if so, whether you need to upgrade both sides or whether all of your benefit would derive just from the server-side upgrade.
Not much else, really. Though writing a 64-bit app can have some advantages for you, as the programmer, in some cases. A simplistic example is an application whose primary focus is interacting with the registry. As a 32-bit process, your app is redirected away from large swaths of the registry on 64-bit systems unless it explicitly asks for the 64-bit view.
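A minimal sketch of that explicit opt-in, using the Win32 KEY_WOW64_64KEY access flag (the key path here is just an example):

    #include <windows.h>
    #include <cstdio>

    int main() {
        HKEY hKey;
        // From a 32-bit process on Win64, HKLM\SOFTWARE normally redirects
        // to Wow6432Node; KEY_WOW64_64KEY requests the native 64-bit view.
        LONG rc = RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                                "SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion",
                                0, KEY_READ | KEY_WOW64_64KEY, &hKey);
        if (rc == ERROR_SUCCESS) {
            std::puts("Opened the 64-bit registry view from a 32-bit process.");
            RegCloseKey(hKey);
        }
        return 0;
    }

A native 64-bit build avoids the redirection dance entirely, which is the convenience being described above.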
Continuing mdbritt's comment: building for 64-bit makes far more sense [currently] if it's a server build, or if you're distributing to Linux users.
It seems that far more Windows workstations are still 32-bit, and there may not be a large customer base for a new build.
On the other hand, many server installs are 64-bit now: RHEL, Windows, SLES, etc. NOT building for them would be cutting off a lot of potential usage, in my opinion.
Desktop Linux users are also likely to be running the 64-bit versions of their favorite distro (most likely Ubuntu, SuSE, or Fedora).
The main obvious benefit of building for 64-bit, however, is that you get around the 3GB barrier for memory usage.
According to this web page, you benefit most from the extra general-purpose registers of a 64-bit CPU if your code has a lot of loops and/or deep loops.
You can expect gains thanks to the additional registers and the new parameter-passing convention (which is itself tied to the additional registers: on Win64 the first four integer arguments are passed in registers rather than on the stack).