My current instance is a 32-bit Windows Server 2008 instance with 613 MB of memory. I created an AMI from it and then tried to launch a new instance from that AMI. I want to create a Large instance (7.5 GB of memory, etc.) from that AMI, but there is a problem: the only choices offered are Micro, Small, and High-CPU Medium, with at most 1.7 GB of RAM.
Micro (t1.micro): up to 2 ECUs, 1 core, 613 MB
Small (m1.small): 1 ECU, 1 core, 1.7 GB
High-CPU Medium (c1.medium): 5 ECUs, 2 cores, 1.7 GB ...
Why? What should I do?
I think the issue here is that the large instances are 64 bit only. You can't just spin up a 64 bit virtual machine with a 32 bit server image.
So what to do? You need to start up a new 64-bit machine and configure it in the same way as your current 32-bit machine.
64-bit instances cannot be launched from a 32-bit AMI.
You need a manual operation, something like "create a 64-bit instance and shut down the 32-bit one".
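If you would rather script the check than rely on the console, a minimal sketch along these lines (assuming boto3; the AMI id and instance types are placeholders) reads the AMI's architecture first and only picks a 64-bit size when the image is actually x86_64:

```python
# Sketch: check an AMI's architecture before picking an instance type,
# so you don't try to launch a 32-bit image on a 64-bit-only size.
# The AMI id and instance types below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

AMI_ID = "ami-12345678"  # hypothetical 32-bit Windows 2008 AMI

image = ec2.describe_images(ImageIds=[AMI_ID])["Images"][0]
arch = image["Architecture"]  # "i386" for 32-bit, "x86_64" for 64-bit
print(f"{AMI_ID} architecture: {arch}")

# Only 32-bit-capable sizes can boot an i386 image (at the time of the
# question: t1.micro, m1.small, c1.medium).
instance_type = "m1.large" if arch == "x86_64" else "c1.medium"

ec2.run_instances(
    ImageId=AMI_ID,
    InstanceType=instance_type,
    MinCount=1,
    MaxCount=1,
)
```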
Related
For learning purposes I am planning to create a 2-node Cloudera Hadoop cluster. I have a 32-bit Windows XP machine, and I have now bought a 64-bit Windows 8 machine (since most machines sold today are 64-bit).
So I have 2 options:
Create a virtual cluster on the 64-bit machine (i5 processor, 8 GB RAM). I am not sure whether it will hang (I am not trying to process millions of records; my goal is simply to process a few files, check Hadoop's functionality, dump some data from Oracle, and play around).
Create a physical Hadoop cluster between the 64-bit and the 32-bit machine. Is that a viable option (can I create a Hadoop cluster between two machines, one 32-bit and one 64-bit)? If so, what is the process? I don't know much about networking.
I also have a basic question: what RAM and processor configuration do I need to run a 2-node virtual cluster for simple operations like loading a little data and checking the functionality?
It depends on the Hadoop version you are using: if the build only supports 32-bit it won't work on the 64-bit machine, but if it supports 64-bit it will run there. Apart from that, you should also check which JDK is installed. If both machines have a 32-bit JDK, a Hadoop version that supports 32-bit should work regardless of the machine architecture.
I am not certain, but it should mostly depend on the JDK, since the JDK sits on top of the OS and Hadoop runs on the JDK.
1. Install a 32-bit JDK on both machines.
2. Install an older 32-bit Hadoop build on both machines.
I think this will work fine for you.
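As a quick sanity check before installing Hadoop, a small sketch like the following can confirm whether the JDK on each machine is 32-bit or 64-bit. It assumes `java` is on the PATH; the exact banner text varies by vendor, but 64-bit HotSpot/OpenJDK builds include "64-Bit" in it.

```python
# Sketch: verify the JDK bitness on a machine before installing Hadoop.
# "java -version" writes its banner to stderr; a 64-bit JVM includes
# "64-Bit" in that banner (e.g. "64-Bit Server VM").
import subprocess

def jdk_is_64bit() -> bool:
    banner = subprocess.run(
        ["java", "-version"],
        capture_output=True,
        text=True,
    ).stderr
    return "64-Bit" in banner

if __name__ == "__main__":
    print("64-bit JDK" if jdk_is_64bit() else "32-bit JDK (or no 64-bit marker found)")
```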
The standard Small instance of EC2 has 1.7 GB of memory and, by default, a 32-bit Linux OS, for example.
My question is: if one day I want to upgrade or "scale up" to a 7.5 GB memory server without reinstalling the OS, do we definitely need a 64-bit server to make better use of more than 3 GB of memory? And if I would like to start from a small instance, will it create a lot of trouble to upgrade it in the future?
You can move an EBS boot instance from a 32-bit m1.small to a 32-bit c1.medium without reinstalling.
Above that you have to start over with a 64-bit AMI.
Update: EC2 now supports 64-bit on all instance sizes. Your life will be much easier if you only use 64-bit across the board.
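For the EBS-backed case, the resize can be scripted. Here is a minimal sketch using boto3, with a placeholder instance id and target type, that stops the instance, changes its type, and starts it again:

```python
# Sketch: resize an EBS-backed instance without reinstalling, by stopping
# it, changing the instance type, and starting it again. The instance id
# and target type are placeholders; the new type must match the AMI's
# architecture (you cannot jump a 32-bit image to a 64-bit-only size).
import boto3

ec2 = boto3.client("ec2")
INSTANCE_ID = "i-0123456789abcdef0"   # hypothetical
TARGET_TYPE = "c1.medium"             # must be compatible with the image

ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

ec2.modify_instance_attribute(
    InstanceId=INSTANCE_ID,
    InstanceType={"Value": TARGET_TYPE},
)

ec2.start_instances(InstanceIds=[INSTANCE_ID])
```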
If you need more than 3 GB of RAM you need a 64-bit server. I think there can be some problems in switching from 32-bit to 64-bit, because even binaries you compiled yourself will differ from the new system architecture (64-bit Linux uses the ELF64 format for binaries). I don't know what your needs are, but I would choose micro instances (they support 64-bit) and get two of them to make a "balanced" architecture.
I think you should consider RackSpace services:
http://www.rackspace.com/cloud/cloud_hosting_products/servers/pricing/
The price/performance ratio is about the same, but you get 64-bit from the start, so I don't expect as much trouble with upgrades.
I am developing a 32 bit application in .NET that for various reasons cannot be compiled as a 64 bit application.
I need to run many of these concurrently and they use a lot of memory. I want to load up a Windows 7 box with tonnes of memory and consequently would like to use the 64 bit version of Windows 7 so that we can put many gigabytes of RAM on those boxes.
My question is this: The maximum memory used by each instance of my app is ~500 MB. In Windows 7 64-bit, these 32-bit applications will run (I assume) using the WOW64 emulation layer in Windows. As I begin to run more and more of these instances concurrently, will they all be stuck running in the bottom 2 GB of RAM, or will Windows allocate memory for them using all of the higher-address range of memory possible within 64-bit Windows? Is the addressable-memory limitation of 32-bit software only a per-instance limitation in this case, or will all the instances be limited to the bottom 2 GB of RAM?
You are confusing physical memory (physical address space) with virtual address space. You can put more than 4 GB of memory into a 32-bit system (via PAE); you don't need to move to 64-bit to gain physical address space. Each process gets its own virtual address space, so each one will get its own 2 GB of user-mode address space to play with (or 3 GB with the /3GB boot option, or 4 GB if the process is large-address-aware and running under WOW64).
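To see the distinction concretely, here is a small sketch using ctypes and the Windows GlobalMemoryStatusEx call (Windows only; this is an illustrative snippet, not part of the original answer). Run it from both a 32-bit and a 64-bit interpreter to compare: the physical RAM stays the same, while the per-process virtual address space differs.

```python
# Sketch (Windows only): show the difference between physical memory and
# the per-process virtual address space using GlobalMemoryStatusEx.
# ullTotalVirtual is ~2 GB for a plain 32-bit process, ~4 GB for a
# large-address-aware 32-bit process under WOW64, and far larger for a
# 64-bit process.
import ctypes
from ctypes import wintypes

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", wintypes.DWORD),
        ("dwMemoryLoad", wintypes.DWORD),
        ("ullTotalPhys", ctypes.c_uint64),
        ("ullAvailPhys", ctypes.c_uint64),
        ("ullTotalPageFile", ctypes.c_uint64),
        ("ullAvailPageFile", ctypes.c_uint64),
        ("ullTotalVirtual", ctypes.c_uint64),
        ("ullAvailVirtual", ctypes.c_uint64),
        ("ullAvailExtendedVirtual", ctypes.c_uint64),
    ]

status = MEMORYSTATUSEX()
status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

gib = 1024 ** 3
print(f"Physical RAM in the machine  : {status.ullTotalPhys / gib:.1f} GiB")
print(f"Virtual space for THIS process: {status.ullTotalVirtual / gib:.1f} GiB")
```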
I need some help understanding how 32 bit applications use memory on a 64 bit OS.
A 32-bit application can use 2 GB of memory on a 64-bit OS, correct?
Does this mean that three 32-bit applications running in parallel could address 6 GB of memory...
Or do the three 32-bit applications have to share the 2-4 GB of 32-bit-addressable memory that the OS has?
Likewise, if I have a web service that is compiled as 32-bit, running under IIS on a 64-bit machine: as long as a single request to that web service always stays under 2 GB of memory usage, is there any point in recompiling it to 64-bit? My theory is that IIS creates a new process for each request, so the whole pool of processes will be able to make use of all the memory the 64-bit machine has, whether that's 8 or 15 or 20 GB.
Let me know your thoughts, thanks
Yes, the total usage of all the 32-bit programs can exceed 2 GB. So yes you can have a bunch of 32-bit processes using all the memory in a 64-bit machine.
Actually, there's a linker option (/LARGEADDRESSAWARE) that lets 32-bit programs use up to 3 GB on 32-bit Windows, when combined with the /3GB boot setting.
If performance isn't important, then there isn't much of a reason to use 64-bit.
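If you want to convince yourself, a rough sketch like this one (run the workers from a 32-bit interpreter on a 64-bit OS with enough free RAM; the sizes and worker count are arbitrary choices for illustration) starts several processes that each allocate their own block, so the combined footprint can exceed what any single 32-bit process could address:

```python
# Sketch: several worker processes each allocate their own private block
# of memory; together they can use more RAM than one 32-bit process
# could ever address on its own. Watch the total in Task Manager.
from multiprocessing import Process
import time

PER_WORKER_MB = 512    # ~0.5 GiB per worker; adjust to taste
WORKERS = 8            # 8 workers together commit ~4 GiB of RAM

def hog(mb: int) -> None:
    block = bytearray(mb * 1024 * 1024)  # committed, private to this process
    time.sleep(30)                       # hold it so you can observe the usage

if __name__ == "__main__":
    procs = [Process(target=hog, args=(PER_WORKER_MB,)) for _ in range(WORKERS)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```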
I hope someone with a bit of knowledge can clear this up. There are many discussions about the reasons to run a 64-bit OS (e.g. Windows 7 x64), but many people seem to think that their old x86 apps will be able to take advantage of any RAM beyond 3.5 GB.
As I understand it, though, x86 apps cannot address memory that high... unless they've been specifically programmed to (which very few will have).
Can someone knowledgeable clear this up for me, once and for all? Can 32-bit apps take advantage of a system running 8GB of RAM?
E.g. If a user decided (for whatever reason) to run several x86 apps at once, filling the RAM as much as possible, would the extra addressable memory available in Windows 7 x64 be used?
Thanks!
On a 64-bit system, a 32-bit application that is large-address-aware can use the full 4 GB virtual address space, minus about 64 KB. A default 32-bit Windows system will only allow a 32-bit process to use 2 GB of virtual address space. By specially configuring the OS it's possible to push that limit up to 3 GB, but it's still not as good as what you would get on a 64-bit version of Windows.
If you have 8 GB of RAM, that 8 GB can be divided up between multiple 32-bit processes, and the entire 8 GB will be utilized if necessary. However, no single 32-bit process will be allocated more than 4 GB of memory.
Although I don't have sources to cite, from my knowledge a 32-bit app will not be able to address more than 4 GB of memory itself, unless it uses some tricks (which is very unlikely). But if you have several 32-bit apps running at the same time, they can each have up to 4 GB, and thus two 32-bit apps should be able to use all 8 GB of memory. Though I'm not 100% sure.
Yes. x86 apps cannot use more than 2GB of memory at once without special tricks, but they can use any memory available.
Adding to the other (correct) answers:
Instead of the term "application" the word "process" should be used. Applications often consist of multiple processes, whereas the limits discussed here apply to single processes.
Thus, the applications that benefit from x64 are those that either are linked with the LARGEADDRESSAWARE flag (they can then use 4 GB instead of 2 GB) or share the load between multiple processes.
32-bit processes can work with more than 4 GB of RAM even on 32-bit systems by using AWE. But a 32-bit process can only ever use 2 GB at once (or 4 GB with LARGEADDRESSAWARE on 64-bit). AWE is primarily used by databases, where it is essential for performance that the entire database fits into RAM. It works by providing a window, within the 2 GB address space, into a larger chunk of memory.
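If you are unsure whether a given executable was linked with /LARGEADDRESSAWARE, a small sketch like this reads the flag straight from the COFF header. The .exe path is a placeholder, and it assumes a standard, non-corrupt PE file whose headers fit in the first 4 KB.

```python
# Sketch: check whether an executable was linked with /LARGEADDRESSAWARE
# by reading the Characteristics field of its PE/COFF header.
import struct

IMAGE_FILE_LARGE_ADDRESS_AWARE = 0x0020

def is_large_address_aware(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read(4096)
    # e_lfanew (offset of the "PE\0\0" signature) lives at 0x3C in the DOS header
    pe_offset = struct.unpack_from("<I", data, 0x3C)[0]
    assert data[pe_offset:pe_offset + 4] == b"PE\0\0", "not a PE file"
    # Characteristics is the last WORD of the 20-byte COFF file header
    characteristics = struct.unpack_from("<H", data, pe_offset + 22)[0]
    return bool(characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE)

if __name__ == "__main__":
    print(is_large_address_aware(r"C:\path\to\your_app.exe"))  # hypothetical path
```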
Here are some articles for further reading:
Windows x64 – All the Same Yet Very Different, Part 1: Virtual Memory
Windows x64 – All the Same Yet Very Different, Part 2: Kernel Memory, /3GB, PTEs, (Non-) Paged Pool
x64? My Terminal Servers Run Just Fine With 32 Bits and 8/12/16 GB RAM!
"E.g. If a user decided (for whatever reason) to run several x86 apps at once, filling the RAM as much as possible, would the extra addressable memory available in Windows 7 x64 be used?"
The answer is yes. That's one of the benefits a virtual address space gives us--the ability for each process to appear (to the process) as though it's executing in a linear address space that starts at 0 and goes up from there.
As far as each of the 32-bit applications is concerned, it has its own address space from 0 to 2 gigabytes (without special tricks). The operating system handles the virtual-to-physical address translation.