I am using a 64-bit ARM-based EC2 instance from AWS and overall performance is satisfactory. But the server reboots randomly for no apparent reason. Is this expected with ARM processors? I am asking because I have never experienced this on my other EC2 instances (with x86 processors).
I'm testing the speed of different sorting methods for a CS class, and although our professor said we didn't need to be terribly precise, he still wants us to be careful not to run apps in the background while testing, not to switch to a different machine mid-test, and not to do anything else that would throw off the speed of the sorts.
If I ran the tests in a VM, would the environment outside of the VM affect the speed? Would that help make the tests accurate without having to worry about changes in the apps I have open alongside the VM?
In short, yes.
In most scenarios, hosts share their resources with the VM. If you bog down, freeze, or crash the host, the VM will be affected.
For those who have more robust servers with better resources, processes running on the host won't affect the VM as much, because with more resources on the host you can assign more RAM and more virtual processors to the VM so it runs smoothly.
For instance, let's say our host has 64 GB of RAM and a processor with 4 cores and 8 threads (such as an Intel® Xeon® E3-1240 processor).
We can tell VirtualBox, VMware or Hyper-V to assign 32 GB of RAM and 4 virtual processors to the VM, essentially cutting the host's power in half.
With this in mind, any processes you run on the host will usually be separate from the VM, but if a process freezes, crashes or causes a hard reboot on the host, the VM will be affected regardless of the RAM or virtual processors assigned.
In enterprise environments, a Hyper-V server should only be used for that purpose, and installing/running a lot of other processes on the host is usually frowned upon (such as installing/running DHCP, DNS, a web server (IIS), etc.).
So your professor is right to advise against running processes on the host while testing your VM.
I'm working on a number-crunching application and I'm trying to squeeze all possible performance out of it that I can. I'm designing it to work for both Windows and *nix and even for multi-CPU machines.
The way I have it currently set up, it asks the OS how many cores there are, sets affinity to each core in turn for a function that runs the CPUID instruction (yes, it'll get run multiple times on the same CPU; no biggie, it's just initialization code) and checks for HyperThreading in the feature flags returned by CPUID. From the responses to the CPUID instruction it calculates how many threads it should run. Of course, if a core/CPU supports HyperThreading it will spawn two threads on that core.
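For reference, here's a minimal sketch of the kind of check I mean, using GCC's <cpuid.h> rather than hand-written ASM (assuming an x86 compiler; treat it as a sketch, not my exact code):

```c
/* Minimal sketch: query CPUID leaf 1 and test the HTT flag (EDX bit 28).
 * Assumes GCC/Clang on x86; uses the compiler-provided <cpuid.h> instead
 * of inline ASM. Note: a set HTT flag only means the package *can* report
 * more than one logical processor, not that HyperThreading is usable. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID leaf 1 not supported\n");
        return 1;
    }

    printf("HTT flag: %s\n", (edx & (1u << 28)) ? "set" : "clear");
    return 0;
}
```

As I found out the hard way below, that flag being set doesn't guarantee HyperThreading is actually usable.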
However, I ran into an edge case with my own machine. I run an HP laptop with a Core 2 Duo. I replaced the factory processor a while back with a better Core 2 Duo that supports HyperThreading. However, the BIOS does not support it, since the factory processor didn't. So even though the CPU reports that it has HyperThreading, it's not capable of utilizing it.
I'm aware that in Windows you can detect HyperThreading by simply counting the logical cores (as each physical HyperThreading-enabled core is split into two logical cores). However, I'm not sure if such a thing is available in *nix (particularly Linux; my test bed).
If HyperThreading is enabled on a dual-core processor, will the Linux function sysconf(_SC_NPROCESSORS_CONF) show that there are four processors or just two?
If I can get a reliable count on both systems then I can simply skip the CPUID-based HyperThreading check (after all, it's possible that it is disabled/not available in the BIOS) and use what the OS reports, but unfortunately, because of my edge case, I'm not able to determine this.
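Something like this is what I have in mind for the Linux side (a sketch only; I can't verify the HyperThreading-enabled case on my machine):

```c
/* Minimal sketch: ask the kernel how many logical processors it sees.
 * Both values count logical CPUs, so if HyperThreading is enabled in the
 * BIOS a dual-core should report 4, and 2 when it is disabled. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long configured = sysconf(_SC_NPROCESSORS_CONF); /* processors configured */
    long online     = sysconf(_SC_NPROCESSORS_ONLN); /* processors currently online */

    printf("configured: %ld, online: %ld\n", configured, online);
    return 0;
}
```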
P.S.: In my Windows section of the code I am parsing the return of GetLogicalProcessorInformation().
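Roughly like this (error handling trimmed; a sketch of the approach, not my exact code):

```c
/* Sketch: count physical cores vs. logical processors on Windows with
 * GetLogicalProcessorInformation(). Call it once with NULL to learn the
 * required buffer size, then again to fill the buffer. */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    DWORD len = 0;
    GetLogicalProcessorInformation(NULL, &len); /* fails, sets needed size */

    SYSTEM_LOGICAL_PROCESSOR_INFORMATION *info = malloc(len);
    if (!info || !GetLogicalProcessorInformation(info, &len))
        return 1;

    int cores = 0, logical = 0;
    DWORD count = len / sizeof(*info);
    for (DWORD i = 0; i < count; i++) {
        if (info[i].Relationship == RelationProcessorCore) {
            cores++;
            /* each set bit in ProcessorMask is one logical processor */
            for (ULONG_PTR mask = info[i].ProcessorMask; mask; mask >>= 1)
                logical += (int)(mask & 1);
        }
    }

    printf("physical cores: %d, logical processors: %d\n", cores, logical);
    free(info);
    return 0;
}
```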
Bonus points: Anybody know how to mod a BIOS so I can actually HyperThread my CPU ;)? Motherboard is an HP 578129-001 with the AMD M96 chipset (yuck).
I am planning to develop an automated test solution with multiple Windows machines and multiple Ubuntu machines that perform related/interdependent tasks. To start the project, I'd like to have one or two Windows machines (virtual) and a few Ubuntu machines (virtual) running on a single desktop. It seems likely that I will be pushing a single desktop to the limit here, so I am trying to guess whether I will have better luck if my host OS is Ubuntu or if it is Windows 7. I would be able to use the host OS as one of the machines in my environment. The desktop is some sort of above-average Dell, but nothing really impressive.
Does anyone have any insight here? I've worked mostly with VMware in the past, and my host was Windows along with my VMs.
Note: VirtualBox is a type-2 hypervisor (it runs on the host OS, not on the hardware like a type-1 hypervisor) and tends to offer weaker performance than, for example, Hyper-V, ESX or Xen (type-1 hypervisors).
Therefore, if performance is a considerable concern, you may squeeze more juice out of a Windows 8 or Windows Server 2012 box running, for example, Hyper-V (YMMV).
How your environment will run when hosted by a Windows vs. a Linux box is, frankly, impossible to tell. I suggest you build your VMs, try dual-booting your machine into Windows and Linux, and measure your scenario. Be sure to have enough RAM in the host to allocate enough working RAM to each VM, and enough I/O throughput that your host doesn't end up dragging the performance of all VMs down if one VM saturates the machine's I/O.
One last note of caution, though: don't completely trust fine-grained performance results measured on VMs; even the best hypervisors cannot truly replicate the performance characteristics of code running on bare metal. Treat your measurements as a guideline only.
Measure, then measure again. Measure again just to be sure ... and THEN tweak your config and re-measure, measure, measure ;)
My $0.02:
If it's VirtualBox you are using, I would go with Ubuntu for certain. I have an AMD Phenom 945 with 16 GB of RAM running 12.04 LTS 64-bit. I can usually have two VMs running Windows and/or Ubuntu guests and never consume more than 7 GB of RAM. If you're running them in a testing solution you could probably expect to see 12, maybe 13 GB of RAM used, but the CPU might be your problem. My AMD Phenom runs great, but it would be maxed out for sure. I use VMware at work and on my laptop and would recommend it if you were running a Windows host. I also have VMware on my Ubuntu host, but it just does not run as well as it does on Windows, at least for me.
For example, the standard EC2 instance has 1.7 GB of memory and a 32-bit Linux OS by default.
My question is whether one day I could upgrade, or "scale up", to a 7.5 GB memory server without reinstalling the OS. To make good use of more than 3 GB of memory, we definitely need a 64-bit server, right? But if I were to start from a small instance, would upgrading it in the future create a lot of trouble?
You can move an EBS boot instance from a 32-bit m1.small to a 32-bit c1.medium without reinstalling.
Above that you have to start over with a 64-bit AMI.
Update: EC2 now supports 64-bit on all instance sizes. Your life will be much easier if you only use 64-bit across the board.
If you need more than 3 GB of RAM you need a 64-bit server. I think there can be some problems in switching from 32-bit to 64-bit, because even binaries you compiled yourself are tied to the system architecture (64-bit Linux uses the ELF64 format for binaries). I don't know what your needs are, but I would choose micro instances (they support 64-bit) and get two of them to make a "balanced" architecture.
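If you want to see which architecture a given binary was built for, a tiny sketch like this reads the ELF header's EI_CLASS byte (the same thing the `file` command reports); the file path is just a hypothetical example:

```c
/* Tiny sketch: read an ELF header's EI_CLASS byte to tell whether a
 * binary is 32- or 64-bit. Hypothetical usage: ./elfclass /bin/ls */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc < 2) return 1;

    FILE *f = fopen(argv[1], "rb");
    if (!f) return 1;

    unsigned char ident[5];
    if (fread(ident, 1, 5, f) != 5) { fclose(f); return 1; }
    fclose(f);

    if (ident[0] != 0x7f || ident[1] != 'E' || ident[2] != 'L' || ident[3] != 'F') {
        puts("not an ELF binary");
        return 1;
    }

    /* EI_CLASS: 1 = ELFCLASS32, 2 = ELFCLASS64 */
    puts(ident[4] == 2 ? "ELF64 (64-bit)" : "ELF32 (32-bit)");
    return 0;
}
```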
I think you should consider RackSpace services:
http://www.rackspace.com/cloud/cloud_hosting_products/servers/pricing/
The price/performance ratio is about the same, but you will get 64-bit from the start, so I don't expect as much trouble with an upgrade.
While this may not seem like a programming question directly, it impacts my development activities, so it seems like it belongs here.
It seems that more and more developers are turning to virtual environments for development activities on their computers, SharePoint development being a prime example. Also, as a trainer, I have virtual training environments for all of the classes that I teach.
I recently purchased a new Dell E6510 to travel around with. It has the i7-620M (a dual-core, HyperThreaded CPU running at 2.66 GHz) and 8 GB of memory. Reading the spec sheet, it sounded like it would be a great laptop to carry around and run virtual machines on.
After getting the laptop, though, I've been pretty disappointed with the user experience of developing in a virtual machine. Even giving the virtual machine 4 GB of memory, it was slow; I could type complete sentences and then watch the VM "catch up".
My company has training laptops that we provide for our classes. They are Dell Precision M6400s with an Intel Core 2 Duo P8700 running at 2.54 GHz and 8 GB of memory, and the experience on these laptops is night and day compared to the E6510. They are crisp, and you are barely aware that you are running in a virtual environment.
Since the E6510 should be faster in all categories than the M6400, I couldn't understand why the new laptop was slower, so I did a component-by-component comparison, and the only place where the E6510 is less performant than the M6400 is the graphics department. The M6400 is running an nVidia FX 2700M GPU and the E6510 an nVidia 3100M GPU. Benchmarks of the two GPUs suggest that the FX 2700M is twice as fast as the 3100M.
http://www.notebookcheck.net/Mobile-Graphics-Cards-Benchmark-List.844.0.html
3100M = 111th (E6510)
FX 2700m = 47th (Precision M6400)
Radeon HD 5870 = 8th (Alienware)
The host OS is Windows 7 64-bit, as is the guest OS, running in VirtualBox 3.1.8 with Guest Additions installed on the guest. The IDE being used in the virtual environment is VS 2010 Premium.
So after that long setup, my question is:
Is the GPU significantly impacting the virtual machine's performance, or are there other factors that I'm not looking at that I can use to boost the VM's performance? Do we now have to consider GPU performance when purchasing laptops where we expect to use virtualized development environments?
Thanks in advance.
Cheers,
Dave
EDIT:
The HDDs in the two systems are 7200 RPM, the E6510 having one 500 GB drive vs. the M6400's 2x 250 GB drives in a non-RAID configuration.
Also, when I turn off some of the graphics features of Windows 7 (host and guest) by going to non-Aero themes, VM performance visibly increases.
Just to close off this question with my findings: what we discovered was that driver performance was limiting the perceived performance of the virtual machine. With the default Dell drivers, which are built for "stability", the virtual machines would be visibly impacted in "visual" applications like IDEs, such that Visual Studio 2010 could not keep up with my typing. Once we installed some nVidia reference drivers, the IDEs were crisp and you couldn't really tell you were in a VM anymore, which was my experience with the M6400s.
Thanks to everyone who threw out some ideas on the subject.
I am running two VMs on my development system simultaneously, one for development, and one for TeamCity. My graphics card on my Dell Optiplex is an ATI 2450, which is, quite honestly, complete crap. Personally, I have found RAM and CPU to make the most significant impact on my desktop. But since you are on a laptop, have you thought about the disk? Our M6400 has an SSD, and perhaps that is the biggest difference for your two laptops. I would not expect GPU to affect anything, unless of course you are trying to use the experimental Direct3D features in VirtualBox.
You guys are looking in the wrong places. Go into the BIOS and look for the virtualization extensions, AMD-V or VT-x; they are off by default on most systems. If it doesn't have that option, take a look at Sun VirtualBox, which runs well on my older laptop without virtualization support.
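If you'd rather check from code than reboot into the BIOS, here's a rough C sketch (assuming GCC/Clang on x86) that reads the CPU's virtualization feature bits. Note it only tells you the CPU supports the extensions; whether the BIOS has them enabled is a separate question:

```c
/* Sketch: check whether the CPU supports hardware virtualization.
 * Intel's VMX is ECX bit 5 of CPUID leaf 1; AMD's SVM is ECX bit 2 of
 * extended leaf 0x80000001. This shows CPU capability only; the BIOS
 * can still leave the feature disabled. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        printf("Intel VT-x (VMX): %s\n", (ecx & (1u << 5)) ? "yes" : "no");

    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
        printf("AMD-V (SVM):      %s\n", (ecx & (1u << 2)) ? "yes" : "no");

    return 0;
}
```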
A GPU can significantly impact the performance of any system. Visual Studio, for example, shows a huge performance difference between onboard video and dedicated graphics.
That said, I would expect there are other differences. First, how do the two hard drives compare? Notebook manufacturers love putting slow disks in machines to beef up their battery-longevity numbers; on the other side, sometimes they put in faster drives to boost performance numbers. It really depends on what the new machine was marketed towards. Along these lines, some hard drives also have configuration settings that determine their power/performance/noise levels. Depending on the drive, you might be able to tweak this.
Another expected difference is the quality of the memory. Nearly every Dell I've used has had second- or third-tier RAM installed. Sure, they might both be DDR3 of a certain speed grade, but the quality of the chips is going to determine how they really perform, sometimes by 200% or more.
Beyond those, you start getting into chipset differences, mainly in the hard-drive controllers. You can't do anything about those, though.
The next thing I can think of is drivers. Make sure you're up to date on everything you can be. Also, test both the Dell-supplied and nVidia-supplied drivers. Sometimes nVidia has better drivers; sometimes the original ones from Dell are better. That part is a crap shoot.
And, finally, consider blowing away the new machine and doing a complete reinstall from the bare metal up. Before installing any anti-virus or CPU-sucking software, test your VM performance.