How does memory usage in Windows affect performance?

I'm running Windows 10 with 4 GB of DDR3-1066 on a second-generation Intel mobile i5.
I come mostly from an OS X background, and memory has always been a concern for me because I like to keep many tabs open. On OS X I noticed that memory usage didn't relate much to application performance as long as it wasn't fully saturated; on my iMac I can easily sit at 80% memory usage with no noticeable lag or stuttering. On Windows, however, I'm finding memory to be the major bottleneck in my system. I understand that upgrading to 8 or 16 GB would be the obvious path, but I would love to understand why my system slows down noticeably when I saturate 80% of the memory while OS X seems to handle the same load just fine. Is it a bandwidth limitation? I know that Windows NT and Darwin are completely different kernels, and I would love to be educated on exactly how that affects the same usage scenario so differently.
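For what it's worth, here is roughly how I've been watching the numbers while the slowdown happens (a minimal sketch, assuming the third-party psutil package is installed):

```python
import time

import psutil  # third-party: pip install psutil

# Sample physical memory and pagefile usage once per second while the
# slowdown is happening. If the pagefile percentage climbs as RAM nears
# ~80%, the stutter is most likely paging to disk rather than a memory
# bandwidth limit.
while True:
    ram = psutil.virtual_memory()
    page = psutil.swap_memory()
    print(f"RAM used: {ram.percent:5.1f}%   pagefile used: {page.percent:5.1f}%")
    time.sleep(1)
```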
Thank you in advance.

Related

Low hardware simulation for performance profiling

I need to optimize the app I'm working on, and I can't get reliable profiling data on my development machine. The app will run on low-end ARM hardware under QNX, but for logistical reasons I don't have access to the final hardware for profiling.
I've tried profiling on my development machine, but as you can imagine everything is so fast that I can't pinpoint the slow parts. I've created a Linux virtual machine with reduced memory and CPU core count, but it is still too fast compared to the final hardware.
Is it possible to reduce the CPU clock speed, RAM speed, or disk speed in a virtual machine to simulate low-performance hardware, or is there any other way to get relevant profiling data on my development machine?
Considering the app processes several gigabytes of data, I assume disk access is a major bottleneck, so limiting disk speed might help.
I can use any tool or approach (open source or commercially available) that runs on Windows/Linux/MacOS, on a real or virtual machine.
This URL describes how to limit disk bandwidth on VirtualBox images. You could run a Linux VM on VirtualBox, use this method to limit disk access speeds, turn off disk caching using suggestions from this answer, and profile your application. Alternatively, you can download the QNX SDP, which comes with the option of a prebuilt x86_64 virtual machine image that can be run under VMware/VirtualBox/QEMU.
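If you want to script the VirtualBox part, something like the sketch below sets up a disk bandwidth group and attaches the VM's disk to it. The VM name, controller name, port/device numbers, medium path, and the 10 MB/s limit are all assumptions you'll need to adapt to your setup:

```python
import subprocess

VM = "profiling-vm"   # assumed VM name; change to yours
GROUP = "slowdisk"    # arbitrary bandwidth group name
LIMIT = "10M"         # cap disk throughput at roughly 10 MB/s

# Create a disk bandwidth group on the VM.
subprocess.run(
    ["VBoxManage", "bandwidthctl", VM, "add", GROUP,
     "--type", "disk", "--limit", LIMIT],
    check=True,
)

# Re-attach the existing disk with the bandwidth group applied.
# Controller name, port, device, and medium path below are assumptions;
# check "VBoxManage showvminfo <vm>" for the real values.
subprocess.run(
    ["VBoxManage", "storageattach", VM,
     "--storagectl", "SATA", "--port", "0", "--device", "0",
     "--type", "hdd", "--medium", "profiling-vm.vdi",
     "--bandwidthgroup", GROUP],
    check=True,
)
```

The limit can later be changed on the fly with "VBoxManage bandwidthctl <vm> set slowdisk --limit 5M", which is handy for trying several target speeds in one profiling session.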
My previous experience with QNX on armv7 and x86_64 suggests that the devb-sdmmc driver can be a bottleneck when a lot of big files are being read from flash storage. devb-sdmmc and io-blk often require fine tuning; setting appropriate cache, block, and read-ahead sizes (among other parameters) helps improve disk access performance.

Can Xcode utilize 64GB RAM or greater?

I have an MBP with 16GB of RAM. As projects grow in Xcode, compile times get longer. I'm looking into starting a hackintosh project purely to shorten Xcode compilation time. Since RAM is cheap, I want to push past the normal boundaries. But the biggest question is: will Xcode be capable of using more than 32GB of RAM? I know there will be diminishing marginal returns at some point.
RAM usage is mostly governed by the OS. The Mac Pro supports up to 64GB of RAM, so OS X must support it too, and by extension so should Xcode.
Although I wonder if your compile-time issues are actually RAM-related. I have Xcode projects that take minutes to build, and it's all because my CPU is pegged at 100% (on a mid-2015 15" Retina MBP). Not many software projects are RAM-constrained past 16GB.
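One quick way to settle whether a build is CPU-bound or RAM-bound is to sample both while compiling. A minimal sketch, assuming the third-party psutil package is installed; run it in a terminal while Xcode builds:

```python
import time

import psutil  # third-party: pip install psutil

# Sample system-wide CPU and memory once per second for a minute.
# If CPU sits near 100% while memory stays well below capacity,
# more RAM won't shorten builds; more (or faster) cores will.
for _ in range(60):
    cpu = psutil.cpu_percent(interval=1)  # averaged over 1s across all cores
    mem = psutil.virtual_memory().percent
    print(f"CPU {cpu:5.1f}%   RAM {mem:5.1f}%")
```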

Does JVM memory management work the same on Windows and Linux?

My original question is this: is it technically sound to measure the required heap size of my Java program on Windows 7, via VisualVM, and conclude that the program will require the same amount of heap on Linux (RedHat) as well?
I don't know how the system (the OS, or even the CPU and RAM) affects the JVM's memory management.
Windows is my development system, with 4GB of RAM and a Core 2 Duo CPU, while Linux is the production system, with 32GB of RAM and multiple powerful processors.
My concern is that the program on Linux might need more memory; less is OK.
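One point worth checking: if you don't pin the heap with -Xms/-Xmx, the JVM picks its default maximum heap ergonomically from the machine's physical RAM, so the same program can get a much larger default heap on the 32GB Linux box than on the 4GB Windows box. A small sketch to compare the effective defaults on both machines (it assumes java is on the PATH):

```python
import subprocess

# Print the JVM's ergonomically chosen heap defaults on this machine.
# Run the same script on the Windows dev box and the Linux production
# box; differing MaxHeapSize values mean the two JVMs will not behave
# identically unless the heap is fixed explicitly with -Xms/-Xmx.
out = subprocess.run(
    ["java", "-XX:+PrintFlagsFinal", "-version"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    if "HeapSize" in line:
        print(line.strip())
```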

Decreasing performance of dev machine to match end-user's specs

I have a web application, and my users are complaining about performance. I have been able to narrow it down to JavaScript issues in IE6, which I need to resolve. I have found the excellent dynaTrace AJAX tool, but my problem is that I don't see any issues on my dev machine.
The problem is that my users' computers are ancient, so timings that are barely noticeable on my machine are perhaps 3-5 times longer on theirs, and suddenly the problem is a lot larger. Is it possible somehow to degrade the performance of my dev machine, or preferably of a VM running on my dev machine, to match the specs of my customers' computers?
I don't know of any virtualization solutions that can do this, but I do know that the computer/CPU emulator Bochs allows you to specify a limit on the number of emulated instructions per second, which you can use to simulate slower CPUs.
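For reference, the throttle is the ips setting in the bochsrc configuration file. A trivial sketch that appends one (the file name and the 10 million instructions-per-second figure are assumptions; calibrate the count against your target hardware):

```python
# Append an instructions-per-second cap to a Bochs config file.
# The "cpu: ... ips=N" bochsrc directive is what throttles emulation
# speed; 10 million IPS is an assumed stand-in for a slow target.
BOCHSRC = "bochsrc.txt"  # assumed config file name
IPS = 10_000_000

with open(BOCHSRC, "a") as f:
    f.write(f"cpu: count=1, ips={IPS}\n")
```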
I am not sure if you can cap the CPU, but in VirtualBox or Parallels you can bound the memory usage. I assume that if you only give it about 128MB it will be very slow. You can also limit network throughput with a number of tools. I guess the only thing I am not sure about is the CPU; that's tricky. Curious to know what you find. :)
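Scripted, the VirtualBox RAM cap is a one-liner (the VM name and the 128MB figure are assumptions), and newer VirtualBox releases also expose a CPU execution cap that addresses the tricky part:

```python
import subprocess

VM = "ie6-test-vm"  # assumed VM name; change to yours

# Shrink the VM's RAM to 128 MB to force low-memory behavior.
# The VM must be powered off when changing these settings.
subprocess.run(["VBoxManage", "modifyvm", VM, "--memory", "128"], check=True)

# Newer VirtualBox releases can also cap CPU time directly: this limits
# the guest to roughly 40% of one host core's time.
subprocess.run(["VBoxManage", "modifyvm", VM, "--cpuexecutioncap", "40"],
               check=True)
```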
You could get a copy of VMware Workstation and choke the CPU of your VM.
With most virtual PC software you can limit the amount of RAM, but you cannot set the CPU to a slower speed, because such software does not emulate a CPU; it uses the host CPU directly.
You could go with emulation software like Bochs, which will let you set up an x86 processor environment.
You may try Fossil Toys:
* PC Speed: a PC CPU speed monitor / benchmark, with a logging facility.
* Memory Load Test: tests application/operating system behaviour under low-memory conditions.
* CPU Load Test: tests application/operating system behaviour under high CPU load conditions.
It doesn't simulate a specific CPU clock speed, though. If you only need the load-generation part, a rough equivalent is sketched below this list.
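A rough Python stand-in for the Memory Load Test and CPU Load Test ideas above; the memory size, process count, and duration are assumptions to tune against your users' machines:

```python
import multiprocessing
import time

MEM_MB = 512      # how much RAM to pin down (assumed figure; tune to taste)
N_PROCS = 4       # CPU-burning processes; one per core you want to load
DURATION_S = 60

def burn_cpu(stop_at: float) -> None:
    # Busy-loop one whole core until the deadline passes.
    x = 0
    while time.time() < stop_at:
        x += 1

if __name__ == "__main__":
    # Memory load: allocate a ballast buffer and touch each page so the
    # memory is really committed, not just reserved.
    ballast = bytearray(MEM_MB * 1024 * 1024)
    ballast[::4096] = b"x" * len(ballast[::4096])

    # CPU load: separate processes sidestep CPython's GIL, so each one
    # saturates a core of its own.
    stop_at = time.time() + DURATION_S
    procs = [multiprocessing.Process(target=burn_cpu, args=(stop_at,))
             for _ in range(N_PROCS)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```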

Can GPU capabilities impact virtual machine performance? [closed]

While this may not seem like a programming question directly, it impacts my development activities, so it seems like it belongs here.
It seems that more and more developers are turning to virtual environments for development activities on their computers, SharePoint development being a prime example. Also, as a trainer, I have virtual training environments for all of the classes that I teach.
I recently purchased a new Dell E6510 to travel around with. It has the i7 620M (a dual-core, hyper-threaded CPU running at 2.66GHz) and 8 GB of memory. Reading the spec sheet, it sounded like it would be a great laptop to carry around and run virtual machines on.
Since getting the laptop, though, I've been pretty disappointed with the experience of developing in a virtual machine. Even after giving the virtual machine 4 GB of memory, it was slow: I could type complete sentences and watch the VM catch up.
My company has training laptops that we provide for our classes: Dell Precision M6400s with an Intel Core 2 Duo P8700 running at 2.53GHz and 8 GB of memory, and the experience on these laptops is night and day compared to the E6510. They are crisp, and you are barely aware that you are running in a virtual environment.
Since the E6510 should be faster in all categories than the M6400, I couldn't understand why the new laptop was slower, so I did a component-by-component comparison, and the only place where the E6510 is less performant than the M6400 is the graphics department. The M6400 has an nVidia FX 2700M GPU and the E6510 an nVidia 3100M GPU, and benchmarks of the two GPUs suggest that the FX 2700M is twice as fast as the 3100M.
http://www.notebookcheck.net/Mobile-Graphics-Cards-Benchmark-List.844.0.html
3100M = 111th (E6510)
FX 2700m = 47th (Precision M6400)
Radeon HD 5870 = 8th (Alienware)
The host OS is Windows 7 64-bit, as is the guest OS, running in VirtualBox 3.1.8 with Guest Additions installed in the guest. The IDE used in the virtual environment is VS 2010 Premium.
So after that long setup, my question is:
Is the GPU significantly impacting the virtual machine's performance, or are there other factors that I'm not looking at that I can use to boost the VM's performance? Do we now have to consider GPU performance when purchasing laptops where we expect to use virtualized development environments?
Thanks in advance.
Cheers,
Dave
EDIT:
The HDDs in the two systems are 7200 RPM, the E6510 having a single 500GB drive vs. the M6400's 2x 250GB in a non-RAID configuration.
Also, when I turn off some of the graphics features of Windows 7 (host and guest) by going to non-Aero themes, VM performance visibly increases.
Just to close off this question with my findings: what we discovered was that driver performance was limiting the perceived performance of the virtual machine. With the default Dell drivers, which are built for "stability", the virtual machines were visibly impacted in "visual" applications like IDEs (Visual Studio 2010), to the point that VS 2010 could not keep up with my typing. Once we installed some nVidia reference drivers, the IDEs were crisp and you couldn't really tell you were in a VM anymore, which matched my experience with the M6400s.
Thanks to everyone who threw out some ideas on the subject.
I am running two VMs on my development system simultaneously, one for development, and one for TeamCity. My graphics card on my Dell Optiplex is an ATI 2450, which is, quite honestly, complete crap. Personally, I have found RAM and CPU to make the most significant impact on my desktop. But since you are on a laptop, have you thought about the disk? Our M6400 has an SSD, and perhaps that is the biggest difference for your two laptops. I would not expect GPU to affect anything, unless of course you are trying to use the experimental Direct3D features in VirtualBox.
You guys are looking in the wrong places. Go into the BIOS and look for the virtualization extensions, AMD-V or VT-x; they are off by default on most systems. If it doesn't have that option, take a look at Sun VirtualBox, which runs well on my older laptop without virtualization support.
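On a Linux host you can check what the CPU advertises without rebooting into the BIOS; a small sketch (on Windows, a tool like Microsoft's Coreinfo reports the same information):

```python
# /proc/cpuinfo lists the hardware virtualization extensions the CPU
# advertises: "vmx" is Intel VT-x, "svm" is AMD-V. Note the flag only
# proves the CPU has the feature; it can still be switched off in the
# BIOS, which hypervisor log messages (e.g. from kvm) will reveal.
with open("/proc/cpuinfo") as f:
    flags = {word for line in f if line.startswith("flags")
             for word in line.split()}

if "vmx" in flags:
    print("CPU advertises Intel VT-x")
elif "svm" in flags:
    print("CPU advertises AMD-V")
else:
    print("No hardware virtualization extensions found")
```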
A GPU can significantly impact the performance of any system. Visual Studio, for example, shows a huge performance difference between onboard video and dedicated graphics.
That said, I would expect there are other differences. First, how do the two hard drives compare? Notebook manufacturers love putting slow disks in machines to beef up their battery-longevity numbers; on the other hand, sometimes they put in faster drives to boost performance numbers. It really depends on what the new machine was marketed towards. Along these lines, some hard drives also have configuration settings that determine their power / performance / noise levels. Depending on the drive, you might be able to tweak this.
Another possible difference is the quality of the memory. Nearly every Dell I've used has had second- or third-tier RAM installed. Sure, they might both be DDR3 at a given clock speed, but the quality of the chips is going to determine how they really perform, sometimes by 200% or more.
Beyond those you start getting into chipset differences, mainly in the hard drive controllers. You can't do anything about this, though.
The next thing I can think of is drivers. Make sure you're up to date on everything you can, and test both the Dell-supplied and nVidia-supplied drivers. Sometimes nVidia has the better drivers, sometimes the original ones from Dell are better. That part is a crap shoot.
And, finally, consider blowing away the new machine and doing a complete reinstall from the bare metal up. Before installing any anti-virus or other CPU-sucking software, test your VM performance.
