Question related to VRAM in Windows Server 2019 - windows

I have created VM instances on Google Cloud Platform (using the console). The VM is based on Windows Server 2019. I have been successful in making one, but I am unable to get any VRAM in the instance; it shows zero. Does adding a GPU not increase the VRAM? If not, what does increase it? I am looking to increase it for gaming purposes and for software like Adobe and Autodesk products.

Instances created with additional GPUs (such as the Tesla K80 and others) each come with a specified amount of GPU memory (VRAM).
You can find the list of all available GPUs in the documentation.
Every GPU has its amount of memory specified in that table.
If you create a VM with one K80 GPU, it will have 12 GB of GDDR5 memory available (this has nothing to do with the machine type or the amount of regular RAM assigned).
You can confirm the card shows up under "Display adapters" in Device Manager; to see how much VRAM it has, check dxdiag or the GPU section of Task Manager, or query it programmatically as sketched below.
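If you prefer to check from inside the VM without clicking through the GUI, a minimal sketch like the one below (assuming the NVIDIA driver and CUDA toolkit are installed on the instance; compile with nvcc) lists each attached GPU and its on-board memory, which is the VRAM in question:

```c
/* List every CUDA-capable GPU the driver exposes and its device memory. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA-capable GPU is attached to this VM.\n");
        return 1;
    }
    for (int i = 0; i < count; i++) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s, %.1f GB of device memory (VRAM)\n",
               i, prop.name,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```

nvidia-smi reports the same figure in its memory column.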
And regarding Adobe or Autodesk software, I can't really say whether having a Tesla card will be an advantage.

Related

Low hardware simulation for performance profiling

I need to optimize the app I'm working on, and I can't get reliable profiling data on my development machine. The app should run on low-end ARM hardware under QNX, but for logistical reasons I don't have access to the final hardware for profiling.
I've tried profiling on my development machine, but as you can imagine everything is so fast that I can't pinpoint the slow parts. I've created a Linux virtual machine with reduced memory and CPU core count, but it is still too fast compared to the final hardware.
Is it possible to reduce the CPU clock speed, RAM speed, or disk speed in a virtual machine to simulate low-performance hardware, or is there any other way to get relevant profiling data on my development machine?
Considering the app processes several gigabytes of data, I assume disk access is a major bottleneck, so limiting disk speed might help.
I can use any tool or approach (open source or commercially available) that runs on Windows, Linux, or macOS, on a real or virtual machine.
VirtualBox has a documented way to limit the disk bandwidth of VM images. You could run a Linux VM on VirtualBox, use this method to limit disk access speed, turn off disk caching as well, and profile your application there. Alternatively, you can download the QNX SDP, which comes with a prebuilt x86_64 virtual machine image that can be run under VMware, VirtualBox, or QEMU.
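For reference, the VBoxManage commands look roughly like this (the VM name "ProfilingVM", the controller name "SATA" and the disk file are placeholders; adjust the limits to taste):

```
# create a bandwidth group capped at 10 MB/s and attach the disk to it
VBoxManage bandwidthctl "ProfilingVM" add SlowDisk --type disk --limit 10M
VBoxManage storageattach "ProfilingVM" --storagectl "SATA" --port 0 --device 0 \
    --type hdd --medium slow.vdi --bandwidthgroup SlowDisk

# the limit can be changed while the VM is running
VBoxManage bandwidthctl "ProfilingVM" set SlowDisk --limit 5M
```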
My previous experience with QNX on armv7 and x86_64 suggests that the devb-sdmmc driver can be a bottleneck when reading lots of big files from flash storage. devb-sdmmc and io-blk often need fine-tuning; setting appropriate cache, block, and read-ahead sizes (among other parameters) helps improve disk access performance.

How to split RAM between the Windows OS and my application?

I want to split the RAM in my PC into two parts: half for the Windows OS and the other half for an image buffer used by my application. For example, my desktop has 32 GB of memory, and I want to assign 16 GB to Windows and reserve the other 16 GB for my application's access only. Windows shouldn't touch that 16 GB, but my application should be able to use it as an image buffer. I know how to do this on Linux, but I need to do it on Windows. I think I have to configure the BIOS and implement a Windows driver that remaps the image buffer's pages for my application to access.
Is there any good way to do this?
You can do this with the Address Windowing Extensions API. Although this was originally designed for 32-bit applications, it is still available to 64-bit applications, and memory allocated this way is not available to the virtual memory management system.
However, you should note that in most cases allowing the virtual memory manager to do its job will result in better overall performance than explicitly locking down memory will.
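A minimal sketch of what AWE looks like in C, assuming the account has been granted the "Lock pages in memory" right (SeLockMemoryPrivilege) and the program is linked against advapi32; the 64 MB size is just a placeholder for your 16 GB buffer:

```c
#include <windows.h>
#include <stdio.h>

/* Enable SeLockMemoryPrivilege in this process' token; the privilege must
 * already be granted to the account (secpol.msc -> "Lock pages in memory"). */
static BOOL EnableLockMemoryPrivilege(void)
{
    HANDLE token;
    TOKEN_PRIVILEGES tp = { 0 };

    if (!OpenProcessToken(GetCurrentProcess(),
                          TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &token))
        return FALSE;

    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    if (!LookupPrivilegeValue(NULL, SE_LOCK_MEMORY_NAME,
                              &tp.Privileges[0].Luid)) {
        CloseHandle(token);
        return FALSE;
    }
    BOOL ok = AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL) &&
              GetLastError() == ERROR_SUCCESS;
    CloseHandle(token);
    return ok;
}

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    SIZE_T bytes = 64u * 1024 * 1024;            /* placeholder buffer size */
    ULONG_PTR pages = bytes / si.dwPageSize;
    ULONG_PTR *pfns = (ULONG_PTR *)HeapAlloc(GetProcessHeap(), 0,
                                             pages * sizeof(ULONG_PTR));

    if (!pfns || !EnableLockMemoryPrivilege()) {
        printf("Could not enable SeLockMemoryPrivilege\n");
        return 1;
    }

    /* Grab physical pages; the virtual memory manager will not page these out. */
    if (!AllocateUserPhysicalPages(GetCurrentProcess(), &pages, pfns)) {
        printf("AllocateUserPhysicalPages failed: %lu\n", GetLastError());
        return 1;
    }

    /* Reserve a virtual range AWE pages may be mapped into, then map them. */
    void *buf = VirtualAlloc(NULL, bytes, MEM_RESERVE | MEM_PHYSICAL,
                             PAGE_READWRITE);
    if (!buf || !MapUserPhysicalPages(buf, pages, pfns)) {
        printf("Mapping failed: %lu\n", GetLastError());
        return 1;
    }

    ((unsigned char *)buf)[0] = 0xAB;   /* use it like any other buffer */
    printf("Mapped %llu pages at %p\n", (unsigned long long)pages, buf);

    /* Tear down: unmap, free the physical pages, release the reservation. */
    MapUserPhysicalPages(buf, pages, NULL);
    FreeUserPhysicalPages(GetCurrentProcess(), &pages, pfns);
    VirtualFree(buf, 0, MEM_RELEASE);
    HeapFree(GetProcessHeap(), 0, pfns);
    return 0;
}
```

Memory obtained this way is physical and never paged out, which is what the question asks for, but that is also exactly why the caveat above about letting the virtual memory manager do its job applies.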

ARM big.LITTLE Performance Counters

I recently purchased a development board built around the Samsung Exynos 5422 application processor (a quad-core Cortex-A15 at 2.0 GHz plus a quad-core Cortex-A7). I have tried to read the performance counters on Android using perf 3.0.8; however, none of the counters outputs a value (they are all reported as "not counted"). Does anyone know how to solve this issue?
(The kernel version is 3.10.9)
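Not board-specific advice, but a useful sanity check is to bypass the perf tool and open a hardware counter directly with the perf_event_open syscall; if this fails or always reads zero, the kernel is not exposing the PMU at all. On big.LITTLE parts the A15 and A7 clusters have separate PMUs, so it is also worth pinning the test to one cluster (e.g. with taskset) before drawing conclusions. A minimal sketch, assuming a native Linux toolchain on the target:

```c
/* Open a hardware cycle counter for this process, spin for a bit, and read
 * the count back. May require root or a relaxed
 * /proc/sys/kernel/perf_event_paranoid setting. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_CPU_CYCLES;
    attr.disabled = 1;

    int fd = perf_event_open(&attr, 0 /* this process */, -1 /* any CPU */, -1, 0);
    if (fd < 0) {
        perror("perf_event_open");   /* ENOENT usually means no PMU support */
        return 1;
    }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
    for (volatile int i = 0; i < 1000000; i++)
        ;                            /* something to count */
    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    uint64_t count = 0;
    read(fd, &count, sizeof(count));
    printf("cycles: %llu\n", (unsigned long long)count);
    close(fd);
    return 0;
}
```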

How to enable CUDA only for compute purposes, not for display

I am using an NVIDIA GT 440 GPU. It is used both for display and for computation, which hurts performance during computation. Can I enable it for computational purposes only? If so, how can I stop it from driving the display?
It depends -- are you working on Windows or Linux? Do you have any other display adapters (graphics cards) in the machine?
If you're on Linux, you can run without the X server (i.e., from a terminal) and SSH into the box (or attach your display to another adapter).
If you're on Windows, you need to have a second display adapter. As long as your display is connected to your GeForce 440 GT, there's no way to use it only for computational purposes. That also includes Remote Desktop, which won't work at all unless you have a Tesla card because of the way the WDDM (Windows Display Driver Model) was designed (it can't be accessed from within Session 0, which is where the RDP service runs).
I'm using Intel integrated graphics for display and the GPU for compute on Linux.
You'll need to set the BIOS to use the integrated graphics on the motherboard. This leaves your GPU free. It depends on the hardware you have available. =)
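Once the display is on the integrated adapter, you can also make sure your CUDA code picks the card that is not driving a screen. A rough sketch (the kernelExecTimeoutEnabled watchdog flag is normally set only on devices attached to a display):

```c
/* Select the first CUDA device whose kernel-execution watchdog is disabled,
 * i.e. a device that is most likely not driving a display. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int i = 0; i < count; i++) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        if (!prop.kernelExecTimeoutEnabled) {
            cudaSetDevice(i);
            printf("Using compute-only device %d: %s\n", i, prop.name);
            return 0;
        }
    }
    printf("Every visible device appears to be driving a display.\n");
    return 1;
}
```

Setting the CUDA_VISIBLE_DEVICES environment variable is another way to hide the display GPU from CUDA entirely, without changing the code.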
How much does it affect performance? I did check before; the display in Windows only took up a small amount of memory (less than 10 MB).
Check that you have write permission on the /dev/nvidia* devices. The CUDA C Getting Started Guide for Linux contains a script that automatically sets the correct permissions at startup.

Can GPU capabilities impact virtual machine performance? [closed]

While this may not seem like a programming question directly, it impacts my development activities, so it seems like it belongs here.
It seems that more and more developers are turning to virtual environments for development activities on their computers, SharePoint development being a prime example. Also, as a trainer, I have virtual training environments for all of the classes that I teach.
I recently purchased a new Dell E6510 to travel around with. It has an i7-620M (a dual-core, Hyper-Threaded CPU running at 2.66 GHz) and 8 GB of memory. Reading the spec sheet, it sounded like it would be a great laptop to carry around and run virtual machines on.
Having gotten the laptop, though, I've been pretty disappointed with the experience of developing in a virtual machine. Even giving the virtual machine 4 GB of memory, it was slow; I could type complete sentences and then watch the VM "catch up".
My company has training laptops that we provide for our classes. They are Dell Precision M6400s with an Intel Core 2 Duo P8700 running at 2.54 GHz and 8 GB of memory, and the experience on those laptops is night and day compared to the E6510. They are crisp, and you are barely aware that you are running in a virtual environment.
Since the E6510 should be faster in every category than the M6400, I couldn't understand why the new laptop was slower, so I did a component-by-component comparison, and the only place where the E6510 is less performant than the M6400 is the graphics department. The M6400 runs an NVIDIA FX 2700M GPU and the E6510 runs an NVIDIA 3100M GPU. Benchmarks of the two GPUs suggest that the FX 2700M is twice as fast as the 3100M.
http://www.notebookcheck.net/Mobile-Graphics-Cards-Benchmark-List.844.0.html
3100M = 111th (E6510)
FX 2700m = 47th (Precision M6400)
Radeon HD 5870 = 8th (Alienware)
The host OS is Windows 7 64-bit, as is the guest OS, running in VirtualBox 3.1.8 with Guest Additions installed on the guest. The IDE being used in the virtual environment is VS 2010 Premium.
So after that long setup, my question is:
Is the GPU significantly impacting the virtual machine's performance, or are there other factors that I'm not looking at that I can use to boost the VM's performance? Do we now have to consider GPU performance when purchasing laptops where we expect to use virtualized development environments?
Thanks in advance.
Cheers,
Dave
EDIT:
The HDDs in the two systems are 7200 RPM, the E6510 having a single 500 GB drive vs. the M6400 having 2x 250 GB drives in a non-RAID configuration.
Also, when I turn off some of the graphics features of Windows 7 (host and guest) by going to non-Aero themes, VM performance visibly increases.
Just to close off this question with my findings: what we discovered was that driver performance was limiting the perceived performance of the virtual machine. With the default Dell drivers, which are built for "stability", the virtual machines were visibly impacted in "visual" applications such as IDEs (Visual Studio 2010), to the point that VS 2010 could not keep up with my typing. Once we installed NVIDIA reference drivers, the IDEs were crisp and you couldn't really tell you were in a VM anymore, which matched my experience with the M6400s.
Thanks to everyone who threw out some ideas on the subject.
I am running two VMs on my development system simultaneously, one for development, and one for TeamCity. My graphics card on my Dell Optiplex is an ATI 2450, which is, quite honestly, complete crap. Personally, I have found RAM and CPU to make the most significant impact on my desktop. But since you are on a laptop, have you thought about the disk? Our M6400 has an SSD, and perhaps that is the biggest difference for your two laptops. I would not expect GPU to affect anything, unless of course you are trying to use the experimental Direct3D features in VirtualBox.
You guys are looking in the wrong places. Go into the BIOS and look for the virtualization extensions (AMD-V or VT-x); they are off by default on most systems. If the machine doesn't have that option, take a look at Sun VirtualBox; it runs well on my older laptop without virtualization support.
A GPU can significantly impact the performance of any system. Visual Studio, for example, performs very differently on onboard video versus dedicated graphics.
That said, I would expect there are other differences. First, how do the two hard drives compare? Notebook manufacturers love putting slow disks in machines in order to beef up their battery-life numbers; on the other hand, they sometimes put in faster drives to boost performance numbers. It really depends on what the new machine was marketed towards. Along these lines, some hard drives also have configuration settings that determine their power/performance/noise levels; depending on the drive, you might be able to tweak this.
Another expected difference is the quality of the memory. Nearly every Dell I've used has had second- or third-tier RAM installed. Sure, they might both be DDR3 at a certain clock speed, but the quality of the chips is going to determine how they really perform, sometimes by 200% or more.
Beyond those, you start getting into chipset differences, mainly in the hard drive controllers. You can't do anything about this, though.
The next thing I can think of is drivers. Make sure you're up to date on everything you can. Also, test both the Dell- and NVIDIA-supplied drivers. Sometimes NVIDIA has better drivers, and sometimes the original ones from Dell are better. That part is a crap shoot.
And finally, consider blowing away the new machine and doing a complete reinstall from the bare metal up. Before installing any anti-virus or CPU-sucking software, test your VM performance.
