Visual Studio Load Test CPU Usage

I'm running a Visual Studio 2013 Ultimate load test on my local system, which has an i7-3840QM processor at 2.8 GHz and an SSD.
My load scenario ramps up 50 users every 30 seconds, up to 500 users. When I checked my CPU usage, it showed 100%, and it looks like the test is using all four cores. (Please see the attached screenshot.)
Here are my questions:
1. Is it OK to continue the test when CPU usage shows 100%?
2. Is there anything like the "Load Test Virtual User Pack 2010" for VS 2013?
3. What other options are available? (I'm planning on test rigs if a single machine doesn't work.)
Appreciate any help.

Could you do some profiling? First profile the CPU and find which processes are hogging it; then, for the CPU-hogging process, find the function(s) that cost the most CPU cycles. Intel VTune is a great commercial profiler; for an open-source option, you can try Valgrind.
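If a full profiler isn't at hand right away, a rough first pass is manual instrumentation of suspected hot spots. Here is a minimal, hypothetical C++ sketch (ScopedTimer and suspect_function are stand-in names, not anything from the question): an RAII stopwatch that prints how long a scope took.

    // Hypothetical sketch: a crude RAII stopwatch for a first pass when a
    // real profiler (VTune, Valgrind) isn't available.
    #include <chrono>
    #include <cstdio>

    struct ScopedTimer {
        const char* label;
        std::chrono::steady_clock::time_point start;
        explicit ScopedTimer(const char* l)
            : label(l), start(std::chrono::steady_clock::now()) {}
        ~ScopedTimer() {
            auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                std::chrono::steady_clock::now() - start).count();
            std::printf("%s: %lld us\n", label, static_cast<long long>(us));
        }
    };

    void suspect_function() {            // stand-in for a suspected hot spot
        ScopedTimer t("suspect_function");
        volatile long sum = 0;
        for (int i = 0; i < 1000000; ++i) sum += i;
    }

    int main() { suspect_function(); }

Wrapping a few candidate functions like this quickly narrows down where the cycles go, after which a real profiler can drill into the winner.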
Good luck.

Related

Visual Studio 2019 IDE Performance

I'm using Visual Studio 2019 on my laptop, which is connected to an external monitor. Both the laptop and the external monitor have 4K resolution.
I'm facing a weird performance issue where, after every click/right-click, the whole UI freezes for a couple of seconds. For example, if I right-click on a project, the UI completely freezes (not even the mouse moves), and after 1-2 seconds I see the context menu.
My laptop is new, and this happens only with VS 2019 and only when the laptop is connected to an external 4K monitor.
Has anyone else faced a similar problem, or does anyone know a solution for this?
My Laptop has the following configuration:
Intel i7-10750H @ 2.60 GHz
16 GB RAM
SK Hynix PC601 M.2 SSD
GTX 1650 Ti graphics card
After a little digging, I found these options in Visual Studio. Turning them off seemed to do the trick:
Go to Tools > Options, then under Environment > General deselect the following options:
Automatically adjust visual experience based on client performance
Use hardware graphics acceleration if available
If you want to go one step further, you could also disable the following option along with the two above (in my case, disabling just those two seemed to suffice):
Enable rich client visual experience
Here's the GPU usage before and after disabling the options (for similar UI inputs):
[Before and After screenshots of GPU usage]
In the Before picture, the stretch during which GPU 3D usage remained consistently high is when the UI froze.
What I still don't understand is why VS would need so much GPU 3D compute power.

Different execution speed with idle vs heavy-load CPU

Fellow colleagues,
I'm currently working on a PowerPC emulator written in C++. To evaluate its performance, I'm using std::chrono::high_resolution_clock to measure the execution time of a guest code block for which the number of CPU cycles is known. The corresponding code is located here: https://github.com/maximumspatium/dingusppc/commit/11b4e99376e23f46f4cd8ee6223c5788ab963a37
While doing the above tests, I noticed that my MacBook Pro reports different numbers depending on CPU load. That is, when I run the above code on an idle CPU I get about 230,000 ns execution time, while on a heavily loaded CPU (running neural-net training, for example) I get much better performance (< 70,000 ns).
I suppose it's related to threads and scheduling in macOS. I'd like to utilize the full CPU power in my emulator. Is there a way to make the thread run at full speed, just as it does when the CPU is under heavy load?
Thanks in advance!
P.S.: The machine in question is a MacBook Pro 17″ (Mid 2010) with a 2.53 GHz Intel Core i5 and 8 GB RAM, running macOS 10.13.6.
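What is likely happening is that an idle CPU sits in a low-power state and only ramps its clock up once work arrives, so short timed runs look slower than when something else (the neural-net training) has already pushed the clocks up. Below is a minimal sketch of how one might counter that when benchmarking; run_guest_block() is a stand-in workload rather than the emulator's real code, and the QoS call is the macOS-specific part:

    // Sketch only: run_guest_block() stands in for the emulator's guest
    // code block; timings are illustrative.
    #include <chrono>
    #include <cstdio>
    #include <pthread.h>   // pthread_set_qos_class_self_np (macOS only)

    void run_guest_block() {
        volatile unsigned long sum = 0;
        for (int i = 0; i < 100000; ++i) sum += i;
    }

    int main() {
        // Ask for the highest quality-of-service class so macOS schedules
        // the thread aggressively instead of treating it as background work.
        pthread_set_qos_class_self_np(QOS_CLASS_USER_INTERACTIVE, 0);

        // Warm-up spin: give the CPU ~100 ms to leave its low-power state,
        // mimicking the "heavily loaded" conditions before the timed run.
        auto warm_until = std::chrono::steady_clock::now()
                        + std::chrono::milliseconds(100);
        volatile unsigned long spin = 0;
        while (std::chrono::steady_clock::now() < warm_until) ++spin;

        // steady_clock is monotonic; high_resolution_clock is often an alias.
        auto t0 = std::chrono::steady_clock::now();
        run_guest_block();
        auto t1 = std::chrono::steady_clock::now();

        auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0);
        std::printf("guest block took %lld ns\n",
                    static_cast<long long>(ns.count()));
        return 0;
    }

Even with the warm-up, expect run-to-run variance; taking the minimum over many repetitions is usually more stable than a single measurement.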

Slow Xamarin build in Visual Studio 2017

I have recently been developing a Xamarin-based app in Visual Studio 2017, and I am not sure whether the performance I see at build and debug time is what can be expected, or whether something is wrong.
Environment: iMac (late 2015), quad-core i5 @ 3.5 GHz, 24 GB RAM.
I am running Visual Studio (latest) under Parallels 13 on Windows 10 and have assigned all four cores and 20 GB RAM to the VM (it doesn't make a difference, though, if I assign less).
The solution is a standard Xamarin-based solution with 3 projects and about 10 classes totalling roughly 300 LOC (yes, really; there's almost nothing in there yet).
A rebuild takes about one minute. Starting the application in debug mode takes about 30 seconds before the simulator shows up.
Looking at the code size and hardware specs, I was expecting build and simulation to be a matter of seconds.
Am I wrong? Even accounting for the VM, I'd not have expected these numbers.
Is anybody able to share experiences/thoughts?
Your problem isn't simply compile time. Every time you build your project, your shared code gets compiled into a DLL and its dependencies get checked; it is then linked into the native project, which is itself compiled; resources get packed, integrity-checked, and signed; everything is bundled (not to mention the included NuGet packages and other plugins); and finally the whole package gets packed into an app archive, which also takes time to write.
Also your app gets transmitted to your device via USB or network (default would be USB).
Considering what is happening "under the hood", 30 seconds is quite fast.
However, I have found that performance depends less on CPU and RAM (at least if your dev machine has a decent amount of both) than on the performance of your hard disk.
If you really want to speed things up, consider running Visual Studio and doing your compiling on an NVMe drive (an SSD RAID might be an alternative).
For instance, I once had a Xamarin app with a lot of dependencies on various NuGet packages. Compiling the iOS version took about 25 minutes (full rebuild) on a Mac mini (2011 model, upgraded with an aftermarket Samsung 850 Pro); switching to a VM running on a Skull Canyon NUC equipped with a Samsung 950 Pro NVMe drive sped the process up to an incredible 2.5 minutes.

Load generator machine (Windows) reaching 100% CPU

I am running a 2,250-user test from an AWS Windows VM. Here are the details:
Windows
RAM: 32 GB
CPU: 8 cores
Once the test reaches 600 concurrent users, the CPU goes to 100% utilization. Actions taken to resolve this (using JMeter for the test):
Increased the heap size (HEAP=-Xms512m -Xmx12288m)
Removed listeners from the test
Ran the test in non-GUI mode
The load generator machine still reaches 100%. What would be the best solution to fix this issue?
First, check that you follow best practices in your test:
http://jmeter.apache.org/usermanual/best-practices.html
http://www.ubik-ingenierie.com/blog/jmeter_performance_tuning_tips/
Then it would be better to use a Linux machine instead of Windows, as it usually scales better.
Finally, try increasing the machine type to give it more CPUs.
The best option would be switching from a single machine with 32 GB of RAM to three machines with 12 GB each and running JMeter in distributed mode, as your test seems to be very CPU-intensive.
See What’s the Max Number of Users You Can Test on JMeter? for more comprehensive explanation of JMeter virtual users limits and what needs to be done to overcome them.

Visual Studio on Windows XP

I need to run a few Visual Studio instances on Windows XP, and they seem to take up a lot of memory. I am also running ReSharper, which is a memory hog.
I am running 32-bit XP. How much memory can I put into my machine before the OS hits its limit?
Also, are there any other ways of running multiple Visual Studio instances without such slow performance?
32-bit operating systems are limited to 4 GB of RAM, which may or may not be enough for you. Also, I think Windows shows only about 3 GB of RAM if you install 4 GB.
I suggest you switch to 64-bit and upgrade to 8 GB if you can.
UPDATE: See Jeff's blog post on the subject: Dude, Where's My 4 Gigabytes of RAM?
The maximum amount of memory that can be seen by 32bit WinXP is somewhere between 3 and 4 gigabytes depending on your chipset.
I have also run into issues running multiple instances of VS when I had ReSharper installed. The only things you can do are run 64-bit XP with more memory, or not use ReSharper (which is a bummer).
The 32-bit Windows kernel divides the 4 GB virtual address space into 2 GB/2 GB partitions. If you feed the /3GB switch to NTLDR, it will offer 1 GB of kernel space and 3 GB of user-mode space. Note that this does NOT imply that software on a 32-bit CPU can't take advantage of more than 4 GB of physical memory.
A workaround is a hardware-supported feature for accessing the remaining memory in banks or "windows", since the CPU still sees a maximum of 4 GB of addressable space at once. Some database and GIS software offers this possibility. This is called Physical Address Extension (PAE) and allows using (though not addressing at once) up to 64 GB via 36-bit physical addresses. Windows XP offers AWE, an API built on top of PAE.
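To make the banked-window idea concrete, here is a minimal sketch of the AWE pattern in C++ (the 64 MB size is an arbitrary example, and error handling is mostly omitted). It requires the "Lock pages in memory" right (SeLockMemoryPrivilege) and links against advapi32:

    // Sketch of the AWE pattern: reserve a virtual "window", allocate
    // physical pages, and map them in and out of the window on demand.
    #include <windows.h>
    #include <cstdio>

    int main() {
        // Enable SeLockMemoryPrivilege for this process.
        HANDLE token;
        OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &token);
        TOKEN_PRIVILEGES tp = {};
        tp.PrivilegeCount = 1;
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
        LookupPrivilegeValue(NULL, SE_LOCK_MEMORY_NAME, &tp.Privileges[0].Luid);
        AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL);

        SYSTEM_INFO si;
        GetSystemInfo(&si);
        ULONG_PTR numPages = (64u * 1024 * 1024) / si.dwPageSize; // 64 MB example

        // Allocate physical pages; with PAE these can live above 4 GB.
        ULONG_PTR* pfns = new ULONG_PTR[numPages];
        if (!AllocateUserPhysicalPages(GetCurrentProcess(), &numPages, pfns)) {
            std::printf("AllocateUserPhysicalPages failed: %lu\n", GetLastError());
            return 1;
        }

        // Reserve a virtual window and map the physical pages into it.
        void* window = VirtualAlloc(NULL, numPages * si.dwPageSize,
                                    MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);
        MapUserPhysicalPages(window, numPages, pfns);

        static_cast<char*>(window)[0] = 42;   // use it like ordinary memory

        // Unmap (NULL page array) before mapping a different bank here.
        MapUserPhysicalPages(window, numPages, NULL);
        FreeUserPhysicalPages(GetCurrentProcess(), &numPages, pfns);
        delete[] pfns;
        return 0;
    }

The application only ever addresses the fixed virtual window; swapping different sets of physical pages into that window is what lets a 32-bit process use memory beyond its own 4 GB address space.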
That's the theory. For using Visual Studio, you can get the full 4 GB for your system, or upgrade to a 64-bit OS with more RAM (the latter only if VS offers a 64-bit version).
"Also, any other ways of running multiple visual studio without such slow performance."
+1 trick: you should use a RAM disk to accelerate I/O.
If you're using a source-management system (e.g., Subversion), and hopefully you are, you can simply check out your projects there. VS.NET makes tons of I/O calls, and RAM disks are much faster than real disks.
CAUTION! If you turn off your computer, the RAM disk's contents disappear.
