Windows Azure: how to measure the execution time of code - windows

I want to measure the execution time of my code running on the Windows Azure cloud across multiple instances. Can anyone tell me how to do it?

You can enable diagnostics logging and put the instrumentation/logging from the app into an Azure Table. Then download it into Excel or whatever for analysis. You can also capture perfmon data for correlation (e.g. CPU vs. working set).
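If it helps, here is a minimal C# sketch of that approach, assuming the Azure Diagnostics trace listener is already configured for the role so Trace output lands in the diagnostics table; the TimedSection class and the "MyWorkload" label are just illustrative names:

using System;
using System.Diagnostics;

// A small disposable timer: wraps Stopwatch and writes the elapsed time
// through Trace, which Azure Diagnostics can ship to table storage.
public sealed class TimedSection : IDisposable
{
    private readonly string _label;
    private readonly Stopwatch _watch = Stopwatch.StartNew();

    public TimedSection(string label) => _label = label;

    public void Dispose()
    {
        _watch.Stop();
        // The machine name (or role instance id) lets you group timings
        // per instance when you analyze the table later.
        Trace.TraceInformation("{0} took {1} ms on {2}",
            _label, _watch.ElapsedMilliseconds, Environment.MachineName);
    }
}

// Usage:
// using (new TimedSection("MyWorkload"))
// {
//     DoWork(); // the code being measured
// }

Once the entries are in table storage you can download them and pivot per instance in Excel, as described above.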

You have to compute this manually.
Just in case you are requesting this to derive billing information, remember that in Windows Azure you are charged per reserved instance, not necessarily per "running" instance.
If your application is "suspended" you still pay. If your application is running but idling (say, 99% CPU free), you still pay the same.

Related

Unexplained memory usage on Azure Windows App Service Plan - Drill down missing

We have a memory problem with our Azure Windows App Service Plan (service level is P1v3 with 1 instance – this means 8 GB memory).
We are running two small .NET 6 App Services (some web APIs) on it that use custom containers, without problems.
They’re not in production and receive a very low number of requests.
However, when looking at the service plan’s memory usage in Diagnose and Solve Problems / Memory Analysis, we see an unexplained and stable 80% memory usage:
And the real problem occurs when we try to start a third app service on the plan. We get this "out of memory" error in our log stream:
ERROR - Site: app-name-dev - Unable to start container.
Error message: Docker API responded with status code=InternalServerError,
response={"message":"hcsshim::CreateComputeSystem xxxx:
The paging file is too small for this operation to complete."}
So it looks like Docker doesn’t have enough memory to start the container, maybe because of the 80% memory usage?
But our apps actually have very low memory needs. When running them locally on dev machines, we see about 50-150 MB memory usage (when no requests occur).
In Azure, the private bytes graph in “availability and performance” shows very moderate consumption for the bigger of the two apps:
Unfortunately, the “Memory drill down” is unavailable:
(needless to say, waiting hours doesn’t change the message…)
Even stranger, after stopping all App Services in the App Service Plan, the plan still shows a Memory Percentage of 60%.
Obviously some memory is being retained by something...
So the questions are:
Is it normal to have 60% memory percentage in an App Service Plan with no App Services running?
If not, could this be due to a memory leak in our app? But app services are run in supposedly isolated containers, so I'm not sure this is possible. Any other explanation is of course welcome :-)
Why can’t we access the memory drill down?
Any tips on the best way to fit "small" Docker containers with low memory usage in Azure App Service? (or maybe in another Azure resource type...). It's a bit frustrating to be able to use only 3 GB out of an 8 GB machine...
Further details:
First app is a .NET 6 based app, with its docker image based on aspnet:6.0-nanoserver-ltsc2022
Second app is also a .NET 6 based app, but has some windows DLL dependencies, and therefore is based on aspnet:6.0-windowsservercore-ltsc2022
Thanks in advance!
EDIT:
I added more details and changed the questions a bit since I was able to stop all app services tonight.
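For what it's worth, one way to cross-check the plan-level Memory Percentage against what each container actually sees is to expose the process's own counters from inside the .NET 6 apps. This is only a sketch, and the /debug/memory endpoint is a made-up name:

using System.Diagnostics;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Hypothetical diagnostic endpoint: reports the memory the process itself
// sees, for comparison with the App Service Plan's "Memory Percentage".
app.MapGet("/debug/memory", () =>
{
    var p = Process.GetCurrentProcess();
    return new
    {
        WorkingSetMB   = p.WorkingSet64 / (1024 * 1024),
        PrivateBytesMB = p.PrivateMemorySize64 / (1024 * 1024),
        GCHeapMB       = GC.GetTotalMemory(forceFullCollection: false) / (1024 * 1024)
    };
});

app.Run();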

How can I automatically shut down an Azure VM?

I want to create an Azure Virtual Machine that I only need to run for approximately 1 or 2 hours, once or twice per day. I don't want to pay for the server when I'm not using it.
I know I can just go to the dashboard and shut it off, but I often forget to do so (I'm getting senile!). I would like a timer to start when the system (Windows 10) is started, and when the timer reaches zero, the image is made inactive (no charges incurred) unless I request more time.
Any ideas would be appreciated.
The dashboard has an automatic shutdown feature. This can also be configured to send a webhook notification with a link which will delay the shutdown. Currently in-machine notifications are not supported.
Although there are many tools like PowerShell scripts and apps that you could use from within the VM to trigger automatic shutdown, there are some billing gotchas to be aware of.
With an Azure VM, you pay per second and stop paying only when the machine is completely deallocated and no longer reserving memory and cores on the platform. (There is still a nominal charge for the VM's storage.) You cannot deallocate the VM from within itself.
To ensure the VM isn't incurring charges, check that its status is 'Stopped (deallocated)'.
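If you prefer to script the deallocation yourself (for example from an Azure Function or a scheduled job running outside the VM), a sketch using the Azure.ResourceManager.Compute SDK could look roughly like this; the subscription, resource group, and VM names are placeholders, and the exact method names should be checked against the SDK version you use:

using Azure;
using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.Compute;

// Connect with a credential that has permission to stop the VM.
var armClient = new ArmClient(new DefaultAzureCredential());

// Build the resource id from placeholder names.
var vmId = VirtualMachineResource.CreateResourceIdentifier(
    "<subscription-id>", "<resource-group>", "<vm-name>");
VirtualMachineResource vm = armClient.GetVirtualMachineResource(vmId);

// Deallocate (not just power off) so compute charges stop accruing.
await vm.DeallocateAsync(WaitUntil.Completed);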

Which local machine components could affect an RDP session performance-wise?

I've got the following totally reproducible scenario, which I'm unable to understand:
There is a very simple application which does nothing other than call CreateObject("Word.Application"), i.e. it creates an instance of MS Word for COM interop. This application is located on a Windows Terminal Server. The test case is to connect via RDP, execute the application, and the application will output the time taken for the CreateObject call.
The problem is that the execution time is significantly longer if I connect from one specific notebook (an HP Spectre): it takes 1.7 s (+/- 0.1 s).
If I connect from any other machine (notebook or desktop computer), the execution time is between 0.2 and 0.4 s.
The execution times don't depend on the RDP account used, the screen resolution, or local printers. I even did a fresh install of Windows on the HP notebook to rule out any other side effects. It doesn't matter whether the HP notebook is connected via WLAN or a USB network card. I'm at a loss to understand the 4x to 8x difference in execution time compared to any other machine.
Which reason (component/setting) could explain this big difference in execution time?
Some additional information: I tried debugging the process using an API monitor and could see that >90% of the execution time is actually being spent between a call to RpcSend and RpcReceive. Unfortunately I can't make sense of this information.
It could be that credential management is somehow getting in the way.
Open the .rdp file with Notepad and add
enablecredsspsupport:i:0
This setting determines whether RDP will use the Credential Security Support Provider (CredSSP) for authentication if it is available.
Related documentation
https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/ff393716%28v%3dws.10%29
Given your observation that the time is spent between RpcSend and RpcReceive, it could be that some COM-related service is stopped on your client machine, such as the DCOM Server Process Launcher or another service with "COM" or "transaction" in its name.
Services set to Manual start may be started on demand by the system when your request is made, and starting a service adds a delay.
I suggest opening Computer Management -> Services (or Run -> services.msc) and comparing the COM-related services running on your "slow" client with those on your "fast" clients; try setting them to Automatic instead of Manual or Disabled.
Also, try running API Monitor against those processes to pinpoint the time-consuming spot more precisely.
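To make that comparison easier, here is a small sketch that dumps COM/transaction-related services and their states so the output from the "slow" and "fast" clients can be diffed; matching on the display name is only a heuristic:

using System;
using System.Linq;
using System.ServiceProcess; // System.ServiceProcess.ServiceController package on modern .NET

// List services whose display name mentions COM or transaction,
// together with their current status.
var services = ServiceController.GetServices()
    .Where(s => s.DisplayName.Contains("COM", StringComparison.OrdinalIgnoreCase)
             || s.DisplayName.Contains("transaction", StringComparison.OrdinalIgnoreCase))
    .OrderBy(s => s.ServiceName);

foreach (var s in services)
    Console.WriteLine($"{s.ServiceName,-35} {s.Status}");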

Windows "system service", not "web service" performance

We have an image processing workflow product. Typically 10,000-100,000 images can be run through our processing in a job. More than one job may be pending.
Currently, all the image processing is performed in our home-grown imaging library, a managed C++ library that is .NET compatible. It runs in the user’s application space. What I mean by that is that if you log on as “PeteSmith”, the images will be processed under Pete Smith’s account.
Currently, we only allow one instance of this image processing at a time. Customers are asking us for a new version, one that allows more than one instance to run at the same time, so the question of how we do this is now something we are examining.
The idea of getting the processing off the “user's account” and using a “system account” to do the processing in the background is appealing, because of the way Windows services are naturally managed by OS events such as logging in and logging out, and by other system resource utilization alarms.
It appears to me that all we would need to do is manage a small number of well-defined events, well documented by Microsoft.
That’s all nice and wonderful. But what I need to understand is what moving our image processing code to a service implementation means for performance, from our customers’ point of view.
In their view, they need more images processed, faster.
QUESTION: How should I think about the trade-offs?
1) Using a service to run a job vs. running N different “instances” of the software only on Pete Smith’s (the user's) account?
2) Allowing N services to run N different jobs (no cross-talk needed) vs. running N different “instances” of the software only on Pete Smith’s (the user's) account?
Well, the image processing requires a certain amount of CPU and IO resources for the processing. That amount does not change by fiddling with how and where you start your process.
The decision between a service or not should be governed by the required usage pattern. If you want the application to go on processing images automatically regardless of whether anyone is logged on, you should run it as a service; but if the usage is more of a "choose an image, start processing and wait for the result" style, you should go for a client app.
It is not entirely clear why your customer wants to run multiple instances. Is it because they want to have one instance do the heavy processing work while they configure the processing for the other? Or do they want to run multiple instances because the processing is heavy and they want to run several in parallel?
In both cases I would consider running the calculation(s) on a background thread in the application. If it is not possible to use threads (maybe due to some global shared state in the library), my second-best bet would be to start each processing job in a new process and wait for the result in the main process.
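As a rough illustration of the background-processing option (Job, ProcessJob and JobRunner are illustrative stand-ins for the real imaging-library call, not your API):

using System.Collections.Generic;
using System.Threading.Tasks;

// Run several jobs in parallel inside one application, bounding the degree
// of parallelism so the CPU/IO budget isn't oversubscribed.
public record Job(string Name, IReadOnlyList<string> ImagePaths);

public static class JobRunner
{
    public static Task RunAllAsync(IEnumerable<Job> jobs, int maxParallelJobs) =>
        Parallel.ForEachAsync(
            jobs,
            new ParallelOptions { MaxDegreeOfParallelism = maxParallelJobs },
            (job, _) =>
            {
                ProcessJob(job);              // CPU-bound work on a pool thread
                return ValueTask.CompletedTask;
            });

    private static void ProcessJob(Job job)
    {
        foreach (var path in job.ImagePaths)
        {
            // Call into the imaging library for each image here.
        }
    }
}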

CPU Utilization per process in Win32 API

I am doing a project on a centralized LAN management system. I need to know how much CPU each process on a remote PC is consuming (as in Task Manager), so that the network admin can close a few processes in case the CPU utilization of a system on the network goes beyond acceptable rates.
I would like to know if there is a Win32 API for this requirement of mine, and if so, I'd appreciate any information about it.
Thank you in advance.
The Win32 API has lots of functions to find all kinds of information about currently running processes and threads; here's a link to the full list: http://msdn.microsoft.com/en-us/library/ms683223(VS.85).aspx
Explore the list and you should be able to find the function(s) that meet your requirements. For example, GetProcessTimes() returns structures containing the amounts of time the process has executed in kernel mode, in user mode, etc.
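For example, here is a sketch that approximates Task Manager's per-process CPU column by sampling the process's total (kernel + user) CPU time twice over a short interval, here through System.Diagnostics.Process.TotalProcessorTime rather than calling GetProcessTimes() directly:

using System;
using System.Diagnostics;
using System.Threading;

Console.WriteLine(SampleCpuPercent(1234, TimeSpan.FromSeconds(1)));  // 1234 = example PID

// Approximate a per-process CPU % by sampling total processor time twice.
static double SampleCpuPercent(int pid, TimeSpan interval)
{
    using var process = Process.GetProcessById(pid);
    TimeSpan before = process.TotalProcessorTime;

    Thread.Sleep(interval);
    process.Refresh();
    TimeSpan after = process.TotalProcessorTime;

    double cpuMs = (after - before).TotalMilliseconds;
    // Normalize by elapsed wall time and logical core count so the value
    // is on the same 0-100 % scale Task Manager uses.
    return cpuMs / interval.TotalMilliseconds / Environment.ProcessorCount * 100.0;
}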
You need to look at the performance monitor system. You can get the stats from there (in the Process counter category).
Here's a (Delphi) explanation of it that's pretty good and simple to understand.
When you understand how it all works, you then need the Performance Counters API to read the data counters.
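A sketch of the performance-counter route (Windows only; "notepad" is just an example instance name, and the instance name for the Process category is the executable name without the .exe extension):

using System;
using System.Diagnostics;
using System.Threading;

// Read the Process\% Processor Time counter for one process instance.
using var counter = new PerformanceCounter("Process", "% Processor Time", "notepad", readOnly: true);

counter.NextValue();   // the first read only establishes a baseline and returns 0
Thread.Sleep(1000);    // counters need a sampling interval between reads
double raw = counter.NextValue();

// The raw value can reach 100 % per core, so divide by the logical core
// count to get the familiar 0-100 % figure.
Console.WriteLine($"CPU: {raw / Environment.ProcessorCount:F1} %");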
