I was not able to find a setting for GCF to configure CPU speed. The pricing calculator shows several options for the environment a function runs on, and as I understand it, the least performant one (800MHz?) is used by default. So I wonder: is there any option in the Cloud Console, or during function deployment, to set the CPU speed?
The memory size and CPU speed are connected. The details are on the pricing page:
https://cloud.google.com/functions/pricing
So, if you allocate 1024MB of memory to your function, you'll get a 1.4GHz CPU.
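In other words, there is no separate CPU flag; you pick the CPU tier indirectly by choosing a memory allocation at deploy time. A minimal sketch with the gcloud CLI (the function name and trigger are placeholders for your own):

gcloud functions deploy my-function --trigger-http --memory=1024MB

The equivalent setting in the Cloud Console is the function's memory allocation dropdown.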
I want to show custom metrics like CPU type and CPU utilization in Stackdriver Monitoring. Is there a monitoring API available for this?
Please let me know any suggestions on this.
If you are looking for these metrics from a non-GCP resource (AWS, an on-premises server or VM, etc.), you can use BindPlane, a product from the GCP Marketplace.
It has a source collector for host info that gathers things like CPU utilization, CPU temperature, CPU voltage, disk, memory, network performance, process info, free space on a file store, etc.
Here are the docs; it looks like it's in Alpha: https://docs.bindplane.bluemedora.com/docs/host
My HTTP server can't handle load tests: it shows really high latency when multiple connections are made.
Server Configuration:
5 instances of (CPU 0.5vCore, Memory 512MB, Disk 20GB)
A load balancer
10G shared bandwidth
When I transfer a 3.5 MB zip it takes about 1 second over a single connection. With more than 30 concurrent connections, however, it goes up to 20-50 seconds.
I am testing with JMeter on my laptop. Is there a possibility that my testing environment interferes with the load-testing?
If so, what would be a solution to improve my testing environment?
First of all, you need to monitor and pin down the problem(s).
Start off by gathering information on these four layers:
CPU Usage
Memory Usage
Network Usage
I/O Usage
All of them on the OS layer. (Monitoring tools will vary depending on your OS).
Once you have this data you can narrow down the problem (CPU bound, network latency, I/O latency, or whatever), and an answer will emerge. If this is the first time you are load testing your app, doing this will also give you scaling information about your environment and your application in general. And yes, the test rig itself can interfere: 30 parallel downloads of a 3.5 MB file is roughly 105 MB on the wire, which alone takes over 8 seconds on a typical 100 Mbps laptop link, so consider driving JMeter from a separate, well-connected machine.
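If you want one lightweight way to sample all four layers from a single script while the test runs, here is a minimal sketch using Python with the third-party psutil library (the one-second interval and the chosen fields are arbitrary, not requirements):

import time
import psutil  # third-party: pip install psutil

# Baseline for the cumulative network and disk counters.
net_prev = psutil.net_io_counters()
disk_prev = psutil.disk_io_counters()

while True:
    time.sleep(1)
    net_now = psutil.net_io_counters()
    disk_now = psutil.disk_io_counters()
    print("cpu=%5.1f%%  mem=%5.1f%%  net_out=%9d B/s  disk_write=%9d B/s" % (
        psutil.cpu_percent(),                      # overall CPU utilization
        psutil.virtual_memory().percent,           # RAM in use
        net_now.bytes_sent - net_prev.bytes_sent,  # network send rate
        disk_now.write_bytes - disk_prev.write_bytes))  # disk write rate
    net_prev, disk_prev = net_now, disk_now

Run it on the server side during the JMeter run and watch which column saturates first.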
I have a site hosted on Windows Azure shared websites. It just got suspended for going over the memory usage limit of 512MB/hour.
I use .NET caching rather heavily (to prevent repeated calls to the database, external APIs, etc.).
Is caching a no-no in shared websites on Windows Azure?
Do you use System.Runtime.Caching? You should be able to limit the amount of memory that, e.g., the MemoryCache object uses. See http://msdn.microsoft.com/en-us/library/dd941874.aspx for more information.
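If it is MemoryCache, its limit can also be set declaratively in web.config; a minimal sketch, where the 100 MB cap and the two-minute polling interval are illustrative values only:

<configuration>
  <system.runtime.caching>
    <memoryCache>
      <namedCaches>
        <!-- cap the default MemoryCache and re-check the limit every 2 minutes -->
        <add name="Default" cacheMemoryLimitMegabytes="100" pollingInterval="00:02:00"/>
      </namedCaches>
    </memoryCache>
  </system.runtime.caching>
</configuration>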
Even if you stop using the Cache yourself, it can still be used by the framework and libraries. I have the same problem (interestingly, in free mode the memory limit is 1024MB, but in shared mode it is lowered to 512MB).
As far as I can see, the memory figure Azure shows in the portal is very close to the System.Diagnostics.Process.GetCurrentProcess().PrivateMemorySize value.
At the moment I'm experimenting with caching settings to cap the maximum memory:
<system.web>
  <caching>
    <cache privateBytesLimit="250000000" privateBytesPollTime="00:00:15"/>
  </caching>
</system.web>
Several days ago I set it to 300MB, but a few minutes ago I got suspended again :(, so I'm lowering it to 250MB.
Anyway, this is a very unclear, strange, and "wrong" solution, IMHO.
UPDATE
Got suspended again this morning. Temporarily converted to standard mode with small instance (1.7 GB RAM).
My WorkingSet counter is now about 200 MB (with a PeakWorkingSet of 330 MB). BUT! The GC's CollectionCount has increased roughly 8x (Gen0 is at 1800 collections instead of 250, in less than a day).
My current theory is that in "shared" mode, websites run inside a "big" VM with a lot of memory, so the garbage collector simply doesn't need to run often, which leads to a longer "garbage lifetime" and higher memory consumption.
I have no access to my developer machine right now to verify this, but I'm planning to convert the site to a web role in a cloud service ASAP, with an extra small instance (the cost is comparable to a shared website)...
It might be worth profiling with perfmon on your local machine first to see whether it hits the limits under normal use, then look at configuring logging on Azure and digging through that as well.
Also, ensuring everything is precompiled, and that you're not loading modules you don't need, can really affect performance on Azure.
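For the precompile step, one option (a sketch; the virtual path and directories are placeholders) is the aspnet_compiler tool that ships with the .NET Framework:

aspnet_compiler -v / -p C:\dev\MySite C:\dev\MySite.Precompiled

You then deploy the precompiled output directory instead of the raw sources.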
I think what you might want to try here is scaling out instead of up. If you add a second instance, that will double your resource limit.
I am running a mobile website that shows the live running status of any train in India: http://www.spoturtrain.com. The code is written in PHP, with Nginx as the web server and php-fpm as the application server; all PHP requests are proxied to the app server. During peak traffic hours in the morning the system load shoots up to 4, but the CPU% and memory usage stay low. Please take a look at the snapshot of the top command on the server.
The %CPU displayed in the bottom section is per-thread, i.e. the percentage of one CPU core used by the indicated thread. The CPU(s) summary section shows the total amount of available CPU being utilized, so one thread can report 100% CPU while only 25% (on 4 cores) or 12.5% (on 8 cores) of the overall CPU cycles are being consumed.
Analyzing thread CPU usage on Linux
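To see the per-thread breakdown yourself, press H inside top to toggle the thread view, or start it scoped to one process (the PID is a placeholder for your php-fpm worker):

top -H -p <pid>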
You don't really ask a question, so it's hard to tell whether you want advice or just an explanation of the numbers. As @Charles states, a typical "acceptable" load is 1 per CPU core before noticeable performance degradation occurs, but with PHP running on most web servers you may (though in most cases won't) start noticing problems at anything above 1. Whether you do will largely depend on your disk and network I/O.
Whether the performance is acceptable for your application isn't something I can answer, but you can take a look at this thread for more entry points into the options for getting your web server to thread requests:
What is thread safe or non thread safe in PHP
Whether or not you can do anything about it depends on your hosting situation.
I am looking for a performance-monitoring tool for my application that collects and visualizes, in real time, CPU and cache usage on a single Linux box such as an IBM System or HP ProLiant with a typical configuration of 8 processors / 80 cores.
The application is home-grown multithreaded C++ code that uses OpenMP.
The tool does not need to run 24 hours a day, and it does not need e-mail notification.
I will run it just before sending commands to my apps; the apps then execute the command (which takes a few minutes at most). During this time interval I need to analyze:
- usage of cores
- data movement between processors
- usage of L1, L2, L3 caches
- some other metrics (help me here) that can help find bottlenecks in application performance and resource utilization
I guess that tools like Nagios / Zabbix are too heavy for this task.
On the other hand, using command-line tools like "top" and "sar" across 80 cores is not very convenient, and plotting (not necessarily real-time) would be nice to have...
While getting per-core usage is rather easy, the other values may prove impractical to obtain, at least without running the application inside a profiler of some sort.
Measuring QPI utilization is highly non-trivial, if possible at all. Intel's VTune might be able to acquire such data, but only when running instrumented versions of your binaries.
Also, on x86 there is no way to determine L1/L2/L3 usage as such; you can, however, read the low-level CPU counters to measure cache misses (though that would probably require instrumented/profiled binaries, and always with something like VTune or PAPI).
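On Linux, one quick route that does not require instrumenting the binary (a different approach from the VTune/PAPI one above; exact event names vary by CPU and kernel) is the perf tool:

perf stat -e cycles,instructions,cache-references,cache-misses ./my_app

Here ./my_app stands in for whatever command launches the OpenMP application.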
You could "easily" set something up to pull all the lower-level performance counters into SNMP and grab the SNMP values with standard SNMP-capable monitoring tools, but be aware that SNMP polling is something you don't want to happen more than once or twice per second at most. Alternatively, pull that info into something like collectd.
I also have the impression that you misunderstand the problem domain of monitoring tools. They are not meant to be used as low-level analysis probes for finding application/system bottlenecks; at best you get hints about which resource (from a 10,000-foot view) is running at full utilization. Monitoring and alerting tools are what operations staff use to understand which parts of their IT system are in use and how, to gather historical data, to predict future resource utilization, and to be alerted when something breaks.
SiteScope, Hyperic or any combination of shell scripts, native OS utilities and a DB to store the results may do the job.
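For the per-core usage plus plotting part specifically, a minimal do-it-yourself sketch (it assumes the third-party psutil package; file names are illustrative) is to log per-core utilization to CSV for the few minutes the command runs and plot it afterwards with gnuplot, a spreadsheet, or matplotlib:

import csv
import sys
import time
import psutil  # third-party: pip install psutil

# One row per second: timestamp, then the utilization of each logical core.
writer = csv.writer(sys.stdout)
writer.writerow(["time"] + ["core%d" % i for i in range(psutil.cpu_count())])
while True:
    per_core = psutil.cpu_percent(interval=1, percpu=True)  # blocks for ~1s
    writer.writerow(["%.1f" % time.time()] + per_core)

Run it as python corelog.py > run1.csv while the command executes and stop it with Ctrl-C when done.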