Get AppFabric Cache out of throttled mode

Hosting an AppFabric cache on the same machine as a SQL Server instance presents some known challenges, one of them being that SQL Server takes up most of the RAM, thereby putting the cache in throttled mode.
When this occurs and I have freed up enough memory, how do I put the cache back into a non-throttled state? I can't seem to find a PowerShell command that fits my need.
Yes, I know it's bad practice to host the two on the same machine, but those are the terms.

If the cache service detects that there is enough memory to accept adds/puts, it will come out of the throttled state on its own; this is not controlled by the user.
Here is the complete answer: http://msdn.microsoft.com/en-us/library/ff921030.aspx
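If you would rather force it than wait, restarting the affected cache host also clears the throttled state, at the cost of flushing that host's data. A minimal PowerShell sketch using the standard AppFabric Caching cmdlets (the host name is a placeholder; 22233 is the default cache port):

# Import the AppFabric Caching administration cmdlets
Import-Module DistributedCacheAdministration
Use-CacheCluster

# See whether any host reports being throttled
Get-CacheClusterHealth

# Restart one cache host; this flushes the data it holds
Stop-CacheHost -HostName "CacheServer01" -CachePort 22233
Start-CacheHost -HostName "CacheServer01" -CachePort 22233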

Postgres constant 30% CPU usage

I recently migrated my Postgres database from Windows to CentOS 6.7.
On Windows the database never used much CPU, but on Linux I see it using a constant ~30% CPU (as reported by top; the machine has 4 cores).
Does anyone know if this is normal, or why it would be doing this?
The application seems to run fine, as fast as or faster than on Windows.
Note: it is a big installation, 100 GB+ of data across 1000+ databases.
I tried using pgAdmin to monitor the server status, but the server status page hangs and fails to run with the error "the log_filename parameter must be equal".
With 1000 databases I expect the autovacuum workers and the stats collector to spend a lot of time checking what needs maintenance.
I suggest you do two things:
raise the autovacuum_naptime parameter to reduce the frequency of checks
put the stats_temp_directory on a ramdisk
You probably also set a high max_connections limit to allow your clients to use that high number of databases, and this is another probable source of CPU load, due to the high number of 'slots' that have to be checked every time a backend synchronizes with the others.
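As a sketch, the relevant postgresql.conf entries might look like this (the values and the ramdisk path are illustrative, not recommendations):

# postgresql.conf -- illustrative values only
autovacuum_naptime = '5min'                 # default is 1min; each cycle visits all 1000+ databases
stats_temp_directory = '/mnt/pg_stats_tmp'  # point this at a tmpfs/ramdisk mount
max_connections = 100                       # keep as low as a connection pooler allows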
There could be multiple reasons for increasing server loads.
If you are looking for query-level load on the server, you can match a specific Postgres backend ID to a system process ID using the pg_stat_activity system view.
SELECT pid, datname, usename, query FROM pg_stat_activity;
Once you know what queries are running, you can investigate further (EXPLAIN/EXPLAIN ANALYZE; check locks, etc.).
You may have lock contention issues, probably due to a very high max_connections. Consider lowering max_connections and using a connection pooler if this is the case. But that can increase turnaround time for client connections.
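For example, a common diagnostic query for spotting lock contention (not from the original answer; works on any reasonably recent Postgres):

-- List lock requests that have not been granted, with the waiting query
SELECT l.pid, l.locktype, l.mode, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE NOT l.granted;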
It might be that Windows was blocking connections and not letting them use the system, and now Linux is allowing its connections to use the CPU and perform faster. :P
Also worth reading:
How to monitor PostgreSQL
Monitoring CPU and memory usage from Postgres

Monitoring I/O requests

One of my Railo web applications generates too many I/O requests.
Since it's hosted on an Amazon EC2 instance, that directly drives up my bill because of EBS disk activity (hundreds of millions of operations).
How can I monitor I/O requests? The perfect tool would allow me to find which template/component makes intensive I/O.
I'm already using FusionReactor and that's great for profiling memory spaces and so on, but it doesn't have anything for I/O.
You could start out by using the operating system's monitoring tools to see whether you mainly have reads or writes. The next step is looking at memory: despite this presenting as a disk I/O issue, your servers may be low on memory and thrashing the drives as they swap pages in and out.
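For the first step, standard Linux tools are enough (assuming a Linux EC2 instance; iostat comes with the sysstat package):

# Per-device read/write rates, refreshed every 5 seconds
iostat -dxk 5
# Per-process I/O, to confirm the Railo/JVM process is the culprit
iotop -o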
If you have not done so already, turn on the template cache; this will stop Railo from checking the file system on every page request (provided you have the memory).
If you have plenty of memory (both for your OS and for the JVM) and template caching is on, start looking for your busy pages in FusionReactor, and check for cffile, cfdirectory and other file-system tags in those pages... good luck.
Use of query-of-queries is also often a culprit in high disk I/O: internally a database is used which, if I remember correctly, pages large result sets to disk.

How to reduce memory usage on a Windows Azure shared website?

I have a site hosted on Windows Azure shared websites. It just got suspended for going over memory usage limit of 512MB/hour.
I do use .NET caching rather heavily (to prevent repeated calls to the database/external APIs, etc...).
Is that caching a no-no in shared websites on Windows Azure?
Do you use System.Runtime.Caching? You should be able to limit the amount of memory that e.g. the MemoryCache object uses. See http://msdn.microsoft.com/en-us/library/dd941874.aspx for more information.
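For instance, the default MemoryCache can be capped from web.config (a sketch; the limits shown are arbitrary):

<system.runtime.caching>
  <memoryCache>
    <namedCaches>
      <add name="Default"
           cacheMemoryLimitMegabytes="100"
           physicalMemoryLimitPercentage="40"
           pollingInterval="00:02:00" />
    </namedCaches>
  </memoryCache>
</system.runtime.caching>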
Even if you stop using the cache yourself, it can still be used by the framework and libraries. I have the same problem (interestingly, in free mode the memory limit is 1024 MB, but in shared mode it is lowered to 512 MB).
From what I can see, the memory figure Azure shows on the portal is very close to the System.Diagnostics.Process.GetCurrentProcess().PrivateMemorySize value.
At the moment I'm experimenting with caching settings to cap the maximum memory:
<system.web>
  <caching>
    <cache privateBytesLimit="250000000" privateBytesPollTime="00:00:15"/>
  </caching>
</system.web>
Several days ago I set 300 MB, but a few minutes ago I got suspended again :(, so I'm lowering it to 250 MB.
But anyway, this is a very unclear, strange and "wrong" solution IMHO.
UPDATE
Got suspended again this morning. Temporarily converted to standard mode with a small instance (1.7 GB RAM).
My WorkingSet counter is now about 200 MB (with a PeakWorkingSet of 330 MB). BUT! The GC's CollectionCount has increased roughly 8x (Gen0 is at 1800 collections instead of 250 for less than a day).
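For reference, a minimal sketch of how these counters can be read with standard .NET APIs (not from the original post):

using System;
using System.Diagnostics;

class MemoryProbe
{
    static void Main()
    {
        var p = Process.GetCurrentProcess();
        // The Azure portal figure appears to track private bytes
        Console.WriteLine("PrivateMemorySize64: {0} MB", p.PrivateMemorySize64 / (1024 * 1024));
        Console.WriteLine("WorkingSet64:        {0} MB", p.WorkingSet64 / (1024 * 1024));
        Console.WriteLine("PeakWorkingSet64:    {0} MB", p.PeakWorkingSet64 / (1024 * 1024));

        // GC collection counts per generation
        for (int gen = 0; gen <= GC.MaxGeneration; gen++)
            Console.WriteLine("Gen{0} collections: {1}", gen, GC.CollectionCount(gen));
    }
}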
My current theory is that in "shared" mode websites run inside a "big" VM with a lot of memory, so the garbage collector simply doesn't need to run often, leading to longer "garbage life" and more memory consumption.
I have no access to my developer machine right now for verification, but I'm planning to convert the site to a web role in a cloud service ASAP, with an extra-small instance (the cost is comparable to a shared website)...
It might be worth profiling with perfmon on your local machine first, to see whether it hits the limits under normal load; then look at configuring logging on Azure and digging through that.
Also, ensuring everything is precompiled, and that you're not loading modules you don't need, can really affect performance on Azure.
I think what you might want to try here is to scale out instead of up. If you add a second instance, that will double your resource limit.

Is there a memory limit for w3wp.exe?

Is there a maximum memory size for w3wp.exe? Mine is getting up to about 2.5-3 GB and then seems to crash/reset.
Per the "GIVEN" details below, I set up some counters and noticed that w3wp.exe will service HTTP requests, then reset to 0, with the w3wp.exe process crashing (changing PIDs). As a result, REQUESTS_QUEUED and ACTIVE_REQUESTS grow large, causing delays in processing until w3wp.exe can restart itself. It's doing this every 3-4 minutes, most likely due to heavy system volume during peak load. But I am not sure whether it's a memory issue or not.
I see tons of warnings in my web server (IIS) log:
A process serving application pool 'MyApplication' suffered a fatal
communication error with the Windows Process Activation Service. The
process id was '1732'. The data field contains the error number.
RESULT: Customers are reporting sporadic response times for HTTP requests.
Can I increase this memory limit or reconfigure IIS to handle increased load?
GIVEN:
The system has been passed down to me, so there may be gaps in the IIS configuration, etc.
Database: SQL Server 2008 R2
Web servers: Windows Server 2008 R2 Enterprise SP1 (64-bit, 64 GB RAM)
IIS 7.5
Using MVC4 Web API with MemoryCache aggressively for Model and Business Objects, with eviction set to 2 hrs (see the sketch after this list)
Looked at the logs but really don't see anything significantly relevant
One application pool; no other LOB applications running on this server
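As a sketch of the caching pattern described above (the key and object are hypothetical):

using System;
using System.Runtime.Caching;

class CacheExample
{
    static void CacheBusinessObject(string key, object model)
    {
        // Aggressive caching with a 2-hour absolute eviction, as described above
        MemoryCache.Default.Set(key, model, new CacheItemPolicy
        {
            AbsoluteExpiration = DateTimeOffset.Now.AddHours(2)
        });
    }
}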
Is the application pool set to run in 32-bit mode? That can cause memory issues even if you have plenty of RAM. On a 64-bit system, the memory limit for a 32-bit process is 4 GB.
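A quick way to check and fix this with the standard IIS appcmd tool (the pool name is the one from the question):

%windir%\system32\inetsrv\appcmd list apppool "MyApplication" /text:enable32BitAppOnWin64
rem If it prints true, switch the pool back to 64-bit:
%windir%\system32\inetsrv\appcmd set apppool "MyApplication" /enable32BitAppOnWin64:false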
Actually, after solving the root cause, in which overuse of MemoryCache was crashing the w3wp.exe process, I can safely say that an MVC4 Web API service can grow up to 20 GB from a baseline of 3 GB (64-bit machine and application pool). At least that was the last level I saw before the eviction policy started cleaning things up. Probably a bit excessive in footprint, but the application is very fast, returning machine-learning-targeted content in sub-100 ms.

What is named.exe process and how to avoid consuming high CPU rates

I have a Windows Server 2008 machine with Plesk running two websites.
Sometimes the server goes slow, and there is a named.exe process pushing the CPU to a 100% peak.
It lasts a short period of time, and after a while it happens again.
I would like to know what this process is for, and how to configure it so that it does not consume this much CPU and slow my sites down.
This must be the DNS service, also known as BIND. High CPU usage may indicate one of the following:
DNS is re-reading its configuration. In this case, high CPU usage will be aligned with your activities in Plesk, i.e. adding and removing domains.
Someone (normally another DNS server) is pulling data from your DNS server. That is a normal process, and since, as you say, it lasts only a short period of time, it doesn't look like a DNS DDoS.
AFAIK there is no default way in Windows to restrict software from taking 100% CPU when no other apps need the CPU at that moment.
See "DNS Treewalk Suite" system, off the process, and uses the antivirus.
Check the error "log" in the system.
