Allocation of SGA size in Oracle 12c

What should the SGA size be relative to the total RAM available on the system? I found that for SQL Server the recommendation is to allocate 4 GB or 10% of available RAM (whichever is higher).

It depends on how much memory you are going to assign to the different components of the SGA, such as the buffer cache and the shared pool, and on whether you are going to use dedicated server mode or shared server mode.
You should also consider whether other applications will be running on the same server.
Asktom: SGA Size
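For illustration only, here is a minimal sizing sketch, assuming a dedicated database server with 32 GB of RAM where roughly 60% goes to the SGA and headroom is left for the PGA and the operating system (the figures are assumptions, not fixed rules):
-- Illustrative sizes for a dedicated 32 GB server; adjust to your workload.
ALTER SYSTEM SET sga_max_size = 20G SCOPE=SPFILE;
ALTER SYSTEM SET sga_target = 20G SCOPE=SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 6G SCOPE=SPFILE;
-- A restart is required for sga_max_size to take effect; afterwards,
-- SHOW PARAMETER sga in SQL*Plus confirms the new values.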

Related

Is server memory and server cache the same?

I read somewhere that "a modern server has 144 GB of RAM"; is all of that 144 GB used as cache?
When we talk about a server's cache, does that mean the server's memory?
It all depends on the caching method used by the applications that run on the server. There are numerous caching methods, but two frequently used ones are persistent caching and in-memory caching.
With persistent caching, the application stores cached values somewhere intended to be permanent, such as the file system or a database.
With in-memory caching, the application uses memory (i.e. RAM, the 144 GB in your question) to store data. With this method, the data is intended to be temporary and does not persist across reboots, application recycles, or the like.
If, when coding, you allocate a new object, dictionary, list, or similar, those objects are stored in memory. Additionally, not all of a server's memory is available to the applications that run on it: the operating system and every other installed process share the same RAM. It is therefore common for a machine with 4 GB of RAM to have only about 2 GB realistically usable, with the rest consumed by the operating system. Of course, the exact numbers depend on many factors.
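As a quick illustration of that last point, on a Linux server you can compare the total memory with what is realistically left for applications (standard procps commands; output format varies by distribution):
# Total vs. "available" memory; the difference is what the kernel,
# system services and other processes are already holding.
free -h
# Rough per-process view of resident memory, largest consumers first.
ps -eo pid,comm,rss --sort=-rss | head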

Creating Elasticsearch cluster from three servers

We have three physical servers. Each server has 2 CPUs (32 cores), 96 TB HDD, and 768 GB RAM. We would like to use these servers in an Elasticsearch cluster.
Each server will be located in a different data center, with the servers connected over private links.
How can we optimize our configuration for high performance? Also, how should we best run Elasticsearch on these machines? For example, should we use virtualization to create multiple nodes per machine, or not?
As you have a huge amount of RAM (768 GB) available on each physical server, and according to the Elasticsearch documentation on heap sizing the JVM heap should not exceed ~32 GB, you will have to use virtualization (or run several nodes per host) to make better use of your infrastructure.
Apart from that, there are various cluster settings and node settings you can optimize, but as you have not provided them it is difficult to make recommendations about them.
Another thing to note is that you have a lot of RAM and disk but comparatively little CPU; if you can increase the CPU as well, that would help.
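As a sketch of the heap part of that advice, assuming a recent Elasticsearch release that reads config/jvm.options (older releases used the ES_HEAP_SIZE environment variable instead), each node on a host would get something like:
# config/jvm.options on each node: keep the heap at or below ~31 GB so the
# JVM can still use compressed object pointers, and leave the rest of the
# host's RAM to the filesystem cache, which Elasticsearch relies on heavily.
-Xms31g
-Xmx31g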

Limiting memory for varnish process

On Varnish 4.1, when I use RAM as the storage backend for caching, the server's RAM slowly fills up as requests come in. Once it is completely full, the server crashes,
and then Varnish starts caching in RAM again.
I set the following variables in the systemd service file for varnish.service,
but it still shows the same behaviour and crashes again:
LimitMEMLOCK=14336
MemoryLimit=13G
MemoryHigh=13G
MemoryMax=13G
How can I set a specific memory limit that Varnish cannot exceed?
#Version used:
Varnish 4.1
#Operating System and version:
Ubuntu 16.04
#Source of binary packages used (if any)
Installed from the official Ubuntu packages
You will have to limit both the malloc storage and the Transient storage,
i.e. with startup parameters such as -s malloc,3GB -s Transient=malloc,1GB
In general, the RAM allocated to Varnish should not exceed 80% of the total RAM available on the system.
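As a sketch of how that could look on Ubuntu 16.04, you could create a systemd override with systemctl edit varnish and restate the distribution's varnishd command line with explicit storage limits (addresses, paths, and the 10 GB/1 GB sizes below are placeholders, not recommendations):
[Service]
# Clear the packaged ExecStart, then repeat it with the storage options changed:
# main cache capped at 10 GB, transient storage capped at 1 GB.
ExecStart=
ExecStart=/usr/sbin/varnishd -F -a :6081 -T localhost:6082 \
    -f /etc/varnish/default.vcl \
    -s malloc,10G -s Transient=malloc,1G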

Postgres constant 30% CPU usage

I recently migrated my Postgres database from Windows to CentOS 6.7.
On Windows the database never used much CPU, but on Linux I see it using a constant ~30% CPU (according to top) on a 4-core machine.
Does anyone know if this is normal, or why it would be doing this?
The application seems to run fine, and as fast or faster than Windows.
Note that it is a big installation: 100 GB+ of data and 1000+ databases.
I tried using pgAdmin to monitor the server status, but the server status view hangs and fails to run with the error "the log_filename parameter must be equal".
With 1000 databases, I expect the autovacuum workers and the stats collector to spend a lot of time checking what needs maintenance.
I suggest you do two things (a sample postgresql.conf snippet is shown below):
raise the autovacuum_naptime parameter to reduce the frequency of those checks
put the stats_temp_directory on a ramdisk
You probably also set a high max_connections limit to let your clients use that large number of databases, and this is another likely source of CPU load, due to the high number of 'slots' that have to be checked every time a backend synchronizes with the others.
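A sketch of what those changes might look like in postgresql.conf (the values and the ramdisk path are examples only; the directory must be an existing tmpfs mount, and the server needs a reload afterwards):
autovacuum_naptime = 5min                          # default is 1min; with 1000+ databases, check less often
stats_temp_directory = '/var/run/pgsql_stats_tmp'  # point this at a tmpfs/ramdisk mount
#max_connections = 100                             # keep this as low as your clients allow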
There could be multiple reasons for increased server load.
If you are looking at query-level load on the server, you can match a specific Postgres backend to its operating system process ID using the pg_stat_activity system view.
SELECT pid, datname, usename, query FROM pg_stat_activity;
Once you know which queries are running, you can investigate further (EXPLAIN/EXPLAIN ANALYZE, check locks, etc.).
You may have lock contention issues, probably due to a very high max_connections. Consider lowering max_connections and using a connection pooler if that is the case, though that can increase the turnaround time for client connections.
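To check for that kind of contention, a simple starting point is to look for lock requests that are still waiting (this assumes PostgreSQL 9.2 or later, matching the pg_stat_activity columns used above):
-- List lock requests that have not been granted yet, with the waiting query.
SELECT l.pid, l.locktype, l.mode, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE NOT l.granted;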
It might be that the Windows system was blocking connections and not letting them use the machine, and now Linux is allowing those connections to use the CPU and perform faster. :P
Also worth reading:
How to monitor PostgreSQL
Monitoring CPU and memory usage from Postgres

How to increase memory and cache size for application pool in IIS 7 efficiently

I have searched the internet for how to increase the memory and cache size for application pools in IIS 7, but the information is scattered and I don't know what effect combining those settings has.
Can somebody describe how to increase the memory and cache size for application pools in IIS 7?
To my understanding, the output cache size can only be set at the IIS server level, not for a specific application pool. Whatever is set at the IIS level applies to all the web sites under it, so a maximum cache size set there effectively applies to every web application as well.
If you are using Windows 7 Professional (IIS features vary depending on the operating system), open IIS Manager and click on the server name; in the Features view there is an Output Caching feature. You can edit it to set the maximum cache size. If you set it to a very high value, it will use a lot of your RAM and could degrade the performance of the whole box.
The application pool itself can have a private memory limit and a virtual memory limit.
Private memory limit: the maximum amount of private memory (in KB) a worker process can consume before causing the application pool to recycle.
Virtual memory limit: the maximum amount of virtual memory (in KB) a worker process can consume before causing the application pool to recycle.
Both of the above settings default to 0, which means no limit is set.
Long story short: raising the output cache size at the IIS server level is the best option for your needs.
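For completeness, the same limits can be scripted with appcmd; the pool name and the sizes below are placeholders (the recycling limits are in KB, the output cache size in MB):
REM Recycle "MyAppPool" once its private memory passes about 2 GB (2097152 KB).
%windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /recycling.periodicRestart.privateMemory:2097152
REM Raise the server-wide output cache limit to 1 GB (1024 MB).
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/caching /maxCacheSize:1024 /commit:apphost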

Resources