postgresql cache and swap memory increasing - performance

I have a PostgreSQL (master and slave) infrastructure. The PostgreSQL server has 32 GB RAM and 600 MB of swap space. We are running Java (Tomcat) applications against it. I have tuned PostgreSQL using http://pgtune.leopard.in.ua/. My issue is that the server's cache and swap usage keep growing periodically, and at a certain point we are forced to clear the cache or restart PostgreSQL. Could you please explain the reason behind this? Below are my tuning parameters; the rest of the parameters are at their defaults. We also use pgBarman, configured on another server, for point-in-time backups.
postgresql.conf
===============
max_connections = 300
shared_buffers = 8GB
effective_cache_size = 24GB
work_mem = 27962kB
maintenance_work_mem = 2GB
checkpoint_segments = 32
checkpoint_completion_target = 0.7
wal_buffers = 16MB
default_statistics_target = 100
/etc/sysctl.conf
================
kernel.shmmax=17179869184
kernel.shmall=4194304
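For what it's worth, a steadily growing "cached" figure on Linux is usually just the kernel page cache, which is reclaimable and normally harmless; the thing to verify is whether the box is actually swapping. A minimal sketch of how you might check this and discourage swapping, assuming a Linux host (the sysctl values below are common starting points for dedicated PostgreSQL servers, not taken from the question):

$ free -m        # "buff/cache" is reclaimable; watch "available" and swap "used"
$ vmstat 5 5     # non-zero si/so columns mean the host is actively swapping

# Illustrative additions to /etc/sysctl.conf, applied with sysctl -p:
vm.swappiness = 10          # prefer dropping page cache over swapping out postgres
vm.overcommit_memory = 2    # keep the OOM killer away from the postmaster
vm.overcommit_ratio = 90
$ sudo sysctl -p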

Related

Oracle NUM_CPUS, NUM_CPU_SOCKETS and Connection Pooling

I have read about Oracle connection pool sizing. The official documentation says:
For example, suppose a server has 2 CPUs and each CPU has 18 cores.
Each CPU core has 2 threads. Based on the Oracle Real-World Performance
group guidelines, the application can have between 36 and 360
connections to the database instance.
My Oracle server's NUM_CPUS is 16 and NUM_CPU_CORES is 8, but NUM_CPU_SOCKETS is 2. That means we actually have 2 CPUs, but with multithreading they appear as 16 CPUs.
I am not sure which one to use in the connection formula. Probably 16, but I'm asking here to be sure.
16 CPUs * 8 cores = min 128 connections?
or
2 CPUs * 8 cores = min 16 connections?
Which one applies to me? :/
Thanks in advance.
You should use 2 CPUs * 8 cores = 16 minimum connections.
This sentence in the documentation implies that the rule is meant for physical CPUs and cores: "The number of connections should be based on the number of CPU cores and not the number of CPU core threads." Your NUM_CPUS value of 16 must be a count of "logical" CPUs, since the system only has 2 sockets.
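Following the accepted arithmetic above (2 sockets x 8 physical cores each), a tiny sketch of the 1x-10x guideline quoted from the documentation, with variable names of my own:

sockets=2              # NUM_CPU_SOCKETS
cores_per_socket=8
cores=$((sockets * cores_per_socket))
echo "min connections: $cores"            # 16
echo "max connections: $((cores * 10))"   # 160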

Greenplum cannot use all the memory of the server

I am new to Greenplum. I have a single server with Greenplum installed (1 master instance, 6 segment instances), and we have imported a huge amount of data (about 10 TB). After running it for about a month, memory utilization is low (15 GB of 128 GB), but the CPU is almost at 100% when we run some calculations on it.
From time to time a segment reports an OOM error.
OS version: CentOS 7.2, Server Type: VM
Here are the OS settings:
kernel.shmmax = 107374182400
kernel.shmall = 26214400
kernel.shmmin = 4096
For the GP setting:
gp_vmem_protect_limit=11900
Any help is appreciated
shmall should be <50% of RAM
You have a single VM (128 GB) with the GPDB master process and 6 primary segment processes. Am I right? Do you have mirror segment processes? How many CPU cores does your VM have?
gp_vmem_protect_limit = 12 GB. This means you have 12 GB x 7 (1 master, 6 primary segments) = 84 GB.
A single-node VM to handle 10 TB of data? Your CPU is probably waiting for I/O all the time. This is not right.
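As a rough sanity check on that arithmetic, assuming gp_vmem_protect_limit is expressed in MB (its default unit) and that every instance on the host counts against RAM, a quick back-of-the-envelope in the shell:

limit_mb=11900     # gp_vmem_protect_limit per instance, in MB
instances=7        # 1 master + 6 primary segments on this single VM
ram_gb=128
echo "protected: ~$((limit_mb * instances / 1024)) GB of ${ram_gb} GB"   # ~81 GB, in line with the ~84 GB above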

Postgres configuration for better performance

We have built an ERP project on a PostgreSQL database. I have a Windows Server 2012 R2 system with 32 GB RAM. Out of the 32 GB, I have allotted 8 GB to the JVM and, assuming 4 GB for the OS, I have tried to tune Postgres for 20 GB of RAM.
I worked out the configuration from the link below:
https://www.pgconfig.org/#/tuning?total_ram=20&max_connections=300&environment_name=OLTP&pg_version=9.2&os_type=Windows&arch=x86-64&share_link=true
But performance went down after the change. What could be the reason? As I have little knowledge of Postgres server maintenance, let me know if anything more is required for you to assess/answer.
UPDATE
shared_buffers (integer) : 512 MB
effective_cache_size (integer) : 15 GB
work_mem (integer): 68 MB
maintenance_work_mem (integer): 1 GB
checkpoint_segments (integer): 96
checkpoint_completion_target (floating): 0.9
wal_buffers (integer): 16 MB
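If it helps while diagnosing, you can confirm which values are actually in effect and reload after edits; a small sketch assuming psql access to the instance (note that shared_buffers and wal_buffers only take effect after a full restart):

$ psql -U postgres -c "SELECT name, setting, unit, source FROM pg_settings WHERE name IN ('shared_buffers','effective_cache_size','work_mem','maintenance_work_mem','checkpoint_segments','checkpoint_completion_target','wal_buffers');"
$ psql -U postgres -c "SELECT pg_reload_conf();"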

Cassandra memory usage

In the Cassandra configuration a memory limit is given. Suppose we have 8 GB of physical memory, of which 2 GB is allocated to Cassandra. If Cassandra's memory usage goes up to 2 GB, will additional memory from the 8 GB be allocated to Cassandra or not?
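Assuming the 2 GB figure refers to the JVM heap (the usual place such a limit is set), the heap size is fixed at JVM startup and will not grow past it; the JVM garbage-collects or throws OutOfMemoryError instead, although Cassandra can still use additional memory off-heap and via the OS page cache. An illustrative snippet of where that limit typically lives, in conf/cassandra-env.sh:

MAX_HEAP_SIZE="2G"      # hard ceiling for the JVM heap; extra free RAM is not added to it
HEAP_NEWSIZE="512M"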

How to measure performance in Docker?

Is it possible to have performance issues in Docker?
I know VMs, where you have to specify how much RAM you want to use and so on.
But I don't know how that works in Docker. It's just running. Will it automatically use the RAM it needs, or how does this work?
Will it automatically use the RAM it needs, or how does this work?
No; by default, it will use the minimum memory needed, up to a limit.
You can use docker stats to see it against a running container:
$ docker stats redis1 redis2
CONTAINER   CPU %    MEM USAGE / LIMIT    MEM %    NET I/O            BLOCK I/O
redis1      0.07%    796 KB / 64 MB       1.21%    788 B / 648 B      3.568 MB / 512 KB
redis2      0.07%    2.746 MB / 64 MB     4.29%    1.266 KB / 648 B   12.4 MB / 0 B
When you use docker run, you can specify those limits with Runtime constraints on resources.
That includes RAM:
-m, --memory=""
Memory limit (format: <number>[<unit>], where unit = b, k, m or g)
Under normal circumstances, containers can use as much of the memory as needed and are constrained only by the hard limits set with the -m/--memory option.
When memory reservation is set, Docker detects memory contention or low memory and forces containers to restrict their consumption to a reservation limit.
By default, kernel kills processes in a container if an out-of-memory (OOM) error occurs.
To change this behaviour, use the --oom-kill-disable option. Only disable the OOM killer on containers where you have also set the -m/--memory option.
Note: the upcoming (1.10) docker update command might include dynamic memory changes. See docker update.
By default, docker containers are not limited in the amount of resources they can consume from the host. Containers are limited in what permissions / capabilities they have (that's the "container" part).
You should always set constraints on a container, for example, the maximum amount of memory a container is allowed to use, the amount of swap space, and the amount of CPU. Not setting such limits can potentially lead to the host running out of memory, and the kernel killing off random processes (OOM kill), to free up memory. "Random" in that case, can also mean that the kernel kills your ssh server, or the docker daemon itself.
Read more on constraining resources on a container in Runtime constraints on resources in the manual.
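As a concrete illustration of those flags, a run command with a hard cap and a soft reservation might look like this (image name and values are arbitrary):

$ docker run -d --name redis1 -m 512m --memory-reservation=256m --memory-swap=512m redis
$ docker stats redis1    # watch actual usage against the 512 MB limit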
