What parameters to tweak for a text-based PG restore?

Every night we dump and restore a 200 GB database using:
# Production, PG 9:
pg_dump DATNAME | some-irrelevant-pipe
# QA, PG 8.3:
some-irrelevant-pipe | psql -d DATNAME
I had to go for text-based backups in order to restore a dump from 9 on 8.3.
The restore is painfully and unreasonably slow. I noticed my log is full of these:
2011-05-22 08:02:47 CDT LOG: checkpoints are occurring too frequently (9 seconds apart)
2011-05-22 08:02:47 CDT HINT: Consider increasing the configuration parameter "checkpoint_segments".
2011-05-22 08:02:54 CDT LOG: checkpoints are occurring too frequently (7 seconds apart)
2011-05-22 08:02:54 CDT HINT: Consider increasing the configuration parameter "checkpoint_segments".
My question is: Is it possible that the setting of checkpoint_segments is the bottleneck? What other parameters can I tweak to speed up the process?
That machine has 4 GB RAM. Other possibly relevant settings in postgresql.conf are:
shared_buffers = 1000MB
work_mem = 200MB
maintenance_work_mem = 200MB
effective_cache_size = 2000MB
# fsync and checkpoint settings are default

Did you read the PostgreSQL documentation chapter on populating a database? See especially section 14.4.9.

For the purposes of restoring a database, change:
# synchronous_commit is available from PostgreSQL 8.3 onward
synchronous_commit = off
# only change fsync = off if your version of PG is too old to support synchronous_commit. If you do support synchronous_commit, don't ever change fsync to anything but on. Ever.
#fsync = off
checkpoint_segments = 25
Regarding checkpoint_segments, set that value to match the size of your disk controller's write buffer; 25 segments × 16 MB per WAL segment ≈ 400 MB.
Also, make sure your psql is loading everything in a single transaction:
some-irrelevant-pipe | psql -1 -d DATNAME
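If you'd rather not edit postgresql.conf on the QA box for every nightly run, here is a minimal sketch (my own, not part of the answer above; the 512MB value is an arbitrary example) that pushes the session-level settings onto the restore connection via PGOPTIONS while keeping the single-transaction load:
# Hedged sketch: synchronous_commit and maintenance_work_mem can be changed per
# session, so they can ride along on the restore connection via PGOPTIONS.
# checkpoint_segments still has to be raised in postgresql.conf and reloaded.
some-irrelevant-pipe | PGOPTIONS="-c synchronous_commit=off -c maintenance_work_mem=512MB" psql -1 -d DATNAME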

Related

Enabling tempcomp in Chrony

I'm working on a Raspberry Pi running Buster with an Adafruit Ultimate GPS hat. I'm trying to get Chrony to temperature compensate. I've modified the chrony.conf file to contain:
# Uncomment the following line to turn logging on.
log measurements refclocks statistics tempcomp tracking
tempcomp /sys/class/hwmon/hwmon0/temp1_input 30 26000 0.0 0.000183 0.0
#tempcomp /sys/class/hwmon/hwmon0/temp1_input 30 /var/log/chrony/tempcomp.log
The system is currently adding measurements to the tempcomp.log file every 30 seconds. However, if I enable the second (commented out) tempcomp line above, chrony dies on restart with the error
Sep 6 12:31:45 rpi-tick2 chronyd[24713]: chronyd version 3.4 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +SECHASH +IPV6 -DEBUG)
Sep 6 12:31:45 rpi-tick2 chronyd[24713]: Frequency 23.662 +/- 0.165 ppm read from /var/lib/chrony/chrony.drift
Sep 6 12:31:45 rpi-tick2 chronyd[24713]: Fatal error : Could not read tempcomp point from /var/log/chrony/tempcomp.log
Sep 6 12:31:45 rpi-tick2 chronyd[24711]: Could not read tempcomp point from /var/log/chrony/tempcomp.log
I believe this is due to the fact that the tempcomp.log file has entries like
===========================================
Date (UTC) Time Temp. Comp.
===========================================
2021-09-06 17:40:47 5.2095e+04 4.7754e+00
2021-09-06 17:41:17 5.2582e+04 4.8645e+00
2021-09-06 17:41:47 5.2582e+04 4.8645e+00
2021-09-06 17:42:17 5.3069e+04 4.9536e+00
2021-09-06 17:42:47 5.2582e+04 4.8645e+00
Where chrony is expecting something like
20000 1.0
21000 0.64
22000 0.36
23000 0.16
24000 0.04
with the entries sorted by temperature rather than by sample time.
So it seems like I'm missing a step somewhere.
Also, once set up, is this a dynamic process where new datapoints are added as we go, or do we stop collecting data and just use the static table to compensate for temps?
Thanks for any insights.
I think you have to manually create a chrony.tempcomp file, likely by analyzing the tempcomp.log file. They are separate files. Then specify the chrony.tempcomp file like this:
tempcomp /sys/class/hwmon/hwmon0/temp2_input 30 /etc/chrony.tempcomp
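As a rough sketch of that analysis step (my own, assuming the tempcomp.log layout shown in the question; whether the logged compensation values are the right ones to feed back is a separate question), you could average the compensation readings per temperature and write them out sorted by temperature:
# Sketch: column 3 is the raw temperature reading, column 4 the compensation.
# Average duplicates per temperature and emit "temperature compensation" pairs
# sorted numerically, which is the shape chrony expects for a tempcomp file.
awk '/^[0-9]/ { sum[$3] += $4; n[$3]++ }
     END      { for (t in sum) printf "%.0f %.4f\n", t + 0, sum[t] / n[t] }' \
    /var/log/chrony/tempcomp.log | sort -n > /etc/chrony.tempcomp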

Diagnosing high CPU usage on Docker for Mac

How do I diagnose the cause of Docker on MacOS, specifically com.docker.hyperkit using 100% of CPU?
Docker stats
Docker stats shows all the running containers have low CPU, memory, net IO and block IO.
iosnoop
iosnoop shows that com.docker.hyperkit performs about 50 writes per second totaling 500KB per second to the file Docker.qcow2. According to What is Docker.qcow2?, Docker.qcow2 is a sparse file that's the persistent storage for all Docker containers.
In my case the file isn't that sparse. The physical size matches the logical size.
dtrace (dtruss)
Running sudo dtruss -p $DOCKER_PID shows a large number of psynch_cvsignal and psynch_cvwait calls.
psynch_cvsignal(0x7F9946002408, 0x4EA701004EA70200, 0x4EA70100) = 257 0
psynch_mutexdrop(0x7F9946002318, 0x5554700, 0x5554700) = 0 0
psynch_mutexwait(0x7F9946002318, 0x5554702, 0x5554600) = 89474819 0
psynch_cvsignal(0x10BF7B470, 0x4C8095004C809600, 0x4C809300) = 257 0
psynch_cvwait(0x10BF7B470, 0x4C8095014C809600, 0x4C809300) = 0 0
psynch_cvwait(0x10BF7B470, 0x4C8096014C809700, 0x4C809600) = -1 Err#316
psynch_cvsignal(0x7F9946002408, 0x4EA702004EA70300, 0x4EA70200) = 257 0
psynch_cvwait(0x7F9946002408, 0x4EA702014EA70300, 0x4EA70200) = 0 0
psynch_cvsignal(0x10BF7B470, 0x4C8097004C809800, 0x4C809600) = 257 0
psynch_cvwait(0x10BF7B470, 0x4C8097014C809800, 0x4C809600) = 0 0
psynch_cvwait(0x10BF7B470, 0x4C8098014C809900, 0x4C809800) = -1 Err#316
Update: top on Docker host
From https://stackoverflow.com/a/58293240/30900:
docker run -it --rm --pid host busybox top
The CPU usage on docker embedded host is ~3%. CPU usage on my MacBook was ~100%. So, the docker embedded host isn't causing the CPU usage spike.
Update: running dtrace scripts to find the most common stack traces
Stack traces from the dtrace scripts in the answer below: https://stackoverflow.com/a/58293035/30900.
These kernel stack traces look innocuous.
AppleIntelLpssGspi`AppleIntelLpssGspi::regRead(unsigned int)+0x1f
AppleIntelLpssGspi`AppleIntelLpssGspi::transferMmioDuplexMulti(void*, void*, unsigned long long, unsigned int)+0x91
AppleIntelLpssSpiController`AppleIntelLpssSpiController::transferDataMmioDuplexMulti(void*, void*, unsigned int, unsigned int)+0xb2
AppleIntelLpssSpiController`AppleIntelLpssSpiController::_transferDataSubr(AppleInfoLpssSpiControllerTransferDataRequest*)+0x5bc
AppleIntelLpssSpiController`AppleIntelLpssSpiController::_transferData(AppleInfoLpssSpiControllerTransferDataRequest*)+0x24f
kernel`IOCommandGate::runAction(int (*)(OSObject*, void*, void*, void*, void*), void*, void*, void*, void*)+0x138
AppleIntelLpssSpiController`AppleIntelLpssSpiDevice::transferData(IOMemoryDescriptor*, void*, unsigned long long, unsigned long long, IOMemoryDescriptor*, void*, unsigned long long, unsigned long long, unsigned int, AppleIntelSPICompletion*)+0x151
AppleHSSPISupport`AppleHSSPIController::transferData(IOMemoryDescriptor*, void*, unsigned long long, unsigned long long, IOMemoryDescriptor*, void*, unsigned long long, unsigned long long, unsigned int, AppleIntelSPICompletion*)+0xcc
AppleHSSPISupport`AppleHSSPIController::doSPITransfer(bool, AppleHSSPITransferRetryReason*)+0x97
AppleHSSPISupport`AppleHSSPIController::InterruptOccurred(IOInterruptEventSource*, int)+0xf8
kernel`IOInterruptEventSource::checkForWork()+0x13c
kernel`IOWorkLoop::runEventSources()+0x1e2
kernel`IOWorkLoop::threadMain()+0x2c
kernel`call_continuation+0x2e
53
kernel`waitq_wakeup64_thread+0xa7
pthread`__psynch_cvsignal+0x495
pthread`_psynch_cvsignal+0x28
kernel`psynch_cvsignal+0x38
kernel`unix_syscall64+0x27d
kernel`hndl_unix_scall64+0x16
60
kernel`hndl_mdep_scall64+0x4
113
kernel`ml_set_interrupts_enabled+0x19
524
kernel`ml_set_interrupts_enabled+0x19
kernel`hndl_mdep_scall64+0x10
5890
kernel`machine_idle+0x2f8
kernel`call_continuation+0x2e
43395
The most common stack traces in user space over 17 seconds clearly implicate com.docker.hyperkit. There were 1,365 stack traces over 17 seconds in which com.docker.hyperkit created threads, which averages out to about 80 threads per second.
com.docker.hyperkit`0x000000010cbd20db+0x19f9
com.docker.hyperkit`0x000000010cbdb98c+0x157
com.docker.hyperkit`0x000000010cbf6c2d+0x4bd
libsystem_pthread.dylib`_pthread_body+0x7e
libsystem_pthread.dylib`_pthread_start+0x42
libsystem_pthread.dylib`thread_start+0xd
19
Hypervisor`hv_vmx_vcpu_read_vmcs+0x1
com.docker.hyperkit`0x000000010cbd4c4f+0x2a
com.docker.hyperkit`0x000000010cbd20db+0x174a
com.docker.hyperkit`0x000000010cbdb98c+0x157
com.docker.hyperkit`0x000000010cbf6c2d+0x4bd
libsystem_pthread.dylib`_pthread_body+0x7e
libsystem_pthread.dylib`_pthread_start+0x42
libsystem_pthread.dylib`thread_start+0xd
22
Hypervisor`hv_vmx_vcpu_read_vmcs
com.docker.hyperkit`0x000000010cbdb98c+0x157
com.docker.hyperkit`0x000000010cbf6c2d+0x4bd
libsystem_pthread.dylib`_pthread_body+0x7e
libsystem_pthread.dylib`_pthread_start+0x42
libsystem_pthread.dylib`thread_start+0xd
34
com.docker.hyperkit`0x000000010cbd878d+0x36
com.docker.hyperkit`0x000000010cbd20db+0x42f
com.docker.hyperkit`0x000000010cbdb98c+0x157
com.docker.hyperkit`0x000000010cbf6c2d+0x4bd
libsystem_pthread.dylib`_pthread_body+0x7e
libsystem_pthread.dylib`_pthread_start+0x42
libsystem_pthread.dylib`thread_start+0xd
47
Hypervisor`hv_vcpu_run+0xd
com.docker.hyperkit`0x000000010cbd20db+0x6b6
com.docker.hyperkit`0x000000010cbdb98c+0x157
com.docker.hyperkit`0x000000010cbf6c2d+0x4bd
libsystem_pthread.dylib`_pthread_body+0x7e
libsystem_pthread.dylib`_pthread_start+0x42
libsystem_pthread.dylib`thread_start+0xd
135
Related issues
Github - docker/for-mac: com.docker.hyperkit 100% cpu usage is back again #3499. One comment suggests adding the volume caching described here: https://www.docker.com/blog/user-guided-caching-in-docker-for-mac/. I tried this and got a small ~10% reduction in CPU usage.
I have the same problem. My CPU % went back down to normal after I removed all my volumes.
docker system prune --volumes
I also manually removed some named volumes:
docker volume rm NameOfVolumeHere
That doesn't solve the overall issue of not being able to use volumes with Docker for Mac. Right now I'm just being careful about the number of volumes I use and closing Docker Desktop when not in use.
My suspicion is that the issue is IO related. With macOS host-mounted volumes, this involves osxfs, where there is some performance tuning you can do. Mainly, if you can accept fewer consistency checks, you can set the volume mode to delegated for faster performance. See the docs for more details: https://docs.docker.com/docker-for-mac/osxfs-caching/. However, if your image contains a large number of small files, performance will suffer, especially if you also have lots of image layers.
You can also try the following command to debug any process issues within the embedded VM that docker uses:
docker run -it --rm --pid host busybox top
(To exit, use <ctrl>-c)
To track down if it's IO, you can also try the following:
$ docker run -it --rm --pid host alpine /bin/sh
$ apk add sysstat
$ pidstat -d 5 12
That will run inside the alpine container running in the VM pid namespace, showing any IO happening from any process, whether or not that process is inside of a container. The stats are every 5 seconds for one minute (12 times) and then it will give you an average table per process. You can then <ctrl>-d to destroy the alpine container.
From the comments and edits, these stats may check out. A 4 core MBP has 8 threads, so full CPU utilization should be 800% if MacOS is reporting the same as other Unix based systems. Inside the VM there's over 100% load shown in the top command for the average in the past minute (though less from the 5 and 15 averages) which is roughly what you see for the hyperkit process on the host. The instantaneous usage is over 12% from top, not 3%, since you need to add the system and user percentages. And the IO numbers shown in pidstat align roughly with what you see written to the qcow2 image.
If the docker engine itself is thrashing (e.g. restarting containers, or running lots of healthchecks), then you can debug that by watching the output of:
docker events
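For example, a sketch that narrows the stream to the event types that usually indicate thrashing (die/start cycles and health checks) rather than watching everything:
# Watch only container exits, starts, and health-check results; a steady flow
# of these suggests the engine itself is busy restarting or probing containers.
docker events --filter event=die --filter event=start --filter event=health_status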
EDIT: after a few weeks, my CPU issues came back, so the solutions below probably aren't worth it.
My CPU was always running crazy high, and it wasn't I/O, as determined using docker stats.
I did a bunch of stuff, but it suddenly decreased to reasonable levels and has stayed that way for over a week now, after doing the following:
Ensure you have the right number of CPUs set: not the number your machine has, but half that amount (Preferences | Resources). Mine was set to more than half, and I feel this was the real problem.
Decrease the number of file shares if possible (Preferences | Resources): /private, /tmp, /var/folders.
Disable "Use gRPC FUSE for file sharing" (Preferences | Resources).
Changing the volumes to use a delegated configuration worked for me and resulted in a drastic drop in CPU usage.
See the documentation: https://docs.docker.com/docker-for-mac/osxfs-caching/#delegated
Here is how I set it in my docker-compose.yml:
version: "3"
services:
my_service:
image: python3.6
ports:
- "80:10000"
volumes:
- ./code:/www/code:cached
This worked for me on macOS 10.15.5 with Docker Desktop 2.3.0.
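The same idea without Compose, as a sketch (the image name, port, and paths are just examples), is to add the consistency flag to the bind mount on the docker CLI:
# Mount the host directory with the delegated consistency mode; the container's
# view may lag the host slightly in exchange for fewer sync round-trips.
docker run -d -p 80:10000 -v "$(pwd)/code:/www/code:delegated" python:3.6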
This is a small DTrace script I use to find where the kernel is spending its time (it's from Solaris, and dates back to the early days of Solaris 10):
#!/usr/sbin/dtrace -s
profile:::profile-1001hz
/arg0/
{
    @[ stack() ] = count();
}
It simply samples kernel stack traces and counts each one it encounters in the @ aggregation.
Run it as root:
... # ./kernelhotspots.d > /tmp/kernel_hot_spots.txt
Let it run for a decent amount of time while you're having CPU issues, then hit CTRL-C to break the script. It will emit all the kernel stack traces it encountered, the most common last. If you need more (or fewer) stack frames than the default, use
    @[ stack(15) ] = count();
That will show stack traces up to 15 frames deep.
The last few stack traces will be where your kernel is spending most of its time. That may or may not be informative.
This script will do the same for user-space stack traces:
#!/usr/sbin/dtrace -s
profile:::profile-1001hz
/arg1/
{
    @[ ustack() ] = count();
}
Run it similarly:
... # ./userspacehotspots.d > /tmp/userspace_hot_spots.txt
ustack() is a bit slower: to emit the actual function names, DTrace has to do a lot more work to get them from the address spaces of the relevant processes.
Disabling System Integrity Protection might help you get better stack traces.
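Before deciding to disable SIP, a quick check of its current state (run from a normal macOS shell):
# Prints "System Integrity Protection status: enabled." or "... disabled."
csrutil status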
See DTrace Action Basics for some more details.
Had the same issue with Docker today on Big Sur (tried pruning images and switching to the Apple virtualization framework; nothing helped). However, disabling Docker Desktop's launch at startup in Preferences and never opening the Desktop GUI seems to fix it for me. Docker now runs with only ~10% CPU usage even after starting a few containers. However, once I open the Desktop GUI it slowly rises again to over 90% CPU and keeps hogging the CPU even after closing the DockerDesktop process. Docker version 20.10.13, build a224086.
The solution I found was to increase the resources given to Docker. I increased the Memory from 2GB to 8GB, the Swap from 1GB to 2GB, and the disk image size to 160GB. Completely solved the problem for me, and it's an easy one for readers to try.
Disabling "Use gRPC FUSE for file sharing" might not be a good idea. I found this feedback in another issue raised with the Docker maintainers; see below:
So we'll look into that. However,
osxfs will not be supported long term.
We can't maintain two solutions.
(from the Docker issue thread)
There is an open issue here: https://github.com/docker/for-mac/issues/6166
It seems there are a few bugs going on.
For some people (me included), unchecking "Open Docker Dashboard at startup" and manually restarting Docker does the job.
For other people, increasing resources like CPU and memory works.

Oracle Database CPU Usage on AIX

I want to find the CPU process usage for all Oracle processes on an AIX box.
On Solaris I can do the following:
prstat -n 400 -c -s cpu -p 9013 1 1
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
9013 oracle 3463M 2928M sleep 53 0 0:00:35 0.9% oracle/2
Total: 1 processes, 2 lwps, load averages: 2.25, 2.32, 2.40
This basically reports the CPU usage for a given process ID (in this case 9013). Given a list of all the Oracle PIDs, I can use this command to get the CPU usage for each one, sum them up, and hey presto, I have my Oracle database CPU usage.
How can I get the same with AIX?
Thanks
You can try nmon or topas, which will show the current %CPU. You might also want to look into using WLM to create a class for all the Oracle processes, then use wlmstat to see the CPU usage for that class. That would save you the trouble of adding them up manually.
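If you just want a quick number without setting up WLM, a rough sketch (assumes all database processes run as the oracle user, and relies on AIX's XPG-style ps -o fields):
# Sum the %CPU column for every process owned by the oracle user.
# Note: ps pcpu is averaged over each process's lifetime, unlike topas/nmon.
ps -eo user,pcpu,comm | awk '$1 == "oracle" { total += $2 } END { printf "Oracle total %%CPU: %.1f\n", total }'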

Difference between memory_get_peak_usage and actual php process' memory usage

Why does the result of PHP's memory_get_peak_usage() differ so much from the memory that the 'top' or 'ps' commands show as allocated to the process on Linux?
I've set memory_limit to 2 MB in php.ini.
My one-line PHP script with
echo memory_get_peak_usage(true);
says that it is using 786432 bytes (768 KB).
If I try to ask system about current php process
echo shell_exec('ps -p '.getmypid().' -Fl');
it gives me
F S UID PID PPID C PRI NI ADDR SZ WCHAN RSS PSR STIME TTY TIME CMD
5 S www-data 14599 14593 0 80 0 - 51322 pipe_w 6976 2 18:53 ? 00:00:00 php-fpm: pool www
The RSS value is 6976, so memory usage is 6976 * 4096 = 28573696 bytes, or roughly 28 MB.
Where does that 28 MB come from, and is there any way to decrease the memory used by the php-fpm process?
The memory size is mostly used by the PHP process itself. memory_get_peak_usage() returns the memory used by your specific script. Ways to reduce the memory overhead are to reduce the number of extensions, statically compile PHP, etc. But don't forget that php-fpm (should) fork, and a lot of the memory usage that doesn't differ between PHP processes is in fact shared (until it changes).
PHP itself may only be set to a 2 MB limit, but it's running WITHIN an Apache child process, and that process will have a much higher memory footprint.
If you were running the script from the command line, you'd get memory usage of PHP by itself, as it's not wrapped within Apache and is running on its own.
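A quick way to see that difference, as a sketch: run the same measurement under the PHP CLI, where there is no Apache/php-fpm parent process around it, and compare the CLI process's RSS with the php-fpm worker's.
# The CLI binary still carries PHP's own baseline overhead, but none of the
# web-server machinery.
php -d memory_limit=2M -r 'echo memory_get_peak_usage(true), " bytes\n";'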
The peak memory usage is for the current script only.

PostgreSQL: improving pg_dump, pg_restore performance

When I began, I used pg_dump with the default plain format. I was unenlightened.
Research revealed to me time and file size improvements with pg_dump -Fc | gzip -9 -c > dumpfile.gz. I was enlightened.
When it came time to create the database anew,
# create tablespace dbname location '/SAN/dbname';
# create database dbname tablespace dbname;
# alter database dbname set temp_tablespaces = dbname;
% gunzip dumpfile.gz # to evaluate restore time without a piped uncompression
% pg_restore -d dbname dumpfile # into a new, empty database defined above
I felt unenlightened: the restore took 12 hours to create the database that's only a fraction of what it will become:
# select pg_size_pretty(pg_database_size('dbname'));
47 GB
Because there are predictions this database will be a few terabytes, I need to look at improving performance now.
Please, enlighten me.
First check that you are getting reasonable IO performance from your disk setup. Then check that your PostgreSQL installation is appropriately tuned. In particular: shared_buffers should be set correctly; maintenance_work_mem should be increased during the restore; full_page_writes should be off during the restore; wal_buffers should be increased to 16MB during the restore; checkpoint_segments should be increased to something like 16 during the restore; you shouldn't have any unreasonable logging on (like logging every statement executed); and autovacuum should be disabled during the restore.
If you are on 8.4, also experiment with parallel restore via the --jobs option for pg_restore.
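For example, a minimal sketch (the job count and file name are arbitrary; requires a custom- or directory-format dump and PostgreSQL 8.4+):
# Restore with four parallel jobs; -j cannot read the dump from stdin, so point
# pg_restore at the dump file directly.
pg_restore -j 4 -d dbname dumpfile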
Improve pg dump&restore
PG_DUMP | always use the directory format (-Fd) and the -j option
time pg_dump -j 8 -Fd -f /tmp/newout.dir fsdcm_external
PG_RESTORE | always tune postgresql.conf and use the directory format and the -j option
work_mem = 32MB
shared_buffers = 4GB
maintenance_work_mem = 2GB
full_page_writes = off
autovacuum = off
wal_buffers = -1
time pg_restore -j 8 --format=d -C -d postgres /tmp/newout.dir/
Two issues/ideas:
By specifying -Fc, the pg_dump output is already compressed. The compression is not maximal, so you may find some space savings by using "gzip -9", but I would wager it's not enough to warrant the extra time (and I/O) used compressing and uncompressing the -Fc version of the backup.
If you are using PostgreSQL 8.4.x you can potentially speed up the restore from a -Fc backup with the new pg_restore command-line option "-j n" where n=number of parallel connections to use for the restore. This will allow pg_restore to load more than one table's data or generate more than one index at the same time.
I assume you need backups, not a major upgrade of the database.
For backups of large databases, you should set up continuous archiving instead of pg_dump.
Set up WAL archiving.
Make your base backups for example every day by using
psql template1 -c "select pg_start_backup('`date +%F-%T`')"
rsync -a --delete /var/lib/pgsql/data/ /var/backups/pgsql/base/
psql template1 -c "select pg_stop_backup()"
A restore would be as simple as restoring the database and the WAL logs (not older than the pg_start_backup time) from the backup location and starting Postgres. And it will be much faster.
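A minimal sketch of the restore side (pre-9.0 recovery.conf syntax; the WAL archive path here is hypothetical and must match whatever your archive_command writes to):
# After copying the base backup back into the data directory, drop in a
# recovery.conf telling Postgres where to fetch archived WAL from, then start it.
cat > /var/lib/pgsql/data/recovery.conf <<'EOF'
restore_command = 'cp /var/backups/pgsql/wal/%f %p'
EOF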
zcat dumpfile.gz | pg_restore -d db_name
This removes the full write of the uncompressed data to disk, which is currently your bottleneck.
As you may have guessed simply by the fact that compressing the backup results in faster performance, your backup is I/O bound. This should come as no surprise as backup is pretty much always going to be I/O bound. Compressing the data trades I/O load for CPU load, and since most CPUs are idle during monster data transfers, compression comes out as a net win.
So, to speed up backup/restore times, you need faster I/O. Beyond reorganizing the database to not be one huge single instance, that's pretty much all you can do.
If you're facing issues with the speed of pg_restore, check whether you dumped your data using INSERT or COPY statements.
If you used INSERT (pg_dump called with the --column-inserts parameter), restoring the data will be significantly slower.
Using INSERT is good for making dumps that are loaded into non-Postgres databases, but if you are restoring to Postgres, omit the --column-inserts parameter when running pg_dump.
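As a sketch of the difference (file names are arbitrary):
# Default dump: table data is emitted as COPY, which restores fast.
pg_dump -Fc dbname > dbname.dump
# INSERT-based dump: portable to non-Postgres databases, but much slower to restore.
pg_dump --column-inserts dbname > dbname_inserts.sql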
