How to increase max_locks_per_transaction - macos

I've been performing fairly intensive schema dropping and creating on a PostgreSQL server, and I'm getting this error:
ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
I need to increase max_locks_per_transaction, but how can I increase it on Mac OS X?

Find the ../data/postgresql.conf file and open it in a text editor, then set max_locks_per_transaction = 1024.
If the line looks like # max_locks_per_transaction... you must remove the #.
It must look like this:
max_locks_per_transaction = 1024 # min 10
Then save the file and restart PostgreSQL.
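On macOS, a hedged sketch of the restart step, assuming a Homebrew-managed install (the service name and data directory are assumptions; adjust to your setup):
brew services restart postgresql
# or restart directly with pg_ctl, pointing at your data directory
pg_ctl -D /usr/local/var/postgres restart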

It is a setting in your postgresql.conf. If you do not know where that file is, run SHOW config_file; at an SQL prompt/window.
Then, once you have modified that file, restart PostgreSQL. I don't know how you do that on macOS; a reboot will work, of course.
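A short sketch of both steps from the shell, hedged (the data directory below is an assumption):
psql -c "SHOW config_file;"               # prints the path to postgresql.conf
# edit max_locks_per_transaction in that file, then restart, e.g.:
pg_ctl -D /usr/local/var/postgres restart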

Related

PostgreSQL Log Rotation Size Reached File Limit

I have configured the following settings in PostgreSQL 13:
logging_collector = on
log_rotation_size='100MB'
log_truncate_on_rotation = on
log_filename ='postgresql-%Y-%m-%d.log'
My issue is that when the log file reaches 100MB, PostgreSQL continues to append to it; I think it is because of the log_filename. Is there any way I can rename the file when it reaches log_rotation_size?
I need to set the log_filename with this format (without the time) so that whenever I restart the service, the log will still be in the same log file.
Do I have to run some script or services on the background so that the program is able to monitor the data/logs folder and rename the file when the log file size reaches the limit?
As the documentation says:
However, truncation will occur only when a new file is being opened due to time-based rotation, not during server startup or size-based rotation.
Truncating the log file in your case would mean to lose recent log information, so PostgreSQL won't do it.
I can think of no better way than a cron job that removes the log file when it approaches the limit. Then size-based log rotation will create the file again.
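A minimal sketch of such a cron job, assuming the logs live in a log/ directory under the data directory and a threshold just under the limit (the path and threshold are assumptions):
# every 5 minutes, delete any log that has grown past ~95MB; size-based rotation then recreates it
*/5 * * * * find /var/lib/postgresql/13/main/log -name 'postgresql-*.log' -size +95M -delete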

PVCS service going down when the server's physical memory usage gets high. What's the issue and how to resolve it?

Our PVCS service goes down once the physical memory usage of the server gets high. Once the server restarts (not recommended), the service comes back up again. Is there any permanent fix for this?
I resolved this issue by increasing the heap size parameters... :-)
1. On the server system, open the following file in a text editor:
Windows as of VM 8.4.6: VM_Install\vm\common\bin\pvcsrunner.bat
Windows prior to VM 8.4.6: VM_Install\vm\common\bin\pvcsstart.bat
UNIX/Linux: VM_Install/vm/common/bin/pvcsstart.sh
2. Find the following line:
set JAVA_OPTS=
and set the value of the following parameters as needed:
-Xmsvaluem -Xmxvaluem
3. If you are running a VM release prior to 8.4.3, make sure -Dpvcs.mx= is followed by the same value shown after -Xmx.
4. Save the file and restart the server.
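For instance, a sketch of what the edited line might look like, using the rule-of-thumb values below (the numbers are illustrative, not prescriptive):
set JAVA_OPTS=-Xms512m -Xmx1024m
rem on VM releases prior to 8.4.3, also keep -Dpvcs.mx= in sync with -Xmx per step 3, e.g. -Dpvcs.mx=1024m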
The following is a rule of thumb when increasing the values for -Xmx:
•256m -> 512m
•512m -> 1024m
•1024m -> 1280m
As Riant points out above, adjusting the HEAP size is your best course of action here. I actually supported PVCS for nine years until this time in 2014 when I jumped ship. Riant's numbers are exactly what I would recommend.
I would actually counsel a lot of customers to set -Xms and -Xmx to the same value (basically start it at 1024) because if your PDBs and/or your user community are large you're going to hit the ceiling quicker than you might realize.

Reduce Memory Usage from 16GB to 8GB - Oracle

I created an Oracle instance using the "Database Configuration Assistant". My system has 64GB of RAM. I gave 16GB to the Oracle instance in the Initialization Parameters wizard.
Now I want to reduce that 16GB to 8GB, so that the RAM occupied by Oracle will be 8GB. I tried this in SQL Developer:
ALTER SYSTEM SET pga_aggregate_target = 8289 M;
ALTER SYSTEM SET sga_target = 1536 M;
I restarted the Oracle service, but the change did not take effect; Oracle is still using 16GB.
I don't know whether this is correct. Is a system reboot needed for this? Or how else can I reduce the memory usage?
There are various ways to define the amount of memory used. Historically, you needed to change a lot of settings to affect the total memory footprint. Nowadays, the default approach is often to set only one and start tweaking later (when the Oracle installer does not screw up; it often sets things wrongly).
I would check the following:
select *
from v$parameter
where name like '%size%'
or
name like '%target%'
Check which ones have been set and need changing. It can be settings like shared_pool_size, memory_target, sga_target, and others.
When you change them, some settings (depending on version and edition) can be changed while the instance is open and running, while others require a restart. Also, sometimes you are using a text file (pfile) and in some instances you may be using a binary file (spfile). A binary file is a precondition for changing settings online without restarting.
You will probably succeed using something like:
alter system set NAME = VALUE scope=[spfile|both]
as the sys user. scope=spfile only changes the spfile; scope=both changes both the runtime value and the spfile. When using a pfile like init*.ora, you just edit the text file and restart your instance.
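For example, a hedged sketch of persisting a smaller footprint to the spfile (the values are illustrative; if memory_target or sga_max_size are set, they may need lowering as well):
-- run as SYS, then restart so the spfile values take effect
ALTER SYSTEM SET sga_target = 6G SCOPE=SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 2G SCOPE=SPFILE;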
To quickly restart, the best way is IMHO:
startup force
When decreasing sizes you will generally not have a problem, assuming the new size is sufficient to handle the load. Do it in a test environment first. When increasing, and depending on the platform, first make sure that your new settings can be handled. For instance, increasing the memory allocated on Linux may require you to change kernel settings. Otherwise, your Oracle instance will not start until the corrections are made.

Transferring (stopping, resuming) file using rsync

I have an external hard-drive that I suspect is on its way out. At the minute, I can transfer files from it, but only for a while. Unfortunately, I have one single file that's >50GB in size. My solution to this is to use rsync to transfer this one particular file a bit at a time, leave the drive to rest (switch it off), and resume a little while later.
I'm using rsync --partial --progress --inplace --append -a /Volumes/Backup\ Drive/chris/Desktop/Recording\ Sessions/S1/Session\ 1/untitled ~/Desktop/temp to transfer it. (The file is in the untitled folder, which I'm moving into the temp folder) However, after having stopped it and resumed it, it seems to be over-writing the previous attempt at the file, meaning I don't really get any further.
Is there something I'm missing? :X
Thank you ^_^
EDIT: Still don't know :\
Well, since this is a programming site, here's a program to do it. I tested it on OS X, but you should definitely test it on some small files first to make sure it does what you want:
#!/usr/bin/env python
import os
import sys

# arguments: source file, target file, and the byte range [begin, end) to copy
source = sys.argv[1]
target = sys.argv[2]
begin = int(sys.argv[3])
end = int(sys.argv[4])

# open the target for in-place update if it already exists, otherwise create it
mode = 'r+b' if os.path.exists(target) else 'w+b'

with open(source, 'rb') as source_file, open(target, mode) as target_file:
    # copy only the requested byte range, at the same offset in both files
    source_file.seek(begin)
    target_file.seek(begin)
    buffer = source_file.read(end - begin)
    target_file.write(buffer)
You run this with four arguments: the source file, the destination, and two numbers. The first number is the byte count to start copying from (so on the first run you'd use 0). The second number is the byte count to copy until (not including). So on subsequent runs you'd always use the previous fourth argument as the new third argument (new begin equals old end). And just go on like that until it's done, using whatever sizes you like along the way.
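For example, a hedged usage sketch, assuming the script is saved as copychunk.py and you copy in 1 GiB chunks (the script name, file names, and chunk size are all made up for illustration):
python copychunk.py "/Volumes/Backup Drive/source.file" ~/Desktop/temp/source.file 0 1073741824
# after resting the drive, resume with the old end as the new begin
python copychunk.py "/Volumes/Backup Drive/source.file" ~/Desktop/temp/source.file 1073741824 2147483648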
I know this is related to macOS, but the best way to get all the files off a dying drive is with GNU ddrescue. I have no idea if this runs nicely on macOS, but you can always use a Linux live-usb to do this. You'll want to open a terminal and be either root (preferred) or use sudo.
Firstly, find the disk that you want to backup. This can be done by running the following. Make note of the partition name or disk name that you want to back up. Hard drives/flash drives will typically use the format sdX, where X is the drive letter. Partitions will be listed under sdX1, sdX2... etc. NVMe drives/partitions follow a similar naming convention.
lsblk -o name,size,label,fstype,model
Mount and change directory (cd) to a writable location that is bigger than the drive/partition you want to back up.
Now we are going to do a first pass over the drive/partition, without stopping on problematic sections. This ensures that ddrescue does not cause any more damage by trying to access a bad section. Think of it like a hole in a sweater -- you wouldn't want to keep picking at the hole or it would get bigger. Run the following, with sdX replaced by the drive/partition name from earlier:
ddrescue -d /dev/sdX backup.img backup.logfile
The -d flag uses direct disk access and ignores the kernel cache, and the logfile is important in case the drive gets disconnected or the process stops somehow.
Run ddrescue again with the -r flag. This will retry bad sections 3 times. Feel free to run this a few times, but note that ddrescue cannot restore everything. From my experience it usually restores in the high 90%s, and many of the files are system files (aka not your personal files).
ddrescue -d -r3 /dev/sdX backup.img backup.logfile
Finally, you can use the image however you want. You can either mount it to copy the files off or use it in a virtual machine/burn it to a working drive with dd. Do note that the latter options will not always work if system critical files were damaged.
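For example, a hedged sketch of pulling files off the finished image by mounting it read-only (losetup -P exposes partitions inside a whole-disk image; the device names and paths are assumptions):
losetup -P -f --show backup.img        # prints the loop device, e.g. /dev/loop0
mount -o ro /dev/loop0p1 /mnt          # mount the first partition read-only
cp -a /mnt/path/to/files /some/safe/place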
Good luck and remember to make backups!

h2.db file size difference

I have an application that generates an H2 database.
When I execute the application on Windows XP it generates an .h2.db file with size 176K, but when I execute the same application on Unix (SunOS) it generates an .h2.db file with size 1126K, although they contain exactly the same data.
Can anyone explain what might be causing the UNIX generated file to be so much larger?
Thanks!
Martin
The easiest way to shrink the database file in this case is to open and close it. An alternative is to run the statement SHUTDOWN COMPACT.
In your case, the "Unix" database is not fully compacted, meaning it contains empty pages in the database file (the empty pages most likely temporarily contained the transaction log; this is normal). When closing the database, H2 tries to compact the database file by moving unused pages to the end of the file and then truncating the file. The default compact time is 0.2 seconds. Probably these 0.2 seconds were not quite enough to fully compact the database on the "Unix" platform, but were enough on the "Windows" platform.
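If you want to force a full compaction yourself, a minimal sketch using the H2 Shell tool (the jar name, JDBC URL, and credentials are assumptions for illustration):
java -cp h2.jar org.h2.tools.Shell -url jdbc:h2:~/test -user sa
sql> SHUTDOWN COMPACT;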
