MariaDB 10.5.5 Database extremely slow after mysqldump - mariadb-10.5

We have this problem on 2 separate installs.
MariaDB 10.5.5 is installed. Once the daily backup has run, the database is almost unusable.
We have TS environments and a DB server; mysql 5.1 runs on the TS machines. We also have two older sites doing the same thing, and they work perfectly.
MyISAM database.
Our batch file runs from one of the TS servers with the following command:
mysqldump -u root -p --databases DB1 DB2 > dumpname_date.sql
Please help, friends, this has become a serious problem for us.
Thank you in advance

I have been struggling to get mysqldump to run faster (i.e. use more of the VM) and figured out a couple of things.
mysqldump sucks if the db is significantly bigger than the MariaDB innodb_buffer_pool_size, as the whole db cannot fit into RAM/the buffer.
When mysqldump runs, it requests all data from each table one by one, forcing the MariaDB server to load that data into the buffer and evict other data. (Does your db speed recover after it has been running normally for a while?)
This is probably what you're seeing: MariaDB is good at keeping hot data in the buffer cache, but mysqldump destroys this, and it takes time to recover.
The solution:
use mariadb-backup
I have seen a 600% speed improvement (>90 min down to <15 min).
mariadb-backup spins up its own server process against the same data files and performs the backup (it can do so in parallel), so it should not disturb your running MariaDB's buffer/cache.
My situation is slightly different: I run MariaDB+Galera with a dedicated backup server, where I want as much backup speed as possible.
The backup generated by mariadb-backup is bigger (a directory of files), but with the --stream option it can be piped through gzip for compression.
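As a rough sketch (option names as I recall them; the user, password and file names are placeholders, so check the docs for your mariadb-backup version):
mariadb-backup --backup --user=root --password=secret --stream=xbstream | gzip > backup_$(date +%F).gz
# to restore: gunzip the file, unpack it with mbstream -x into an empty directory, then run mariadb-backup --prepare on that directory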

Related

How to export/import db?

I have a remote server running a huge Oracle 11g database in a Docker container that I need to export/import on my local machine.
I have already tried 2 approaches:
an attempt to copy the db through SQLDeveloper led to a Time Out exception
saving the running container into an image and loading it afterwards also didn't help, as an initialization error occurred; the resulting image.tar came to 10.5 GB
Since Oracle is commonly used in production environments and is designed to cope with large amounts of data, I'm sure there must be a clear off-the-shelf solution for exporting/importing a db from one host to another.
Could you give me any ideas, please?

How to check that an H2 database is not corrupted?

H2 Database is not very stable (but very fast, which is great for DEV), especially during the development process. I hope the number of corruptions is due to immediate shutdowns of the server (during debugging).
How can I ensure that an H2 database is not corrupted, in order to guarantee that a backup is good?
Probably the best way to check if everything is OK is to create a SQL script from the database, using the SCRIPT statement. If that works, then the data is fully readable. The index data might still be corrupt, but indexes can be re-created.
Another option is to always backup the data in the form of a SQL script. This will make a separate check unnecessary; but backup is a bit slower and can't be done online (while updates are happening).
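For example, from the command line with H2's bundled Script and RunScript tools (a sketch; the jar path, JDBC URLs and credentials are placeholders):
java -cp h2.jar org.h2.tools.Script -url jdbc:h2:~/test -user sa -script backup.sql
# restore into a fresh database to verify the backup is usable
java -cp h2.jar org.h2.tools.RunScript -url jdbc:h2:~/restored -user sa -script backup.sql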
By the way: if a database file gets corrupted, it's due to misconfiguration or wrong usage (H2 supports disabling the transaction log), hardware failure, or a bug in the database engine itself.

Speed up PostgreSQL createdb?

Is there a way to speed up PostgreSQL's createdb command?
Normally I wouldn't care, but doing unit testing in Django creates a database every time, and it takes about 5 seconds.
I'm using openSUSE 11.2 64-bit, PostgreSQL 8.4.2
It won't help you now, but there has been some work done around this in PostgreSQL 9.0.
What you can try as a workaround is to run with fsync=off. Of course, don't even think about doing this if you have actual data in your database, but if it runs on a test system only, that will make your CREATE DATABASE run a lot faster.
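For a throwaway test cluster, that could look like this (the data directory path is a placeholder; never do this where the data matters):
pg_ctl restart -D /var/lib/pgsql/test_cluster -o "-c fsync=off"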
If Django supported Postgres schemas, you could simply drop the schema in question and recreate it instead of killing the entire database.
You can still use DROP OWNED BY ... CASCADE to drop all objects created by whichever user is configured in Django, bringing the database back to an essentially pristine condition. See how much faster this is.
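Something along these lines, assuming the Django role is named django and the test database test_db (both placeholders):
psql -d test_db -c "DROP OWNED BY django CASCADE;"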
You can shut down Postgres, then untar an existing database cold backup instead of running initdb. See how much faster this is.
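Roughly like this (paths are placeholders; the cluster must be fully stopped while the data directory is swapped):
pg_ctl stop -D /var/lib/pgsql/data
rm -rf /var/lib/pgsql/data/*
tar xzf pristine_cluster.tar.gz -C /var/lib/pgsql/data
pg_ctl start -D /var/lib/pgsql/data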

Efficiently clone a MySQL database on another server

We need to regularly create a clone of a production server's live MySQL 4 database (only one schema) and import it on one or more development databases. Our current process is to 'mysqldump' the database, copy it via ssh and restore it on the target machine using the 'mysql' client utility.
Dumping and copying is relatively fast, but restoring the database schema (structure + content) takes hours. Is there a less time-consuming way to do the cloning?
Use LOAD DATA INFILE. This is an order of magnitude faster than loading from dumps. If you are lucky, you can load the data through a pipe; and if you can export the data from one server into that same pipe, the two servers can work simultaneously.
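Per table, the idea is roughly this (the database, table and file names are illustrative):
# on the source server: export one table as a tab-delimited file
mysql -u root -p -e "SELECT * FROM mydb.orders INTO OUTFILE '/tmp/orders.txt'"
# copy /tmp/orders.txt to the target server, then bulk-load it there
mysql -u root -p -e "LOAD DATA INFILE '/tmp/orders.txt' INTO TABLE mydb.orders"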
If you have LVM set up, then have a look at this for using LVM for MySQL backup. With LVM the backups can be made really fast: once the snapshot is taken, tar it, copy it to the destination and untar it. It should be faster than loading from mysqldump.
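The snapshot step could look roughly like this (the volume group and volume names are placeholders, and in practice you would hold FLUSH TABLES WITH READ LOCK in an open session while the snapshot is created):
lvcreate --snapshot --size 5G --name mysql_snap /dev/vg0/mysql_data
mount /dev/vg0/mysql_snap /mnt/mysql_snap
tar czf mysql_backup.tar.gz -C /mnt/mysql_snap .
umount /mnt/mysql_snap && lvremove -f /dev/vg0/mysql_snap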
I don't have experience with it myself - mysqldump and mysql have always been sufficient for my data volumes - but mysqlhotcopy looks like it could be faster, as it uses cp/scp to copy the data directories.
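Usage is simple; a sketch (it works for MyISAM tables only, and the credentials, database name and target directory are placeholders):
mysqlhotcopy --user=root --password=secret mydb /backup/mysql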

Is it safe to delete the 3 default databases created during a PostgreSQL install?

I installed a default installation of PostgreSQL 8.4 on Windows 2003 Server, using the one-click installer provided. Running psql -l for the first time, I noticed there are three databases installed by default: postgres, template0, and template1.
Being security-minded, my initial reaction is to delete or change default configurations. However, I also know I haven't a clue regarding databases (as this install is my first step in self-learning about databases), so I thought I would ask first.
Is it safe to delete these?
Basically - no.
The postgres database is there as a non-template database that is reasonably guaranteed to exist, so any script that doesn't know where else to connect can connect there.
If you remove template1, you will lose the ability to create new databases (at least easily).
template0 is there as a backup, in case your template1 gets damaged.
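You can see the templates at work when creating databases (the database names here are illustrative):
psql -c "CREATE DATABASE appdb"                      # implicitly copies template1
psql -c "CREATE DATABASE cleandb TEMPLATE template0" # explicit pristine copy, e.g. if template1 is damaged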
While I can theoretically imagine a working installation with no template* and postgres databases, the thing that bugs me is that I have no idea what (security-wise) you want to achieve by removing them.
You can delete the postgres database, but do not touch template0 or template1. The postgres database is there for convenience.
