mysql import only missing rows - cmd

I'm looking for a way to restore my DB from a prior backup. However, the backup should not simply overwrite all existing records but instead add only the difference between the current DB and the backup file. If the backup contains no records that are missing from the current DB, nothing should happen. The backups were made with mysqldump. Any clues?
Thanks in advance

Here is a less manual answer:
mysqldump -t --insert-ignore --skip-opt -u USER -pPASSWORD -h 127.0.0.1 database > database.sql
That export command with the -t --insert-ignore --skip-opt options will give you an SQL dump file with no DROP TABLE or CREATE TABLE statements, and every INSERT becomes an INSERT IGNORE.
BONUS:
This will dump a single table in the same way:
mysqldump -t --insert-ignore --skip-opt -u USER -pPASSWORD -h 127.0.0.1 database table_name > table_name.sql
I needed this today and couldn't help but share it!
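To apply such a dump, just import it back into the target database; because every statement is an INSERT IGNORE, rows whose primary or unique key already exists are skipped and only the missing rows are added (a sketch, assuming the same credentials and host as the export above):
mysql -u USER -pPASSWORD -h 127.0.0.1 database < database.sql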

Remove the DROP TABLE and CREATE TABLE statements from the dump file. Change the INSERT statements to INSERT IGNORE. Then load the backup file and it should not update any duplicate rows.
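If you'd rather script that edit, a sed one-liner can handle the INSERT part (a sketch, assuming the dump uses the standard INSERT INTO syntax; the DROP TABLE and CREATE TABLE statements still need to be removed separately):
sed 's/^INSERT INTO/INSERT IGNORE INTO/' backup.sql > backup-ignore.sql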

Related

How can I execute SQL files stored in GitHub?

I have a nice bash script that uses az cli to generate an Azure SQL Server (az sql server create) and SQL database (az sql db create).
I'd like to populate the database with tables and columns I have defined in a series of .sql files stored in GitHub.
Example file:
Filename: TST_HDR.sql
File contents:
-- Create a new table called 'TEST_HDR' in schema 'dbo'
-- Drop the table if it already exists
IF OBJECT_ID('dbo.TEST_HDR', 'U') IS NOT NULL
DROP TABLE dbo.TEST_HDR
GO
-- Create the table in the specified schema
CREATE TABLE dbo.TEST_HDR
(
tstID INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
tstGUID [NVARCHAR](50),
tstComments [NVARCHAR](2000),
tstHdrCreated DATETIME2,
tstHdrCreatedBy [NVARCHAR](255),
tstHdrLastUpdated DATETIME2,
tstHdrLastUpdatedBy [NVARCHAR](255),
tstHdrDeleted [NVARCHAR](3),
tstHdrDeletedBy [NVARCHAR](255),
tstHdrVersionNum INT
);
GO
Which bash (or other scripting) commands do I use to get these files from GitHub and execute them against the SQL database?
Assuming you have sqlcmd installed:
tmp=$(mktemp) && \
curl -sL https://raw.githubusercontent.com/path/to/your/TST_HDR.sql > ${tmp} && \
sqlcmd -S <servername>.database.windows.net -d <database> -U <user> -P <password> -i ${tmp}
Alternatively, clone the repository locally and run the files with sqlcmd:
mkdir (create a directory for the GitHub repository)
cd (into the directory you just created)
git clone (the address of the repository where your .sql file is located)
Make sure the required ports are open on the PC you are connecting from and on the server you are connecting to, then run:
sqlcmd -d (database) -i (path to the .sql file in the cloned git repository) -P (password) -S (servername).database.windows.net -U (user)
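If the repository holds a whole series of .sql files, a short loop can run them all (a sketch; the local clone directory name here is just an example):
for f in ./your-repo/*.sql; do
  sqlcmd -S <servername>.database.windows.net -d <database> -U <user> -P <password> -i "$f"
done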

Mamp: import large database

I'm importing a large Drupal database to my Mac using MAMP and I keep getting errors; phpMyAdmin can't import the database. Can anyone help me?
Importing a large database through phpMyAdmin is not recommended (it will typically hang forever). It's much more efficient to use the command line through the Terminal.
First, make sure you can connect to your database from the command line with one of the following commands:
1/ If your root password isn't set:
mysql -u root
2/ or if you have a root password:
mysql -u root -p
3/ or if you have a specific username and password:
mysql -u username -p
If one of those commands executes correctly, you're good to go on to the next step.
Note that you can exit the mysql interactive session at any time by entering:
exit
List your databases:
SHOW databases;
If you don't have your database listed here, you will need to create it:
CREATE DATABASE database_name CHARACTER SET utf8 COLLATE utf8_general_ci;
Then select your database:
USE database_name;
Finally, import the data from your sql file:
SOURCE "path/to/your/file.sql"
Second method (it assumes your database is already created):
mysql -u username -p database_name < path/to/your/file.sql
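With MAMP specifically, the bundled MySQL client lives inside the MAMP installation, so you can also run the import with its full path (a sketch; the exact path depends on your MAMP version):
/Applications/MAMP/Library/bin/mysql -u root -p database_name < path/to/your/file.sql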

Simple Batch Script for Presto query

I am running a bash script to extract data from a table via presto...
./presto --server myprestoserver:8889 --catalog mycatalog --schema myschema --execute "select * from TABLEResultsAuditLog;" > /mydirectory/audit.dat
This command runs successfully, extracts the table results, and sends them to the audit.dat file. What I am looking for is to replace the
--execute "select * from TABLEResultsAuditLog;"
section and have a file located in /mydirectory/audit.sql which would then contain the sql statement which I need executed. I have tried using
./presto --server myprestoserver:8889 --catalog mycatalog --schema myschema < /mydirectory/audit.sql > /mydirectory/audit.dat
where audit.sql contains only the select statement, but this just populates the audit.dat file with the query text and not the results. I'm not familiar with bash scripting, so it's probably an easy fix for someone!
The Presto CLI has a --file option for this purpose:
presto-cli --server ... --file input.sql > output-file
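Adapted to the original command line, that would look something like this (a sketch, assuming the same server, catalog, and schema options still apply):
./presto --server myprestoserver:8889 --catalog mycatalog --schema myschema --file /mydirectory/audit.sql > /mydirectory/audit.dat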

How to export 4 million data in MySQL?

I have a database with one particular table having more than 4 million records. I tried downloading the whole DB using MySQL Workbench as well as the command terminal using the following command:
mysqldump -u root -ppassword mydb > myfile.sql
But I got only half of the data. If I ignore that one particular table, everything works fine. Can anyone suggest how to download a DB with tables having more than a million entries?
Try adding the lines below to my.cnf and restarting MySQL:
[mysqld]
# Performance settings used for import.
delay_key_write=ALL
bulk_insert_buffer_size=256M
or
mysqldump -u root -p --max_allowed_packet=1073741824 --lock-tables=false mydb > myfile.sql
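For very large tables it can also help to add --single-transaction and --quick, which take a consistent snapshot (for InnoDB) and stream rows instead of buffering each table in memory (a sketch; adjust the options to your setup):
mysqldump -u root -p --single-transaction --quick --max_allowed_packet=1073741824 mydb > myfile.sql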

How to create a backup of a Postgres DB using bash?

How to create a backup of a Postgres DB using bash?
pg_dump -U some_user_name -f dump.file -Fc database_name
That's all.
If you need to authenticate with a password, use a .pgpass file.
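To restore a dump made with -Fc (custom format), pg_restore is the matching tool (a sketch, assuming the target database already exists):
pg_restore -U some_user_name -d database_name dump.file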
Use pg_dump.
Ideally you should add a scheduled job to crontab to run daily. The following will create a gzipped SQL file with a timestamp; uncompressed SQL dumps can otherwise be very large.
pg_dump database_name | gzip -c > ~/backup/postgres/database_name-`/bin/date +%Y%m%d-%H%M`.sql.gz
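As a daily crontab entry, that could look like this (a sketch, assuming the backup directory exists and pg_dump can authenticate non-interactively, e.g. via .pgpass; note that % must be escaped as \% inside crontab):
0 2 * * * pg_dump database_name | gzip -c > ~/backup/postgres/database_name-`/bin/date +\%Y\%m\%d-\%H\%M`.sql.gz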
