How to export 4 million rows in MySQL? - terminal

I have a database in which one particular table has more than 4 million records. I tried downloading the whole db using MySQL Workbench as well as from the command terminal using the following command:
mysqldump -u root -p password mydb > myfile.sql
But I got only half of the data downloaded. If I exclude that one particular table, everything works fine. Can anyone suggest how to download a db with tables having more than a million entries?

Try adding the lines below to my.cnf and restarting MySQL:
[mysqld]
# Performance settings used for import.
delay_key_write=ALL
bulk_insert_buffer_size=256M
or
mysqldump -u root -p --max_allowed_packet=1073741824 --lock-tables=false mydb > myfile.sql
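If the dump still stops partway through, it may help to dump the large table on its own with streaming options (a sketch; --single-transaction takes a consistent snapshot for InnoDB tables, and --quick makes mysqldump fetch rows one at a time instead of buffering the whole table in memory):
# Dump only the big table, streaming rows rather than buffering them.
# big_table is a placeholder for the 4-million-row table.
mysqldump -u root -p --single-transaction --quick --max_allowed_packet=1073741824 mydb big_table > big_table.sql
# A dump that finished cleanly ends with a "-- Dump completed on ..." comment:
tail -1 big_table.sql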

Related

H2 RunScript SHOW TABLES doesn't find any table

I have an H2 v1 DB against which I'm trying to run SHOW TABLES from the command line as part of a verification process. It finds no tables, even though the H2 web console sees the tables.
$ cat showtable.sql
SHOW TABLES;
$ java -cp ./h2-1.4.200.jar org.h2.tools.RunScript -url 'jdbc:h2:./agentdb-dev.mv.db' -script showtable.sql -showResults
SHOW TABLES;
;
If I use the H2 web console on the same file and run "SHOW TABLES" it shows everything it should.
What am I missing?
As pointed out by Evgenji, I needed to remove the ".mv.db" from the file name given, since H2 adds that suffix to whatever name is given.
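For reference, the working invocation only changes the JDBC URL (the jar and file names are the ones from the question above):
$ java -cp ./h2-1.4.200.jar org.h2.tools.RunScript -url 'jdbc:h2:./agentdb-dev' -script showtable.sql -showResults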

MAMP: import large database

I'm importing a large Drupal database to my Mac using MAMP and I keep hitting errors; phpMyAdmin can't import the database. Can anyone help me?
Importing a large database through phpMyAdmin is not recommended (it will typically hang forever). It's much more efficient to use the command line through the Terminal.
First, make sure you can connect to your database from the command line with one of the following commands:
1/ If your root password isn't set:
mysql -u root
2/ or if you have a root password:
mysql -u root -p
3/ or if you have a specific username and password:
mysql -u username -p
If one of those commands executes correctly, you're good to go to the next step.
Note that you can exit the mysql interactive session at any time by entering:
exit
List your databases:
SHOW databases;
If you don't have your database listed here, you will need to create it:
CREATE DATABASE database_name CHARACTER SET utf8 COLLATE utf8_general_ci;
Then select your database:
USE database_name;
Finally, import the data from your sql file:
SOURCE path/to/your/file.sql
Second method (this assumes your database is already created):
mysql -u username -p database_name < path/to/your/file.sql
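Note: with MAMP, mysql may not be on your PATH. The bundled client usually lives inside MAMP's install directory (the exact path may vary with your MAMP version):
/Applications/MAMP/Library/bin/mysql -u root -p database_name < path/to/your/file.sql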

mysql import only missing rows

I'm looking for a way to restore my DB from a prior backup. However, the backup should not simply overwrite all existing records, but instead add only the difference between the current DB and the backup file. If the backup contains no records missing from the current DB, nothing should happen. The backups were made with mysqldump. Any clues?
Thanks in advance
Here is a less manual answer:
mysqldump -t --insert-ignore --skip-opt -u USER -pPASSWORD -h 127.0.0.1 database > database.sql
That export command with the -t --insert-ignore --skip-opt options gives you a SQL dump file with no DROP TABLE or CREATE TABLE commands, and every INSERT is now an INSERT IGNORE.
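Loading the file back is then a plain import; rows that already exist are skipped thanks to the IGNORE, provided the table has a primary or unique key against which duplicates can be detected:
mysql -u USER -pPASSWORD -h 127.0.0.1 database < database.sql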
BONUS:
This will dump a single table in the same way:
mysqldump -t --insert-ignore --skip-opt -u USER -pPASSWORD -h 127.0.0.1 database table_name > table_name.sql
I needed this today and couldn't help but share it!
Remove the DROP TABLE and CREATE TABLE statements from the dump file. Change the INSERT statements to INSERT IGNORE. Then load the backup file and it should not update any duplicate rows.
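If you'd rather edit an existing dump than re-export, the INSERT rewrite can be scripted with sed (a sketch; it rewrites every line that starts with INSERT INTO):
# Turn every INSERT into INSERT IGNORE, writing a new file.
sed 's/^INSERT INTO/INSERT IGNORE INTO/' database.sql > database_ignore.sql
Removing the DROP TABLE and CREATE TABLE statements can be done by hand or with a similar filter.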

Shell script to execute pgsql commands in files

I am trying to automate a set of procedures that create TEMPLATE databases.
I have a set of files (file1, file2, ... fileN), each of which contains a set of pgsql commands required for creating a TEMPLATE database.
The contents of the file (createdbtemplate1.sql) look roughly like this:
CREATE DATABASE mytemplate1 WITH ENCODING 'UTF8';
\c mytemplate1
CREATE TABLE first_table (
-- fields here ...
);
-- Add C language extension + functions
\i db_funcs.sql
I want to be able to write a shell script that will execute the commands in the file, so that I can write a script like this:
# run commands to create TEMPLATE db mytemplate1
# ./groksqlcommands.sh createdbtemplate1.sql
for dbname in foo foofoo foobar barbar
do
# Need to simply create a database based on an existing template in this script
psql CREATE DATABASE $dbname TEMPLATE mytemplate1
done
Any suggestions on how to do this? (As you may have guessed, I'm a shell scripting newbie.)
Edit
To clarify the question further, I want to know:
How to write groksqlcommands.sh (a bash script that will run a set of pgsql cmds from file)
How to create a database based on an existing template at the command line
First off, do not mix psql meta-commands and SQL commands. These are separate sets of commands. There are tricks to combine them (using the psql meta-commands \o and \\ and piping strings to psql in the shell), but that gets confusing quickly.
Make your files contain only SQL commands.
Do not include the CREATE DATABASE statement in the SQL files. Create the db separately; you have multiple files you want to execute in the same template db.
Assuming you are operating as OS user postgres and using the DB role postgres as (default) Postgres superuser, that all databases are in the same DB cluster on the default port 5432, and that the role postgres has password-less access due to an IDENT setting in pg_hba.conf - a default setup:
psql postgres -c "CREATE DATABASE mytemplate1 WITH ENCODING 'UTF8'
TEMPLATE template0"
I based the new template database on the default system template database template0. Basics in the manual here.
Your questions
How to (...) run a set of pgsql cmds from file
Try:
psql mytemplate1 -f file
Example script file for batch of files in a directory:
#! /bin/sh
for file in /path/to/files/*; do
psql mytemplate1 -f "$file"
done
The command option -f makes psql execute SQL commands in a file.
How to create a database based on an existing template at the command line
psql -c 'CREATE DATABASE my_db TEMPLATE mytemplate1'
The command option -c makes psql execute a single SQL command string. The string can contain multiple commands terminated by ; - they are executed in one transaction, and only the result of the last command is returned.
Read about psql command options in the manual.
If you don't provide a database to connect to, psql will connect to the default maintenance database named "postgres". In the second answer it is irrelevant which database we connect to.
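Putting the pieces together, groksqlcommands.sh could look like the sketch below. It assumes the password-less postgres setup described above and that the SQL files contain only plain SQL (no CREATE DATABASE, no \c), as recommended:
#!/bin/sh
# groksqlcommands.sh - create the template db, load every SQL file passed as
# an argument into it, then create databases based on that template.
set -e
psql postgres -c "CREATE DATABASE mytemplate1 WITH ENCODING 'UTF8' TEMPLATE template0"
for file in "$@"; do
  psql mytemplate1 -f "$file"
done
for dbname in foo foofoo foobar barbar; do
  psql postgres -c "CREATE DATABASE $dbname TEMPLATE mytemplate1"
done
Invoke it as ./groksqlcommands.sh createdbtemplate1.sql db_funcs.sql (after stripping the meta-commands from those files).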
You can echo your commands to psql's input:
for dbname in foo foofoo foobar barbar
do
  echo "CREATE DATABASE $dbname TEMPLATE mytemplate1" | psql
done
If you're willing to go the extra mile, you'll probably have more success with SQLAlchemy. It'll allow you to build scripts with Python instead of bash, which is easier and gives better control.
As requested in the comments: https://github.com/srathbun/sqlCmd
Store your sql scripts under a root dir
Use dev,tst,prd parametrized dbs
Use find to run all your pgsql scripts as shown here
Exit on errors
Or just git clone the whole tool from here
For the use case where you do have to mix them....
Here is a script I've used for importing JSON into PostgreSQL (WSL Ubuntu), which basically requires mixing psql meta-commands and SQL in the same command line. Note the use of the somewhat obscure script command, which allocates a pseudo-tty:
$ more update.sh
#!/bin/bash
wget <filename>.json
echo '\set content `cat $(ls -t <redacted>.json.* | head -1)` \\ delete from <table>; insert into <table> values(:'"'content'); refresh materialized view <view>; " | PGPASSWORD=<passwd> psql -h <host> -U <user> -d <database>
$

How to create a backup of a Postgres DB using bash?

How to create a backup of a Postgres DB using bash?
pg_dump -U some_user_name -f dump.file -Fc database_name
That's all.
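Since -Fc produces a custom-format archive rather than plain SQL, it is restored with pg_restore instead of psql (names as in the command above):
pg_restore -U some_user_name -d database_name dump.file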
If you need to authenticate with a password, use a .pgpass file.
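A minimal ~/.pgpass sketch (all values are placeholders); libpq ignores the file unless it is readable only by you:
# ~/.pgpass - one connection per line: hostname:port:database:username:password
localhost:5432:database_name:some_user_name:secret_password
Then restrict its permissions:
chmod 600 ~/.pgpass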
Use pg_dump.
Ideally you should add a scheduled job to crontab to be executed daily. The following creates a gzipped SQL file with a timestamp, since plain SQL dumps can otherwise be very big.
pg_dump database_name | gzip -c > ~/backup/postgres/database_name-`/bin/date +%Y%m%d-%H%M`.sql.gz
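A matching crontab entry could run this daily at 2 AM (the schedule and backup path are assumptions); note that % is special in crontab lines and must be escaped as \%:
0 2 * * * pg_dump database_name | gzip -c > ~/backup/postgres/database_name-`/bin/date +\%Y\%m\%d-\%H\%M`.sql.gz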
