How to back up a partitioned table in Greenplum 5.18

Is it possible to make a backup of a specific partitioned table with Greenplum?
When using:
pg_dump -t schema.partitioned_table dbname >/tmp/tlb
I can get all the partition tables with their DDL if I execute the command on my Greenplum master node.
I cannot get any partition tables with their DDL when I execute the command from a host outside my Greenplum cluster.
What should I do, from outside my Greenplum nodes, to get the partition tables with their DDL?
(The host outside the cluster runs PostgreSQL 10.)

OK. The answer is that Greenplum's pg_dump is different from PostgreSQL's pg_dump.
Run the command below on the Greenplum master node and on any other PostgreSQL node to see the difference:
pg_dump --help
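A rough sketch of what the comparison shows, plus one possible workaround; gp_master_host, the port, and the output path are placeholder assumptions, not from the original question:
# Greenplum ships its own pg_dump that understands GP partition DDL
pg_dump --version    # on the master: reports a Greenplum build
pg_dump --version    # on a plain PostgreSQL host: reports community PostgreSQL
# Possible workaround: install the Greenplum client tools on the remote host
# and point that pg_dump at the master over the network
pg_dump -h gp_master_host -p 5432 -t schema.partitioned_table dbname > /tmp/tbl.sql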

Related

Does Hive beeline have support to download/move a file to the client?

I was using the Hive CLI to directly run INSERT OVERWRITE LOCAL DIRECTORY 'local/machine/folder/location' SELECT * FROM table.
The Hive CLI would write the output file to that location on the client machine.
Now I'm moving to beeline. The same command invoked through beeline writes the file to the HiveServer2 machine instead.
beeline -u ${hive_resource_jdbcurl} ${hive_resource_username} ${hive_resource_password} ${hive_resource_driverclass} -S -e "${insert_overwrite_command}"
I want to know if there is any way to get the file onto the client instead of the HiveServer2 machine.
E.g.,
HiveServer2 machine - HS2_Machine
AppServer/WebServer machine - App1_Machine
The beeline command (INSERT OVERWRITE LOCAL DIRECTORY) is triggered from App1_Machine, but it writes the output to a local directory on HS2_Machine. Is there a way/command to get the file onto App1_Machine's local disk instead?
PS: I don't want to move the file from HS2_Machine to the app server using scp/ftp, because I'm dealing with a huge volume of data and I don't want two operations (storing it on HS2_Machine and then moving that huge file to App1_Machine).
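No answer is recorded here, but one common workaround (a sketch only; the connection flags reuse the variables above and the output path is a placeholder) is to skip INSERT OVERWRITE LOCAL DIRECTORY and stream the query result through beeline on the client:
# Run from App1_Machine: rows are formatted as CSV on stdout and redirected
# to the client's local disk, so nothing lands on HS2_Machine
beeline -u "${hive_resource_jdbcurl}" -n "${hive_resource_username}" -p "${hive_resource_password}" \
  --silent=true --outputformat=csv2 \
  -e "SELECT * FROM table" > /local/machine/folder/location/output.csv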

How to determine Hive database size?

How can I determine a Hive database's size from Bash or from the Hive CLI?
The hdfs and hadoop commands are also available in Bash.
A database in Hive is a metadata store, meaning it holds information about tables and has a default location. Tables in a database can also be stored anywhere in HDFS if a location is specified when creating the table.
You can see all tables in a database using the show tables command in the Hive CLI.
Then, for each table, you can find its location in HDFS using describe formatted <table name> (again in the Hive CLI).
Finally, for each table you can find its size using hdfs dfs -du -s -h /table/location/
I don't think there's a single command to measure the sum of the sizes of all tables in a database. However, it should be fairly easy to write a script that automates the above steps. Hive can also be invoked from the Bash CLI using: hive -e '<hive command>'
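A minimal Bash sketch of that automation (my_db is a placeholder database name; it assumes the hive and hdfs clients are on the PATH):
# For every table in the database, look up its HDFS location and report its size
for table in $(hive -S -e "use my_db; show tables;"); do
  location=$(hive -S -e "describe formatted my_db.${table};" | grep -i 'Location:' | awk '{print $NF}')
  hdfs dfs -du -s -h "${location}"
done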
Show Hive databases on HDFS
sudo hadoop fs -ls /apps/hive/warehouse
Show Hive database size
sudo hadoop fs -du -s -h /apps/hive/warehouse/{db_name}
If you want the size of your complete database, run this on your "warehouse":
hdfs dfs -du -h /apps/hive/warehouse
This gives you the size of each DB in your warehouse.
If you want the size of the tables in a specific DB, run:
hdfs dfs -du -h /apps/hive/warehouse/<db_name>
run a "grep warehouse" on hive-site.xml to find your warehouse path

How to export 4 million rows in MySQL?

I have a database in which one particular table has more than 4 million records. I tried downloading the whole DB using MySQL Workbench as well as the command line, using the following command:
mysqldump -u root -p password mydb > myfile.sql
But I got only half of the data. If I exclude that one particular table, everything works fine. Can anyone suggest how to dump a DB with tables having more than a million entries?
Try adding the lines below to my.cnf and restarting the server:
[mysqld]
# Performance settings used for import.
delay_key_write=ALL
bulk_insert_buffer_size=256M
Or run:
mysqldump -u root -p --max_allowed_packet=1073741824 --lock-tables=false mydb > myfile.sql
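An alternative sketch, not from the original answer: for InnoDB tables, --single-transaction with --quick streams rows from a consistent snapshot instead of buffering whole tables, and the restore uses a matching packet limit:
# Dump: consistent snapshot, row streaming, large packet limit
mysqldump -u root -p --single-transaction --quick --max_allowed_packet=1073741824 mydb > myfile.sql
# Restore on the target server
mysql -u root -p --max_allowed_packet=1073741824 mydb < myfile.sql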

How to start a database using mserver in MonetDB

I want to bind the program to one CPU when using MonetDB, so I think I can start MonetDB with mserver for this purpose, but I don't know how. If my database is named newdb, what should I do?
Sorry for the late reply.
First create a dbfarm using:
monetdbd create ~/my_dbform
monetdbd start ~/my_dbform
Then you have to create and release the database:
monetdb create newdb
monetdb release newdb
Thus you get an mserver running for the database.
For the client, do:
mclient -u monetdb -d newdb
with the password:
monetdb
The database is empty when you create it; you can insert data or load a dump with sql> \< voc_dump.sql
https://www.monetdb.org/Documentation/UserGuide/Tutorial/Windows
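To address the CPU-binding part of the question, here is a hedged sketch of running the database directly with mserver5 limited to one core; the taskset pinning and the dbfarm path are assumptions, and gdk_nr_threads caps MonetDB's worker threads:
# Stop the monetdbd-managed instance first so the database lock is free
monetdb stop newdb
# Start newdb directly with mserver5, pinned to CPU core 0 (Linux taskset),
# using a single MonetDB worker thread
taskset -c 0 mserver5 --dbpath=$HOME/my_dbform/newdb --set gdk_nr_threads=1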

MySQL: import only missing rows

I'm looking for a way to restore my DB from a prior backup. However, the backup should not simply overwrite all existing records but instead add only the difference between the current DB and the backup file. If no "non-existent" records are stored in the backup, nothing should happen. The backups were made with mysqldump. Any clues?
Thanks in advance
Here is a less manual answer:
mysqldump -t --insert-ignore --skip-opt -u USER -pPASSWORD -h 127.0.0.1 database > database.sql
That export command with the -t --insert-ignore --skip-opt options gives you an SQL dump file with no DROP TABLE or CREATE TABLE commands, and every INSERT is now an INSERT IGNORE.
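Loading it back is a plain restore; a quick usage sketch with the same placeholder credentials, where INSERT IGNORE skips any row whose primary or unique key already exists:
mysql -u USER -pPASSWORD -h 127.0.0.1 database < database.sql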
BONUS:
This will dump a single table in the same way:
mysqldump -t --insert-ignore --skip-opt -u USER -pPASSWORD -h 127.0.0.1 database table_name > table_name.sql
I needed this today and could not help but to share it!
Remove the DROP TABLE and CREATE TABLE statements from the dump file. Change the INSERT statements to INSERT IGNORE. Then load the backup file and it should not update any duplicate rows.
