I've loaded .tbl files into the tables; now how can I see the total disk space used by the database?
I'm using Fedora.
The disk footprint can be assessed by running the (Linux) command 'du' on the dbfarm directory, or by running the query 'select * from storage();'.
Source: http://www.monetdb.org/Documentation/Userguide/diskspace
You can try the following:
mclient -d dwh -f tab -s "select location from storage() where table='name_of_a_table';" | xargs -I{} du -m /var/monetdb5/dbfarm/dwh/bat/{}.tail | cut -f1 | paste -sd+ | bc
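If you just want one grand total rather than per-table numbers, a plain du over the whole dbfarm works too (a minimal sketch, assuming the dbfarm path from the example above):
du -sh /var/monetdb5/dbfarm/dwh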
Running an nx affected:apps command gives me this output:
> NX NOTE Affected criteria defaulted to --base=master --head=HEAD
> NX Affected apps:
- app-backend
- app-frontend
- app-something
- app-anything
I need to get all the application names and use them again for another command call.
So I started with this:
output=$(nx affected:apps)
echo "$output" | grep -E "^\W+app-(\w+)"
This gives me
- app-backend
- app-frontend
- app-something
- app-anything
But I need just the names, so that I can run foo --name={appname} four times.
I'm also not quite sure how to use the result in a loop. Quite new to bash scripting :-(
You may use -o (show matches only) with -P (Perl regex mode) in GNU grep:
nx affected:apps |
grep -oP "^\W+app-\K\w+" |
xargs -I {} docker build -t {} .
If GNU grep isn't available, then use this awk command:
nx affected:apps |
awk -F- '/app-/{print $3}' |
xargs -I {} docker build -t {} .
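If you'd rather have an explicit loop than xargs, a while-read sketch works as well; here the \K is moved so the full app- name is kept, and foo is the placeholder command name from the question:
nx affected:apps |
grep -oP '^\W+\Kapp-\w+' |
while read -r name; do
  foo --name="$name"
done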
I don't have the nx command here, but you can try using xargs:
nx affected:apps | grep '^ -' | cut -d' ' -f4 | xargs -I{} echo docker build -t {} ./dist/{}
Remove echo to actually run the command.
You can use the --plain option:
nx affected:apps --plain
The command should return all the affected apps with a space as the divider. You can then store them in a bash array and cycle through them in a for loop, running the command you need:
#!/bin/bash
AFFECTED=($(./node_modules/.bin/nx affected:apps --plain))
for t in "${AFFECTED[@]}"; do
  echo "$t"
done
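To run the command from the question on each app instead of just printing it, swap the echo for the call (foo --name= is the placeholder syntax the OP used):
for t in "${AFFECTED[@]}"; do
  foo --name="$t"
done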
Is it possible to run psql without entering the password?
I mean, how can I set the password on the CLI (via expect or some other way) so that I won't be prompted for it?
Goal: I need to run this psql from a bash script.
psql -U ambari ambari -c "select * from blueprint"
Password for user ambari:
blueprint_name | security_type | security_descriptor_reference | stack_id
----------------+---------------+-------------------------------+----------
HDP | NONE | | 2
(1 row)
I also tried this, but without success. Why?
su - postgres -c " psql -tc \"SELECT * FROM BLUEPRINT\" "
ERROR: relation "blueprint" does not exist
LINE 1: SELECT * FROM BLUEPRINT
^
Second: how do I capture the first word after "blueprint_name"?
Meanwhile I'm using this, but I'm not satisfied with the approach:
psql -U ambari ambari -c "select * from blueprint" | grep -v row | tail -2 | awk '{print $1}'
Password for user ambari:
HDP
Is it possible to run psql without entering the password?
Yes it's possible:
Set the PGPASSWORD environment variable. Here is the manual (http://www.postgresql.org/docs/current/static/libpq-envars.html).
Use a .pgpass file to store the password. Here is the manual (http://www.postgresql.org/docs/current/static/libpq-pgpass.html).
Use "trust" authentication for that specific user (http://www.postgresql.org/docs/current/static/auth-methods.html#AUTH-TRUST).
Use a connection URI that contains everything (http://www.postgresql.org/docs/current/static/libpq-connect.html#AEN42532).
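For the script in the question, the PGPASSWORD route is the quickest; a minimal sketch (the password value is a placeholder):
PGPASSWORD=yourpassword psql -U ambari ambari -c "select * from blueprint"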
You need to use "-d ambari" to tell psql that the database name is "ambari". Example:
# su - postgres -c "psql -d ambari -tc 'select * from ambari.blueprint' "
HDP | NONE | | 6
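For the second part of the question (capturing just the blueprint name), selecting the one column with -t (tuples only) and -A (unaligned) avoids the grep/tail/awk chain; a sketch assuming the column is blueprint_name, as shown in the output above:
psql -U ambari -d ambari -tAc "select blueprint_name from blueprint limit 1"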
I'm currently using mysqldump to back up my dev machine and servers.
There is one project I just started, however, that has a HUUUUUGE database that I don't really need backed up, and it'll be a big problem to add it to the rest of the backup cycle.
I'm currently doing this:
"c:\Program Files\mysql\MySQL Server 5.1\bin\mysqldump" -u root -pxxxxxx --all-databases > g:\backups\MySQL\mysqlbackup.sql
Is it possible to somehow specify "except this database(s)"?
I wouldn't like to have to specify the list of DBs manually, since that would mean I'd have to remember to update my backup batch file every time I create a new DB, and I know that's not gonna happen.
EDIT: As you probably guessed from my command line above, I'm doing this on Windows, so I can't do any kind of fancy bash stuff, only wimpy .bat things.
Alternatively, if you have other ideas to solve this same issue, they are more than welcome, of course!
mysql ... -N -e "show databases like '%';" | grep -v -F databaseidontwant | xargs mysqldump ... --databases > out.sql
echo 'show databases;' | mysql -uroot -proot | grep -v ^Database$ | grep -v ^information_schema$ | grep -v ^mysql$ | grep -v -F db1 | xargs mysqldump -uroot -proot --databases > all.sql
This dumps all databases except information_schema, mysql, and db1.
Or if you'd like to review the list before dumping:
echo 'show databases;' | mysql -uroot -proot > databases.txt
edit databases.txt and remove any you don't want to dump
cat databases.txt | xargs mysqldump -uroot -proot --databases > all.sql
What about
--ignore-table=db_name.tbl_name
Do not dump the given table, which must be specified using both the database and table names. To ignore multiple tables, use this option multiple times.
Maybe you'll need to specify a few to completely ignore the big database.
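Applied to the command from the question, it would look something like this (hugedb and its table names are placeholders you'd fill in yourself):
"c:\Program Files\mysql\MySQL Server 5.1\bin\mysqldump" -u root -pxxxxxx --all-databases --ignore-table=hugedb.big_table1 --ignore-table=hugedb.big_table2 > g:\backups\MySQL\mysqlbackup.sql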
I created the following one-line solution, avoiding multiple grep commands.
mysql -e "show databases;" | grep -Ev "Database|DatabaseToExclude1|DatabaseToExclude2" | xargs mysqldump --databases >mysql_dump_filename.sql
The -E in grep enables extended regex support, which allows providing several matches separated by the pipe symbol "|". More options can be added to the mysqldump command, but only before the "--databases" parameter.
Little side note: I like to define the filename for the dump like this ...
... >mysql_dump_$(hostname)_$(date +%Y-%m-%d_%H-%M).sql
This will automatically add the host name, date and time to the filename. :)
Seeing as you're using Windows, you should have PowerShell available to use.
Here is a short PowerShell script to get a list of all databases, remove unwanted ones from the list, and then use mysqldump to back up the others.
$MySQLPath = "."
$Hostname = "localhost"
$Username = "root"
$Password = ""
# Get list of Databases
$Databases = [System.Collections.Generic.List[String]] (
& $MySQLPath\mysql.exe -h"$Hostname" -u"$Username" -p"$Password" -B -N -e"show databases;"
)
# Remove databases from list we don't want
[void]$Databases.Remove("information_schema")
[void]$Databases.Remove("mysql")
# Dump database to .SQL file
& $MySQLPath\mysqldump.exe -h"$HostName" -u"$Username" -p"$Password" -B $($Databases) | Out-File "DBBackup.sql"
Create a backup user and only grant that user access to the databases that you want to backup.
You still need to remember to explicitly grant the privileges but that can be done in the database and doesn't require a file to be edited.
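A sketch of what the grants might look like (the user name, password, and database are placeholders; SELECT and LOCK TABLES cover plain dumps, while SHOW VIEW and TRIGGER are needed if you dump views and triggers):
mysql -uroot -p -e "CREATE USER 'backup'@'localhost' IDENTIFIED BY 'secret'; GRANT SELECT, LOCK TABLES, SHOW VIEW, TRIGGER ON db_to_backup.* TO 'backup'@'localhost';"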
It took me a lot of finagling to come up with this but I've used it for a few years now and it works well...
mysql -hServerName -uUserName -pPassword -e "SELECT CONCAT('\nmysqldump -hServerName -uUserName -pPassword --set-gtid-purged=OFF --max_allowed_packet=2048M --single-transaction --add-drop-database --opt --routines --databases ',DBList,' | mysql -hServerName2 -uUserName2 -pPAssword2 ' ) AS Cmd FROM (SELECT GROUP_CONCAT(schema_name SEPARATOR ' ') AS DBList FROM information_schema.SCHEMATA WHERE LEFT(schema_name, 8) <> 'cclegacy' AND schema_name NOT IN ('mysql','information_schema','performance_schema','test','external','othertoskip')) a \G" | cmd
Instead of the pipe over to mysql, where I'm moving from ServerName to ServerName2, you could redirect to a file, but this lets me tailor what I move. Sometimes I even OR the list so I can say LIKE 'Prefix%', etc.
You can use this one for production.
It excludes 'performance_schema\|information_schema\|mysql\|sys'; modify it for your needs.
MYSQL_USER=
MYSQL_PASS=
MYSQL_HOST=
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS} -h${MYSQL_HOST}"
MYSQLDUMP_OPTIONS="--routines --triggers --single-transaction"
DBLIST=`mysql -s --host=$MYSQL_HOST --user=$MYSQL_USER --password=$MYSQL_PASS \
--execute="SHOW DATABASES;" | grep -v \
'performance_schema\|information_schema\|mysql\|sys' | awk '{printf("%s ",$0)}'`
mysqldump ${MYSQL_CONN} ${MYSQLDUMP_OPTIONS} --databases ${DBLIST} | gzip >all-dbs.sql.gz
I have the following:
mysqldump -u xxxx
-h localhost
--password=xxxxx databasename |
ssh username@00.000.00.202 "dd of=httpdocs/backup`date +'%Y-%m-%d-%H-%M-%S'`.sql"
...which SSHes a mysqldump to a remote machine.
I need to compress the mysqldump before it is SSH'd, as the dump is 500 MB and it's eating up my bandwidth allowance.
mysqldump ... | gzip -9 | ssh ...
or
mysqldump ... | bzip2 -9 | ssh ...
or, if you want it uncompressed on the other end
mysqldump ... | bzip2 -9 | ssh machine "bzip2 -d >..."
mysqldump ... | gzip -9 | ssh machine "gzip -d >..."
You can add the -C flag to the ssh call to automatically compress the transmitted data.
You need to call gzip between mysqldump and ssh, like:
mysqldump [mysql options] | gzip | ssh [ssh options]
I would recommend changing the saved file extension to ".sql.gz" as well.
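Putting it together with the command from the question (and with the .sql.gz extension suggested above):
mysqldump -u xxxx -h localhost --password=xxxxx databasename | gzip -9 | ssh username@00.000.00.202 "dd of=httpdocs/backup`date +'%Y-%m-%d-%H-%M-%S'`.sql.gz"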
This has already been answered and accepted, but I thought you might find this an interesting alternative.
Percona's open-source XtraBackup tool will perform compressed (tar) backups on the fly, along with lots of other interesting things.
I couldn't find an anchor on the page, but scroll down to "Compressed Backups".
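For reference, the classic invocation streamed a tar archive that you could compress and ship in one go; this uses the old innobackupex wrapper, and flags vary by version, so treat it as a sketch:
innobackupex --stream=tar ./ | gzip | ssh user@remotehost "cat - > /backups/mysql.tar.gz"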
I can run commands like vacuumdb, pg_dump, and psql just fine in a script if I preface them like so:
/usr/bin/sudo -u postgres /usr/bin/pg_dump -Fc mydatabase > /opt/postgresql/prevac.gz
/usr/bin/sudo -u postgres /usr/bin/vacuumdb --analyze mydatabase
/usr/bin/sudo -u postgres /usr/bin/pg_dump -Fc mydatabase > /opt/postgresql/postvac.gz
SCHEMA_BACKUP="/opt/postgresql/$(date +%w).db.schema"
sudo -u postgres /usr/bin/pg_dump -C -s mydatabase > $SCHEMA_BACKUP
These run fine at the command line on Red Hat when I sudo to root and then, as you see in the commands above, sudo -u to postgres.
But when I try to kick this off from cron, I get zero bytes in all the files, meaning it didn't run properly. And I don't see a clue in the logs.
My /etc/crontab file has this entry at the bottom
00 23 * * * root /etc/db_backup.cron
And yes, /etc/db_backup.cron is chmod ug+x, owned by root, and the top of the file says "#!/bin/bash" (minus the double quotes).
Anyone know what gives?
Since you seem to have superuser rights anyway, you could put those commands into the crontab of the postgres user like so:
sudo su postgres
crontab -e
and then put the pg_dump/vacuumdb commands there.
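An entry there could then be a plain line, since no sudo is needed once it runs as postgres (paths taken from the question; the postgres user needs write access to /opt/postgresql):
00 23 * * * /usr/bin/pg_dump -Fc mydatabase > /opt/postgresql/prevac.gz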
I have a dynamic bash script that backs up all the databases on the server. It gets a list of all the databases and then vacuums each DB before performing a backup. All logs are written to a file, and that log is then emailed to me. This is something you could use if you want.
Copy the code below into a file and add the file to your crontab. I have set up my pg_hba.conf to trust local connections.
#!/bin/bash
logfile="/backup/pgsql.log"
backup_dir="/backup"
touch $logfile
databases=`psql -h localhost -U postgres -q -c "\l" | sed -n 4,/\eof/p | grep -v rows\) | grep -v template0 | grep -v template1 | awk {'print $1'}`
echo "Starting backup of databases " >> $logfile
for i in $databases; do
dateinfo=`date '+%Y-%m-%d %H:%M:%S'`
timeslot=`date '+%Y%m%d%H%M'`
/usr/bin/vacuumdb -z -h localhost -U postgres $i >/dev/null 2>&1
/usr/bin/pg_dump -U postgres -i -F c -b $i -h 127.0.0.1 -f $backup_dir/$i-database-$timeslot.backup
echo "Backup and Vacuum complete on $dateinfo for database: $i " >> $logfile
done
echo "Done backup of databases " >> $logfile
tail -15 /backup/pgsql.log | mailx youremail@domain.com
I have set up my cron like this, every 59 minutes from Monday to Friday (note that */59 in the minute field actually fires at minutes 0 and 59 of each hour, not strictly every 59 minutes):
*/59 * * * 1-5 sh /home/my_user/scripts/back_my_bd.sh
The script to run the backup is inside the back_my_bd.sh file, and its content is:
pg_dump -U USERDATABASE DATABASENAME > /home/my_user/sql/mybackup.sql
And I created the .pgpass file in the home directory to allow the backup without specifying the password (note the file must be chmod 600, or libpq will ignore it):
localhost:5432:DATABASENAME:USER:PASSWORD
Sorry, my English is not good!
Your environment variables are maybe not set in cron.
In your normal session, you probably have defined these variables:
PGPORT
PGHOST
PGDATABASE
PGUSER
PGPASSWORD
Add an "env" call to your script to check.
You probably have "ident" authentication in your pg_hba.conf for your postgres user.
The "-u postgres" option fails when that is the case.
Either change the user to postgres in your backup script or configure a different authentication method.
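For example, switching local connections for postgres from ident to md5 would be one hypothetical pg_hba.conf line (adjust to your setup, then reload PostgreSQL):
local   all   postgres   md5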
Instead of the following command:
databases=`psql -h localhost -U postgres -q -c "\l" | sed -n 4,/\eof/p | grep -v rows\) | grep -v template0 | grep -v template1 | awk {'print $1'}`
you can use this:
databases=`psql -t -c "select datname from pg_database where datname not like 'template%';" | grep -v '^$'`
The first one returns '|' for the template databases plus an empty line; the second one is cleaner.
databases=`psql -h localhost -U postgres -q -x -t -c "\l" | grep 'Name' | sed 's/ //g' | sed 's/Name|//g'`
Another version to get the list of databases:
psql -lqt | grep -vE '^ +(template[0-9]+|postgres)? *\|' | cut -d'|' -f1 | sed -e 's/ //g' -e '/^$/d'
As my psql -lqt output is:
abcdefghij | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
abc | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |