I'm developing a bash script. I've used the ssh command in my script to run some commands on a remote server, and I need to get the result of the command that runs on the remote server, so I wrote this code:
db="$(ssh -t user#host 'mysql --user=username -ppassword -e \"SHOW DATABASES;\" | grep -Ev \"(Database|information_schema|performance_schema)\"' | grep -Ev \"(mysql)\")"
But each time I run my bash script, I get Connection to host closed. at the beginning of the db result. This is a default message from the ssh command.
Also, if I append > /dev/null 2>&1 to the end of my command, the db variable ends up empty.
How can I turn off the return message from the executed command?
Like this:
#!/bin/bash
db=$(
ssh -t user@host bash <<EOF
mysql --user=username -ppassword -e "SHOW DATABASES" |
grep -Ev "(Database|information_schema|performance_schema|mysql)" \
2> >(grep -v 'Connection to host closed')
EOF
)
Or, if Connection to host closed comes from STDOUT:
...
mysql --user=username -ppassword -e "SHOW DATABASES" |
grep -Ev "(Database|information_schema|performance_schema|mysql|Connection to host closed)"
...
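For completeness, a minimal sketch of a third option, assuming the remote command doesn't actually need a pseudo-terminal: drop -t (ssh normally prints the session-closed message only when a tty was requested) and discard ssh's local stderr, then filter on the local side:
db="$(ssh user@host 'mysql --user=username -ppassword -e "SHOW DATABASES;"' 2>/dev/null \
      | grep -Ev "(Database|information_schema|performance_schema|mysql)")"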
I have been tasked with replacing ISQL with sqlcmd in a lot of our bash scripts. ISQL allows piping a variable into it when it executes.
An example would be:
SQL_STATEMENT="SELECT TOP 1 SYS_USER_NAME FROM SYS_USER"
echo $SQL_STATEMENT | isql -b -d, $DSN $DBUID $DBPWD >> setupdb_test.txt
From what I can tell this is not viable in sqlcmd. How can I do this? What flags does sqlcmd have to allow this to happen?
Here is what I have tried, and it gives a good result, BUT I really do not want to create the file sql_command.sql every time a particular script runs:
echo "SELECT TOP 1 SYS_USER_NAME FROM SYS_USER" > sql_command.sql
sqlcmd -S $DB -U $DBUID -P $DBPWD -d $DSN -i sql_command.sql >> setupdb_test.txt
Programs originating on Windows can be picky about how they handle non-regular files, and I don't have the opportunity to test, but you can try the typical Unix tricks for providing a "file" with data from an echo.
Either /dev/stdin:
echo "SELECT TOP 1 SYS_USER_NAME FROM SYS_USER" | sqlcmd -S "$DB" -U "$DBUID" -P "$DBPWD" -d "$DSN" -i /dev/stdin
or process substitution:
sqlcmd -S "$DB" -U "$DBUID" -P "$DBPWD" -d "$DSN" -i <(echo "SELECT TOP 1 SYS_USER_NAME FROM SYS_USER")
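If either of those works with your build of sqlcmd, it should drop straight into the original pattern from the question, e.g.:
SQL_STATEMENT="SELECT TOP 1 SYS_USER_NAME FROM SYS_USER"
sqlcmd -S "$DB" -U "$DBUID" -P "$DBPWD" -d "$DSN" -i <(echo "$SQL_STATEMENT") >> setupdb_test.txt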
I'm trying to run a script (automation.sh) automatically from crontab.
(I'm on Ubuntu 14.04).
#!/usr/bin/env bash
day=$(date +%F -d'yesterday')
cat /home/tomi/logs/$day |grep registration > /home/tomi/registrations/$day
cat /home/tomi/logs/$day |grep free_tree > /home/tomi/free_tree/$day
cat /home/tomi/logs/$day |grep super_tree > /home/tomi/super_tree/$day
psql -U hello -d postgres -c "\COPY registrations FROM '/home/tomi/registrations/$day' DELIMITER ' '";
psql -U hello -d postgres -c "\COPY free_tree FROM '/home/tomi/free_tree/$day' DELIMITER ' '";
psql -U hello -d postgres -c "\COPY super_tree FROM '/home/tomi/super_tree/$day' DELIMITER ' '";
psql -U hello -d postgres -f daily_active_users.sql > /home/tomi/tmp1
psql -U hello -d postgres -f daily_revenue.sql > /home/tomi/tmp2
If I run this script normally from the command line, then the last two lines generate tmp1 and tmp2 with data in them. (That's the expected result.)
However, if I run this very same script in crontab, everything works, but the last two lines generate empty files (tmp1 and tmp2).
The tricky thing is that when I break this script into two scripts (e.g. automated.sh and automated2.sh) and run the last two lines in crontab 5 minutes later (via this automated2.sh script), tmp1 and tmp2 are generated correctly, with data in them.
Any idea, what can cause this?
The answer is in the comments:
the full path is missing on the last two lines of automation.sh!
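In other words, cron doesn't run the script from the directory that holds the .sql files, so psql can't find them. Assuming they live in /home/tomi, the last two lines would become something like:
psql -U hello -d postgres -f /home/tomi/daily_active_users.sql > /home/tomi/tmp1
psql -U hello -d postgres -f /home/tomi/daily_revenue.sql > /home/tomi/tmp2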
Thanks!
Hello professionals!
There is a good and simple script idea to make a mysqldump of every database, taken from
dump all mysql tables into separate files automagically?
(author: https://stackoverflow.com/users/1274838/elias-torres-arroyo)
with the script as follows:
#!/bin/bash
# Optional variables for a backup script
MYSQL_USER="root"
MYSQL_PASS="PASSWORD"
BACKUP_DIR="/backup/01sql/";
# Get the database list, exclude information_schema
for db in $(mysql -B -s -u $MYSQL_USER --password=$MYSQL_PASS -e 'show databases' | grep -v information_schema)
do
# dump each database in a separate file
mysqldump -u $MYSQL_USER --password=$MYSQL_PASS "$db" | gzip > "$BACKUP_DIR/$db.sql.gz"
done
But the problem is that this script does not "understand" arguments like
--add-drop-database
to perform
mysqldump -u $MYSQL_USER --password=$MYSQL_PASS "$db" --add-drop-database | gzip > "$BACKUP_DIR/$db.sql.gz"
Is there any way to force this script to understand the additional arguments listed under mysqldump --help? All my tests show that it doesn't.
Thank you in advance for any hint to try!
--add-drop-database works only with --all-databases or --databases.
See the reference in the docs.
So in your case the mysqldump utility ignores the mentioned parameter because you are dumping one database at a time.
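So, staying with the loop above, one way to make the flag take effect might be to dump each database through --databases, e.g.:
mysqldump -u $MYSQL_USER --password=$MYSQL_PASS --add-drop-database --databases "$db" | gzip > "$BACKUP_DIR/$db.sql.gz"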
I'm currently using mysqldump to back up my dev machine and servers.
There is one project I just started, however, that has a HUUUUUGE database that I don't really need backed up, and it'll be a big problem to add it to the rest of the backup cycle.
I'm currently doing this:
"c:\Program Files\mysql\MySQL Server 5.1\bin\mysqldump" -u root -pxxxxxx --all-databases > g:\backups\MySQL\mysqlbackup.sql
Is it possible to somehow specify "except this database(s)"?
I wouldn't like to have to specify the list of DBs manually, since that would mean I'd have to remember to update my backup batch file every time I create a new DB, and I know that's not gonna happen.
EDIT: As you probably guessed from my command line above, I'm doing this on Windows, so I can't do any kind of fancy bash stuff, only wimpy .bat things.
Alternatively, if you have other ideas to solve this same issue, they are more than welcome, of course!
mysql ... -N -e "show databases like '%';" | grep -v -F databaseidontwant | xargs mysqldump ... --databases > out.sql
echo 'show databases;' | mysql -uroot -proot | grep -v ^Database$ | grep -v ^information_schema$ | grep -v ^mysql$ | grep -v -F db1 | xargs mysqldump -uroot -proot --databases > all.sql
dumps all databases except: mysql, information_schema and db1.
Or if you'd like to review the list before dumping:
echo 'show databases;' | mysql -uroot -proot > databases.txt
edit databases.txt and remove any you don't want to dump
cat databases.txt | xargs mysqldump -uroot -proot --databases > all.sql
What about
--ignore-table=db_name.tbl_name
Do not dump the given table, which must be specified using both the database and table names. To ignore multiple tables, use this option multiple times.
Maybe you'll need to specify a few to completely ignore the big database.
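As a rough sketch of how that could look in the existing batch file (huge_db.big_table1 and huge_db.big_table2 are just placeholders for the tables of the database you want to skip):
"c:\Program Files\mysql\MySQL Server 5.1\bin\mysqldump" -u root -pxxxxxx --all-databases --ignore-table=huge_db.big_table1 --ignore-table=huge_db.big_table2 > g:\backups\MySQL\mysqlbackup.sql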
I created the following one line solution avoiding multiple grep commands.
mysql -e "show databases;" | grep -Ev "Database|DatabaseToExclude1|DatabaseToExclude2" | xargs mysqldump --databases >mysql_dump_filename.sql
The -E in grep enables extended regex support, which allows providing different matches separated by the pipe symbol "|". More options can be added to the mysqldump command, but only before the "--databases" parameter.
A little side note: I like to define the filename for the dump like this ...
... >mysql_dump_$(hostname)_$(date +%Y-%m-%d_%H-%M).sql
This will automatically add the host name, date and time to the filename. :)
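Putting the two pieces together, the whole line could look like this (DatabaseToExclude1/2 are placeholders, as above):
mysql -e "show databases;" | grep -Ev "Database|DatabaseToExclude1|DatabaseToExclude2" | xargs mysqldump --databases > mysql_dump_$(hostname)_$(date +%Y-%m-%d_%H-%M).sql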
Seeing as you're using Windows, you should have PowerShell available.
Here is a short PowerShell script to get a list of all databases, remove unwanted ones from the list and then use mysqldump to back up the others.
$MySQLPath = "."
$Hostname = "localhost"
$Username = "root"
$Password = ""
# Get list of Databases
$Databases = [System.Collections.Generic.List[String]] (
& $MySQLPath\mysql.exe -h"$Hostname" -u"$Username" -p"$Password" -B -N -e"show databases;"
)
# Remove databases from list we don't want
[void]$Databases.Remove("information_schema")
[void]$Databases.Remove("mysql")
# Dump database to .SQL file
& $MySQLPath\mysqldump.exe -h"$HostName" -u"$Username" -p"$Password" -B $($Databases) | Out-File "DBBackup.sql"
Create a backup user and only grant that user access to the databases that you want to back up.
You still need to remember to explicitly grant the privileges, but that can be done in the database and doesn't require a file to be edited.
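A minimal sketch of what that grant could look like (user name, password and database name are made up, and the exact privilege list depends on what you dump; SELECT and LOCK TABLES are the usual minimum for mysqldump):
mysql -u root -p <<'SQL'
CREATE USER 'backup'@'localhost' IDENTIFIED BY 'change-me';
GRANT SELECT, LOCK TABLES, SHOW VIEW, TRIGGER ON only_this_db.* TO 'backup'@'localhost';
SQL
The dump then runs as that user, so it should only see the databases it has been granted.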
It took me a lot of finagling to come up with this but I've used it for a few years now and it works well...
mysql -hServerName -uUserName -pPassword -e "SELECT CONCAT('\nmysqldump -hServerName -uUserName -pPassword --set-gtid-purged=OFF --max_allowed_packet=2048M --single-transaction --add-drop-database --opt --routines --databases ',DBList,' | mysql -hServerName2 -uUserName2 -pPAssword2 ' ) AS Cmd FROM (SELECT GROUP_CONCAT(schema_name SEPARATOR ' ') AS DBList FROM information_schema.SCHEMATA WHERE LEFT(schema_name, 8) <> 'cclegacy' AND schema_name NOT IN ('mysql','information_schema','performance_schema','test','external','othertoskip')) a \G" | cmd
Instead of the pipe over to mysql, where I'm moving from ServerName to ServerName2, you could redirect to a file, but this allows me to tailor what I move. Sometimes I even OR the list so I can say LIKE 'Prefix%' etc.
You can use this one for production.
It excludes 'performance_schema\|information_schema\|mysql\|sys' ... modify it for your needs.
MYSQL_USER=
MYSQL_PASS=
MYSQL_HOST=
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS} -h${MYSQL_HOST}"
MYSQLDUMP_OPTIONS="--routines --triggers --single-transaction"
DBLIST=`mysql -s --host=$MYSQL_HOST --user=$MYSQL_USER --password=$MYSQL_PASS \
--execute="SHOW DATABASES;" | grep -v \
'performance_schema\|information_schema\|mysql\|sys' | awk '{printf("\"%s\" ",$0)}'`
mysqldump ${MYSQL_CONN} ${MYSQLDUMP_OPTIONS} --databases ${DBLIST} | gzip >all-dbs.sql.gz