I have a script that automates the rebuilding of MongoDB databases for our servers:
#!/bin/sh
mongo local:host 127.0.0.1 mongodb-create-ourdatabase.js > validate.txt
mongoimport --host 127.0.0.1 --db ourdatabase --collection ourUser --file create-ourUser.js > validate.txt
The output of the first line, where the database is created, is written to the file, but the output of the second line, where the collection ourUser is created, goes to the screen.
What am I missing?
First, both calls create a new, empty validate.txt file, so the second call clobbers the result of the first. I doubt that this is what you want, so you should change the second > to >> to append to your logfile.
Second, executables write output through two streams: standard output (aka stdout, used for normal output and results) and standard error (aka stderr, used for warnings and errors). You cannot tell which stream a message went to just by looking at the screen.
To merge both streams and capture all of the process output, you have to send stderr to stdout so that it can be redirected as well, using 2&>1 (duplicate descriptor 2 = stderr onto 1 = stdout):
mongo local:host 127.0.0.1 mongodb-create-ourdatabase.js 2&>1 > validate.txt
mongoimport --host 127.0.0.1 --db ourdatabase --collection ourUser --file create-ourUser.js 2&>1 >> validate.txt
Thanks for the response, Jean-Francois; unfortunately that did not work, but it was close. What worked was:
#!/bin/sh
mongo localhost:27017 mongodb-create-our-database.js 2>&1 > validate.txt
mongoimport --host 127.0.0.1 --db ourdatabase --collection ourUser --file create-ourUser.js >> validate.txt 2>&1
Using 2&>1 had the script looking for a file named 2. I also found an excellent explanation of the redirection syntax (scroll down to the first answer), which worked for me.
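For reference, the order of the redirections is what matters here; a minimal sketch (some_command and out.log are placeholders, not taken from the script above):
# 2>&1 first: stderr is duplicated onto the current stdout (the terminal),
# and only then is stdout sent to the file, so errors still show on screen
some_command 2>&1 > out.log
# file redirection first, then stderr is duplicated onto the file,
# so both streams end up in out.log
some_command > out.log 2>&1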
I'm running ycsb, which sends a workload generated by YCSB to mongodb. It produces standard output, which I am storing in the file outputLoad.
./bin/ycsb load mongodb -s -P workloads/workloada -p mongodb.database=ycsb > outputLoad
The -s parameter in the command tells it to print a client status report. That report is printed directly to my terminal. How can I get this status into a log file?
Redirect standard error (file descriptor 2) to a file.
./bin/ycsb [...options...] > outputLoad 2> mylog.log
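If you would rather keep the status report and the normal YCSB output together in one file, merge the streams instead; a variant of the same command (not separately tested):
./bin/ycsb load mongodb -s -P workloads/workloada -p mongodb.database=ycsb > outputLoad 2>&1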
I'm making a remote mysqldump and afterwards downloading it with rsync, which is all working well, but I also want to log the remote errors that I currently only see in the terminal output.
I mean errors like this one: mysqldump: Got error: 1044: Access denied for user 'root'@'localhost' to database 'information_schema' when using LOCK TABLES.
This is the important part of my code:
MYSQL_CMD="mysqldump -u ${MYSQL_USER} -p${MYSQL_PASS} $db -r /root/mysql_${db}.sql"
$SSH -p ${SSH_PORT} ${SSH_USER}@${SSH_HOST} "${MYSQL_CMD}" >> "${LOGFILE}"
In my research I only found solutions for getting the exit code and return values.
I hope someone can give me a hint, thanks in advance.
These error messages are being written to stderr. You can redirect stderr to a file using 2> or 2>>, just like you do for stdout with > and >>. E.g.:
ssh ... 2>/tmp/logerrors
Note there is no space between 2 and >. You can merge stderr into the same file as stdout by replacing your >> "${LOGFILE}" with
ssh ... &>> "${LOGFILE}"
Again, no space in &>, which can also be written >&.
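Applied to the line from the question, that looks roughly like this (an untested sketch; the &>> form needs bash, while the 2>&1 form also works in plain sh):
# bash: append both stdout and stderr to the log file
$SSH -p ${SSH_PORT} ${SSH_USER}@${SSH_HOST} "${MYSQL_CMD}" &>> "${LOGFILE}"
# portable equivalent
$SSH -p ${SSH_PORT} ${SSH_USER}@${SSH_HOST} "${MYSQL_CMD}" >> "${LOGFILE}" 2>&1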
I have read all the other solutions and none of them fits my needs: I do not use Java, I do not have superuser rights, and I do not have any APIs installed on my server.
I have select rights on a remote PostgreSQL server, and I want to run a query on it remotely and export its results into a .csv file on my local server.
Once I manage to establish the connection to the server, I first have to define the DB, then the schema, and then the table, which makes the following lines of code fail:
\copy schema.products TO '/home/localfolder/products.csv' CSV DELIMITER ','
copy (Select * From schema.products) To '/home/localfolder/products.csv' With CSV;
I have also tried the following bash command:
psql -d DB -c "select * from schema.products;" > /home/localfolder/products.csv
and logging it with the following result:
-bash: home/localfolder/products.csv: No such file or directory
I would really appreciate it if someone could shed some light on this.
Have you tried this? I do not have psql right now to test it.
echo "COPY (SELECT * from schema.products) TO STDOUT with CSV HEADER" | psql -o '/home/localfolder/products.csv'
Details:
-o filename Put all output into file filename. The path must be writable by the client.
the echo builtin plus a pipe (|) passes the command to psql
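If you prefer to skip the echo, the same statement can be passed with -c and captured with an ordinary shell redirect instead of -o; again a sketch I have not tested here:
psql -d DB -c "COPY (SELECT * from schema.products) TO STDOUT with CSV HEADER" > /home/localfolder/products.csv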
After a while a good colleague devised this solution, which worked perfectly for my needs; I hope this can help someone.
ssh -l user [remote host] -p [port] 'psql -c "copy (select * from schema.table_name) to STDOUT csv header" -d DB' > /home/localfolder/products.csv
Very similar to idobr's answer.
From http://www.postgresql.org/docs/current/static/sql-copy.html:
Files named in a COPY command are read or written directly by the server, not by the client application.
So, you'll always want to use psql's \copy meta command.
The following should do the trick:
\copy (SELECT * FROM schema.products) to 'products.csv' with csv
If the above doesn't work, we'll need an error/warning message to work with.
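Since \copy runs in the psql client and writes to a path on the machine where psql is invoked, you can run it from your local server against the remote host; a rough sketch using the same placeholders as elsewhere in this thread:
psql -h [server here] -d DB -c "\copy (SELECT * FROM schema.products) to '/home/localfolder/products.csv' with csv"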
You mentioned that the server is remote; however, you are connecting to localhost. Add -h [server here] or set the environment variable:
export PGHOST='[server here]'
The database name should be the last argument, and not with -d.
And finally, that command should not have failed; my guess is that the directory does not exist. Either create it or try writing to /tmp.
I would ask you to try the following command:
psql -h [server here] -c "copy (select * from schema.products) to STDOUT csv header" DB > /tmp/products.csv
I am using this command to export.
export PGPASSWORD=${PASSWORD}
pg_dump -i -b -o --host=${HOST} --port=5444 --username=${USERNAME} --format=c --schema=${SCHEMA} --file=${SCHEMA}_${DATE}.dmp ${HOST}
I just want to know how I can include a log file in it so that I also get the logs.
I assume you mean you want to capture any errors, notifications, etc that are output by pg_dump in a file.
There is no specific option for this, but pg_dump will write these to STDERR, so you can easily capture them like this:
pg_dump -i -b -o ...other options... 2> mylogfile.log
In a shell, 2> redirects STDERR to the given file.
This advice is good for nearly any command line tool you are likely to find on a *nix system.
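For example, applied to the command from the question (the .log file name is just an example):
pg_dump -i -b -o --host=${HOST} --port=5444 --username=${USERNAME} --format=c --schema=${SCHEMA} --file=${SCHEMA}_${DATE}.dmp ${HOST} 2> ${SCHEMA}_${DATE}.log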
I'm using mysqldump in a shell script to dump several schemas from a production environment to a local one.
schemas=(one two three)
read -p "Enter Username: " un
read -s -p "Enter Password: " pw
for schema in "${schemas[@]}"
do
:
mysqldump -h SERV -u $un --password=$pw $schema > /dev/null 2>&1 | mysql -uroot LOCAL
done
I'm redirecting stdout and stderr to /dev/null to suppress warnings and error messages, but I want to be able to catch the error and do something else based on the output (e.g. Access Denied, Not Found).
How can I capture the error returned from mysqldump and use it to take another action in a shell script?
For what it's worth, the $? variable always seems to be 0 after mysqldump completes, even when stderr shows an access denied error.
I did a little more research and found the answer here:
http://scratching.psybermonkey.net/2011/01/bash-how-to-check-exit-status-of-pipe.html
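In short, because of the pipe, $? reflects the mysql side of the pipeline; bash keeps the per-stage exit codes in the PIPESTATUS array. A rough sketch of how that can be used (this assumes bash, and dump_errors.log is just a hypothetical file for the captured stderr):
mysqldump -h SERV -u "$un" --password="$pw" "$schema" 2>>dump_errors.log | mysql -uroot LOCAL
rc=${PIPESTATUS[0]}   # exit code of mysqldump, not of mysql
if [ "$rc" -ne 0 ]; then
    echo "mysqldump failed for $schema (exit code $rc), see dump_errors.log" >&2
fi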