How to pass log file in pg_dump - bash

I am using this command to export.
export PGPASSWORD=${PASSWORD}
pg_dump -i -b -o --host=${HOST} --port=5444 --username=${USERNAME} --format=c --schema=${SCHEMA} --file=${SCHEMA}_${DATE}.dmp ${HOST}
I just want to know how I can include a log file in it so that I can also get the logs.

I assume you mean you want to capture any errors, notifications, etc that are output by pg_dump in a file.
There is no specific option for this, but pg_dump will write these to STDERR, so you can easily capture them like this:
pg_dump -i -b -o ...other options... 2> mylogfile.log
In a shell, 2> redirects STDERR to the given file.
This advice is good for nearly any command line tool you are likely to find on a *nix system.
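For completeness, here is a minimal sketch of the whole call with a log file added, reusing the variables from the question (DBNAME is a placeholder I've added for the database name that pg_dump expects as its last argument):
#!/bin/bash
# Sketch only: HOST, USERNAME, SCHEMA, DATE, PASSWORD and DBNAME are placeholders.
export PGPASSWORD=${PASSWORD}
LOGFILE=${SCHEMA}_${DATE}.log

# stderr (notices, warnings, errors) goes to the log file;
# the dump itself goes to the --file target.
if pg_dump --host=${HOST} --port=5444 --username=${USERNAME} \
           --format=c --schema=${SCHEMA} \
           --file=${SCHEMA}_${DATE}.dmp ${DBNAME} 2> "${LOGFILE}"
then
    echo "Dump finished; see ${LOGFILE} for any notices."
else
    echo "pg_dump failed; see ${LOGFILE} for details." >&2
fi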

Related

How can I redirect unwanted output on bash login over ssh?

I've got a script that uses ssh to log in to another machine and run a script there. My local script redirects all the output to a file. It works fine in most cases, but on certain remote machines I am capturing output that I don't want, and it looks like it's coming from stderr. Maybe it's because of the way bash processes entries in its start-up files, but that is just speculation.
Here is an example of some unwanted lines that end up in my file.
which: no node in (/usr/local/bin:/bin:/usr/bin)
stty: standard input: Invalid argument
My current method is to just strip out the predictable output that I don't want, but it feels like bad practice.
How can I capture output from only my script?
Here's the line that runs the remote script.
ssh -p 22 -tq user@centos-server "/path/to/script.sh" > capture
The ssh uses authorized_keys.
Edit: In the meantime, I'm going to work on directing the output from my script on machine B to a file, then copying it to A via scp and deleting it on B. But I would really like to be able to suppress the output completely, because when I run the script on machine A it makes the output difficult to read.
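For reference, a rough sketch of that workaround (host names, ports and paths are placeholders, and it assumes the same key-based login as the rest of the thread):
# Run the remote script on machine B, writing its output to a file there.
ssh -p 22 -q user@machine-b "/path/to/script.sh > /tmp/script.out 2>&1"

# Copy the output back to machine A, then clean up on machine B.
scp -P 22 user@machine-b:/tmp/script.out ./capture
ssh -p 22 -q user@machine-b "rm -f /tmp/script.out"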
To build on your comment on Raman's answer: have you tried suppressing .bashrc and .bash_profile as shown below?
ssh -p 22 -tq user@centos-server "bash --norc --noprofile /path/to/script.sh" > capture
If rc files are the problem on some servers, you should try to fix the broken rc files rather than your script/invocation, since they affect all (non-interactive) logins.
Try running ssh user@host 'grep -ls "which node" .*' on all your servers to find out whether any dotfiles contain "which node", as indicated by your error message.
Another thing to look out for is your shebang. You tagged this as bash and mention CentOS, but on a Debian/Ubuntu server #!/bin/sh gives you dash instead of (sh-compatible) bash.
You can redirect stderr (file descriptor 2) to /dev/null and send the rest to the log file as follows:
ssh -p 22 -tq user@centos-server bash -c "/path/to/script.sh" 2>/dev/null >> capture

SMB Client Commands Through Shell Script

I have a shell script, which I am using to access the SMB Client:
#!/bin/bash
cd /home/username
smbclient //link/to/server$ password -W domain -U username
recurse
prompt
mput backupfiles
exit
Right now, the script runs, accesses the server, and then asks for manual input of the commands.
Can someone show me how to get the commands recurse, prompt, mput backupfiles and exit to be run by the shell script, please?
I worked out a solution to this and am sharing it for future reference.
#!/bin/bash
cd /home/username
smbclient //link/to/server$ password -W domain -U username << SMBCLIENTCOMMANDS
recurse
prompt
mput backupfiles
exit
SMBCLIENTCOMMANDS
This will feed the commands between the two SMBCLIENTCOMMANDS markers to the smbclient prompt.
smbclient accepts the -c flag for this purpose.
-c|--command command string
command string is a semicolon-separated list of commands to be executed instead of
prompting from stdin.
-N is implied by -c.
This is particularly useful in scripts and for printing stdin to the server, e.g.
-c 'print -'.
For instance, you might run
$ smbclient -N \\\\Remote\\archive -c 'put /results/test-20170504.xz test-20170504.xz'
smbclient disconnects when it is finished executing the commands.
smbclient //link/to/server$ password -W domain -U username -c "recurse;prompt;mput backupfiles"
I would have commented on Calchas's answer, which is the correct approach but did not directly answer the OP's question, but I am new and don't have the reputation to comment.
Note that -c takes a semicolon-separated list of commands (as documented in other answers), so adding recurse and prompt lets mput copy without prompting.
You may also consider using the -A flag to read credentials from a file (or from a command that decrypts a file to pass to -A) to fully automate this script:
smbclient //link/to/server$ password -A ~/.smbcred -c "recurse;prompt;mput backupfiles"
Where the file format is:
username = <username>
password = <password>
domain = <domain>
workgroup = <workgroup>
workgroup is optional, as is domain, but domain is usually needed if you are not using a domain\username formatted username.
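A small sketch of setting that up (file name and values are placeholders; keep the file readable only by its owner, since it stores the password in clear text):
# Create the credentials file and lock down its permissions.
cat > ~/.smbcred <<'EOF'
username = myuser
password = mypassword
domain = MYDOMAIN
EOF
chmod 600 ~/.smbcred

# Then the password no longer appears on the command line.
smbclient //link/to/server$ -A ~/.smbcred -c "recurse;prompt;mput backupfiles"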
I suspect this post is WAY too late to be useful for this particular need, but it may help other searchers, since this thread led me to the more elegant answer through -c and semicolons.
I would take a different approach and use autofs with smb. Then you can drop the smbclient/FTP-like approach and refactor your shell script to use other tools such as rsync to move your files around. This way your credentials aren't stored in the script itself either. You can bury them somewhere on your filesystem and make them readable only by root and no one else.
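A rough sketch of what that can look like (mount point, share and file names are placeholders, and the exact autofs map syntax can differ between distributions):
# /etc/auto.master - mount SMB shares on demand under /mnt/smb
/mnt/smb  /etc/auto.smb.shares  --timeout=60

# /etc/auto.smb.shares - indirect map for the backup share;
# credentials live in a root-only file instead of the script
backup  -fstype=cifs,rw,credentials=/etc/smb-backup.cred  ://fileserver/backup
With the share auto-mounted, the script shrinks to a plain copy, for example rsync -av /home/username/backupfiles/ /mnt/smb/backup/.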

Using Plink and redirect output in bash script

I've got a problem: I've set up plink to create a connection to a BlueCoat device, retrieve the full configuration, and redirect the output to a file.
The problem is, when I try it from the script, the output of plink is displayed on screen and not redirected to the file, but if I use the same exact command interactively, it works!
I've checked the file rights, etc. they all seem to be ok.
The way I use it is:
/usr/bin/plink -4 -batch -ssh -l <user> -pw <password> -m /tmp/bluecoat.backup <hostname> > output.txt
Any clues?
Kind regards,
Chris
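One thing worth checking, in line with the stderr advice at the top of this thread: plink may be writing part of its output to stderr when run non-interactively, so capturing both streams into the file is a cheap first test (a sketch, with the same placeholders as above):
# Redirect stdout and stderr into the same file.
/usr/bin/plink -4 -batch -ssh -l <user> -pw <password> \
    -m /tmp/bluecoat.backup <hostname> > output.txt 2>&1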

bash script not capturing stdout, just stderr

I have the following script (let's call it move_site.sh) that copies a website directory structure to another server
#!/bin/bash
scp -r /usr/local/apache2/htdocs/$1 http@$2:/local/htdocs 1>$1$2.out 2>&1
So, calling it from the command line, I pass it the website directory name and the destination server, as such:
nohup ./move_site.sh site1 server1 &
However, in the resulting file, which is named site1server1.out, there are only stderr messages, if any.
Can someone tell me how I can get the file and directory names that are copied, included in the output file, so that I have some kind of record?
Thanks.
A quick thought:
Maybe it is because scp doesn't print anything to stdout when everything goes fine (?).
Give it a try: run your scp command outside the script; most probably you won't see anything on stdout. (Redirecting nothing to $1$2.out still gives you nothing :))
I don't think it is possible with scp, but with rsync you can track on stdout what has been transferred. So replacing scp -r with rsync -r -v -e ssh should do the trick (at least if you can go for rsync instead of scp); see the sketch below.
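A sketch of what the rsync variant of move_site.sh could look like (untested, and it assumes rsync is installed on both ends; -v is what makes the transferred file names show up in the output file):
#!/bin/bash
# $1 = site directory name, $2 = destination server (same arguments as before)
rsync -rv -e ssh /usr/local/apache2/htdocs/$1 http@$2:/local/htdocs 1>$1$2.out 2>&1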

NCFTPPUT command problem

I am using the passive-mode FTP commands provided by NCFTP. Currently I want to pass a raw FTP command after the file is transferred. I found that there is an option to do that:
ncftpput -u user -p password -X "rename 123.exe 1234.exe" host /path C:\123.exe
However, it is not working. It can put the file, but the rename command does not work.
Has anyone done this before? Please help.
-X use RAW FTP commands
Use the following syntax:
ncftpput -u user -p password -X "RNFR 123.exe" -X "RNTO 1234.exe" host /path/123.exe
It works with ncftpls as well. It is even more direct if all you need to do is a rename, without actually uploading anything to the FTP server.
(-W is like -X; the only difference is that -W sends the commands immediately after logging in.)
Here is the syntax:
ncftpls -u name -p psw -W "RNFR FTPfolder/anotherFolder/OLDname.txt" -W "RNTO FTPfolder/anotherFolder/NEWname.txt" ftp://ftp.name.org
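Putting this together, a small sketch of an upload-then-rename step in a script (host, credentials, paths and file names are placeholders):
#!/bin/bash
# Upload the file, then rename it on the server once the transfer has completed.
if ncftpput -u user -p password \
        -X "RNFR 123.exe" -X "RNTO 1234.exe" \
        host /path 123.exe
then
    echo "Upload and rename succeeded."
else
    echo "ncftpput failed." >&2
fi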
