Apologies if the title isn't worded very well; it's hard to explain exactly what I'm trying to do without an example.
I am running a database backup command that creates a file with a timestamp. In the same command I am then uploading that file to a remote location.
pg_dump -U postgres -W -F t db > $backup_dir/db_backup_$(date +%Y-%m-%d-%H.%M.%S).tar && gsutil cp $backup_dir/db_backup_$(date +%Y-%m-%d-%H.%M.%S).tar $bucket_dir
As you can see, the timestamp is created during the pg_dump command. However, by the time the second half of the command runs, the timestamp will be different, so it won't find the file.
I'm looking for a way to 'save' or assign the value of the backup file name from the first half of the command, so that I can then use it in the second half.
Ideally this would be done across 2 separate commands however in this particular use case I'm limited to 1.
A variation of the advice already given in the comments:
fn=db_backup_$(date +%Y-%m-%d-%H.%M.%S).tar &&
pg_dump -U postgres -W -F t db > "$backup_dir/$fn" &&
gsutil cp "$backup_dir/$fn" "$bucket_dir"
The $fn var makes the whole thing shorter and more readable, too.
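If you prefer to keep the full path in a single variable, the same trick works; a minimal sketch using the names from the question:
backup_file="$backup_dir/db_backup_$(date +%Y-%m-%d-%H.%M.%S).tar" &&
pg_dump -U postgres -W -F t db > "$backup_file" &&
gsutil cp "$backup_file" "$bucket_dir"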
Currently I am working on a local machine that does not have the finger command installed, and we do not have permission to install it either. However, there is a remote server that has it installed and can be used instead. I am using the finger command to get the first and last name of users. Here is the code in bash:
#!/usr/bin/env bash
NAMES=("ssmith" "jnicol" "ahumph" "nkidma" "bbanne")
for name in "${NAMES[@]}"; do
theName=$(ssh -qX 123.45.67.89 finger "$name" | awk 'NR==1{if($7!="???") print $7, $8}')
arr+=("$theName") #Appending name returned from command to global array
done
The above code works, but it is super slow. Is there a simpler way to ssh over to the remote server, run the command, and get the first and last names of all users in a single attempt, then append all of those to an array as shown above? There are hundreds of users in the system, and opening an ssh connection to the remote server for every single one of them is not going to be optimal.
Any help would be appreciated.
All, I found the answer to this. I could do the following on my local machine and still get a user's first and last name.
FULLNAME=$(getent passwd $USER | cut -d : -f 5)
Thanks all.
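For completeness: if the names really must come from the remote server, a single ssh call can fetch all of them at once instead of one connection per user. A rough sketch, assuming getent is also available on the remote host and bash 4+ locally for mapfile:
#!/usr/bin/env bash
NAMES=("ssmith" "jnicol" "ahumph" "nkidma" "bbanne")
# One ssh connection: look up every user in a single getent call,
# then read the GECOS (full name) fields into the array line by line.
mapfile -t arr < <(ssh -q 123.45.67.89 getent passwd "${NAMES[@]}" | cut -d : -f 5)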
I have read all the other solutions and none fits my needs: I do not use Java, I do not have superuser rights, and I do not have APIs installed on my server.
I have select rights on a remote PostgreSQL server and I want to run a query in it remotely and export its results into a .csv file in my local server.
Once I manage to establish the connection to the server, I first have to define the DB, then the schema, and then the table, which is why the following lines of code do not work:
\copy schema.products TO '/home/localfolder/products.csv' CSV DELIMITER ','
copy (Select * From schema.products) To '/home/localfolder/products.csv' With CSV;
I have also tried the following bash command:
psql -d DB -c "select * from schema.products;" > /home/localfolder/products.csv
and logging it with the following result:
-bash: home/localfolder/products.csv: No such file or directory
I would really appreciate if someone can show a light on this.
Have you tried this? I do not have psql right now to test it.
echo "COPY (SELECT * from schema.products) TO STDOUT with CSV HEADER" | psql -o '/home/localfolder/products.csv'
Details:
-o filename Put all output into file filename. The path must be writable by the client.
the echo builtin plus a pipe (|) passes the command to psql
After a while a good colleague devised this solution, which worked perfectly for my needs; hope this can help someone.
ssh -l user [remote host] -p [port] 'psql -c "copy (select * from schema.table_name) to STDOUT csv header" -d DB' > /home/localfolder/products.csv
Very similar to idobr's answer.
From http://www.postgresql.org/docs/current/static/sql-copy.html:
Files named in a COPY command are read or written directly by the server, not by the client application.
So, you'll always want to use psql's \copy meta command.
The following should do the trick:
\copy (SELECT * FROM schema.products) to 'products.csv' with csv
If the above doesn't work, we'll need an error/warning message to work with.
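For reference, the \copy meta-command can also be run non-interactively from the shell; a sketch with placeholder connection details:
psql -h remote.host -U user -d DB -c "\copy (SELECT * FROM schema.products) to 'products.csv' with csv"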
You mentioned that the server is remote; however, you are connecting to localhost. Add -h [server here] or set the environment variable:
export PGHOST='[server here]'
The database name should be the last argument, not passed with -d.
And finally, that command should not have failed; my guess is that the directory does not exist. Either create it or try writing to /tmp.
I would ask you to try the following command:
psql -h [server here] -c "copy (select * from schema.products) to STDOUT csv header" DB > /tmp/products.csv
I've created a bash script to migrate sites and databases from one server to another. The algorithm:
Parse .pgpass file to create individual dumps for all the specified Postgres db's.
Upload said dumps to another server via rsync.
Upload a bunch of folders related to each db to the other server, also via rsync.
Since databases and folders have the same name, the script can predict the location of the folders if it knows the db name. The problem I'm facing is that the loop is only executing once (only the first line of .pgpass is being completed).
This is my script, to be run in the source server:
#!/bin/bash
# Read each line of the input file, parsing fields separated by a colon (:)
while IFS=: read host port db user pswd ; do
# Create the dump. No need to enter the password as we're using .pgpass
pg_dump -U $user -h $host -f "$db.sql" $db
# Create a dir in the destination server to copy the files into
ssh user@destination.server mkdir -p webapps/$db/static/media
# Copy the dump to the destination server
rsync -azhr $db.sql user@destination:/home/user
# Copy the website files and folders to the destination server
rsync -azhr --exclude "*.thumbnails*" webapps/$db/static/media/ user@destination.server:/home/user/webapps/$db/static/media
# At this point I expect the script to continue to the next line, but if exits at the first line
done < "$1"
This is .pgpass, the file to parse:
localhost:*:db_name1:db_user1:db_pass1
localhost:*:db_name3:db_user2:db_pass2
localhost:*:db_name3:db_user3:db_pass3
# Many more...
And this is how I'm calling it:
./my_script.sh .pgpass
At this point everything works. The first dump is created, and it is transferred to the destination server along with the related files and folders. The problem is that the script finishes there and won't parse the other lines of .pgpass. I've commented out all lines related to rsync (so the script only creates the dumps), and then it works correctly, executing once for each line in the file. How can I get the script to not exit after executing rsync?
BTW, I'm using key based ssh auth to connect the servers, so the script is completely prompt-less.
Let's ask shellcheck:
$ shellcheck yourscript
In yourscript line 4:
while IFS=: read host port db user pswd ; do
^-- SC2095: ssh may swallow stdin, preventing this loop from working properly.
In yourscript line 8:
ssh user@destination.server mkdir -p webapps/$db/static/media
^-- SC2095: Add < /dev/null to prevent ssh from swallowing stdin.
And there you go.
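In other words, redirect ssh's stdin so it can't swallow the lines the while loop is reading; the offending line with shellcheck's suggested fix applied:
ssh user@destination.server "mkdir -p webapps/$db/static/media" < /dev/null
(Passing -n to ssh achieves the same effect.)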
I am trying to automate a set of procedures that create TEMPLATE databases.
I have a set of files (file1, file2, ... fileN), each of which contains a set of pgsql commands required for creating a TEMPLATE database.
The contents of the file (createdbtemplate1.sql) look roughly like this:
CREATE DATABASE mytemplate1 WITH ENCODING 'UTF8';
\c mytemplate1
CREATE TABLE first_table (
--- fields here ..
);
-- Add C language extension + functions
\i db_funcs.sql
I want to be able to write a shell script that will execute the commands in the file, so that I can write a script like this:
# run commands to create TEMPLATE db mytemplate1
# ./groksqlcommands.sh createdbtemplate1.sql
for dbname in foo foofoo foobar barbar
do
# Need to simply create a database based on an existing template in this script
psql CREATE DATABASE $dbname TEMPLATE mytemplate1
done
Any suggestions on how to do this? (As you may have guessed, I'm a shell scripting newbie.)
Edit
To clarify the question further, I want to know:
How to write groksqlcommands.sh (a bash script that will run a set of pgsql cmds from file)
How to create a database based on an existing template at the command line
First off, do not mix psql meta-commands and SQL commands. These are separate sets of commands. There are tricks to combine those (using the psql meta-commands \o and \\ and piping strings to psql in the shell), but that gets confusing quickly.
Make your files contain only SQL commands.
Do not include the CREATE DATABASE statement in the SQL files. Create the db separately; you have multiple files you want to execute in the same template db.
Assuming you are operating as OS user postgres and use the DB role postgres as (default) Postgres superuser, all databases are in the same DB cluster on the default port 5432 and the role postgres has password-less access due to an IDENT setting in pg_hba.conf - a default setup.
psql postgres -c "CREATE DATABASE mytemplate1 WITH ENCODING 'UTF8'
TEMPLATE template0"
I based the new template database on the default system template database template0. Basics in the manual here.
Your questions
How to (...) run a set of pgsql cmds from file
Try:
psql mytemplate1 -f file
Example script file for batch of files in a directory:
#! /bin/sh
for file in /path/to/files/*; do
psql mytemplate1 -f "$file"
done
The command option -f makes psql execute SQL commands in a file.
How to create a database based on an existing template at the command line
psql -c 'CREATE DATABASE my_db TEMPLATE mytemplate1'
The command option -c makes psql execute a single SQL command string. It can contain multiple commands, each terminated by ;, which are executed in one transaction, with only the result of the last command returned.
Read about psql command options in the manual.
If you don't provide a database to connect to, psql will connect to the default maintenance database named "postgres". In the second answer it is irrelevant which database we connect to.
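Putting the pieces together, groksqlcommands.sh could look roughly like this; a sketch, with the db and file names taken from the question:
#!/bin/sh
# groksqlcommands.sh - create the template db, feed it every SQL file
# given as an argument, then stamp out databases based on the template.
psql postgres -c "CREATE DATABASE mytemplate1 WITH ENCODING 'UTF8' TEMPLATE template0"
for file in "$@"; do
    psql mytemplate1 -f "$file"
done
for dbname in foo foofoo foobar barbar; do
    psql postgres -c "CREATE DATABASE $dbname TEMPLATE mytemplate1"
done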
You can echo your commands to psql's standard input:
for dbname in foo foofoo foobar barbar
do
echo """
CREATE DATABASE $dbname TEMPLATE mytemplate1
""" | psql
done
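A here-document does the same job without the tricky quoting; an equivalent sketch:
for dbname in foo foofoo foobar barbar
do
psql <<SQL
CREATE DATABASE $dbname TEMPLATE mytemplate1;
SQL
done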
If you're willing to go the extra mile, you'll probably have more success with SQLAlchemy. It'll allow you to build scripts with Python instead of bash, which is easier and gives better control.
As requested in the comments: https://github.com/srathbun/sqlCmd
Store your sql scripts under a root dir
Use dev,tst,prd parametrized dbs
Use find to run all your pgsql scripts as shown here
Exit on errors
Or just git clone the whole tool from here
For the use case where you do have to mix psql meta-commands and SQL...
Here is a script I've used for importing JSON into PostgreSQL (WSL Ubuntu), which basically requires that you mix psql meta commands and SQL in the same command line. Note use of the somewhat obscure script command, which allocates a pseudo-tty:
$ more update.sh
#!/bin/bash
wget <filename>.json
echo '\set content `cat $(ls -t <redacted>.json.* | head -1)` \\ delete from <table>; insert into <table> values(:'"'content'); refresh materialized view <view>; " | PGPASSWORD=<passwd> psql -h <host> -U <user> -d <database>
$
I often have to login to one of several servers and go to one of several directories on those machines. Currently I do something of this sort:
localhost ~]$ ssh somehost
Welcome to somehost!
somehost ~]$ cd /some/directory/somewhere/named/Foo
somehost Foo]$
I have scripts that can determine which host and which directory I need to get into but I cannot figure out a way to do this:
localhost ~]$ go_to_dir Foo
Welcome to somehost!
somehost Foo]$
Is there an easy, clever or any way to do this?
You can do the following:
ssh -t xxx.xxx.xxx.xxx "cd /directory_wanted ; bash --login"
This way, you will get a login shell right on the directory_wanted.
Explanation
-t Force pseudo-terminal allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services.
Multiple -t options force tty allocation, even if ssh has no local tty.
If you don't use -t, then no prompt will appear.
If you don't add ; bash, then the connection will close and return control to your local machine.
If you don't add bash --login, then it will not use your configs, because it's not a login shell.
You could add
cd /some/directory/somewhere/named/Foo
to your .bashrc file (or .profile or whatever you call it) at the other host. That way, no matter what you do or where you ssh from, whenever you log onto that server, it will cd to the proper directory for you, and all you have to do is use ssh like normal.
Of course, rogeriopvl's solution works too, but it's a tad more verbose, and you have to remember to do it every time (unless you make an alias), so it seems a bit less "fun".
My preferred approach is using the SSH config file (described below), but there are a few possible solutions depending on your usages.
Command Line Arguments
I think the best answer for this approach is christianbundy's reply to the accepted answer:
ssh -t example.com "cd /foo/bar; exec \$SHELL -l"
Using double quotes will allow you to use variables from your local machine, unless they are escaped (as $SHELL is here). Alternatively, you can use single quotes, and all of the variables you use will be the ones from the target machine:
ssh -t example.com 'cd /foo/bar; exec $SHELL -l'
Bash Function
You can simplify the command by wrapping it in a bash function. Let's say you just want to type this:
sshcd example.com /foo/bar
You can make this work by adding this to your ~/.bashrc:
sshcd () { ssh -t "$1" "cd \"$2\"; exec \$SHELL -l"; }
If you are using a variable that exists on the remote machine for the directory, be sure to escape it or put it in single quotes. For example, this will cd to the directory that is stored in the JBOSS_HOME variable on the remote machine:
sshcd example.com \$JBOSS_HOME
SSH Config File
If you'd like to see this behavior all the time for specific (or any) hosts with the normal ssh command without having to use extra command line arguments, you can set the RequestTTY and RemoteCommand options in your ssh config file.
For example, I'd like to type only this command:
ssh qaapps18
but want it to always behave like this command:
ssh -t qaapps18 'cd $JBOSS_HOME; exec $SHELL'
So I added this to my ~/.ssh/config file:
Host *apps*
RequestTTY yes
RemoteCommand cd $JBOSS_HOME; exec $SHELL
Now this rule applies to any host with "apps" in its hostname.
For more information, see http://man7.org/linux/man-pages/man5/ssh_config.5.html
I've created a tool to SSH and CD into a server consecutively – aptly named sshcd. For the example you've given, you'd simply use:
sshcd somehost:/some/directory/somewhere/named/Foo
Let me know if you have any questions or problems!
Based on additions to @rogeriopvl's answer, I suggest the following:
ssh -t xxx.xxx.xxx.xxx "cd /directory_wanted && bash"
Chaining commands with && makes the next command run only when the previous one succeeded (as opposed to ;, which executes commands sequentially regardless). This is particularly useful when you need to cd into a directory before running a command in it.
Imagine doing the following:
/home/me$ cd /usr/share/teminal; rm -R *
The directory teminal doesn't exist, which causes you to stay in the home directory and remove all the files in there with the following command.
If you use &&:
/home/me$ cd /usr/share/teminal && rm -R *
The command will fail after not finding the directory, so the rm never runs.
In my very specific case, I just wanted to execute a command in a remote host, inside a specific directory from a Jenkins slave machine:
ssh myuser@mydomain
cd /home/myuser/somedir
./commandThatMustBeRunInside_somedir
exit
But my machine couldn't perform the ssh (it couldn't allocate a pseudo-tty, I suppose) and kept giving me the following error:
Pseudo-terminal will not be allocated because stdin is not a terminal
I could get around this issue by passing "cd to dir + my command" as a parameter of the ssh command (so no pseudo-terminal had to be allocated) and by passing the option -T to explicitly tell the ssh command that I didn't need pseudo-terminal allocation.
ssh -T myuser@mydomain "cd /home/myuser/somedir; ./commandThatMustBeRunInside_somedir"
I use the environment variable CDPATH.
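For example, adding the Foo directory's parent to CDPATH on the remote host lets a bare cd find it from anywhere; a sketch using the path from the question:
export CDPATH=.:/some/directory/somewhere/named
cd Foo    # bash searches CDPATH and lands in /some/directory/somewhere/named/Foo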
Going one step further with the -t idea: I keep a set of scripts calling the one below to go to specific places on my frequently visited hosts. I keep them all in ~/bin and keep that directory in my path.
#!/bin/bash
# does ssh session switching to particular directory
# $1, hostname from config file
# $2, directory to move to after login
# can save this as say 'con' then
# make another script calling this one, e.g.
# con myhost repos/i2c
ssh -t $1 "cd $2; exec \$SHELL --login"
My answer may differ from what you really want, but I write it here as it may be useful for some people. In my solution you have to enter the directory once, and then every new ssh session goes to the same dir (after the first logout).
How to ssh to the same directory you have been in your last login.
(I assume you use bash on the remote node.)
Add this line to your ~/.bash_logout on the remote node(!):
echo $PWD > ~/.bash_lastpwd
and these lines to the ~/.bashrc file (still on the remote node!)
if [ -f ~/.bash_lastpwd ]; then
cd "$(cat ~/.bash_lastpwd)"
fi
This way you save your current path on every logout, and .bashrc puts you into that directory after login.
PS: You can tweak it further, e.g. by using the SSH_CLIENT variable to decide whether to go into that directory, so you can differentiate between local logins and ssh, or even between different ssh clients.
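As a sketch of that tweak, the cd in ~/.bashrc could be made conditional on the login coming in over ssh:
if [ -n "$SSH_CLIENT" ] && [ -f ~/.bash_lastpwd ]; then
    cd "$(cat ~/.bash_lastpwd)"
fi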
Another way of going to a directory right after logging in is to create an alias on the remote machine. Once defined, just type the alias and you will be in that directory.
Example: alias myfolder='cd /var/www/Folder'
This works from any part of the system. If the alias is only defined in the current session it will be lost on logout, so add it to your .bashrc to keep it for future sessions.
$ myfolder => takes you to that folder
I know this was answered ages ago, but I found the question while trying to incorporate an ssh login into a bash script that, once logged in, runs a few commands, logs back out, and continues with the rest of the script. The simplest way I found, which hasn't been mentioned elsewhere because it is so trivial, is this:
#!/bin/bash
sshpass -p "password" ssh user@server 'cd /path/to/dir;somecommand;someothercommand;exit;'
Connect With User
In case you don't know this: you can connect by specifying both user and host
ssh -t <user>@<host domain / IP> "cd /path/to/directory; bash --login"
Example: ssh -t admin@test.com "cd public_html; bash --login"
You can also append commands to be executed on every login by adding them inside the double quotes, with a ; before each command.
Unfortunately, the suggested solution (of @rogeriopvl) doesn't work when you use multiple hops, so I found another one.
On remote machine add into ~/.bashrc the following:
[ "x$CDTO" != "x" ] && cd $CDTO
This allows you to specify the desired target directory on command line in this way:
ssh -t host1 ssh -t host2 "CDTO=/desired_directory exec bash --login"
Sure, this way can be used for a single hop too.
This solution can be combined with the useful tip of @redseven for greater flexibility (if there is no $CDTO, go to the saved directory, if it exists).
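A sketch of that combination for the remote ~/.bashrc: prefer $CDTO when it is set, otherwise fall back to the saved directory:
if [ -n "$CDTO" ]; then
    cd "$CDTO"
elif [ -f ~/.bash_lastpwd ]; then
    cd "$(cat ~/.bash_lastpwd)"
fi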
SSH itself provides a means of communication; it does not know anything about directories. Since you can specify which remote command to execute (this is, by default, your shell), I'd start there.
Simply modify your home directory with the command:
usermod -d /newhome username