This is my first post, and actually my first steps into bash.
I wanted to create an automated FTP server backup feature to make my life easier. Since I felt good about writing this script, I started working on improvements for it, and now I've run out of ideas.
I wanted the backup script to download all files from the FTP server, with the credentials defined as variables (which I'm proud of, lol), but then I realized that I could use the same script for all my servers if I stored the login credentials in a separate file.
The problem is that I can't get my script to read these values for the backup processing; judging by the output, they remain blank.
If anyone has any idea how I can make it read my credentials and server name from a separate file, I would be grateful!
Below are my files and scripts, in the order in which they are processed:
'start_backup.sh'
#!/bin/sh
# Load credentials
/home/user/scripts/credentials.sh
# Launch backup script
/home/user/scripts/my_ovh_backup.sh
credentials.sh
#!/bin/sh
my_ovh_server="(server address goes here)"
my_ovh_login="(login goes here)"
my_ovh_password="(password goes here)"
my_ovh_backup.sh
# Path that needs to be copied
backup_files="/(path on server)"
# Where to backup
dest="/home/user/ftp_backup"
data=$(date +%y-%m-%d)
# Create folder with current date
mkdir $dest/in_progress_$data
# Copy data
cd $dest/in_progress_$data
wget -r -nc ftp://$my_ovh_login:$my_ovh_password@$my_ovh_server/$backup_files
# Rename folder after download completion
mv $dest/in_progress_$data $dest/$data
# Cleanup
rm -r $dest/$data
Output that I receive:
./start_backup.sh
ftp://:@//(remote folder)/: Invalid host name.
adding: home/user/ftp_backup/22-01-02/ (stored 0%)
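For what it's worth, here is a minimal sketch (not part of the original post) of the usual way to pull variables in from a separate file: source it with . so the assignments happen in the current shell, and source the backup script too (or export the variables) so it can still see them. The paths are just the placeholders from above.
#!/bin/sh
# Load credentials into this shell (note the leading dot) instead of
# running them in a child process that discards the variables on exit
. /home/user/scripts/credentials.sh
# Run the backup in the same shell so it still sees the variables
. /home/user/scripts/my_ovh_backup.sh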
From a local server, I'm trying to check for file distribution across an array of workers. The check requires that I ssh into the master server and then into each worker to check for the files. I'm having trouble getting the double ssh hop to work.
From the shell I can get it to work fine:
master=master_server.com
worker=worker_server.com
filename=/path/file.config
ssh -q $master "ssh -q $worker \"[ -e $filename ]\" && echo $filename found on $worker"
But when I put this same call in a .sh script, it doesn't work. Instead, the script sshes into the first worker in the array and leaves me at a command line without having executed the file check. Using exit, I can manually log off the worker and the script resumes, but for each subsequent worker I get:
worker2.com: Command not found.
worker3.com: Command not found.
...
I've tried different combinations of single and double quotes. I've also tried including the -t flag.
How can I ssh across two servers and test for a file?
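For illustration only, here is a sketch (mine, not the poster's script) of that check wrapped in a bash loop, with placeholder hostnames; the inner test is quoted so it survives both hops, and its exit status drives the message locally.
#!/bin/bash
master=master_server.com
workers=(worker1.com worker2.com worker3.com)
filename=/path/file.config
for worker in "${workers[@]}"; do
    # The outer double quotes expand $worker and $filename locally; the
    # single quotes keep the test intact until it reaches the worker.
    if ssh -q "$master" "ssh -q $worker '[ -e $filename ]'"; then
        echo "$filename found on $worker"
    fi
done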
I'm trying to automate the download of the subdirectories in a directory. However, when executing the script, the last one or two directories cannot be found by the script ("No such file or directory"). All the others are fine and can be downloaded. This happens for every directory I've tried it on, which is strange to me. Why would it always fail to find the last two directories?
Can anyone help with this? Is it due to the loop? I've tried changing it to loop over only the last ones, and that doesn't help. Or maybe it's due to the conversion with array=($l)?
Here's my script:
dirServer=/dir/to/location/in/server
dirLocal=/dir/to/location/in/local/pc
l=`ssh -t username@server 'ls' ${dirServer}`
#array of folders that should be copied to local machine
array=($l)
for folder in ${array[@]}
do
echo ${folder}
#if directory doesn't exist, create it
mkdir -p ${dirLocal}${folder}
scp -r username@server:${dirServer}${folder}/analysis/ ${dirLocal}${folder}
done
Try changing the following line to include "-o LogLevel=QUIET":
l=`ssh -o LogLevel=QUIET -t username@server 'ls' ${dirServer}`
Explanation:
That extra output is coming from SSH. You see it because you gave the -t switch, which forces SSH to allocate a pseudo-terminal for the connection. Traditionally, SSH displays that message to make it clear that you are no longer interacting with the shell on the remote host, which is normally only an issue when SSH has a pseudo-terminal allocated.
In my script, I create an SFTP connection.
I read a directory value from the user earlier, and once the SFTP connection is established, I try to cd to the directory I got from the user.
But it's not working, probably because the prompt goes inside the server to which the SFTP connection was established.
In this case, how can I make it work?
I also faced this problem and was able to find the solution. The solution is right there in the man page of sftp, where you will find the format for using sftp written like this:
sftp [options] [user@]host[:dir[/]]
Actually, two formats are given there, but this is the one I wanted to use, and it worked.
So, what do you do? You simply supply the user@host as seen there, then, without any space, a : followed by the path you want to change to on the remote server, and that's all. Here is a practical example:
sftp user@host:/path/
If your script does, as you state somewhere in this page,
sftp $user@$host cd $directory
and then tries to do something else, like:
sftp $user@$host FOO
That command FOO will not be executed in the same directory $directory since you're executing a new command, which will create a new connection to the SFTP server.
What you can do is use the "batchfile" option of sftp, i.e. construct a file which contains all the commands you'd like sftp to do over one connection, for example:
$ cat commands.txt
cd foo/bar
put foo.tgz
lcd /tmp/
get foo.tgz
Then, you will be able to tell sftp to execute those commands in one connection, by executing:
sftp -b commands.txt $user@$host
So, I propose your solution to be:
With user's input, create a temporary text file which contains all the commands to be executed over one SFTP connection, then
Execute sftp using that temporary text file as "batch file"
Your script would do something like:
echo "Directory in which to go:"
read directory
temp=$( mktemp /tmp/FOOXXX )
echo "" > $temp
echo "cd $directory" >> $temp
# other commands
sftp -b $temp $user@$host
rm $temp
If you are trying to change the directory on your local machine, try lcd.
In what way is it not working? To change directories on the remote server, you use the "cd" command. To change directories on the local server, you use the "lcd" command. Read "man sftp".
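As a quick illustration (paths and file name are placeholders), in an interactive session cd moves you on the remote server while lcd moves you on your own machine, so the transfer below lands in /local/downloads:
sftp> cd /remote/path
sftp> lcd /local/downloads
sftp> get report.txt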
I often have to login to one of several servers and go to one of several directories on those machines. Currently I do something of this sort:
localhost ~]$ ssh somehost
Welcome to somehost!
somehost ~]$ cd /some/directory/somewhere/named/Foo
somehost Foo]$
I have scripts that can determine which host and which directory I need to get into but I cannot figure out a way to do this:
localhost ~]$ go_to_dir Foo
Welcome to somehost!
somehost Foo]$
Is there an easy, clever or any way to do this?
You can do the following:
ssh -t xxx.xxx.xxx.xxx "cd /directory_wanted ; bash --login"
This way, you will get a login shell right in directory_wanted.
Explanation
-t Force pseudo-terminal allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services.
Multiple -t options force tty allocation, even if ssh has no local tty.
If you don't use -t, then no prompt will appear.
If you don't add ; bash, then the connection will be closed and control returned to your local machine.
If you don't add --login to bash, then it will not use your configs, because it's not a login shell.
You could add
cd /some/directory/somewhere/named/Foo
to your .bashrc file (or .profile or whatever you call it) at the other host. That way, no matter what you do or where you ssh from, whenever you log onto that server, it will cd to the proper directory for you, and all you have to do is use ssh like normal.
Of course, rogeriopvl's solution works too, but it's a tad more verbose, and you have to remember to do it every time (unless you make an alias), so it seems a bit less "fun".
My preferred approach is using the SSH config file (described below), but there are a few possible solutions depending on your usages.
Command Line Arguments
I think the best answer for this approach is christianbundy's reply to the accepted answer:
ssh -t example.com "cd /foo/bar; exec \$SHELL -l"
Using double quotes will allow you to use variables from your local machine, unless they are escaped (as $SHELL is here). Alternatively, you can use single quotes, and all of the variables you use will be the ones from the target machine:
ssh -t example.com 'cd /foo/bar; exec $SHELL -l'
Bash Function
You can simplify the command by wrapping it in a bash function. Let's say you just want to type this:
sshcd example.com /foo/bar
You can make this work by adding this to your ~/.bashrc:
sshcd () { ssh -t "$1" "cd \"$2\"; exec \$SHELL -l"; }
If you are using a variable that exists on the remote machine for the directory, be sure to escape it or put it in single quotes. For example, this will cd to the directory that is stored in the JBOSS_HOME variable on the remote machine:
sshcd example.com \$JBOSS_HOME
SSH Config File
If you'd like to see this behavior all the time for specific (or any) hosts with the normal ssh command without having to use extra command line arguments, you can set the RequestTTY and RemoteCommand options in your ssh config file.
For example, I'd like to type only this command:
ssh qaapps18
but want it to always behave like this command:
ssh -t qaapps18 'cd $JBOSS_HOME; exec $SHELL'
So I added this to my ~/.ssh/config file:
Host *apps*
RequestTTY yes
RemoteCommand cd $JBOSS_HOME; exec $SHELL
Now this rule applies to any host with "apps" in its hostname.
For more information, see http://man7.org/linux/man-pages/man5/ssh_config.5.html
I've created a tool to SSH and CD into a server consecutively – aptly named sshcd. For the example you've given, you'd simply use:
sshcd somehost:/some/directory/somewhere/named/Foo
Let me know if you have any questions or problems!
Based on additions to @rogeriopvl's answer, I suggest the following:
ssh -t xxx.xxx.xxx.xxx "cd /directory_wanted && bash"
Chaining commands with && makes the next command run only if the previous one succeeded (as opposed to using ;, which executes the commands unconditionally in sequence). This is particularly useful when you need to cd into a directory before performing a command in it.
Imagine doing the following:
/home/me$ cd /usr/share/teminal; rm -R *
The directory teminal doesn't exist, which causes you to stay in the home directory and remove all the files in there with the following command.
If you use &&:
/home/me$ cd /usr/share/teminal && rm -R *
The command will fail after not finding the directory.
In my very specific case, I just wanted to execute a command on a remote host, inside a specific directory, from a Jenkins slave machine:
ssh myuser@mydomain
cd /home/myuser/somedir
./commandThatMustBeRunInside_somedir
exit
But my machine couldn't perform the ssh (it couldn't allocate a pseudo-tty, I suppose) and kept giving me the following error:
Pseudo-terminal will not be allocated because stdin is not a terminal
I could get around this issue by passing "cd to dir + my command" as a parameter of the ssh command (so there is no need to allocate a pseudo-terminal) and by passing the -T option to explicitly tell ssh that I didn't need pseudo-terminal allocation.
ssh -T myuser@mydomain "cd /home/myuser/somedir; ./commandThatMustBeRunInside_somedir"
I use the environment variable CDPATH
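For example (my own sketch, reusing the directory from the question), exporting CDPATH in the remote ~/.bashrc lets a bare cd find that directory by name from anywhere:
# in ~/.bashrc on the remote host
export CDPATH=.:/some/directory/somewhere/named
# then, after logging in:
cd Foo    # resolves to /some/directory/somewhere/named/Foo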
Going one step further with the -t idea: I keep a set of scripts that call the one below to go to specific places on my frequently visited hosts. I keep them all in ~/bin and keep that directory in my PATH.
#!/bin/bash
# does ssh session switching to particular directory
# $1, hostname from config file
# $2, directory to move to after login
# can save this as say 'con' then
# make another script calling this one, e.g.
# con myhost repos/i2c
ssh -t $1 "cd $2; exec \$SHELL --login"
My answer may differ from what you really want, but I'm writing it here as it may be useful for some people. In my solution, you have to enter the directory once, and then every new ssh session goes to the same directory (after the first logout).
How to ssh into the same directory you were in during your last login:
(I assume you use bash on the remote node.)
Add this line to your ~/.bash_logout on the remote node(!):
echo $PWD > ~/.bash_lastpwd
and these lines to the ~/.bashrc file (still on the remote node!)
if [ -f ~/.bash_lastpwd ]; then
cd $(cat ~/.bash_lastpwd)
fi
This way you save your current path on every logout, and .bashrc puts you into that directory after login.
PS: You can tweak it further, e.g. by using the SSH_CLIENT variable to decide whether to go into that directory or not, so you can differentiate between local logins and ssh, or even between different ssh clients.
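A sketch of that tweak (my wording, not the original poster's): check SSH_CLIENT in ~/.bashrc so that only SSH logins jump to the saved directory.
if [ -n "$SSH_CLIENT" ] && [ -f ~/.bash_lastpwd ]; then
    cd "$(cat ~/.bash_lastpwd)"
fi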
Another way of going directly to a directory after logging in is to create an alias. When you log in to your system, just type that alias and you will be in that directory.
Example: alias myfolder='cd /var/www/Folder'
After you log in to your system, type that alias (this works from any part of the system).
If this alias is not in your bashrc, it will only work for the current session, so add it to your bashrc to use it in the future.
$ myfolder => takes you to that folder
I know this has been answered ages ago, but I found the question while trying to incorporate an ssh login in a bash script that, once logged in, runs a few commands, logs back out, and continues with the rest of the script. The simplest way I found, which hasn't been mentioned elsewhere because it is so trivial, is this:
#!/bin/bash
sshpass -p "password" ssh user#server 'cd /path/to/dir;somecommand;someothercommand;exit;'
Connect With User
In case you don't know this, you can use this to connect by specifying both the user and the host:
ssh -t <user>@<Host domain / IP> "cd /path/to/directory; bash --login"
Example: ssh -t admin@test.com "cd public_html; bash --login"
You can also append commands to be executed on every login by adding them inside the double quotes, with a ; before each command.
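For example (the extra command here is only a placeholder):
ssh -t admin@test.com "cd public_html; ls -la; bash --login"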
Unfortunately, the suggested solution (of @rogeriopvl) doesn't work when you use multiple hops, so I found another one.
On remote machine add into ~/.bashrc the following:
[ "x$CDTO" != "x" ] && cd $CDTO
This allows you to specify the desired target directory on command line in this way:
ssh -t host1 ssh -t host2 "CDTO=/desired_directory exec bash --login"
Sure, this way can be used for a single hop too.
This solution can be combined with the useful tip of @redseven for greater flexibility (if there is no $CDTO, go to the saved directory, if it exists).
SSH itself provides a means of communication; it does not know anything about directories. Since you can specify which remote command to execute (this is, by default, your shell), I'd start there.
Simply modify your home directory with the command:
usermod -d /newhome username