Script to upload to sftp is not working - shell

I have 2 Linux boxes and I am trying to upload files from one machine to the other using sftp. I have put all the commands I use in the terminal into a shell script, like below.
#!/bin/bash
cd /home/tests/sftptest
sftp user1@192.168.0.1
cd sftp/sftptest
put test.txt
bye
But this is not working: it gives me an error like "the directory does not exist". Also, the terminal remains at the sftp> prompt, which means bye is not executed. How can I fix this?

I suggest using a here-document:
#!/bin/bash
cd /home/tests/sftptest
sftp user1@192.168.0.1 << EOF
cd sftp/sftptest
put test.txt
EOF

When you run the sftp command, it connects and waits for you to type commands. It kind of starts its own "subshell".
The other commands in your script would execute only once the sftp command finishes. And they would obviously execute as local shell commands, so in particular the put will fail as a nonexistent command.
You have to provide the sftp commands directly to the sftp command.
One way to do that is using input redirection, e.g. using a "here document", as the answer by @cyrus already shows:
sftp username@host << EOF
sftp_command_1
sftp_command_2
EOF
Another way is using an external sftp batch file:
sftp -b sftp.txt username@host
where the sftp.txt batch file contains:
sftp_command_1
sftp_command_2
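For example, a minimal sketch that generates the batch file and runs it, reusing the paths from the question (non-interactive, e.g. key-based, authentication is assumed, since -b cannot prompt for a password):
cat > sftp.txt <<'EOF'
cd sftp/sftptest
put test.txt
EOF
sftp -b sftp.txt user1@192.168.0.1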

Related

How can I redirect unwanted output on bash login over ssh?

I've got a script that uses ssh to log in to another machine and run a script there. My local script redirects all the output to a file. It works fine in most cases, but on certain remote machines I am capturing output that I don't want, and it looks like it's coming from stderr, maybe because of the way bash processes entries in its start-up files, but this is just speculation.
Here is an example of some unwanted lines that end up in my file.
which: no node in (/usr/local/bin:/bin:/usr/bin)
stty: standard input: Invalid argument
My current method is to just strip the predictable output that I don't want, but it feels like bad practice.
How can I capture output from only my script?
Here's the line that runs the remote script.
ssh -p 22 -tq user@centos-server "/path/to/script.sh" > capture
The ssh uses authorized_keys.
Edit: In the meantime, I'm going to work on directing the output from my script on machine B to a file and then copying it to A via scp and deleting it on B. But I would really like to be able to suppress the output completely, because when I run the script on machine A, it makes the output difficult to read.
To build on your comment on Raman's answer: have you tried suppressing .bashrc and .bash_profile as shown below?
ssh -p 22 -tq user@centos-server "bash --norc --noprofile /path/to/script.sh" > capture
If rc files are the problem on some servers, you should try to fix the broken rc files instead of your script/invocation, since they'll affect all (non-interactive) logins.
Try running ssh user@host 'grep -ls "which node" .*' on all your servers to find if they have "which node" in any dotfiles, as indicated by your error message.
Another thing to look out for is your shebang. You tag this as bash and write CentOS but on a Debian/Ubuntu server #!/bin/sh gives you dash instead of (sh-compatible) bash.
You can redirect stderr (2) to /dev/null and redirect the rest to the log file as follows:
ssh -p 22 -tq user@centos-server bash -c "/path/to/script.sh" 2>/dev/null >> capture
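If you would rather keep the errors than discard them, a variation is to send stderr to its own file; note that -t allocates a pseudo-terminal, which merges remote stderr into stdout, so drop it for this separation to work:
ssh -p 22 -q user@centos-server "/path/to/script.sh" > capture 2> errors.log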

FTP using Unix shell script and then triggering a task

I have a requirement at hand in Unix where I need to build a shell script.
The requirement is below:
I need to SFTP a file (let's say CSV file) from my dev server to uat server.
After the SFTP is done to that server, as soon as the file comes there and the exit code of the previous SFTP is 0, I need to trigger a task (this task I can take care of).
I have the basic idea on SFTP but I am not aware of how to trigger the next task as soon as the file comes to the uat server.
I need some pseudo code to start my exploration, please.
If you want to copy from somewhere to your local machine and run a command locally
If you have access to ssh then it can be done easily; this is what I usually do.
For example, I have a backup file on one of my servers. We can get a copy this way using scp:
scp root@server:/home/weekly.sql.zip .
The . means: put the file, keeping its name, in the directory I am in now.
The problem with this command is that it asks for a password interactively, so to eliminate this we can install sshpass and use it this way:
sshpass -p'your-password' scp root@server:/home/weekly.sql.zip .
Since we are using bash, which takes care of exit codes, if you add an && operator you can add a second command that will be triggered only after the first command has finished successfully.
sshpass -p'your-password' scp root@server:/home/weekly.sql.zip . && unzip weekly.sql.zip
First task is copying a file and second is to unzip it.
installing sshpass:
sudo apt install -y sshpass
If you want to copy from your local machine to somewhere and run a command remotely
sshpass -p'your-password' scp test.txt root@address:/home/ && sshpass -p'your-password' ssh root@address cat /home/test.txt
Which does this:
copy file test.txt to the server
then read it with the cat command
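For the original requirement (SFTP the CSV file to the uat server, then trigger the follow-up task only if the transfer exited with 0), here is a minimal sketch along the same lines; key-based authentication is assumed since batch mode cannot prompt, and trigger_task.sh is only a placeholder for whatever task you want to run:
sftp -b - user@uat-server <<'EOF'
put /path/to/file.csv /remote/target/dir
EOF
if [ $? -eq 0 ]; then
    ./trigger_task.sh   # placeholder: runs only when the sftp batch succeeded
else
    echo "SFTP transfer failed" >&2
    exit 1
fi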

Execute local script remotely and direct output back to local file

I have a shell script (script.sh) on a local linux machine that I want to run on a few remote servers. The script should take a local txt file (output.txt) as an argument and write to it from the remote server.
script.sh:
#!/bin/sh
file="$1"
echo "hello from remote server" >> $file
I have tried the following with no success:
ssh user@host "bash -s" < script.sh /path/to/output.txt
So if I'm reading this correctly...
script.sh is stored on the local machine, but you'd like to execute it on remote machines,
output.txt should get the output of the remotely executed script, but should live on the local machine.
As you've written things in your question, you're providing the name of the local output file to a script that won't be run locally. While $1 may be populated with a value, the file it references is nowhere to be seen from the perspective of where it's running.
In order to get a script to run on a remote machine, you have to get it to the remote machine. The standard way to do this would be to copy the script there:
$ scp script.sh user@remotehost:.
$ ssh user@remotehost sh ./script.sh > output.txt
Though, depending on the complexity of the script, you might be able to get away with embedding it:
$ ssh user@remotehost sh -c "$(cat script.sh)" > output.txt
The advantage of this is of course that it doesn't require disk space to be used on the remote machine. But it may be trickier to debug, and the script may function a little differently if it's run in-line like this rather than from a file. (For example, positional options will be different.)
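If the script expects arguments, they can be passed after bash -s, where -- ends bash's own option parsing (a sketch; /tmp/remote-output.txt is just a hypothetical remote path that becomes $1):
$ ssh user@remotehost 'bash -s -- /tmp/remote-output.txt' < script.sh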
If you REALLY want to provide the local output file as an option to the script you're running remotely, then you need to include a host along with the path, so the remote machine can reach back to that file. For example:
script.sh:
#!/bin/sh
outputhost="${1%:*}" # trim off path, leaving host
outputpath="${1#*:}" # trim off host, leaving path
echo "Hello from $(hostname)." | ssh "$outputhost" "cat >> '$outputpath'"
Then after copying the script over, call the script with:
$ ssh user@remotehost sh ./script.sh localhostname:/path/to/output.txt
That way, script.sh running on the remote host will send its output independently, rather than through your existing SSH connection. Of course, you'll want to set up SSH keys and such to facilitate this extra connection.
You can achieve it as below.
For saving output locally on UNIX/Linux:
ssh remote_host "bash -s" < script.sh > /tmp/output.txt
Alternatively, if the first line of your script file is #!/bin/bash, you don't need to use bash -s on the command line. Let's try it like below.
It is better to always use the full path to your script, like:
ssh remote_host "/tmp/scriptlocation/script.sh" > /tmp/output.txt
For testing, execute any Unix command first:
ssh remote_host "/usr/bin/ls -l" > /tmp/output.txt
For saving output locally on Windows:
ssh remote_host "script.sh" > .\output.txt
Better to use plink.exe. Example:
plink remote_host "script.sh" > log.txt

How to cd inside an SFTP connection where the connection is established using a shell script

In my script, I create an SFTP connection.
I read a directory value from the user earlier and, once the SFTP connection is established, I try to cd to the directory I got from the user.
But it's not working, probably because the prompt goes inside the server to which the SFTP connection was established.
In this case, how do I make it work?
I also faced this problem and was able to find the solution. The solution is right there in the man page of sftp. In the man page, you will find the format for using sftp written like this:
sftp [options] [user@]host[:dir[/]]
Actually, two formats are given there but this is the one I wanted to use and it worked.
So, what do you do? You simply supply the username@host as seen there, then, without any space, a : followed by the path you want to change to on the remote server, and that's all. Here is a practical example:
sftp user@host:/path/
If your script does, as you state somewhere in this page,
sftp $user@$host cd $directory
and then tries to do something else, like:
sftp $user@$host FOO
That command FOO will not be executed in the same directory $directory since you're executing a new command, which will create a new connection to the SFTP server.
What you can do is use the "batchfile" option of sftp, i.e. construct a file which contains all the commands you'd like sftp to do over one connection, for example:
$ cat commands.txt
cd foo/bar
put foo.tgz
lcd /tmp/
get foo.tgz
Then, you will be able to tell sftp to execute those commands in one connection, by executing:
sftp -b commands.txt $user@$host
So, I propose your solution to be:
With user's input, create a temporary text file which contains all the commands to be executed over one SFTP connection, then
Execute sftp using that temporary text file as "batch file"
Your script would do something like:
echo "Directory in which to go:"
read directory
temp=$( mktemp /tmp/FOOXXX )
echo "" > $temp
echo "cd $directory" >> $temp
# other commands
sftp -b $temp $user@$host
rm $temp
If you are trying to change the directory on the local machine, try lcd.
In what way is it not working? To change directories on the remote server, you use the "cd" command. To change directories on the local server, you use the "lcd" command. Read "man sftp".
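For example, a small batch file (placeholder paths) that uses both: lcd changes the directory on the local machine, cd changes it on the remote server, before the transfer:
lcd /local/reports
cd /remote/incoming
put report.csv
Run it with sftp -b commands.txt user@host as described above.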

How to use SSH to run a local shell script on a remote machine?

I have to run a local shell script (windows/Linux) on a remote machine.
I have SSH configured on both machine A and B. My script is on machine A which will run some of my code on a remote machine, machine B.
The local and remote computers can be either Windows or Unix based system.
Is there a way to do this using plink/ssh?
If Machine A is a Windows box, you can use Plink (part of PuTTY) with the -m parameter, and it will execute the local script on the remote server.
plink root@MachineB -m local_script.sh
If Machine A is a Unix-based system, you can use:
ssh root@MachineB 'bash -s' < local_script.sh
You shouldn't have to copy the script to the remote server to run it.
This is an old question, and Jason's answer works fine, but I would like to add this:
ssh user@host <<'ENDSSH'
#commands to run on remote host
ENDSSH
This can also be used with su and commands which require user input. (Note the quoted heredoc delimiter, 'ENDSSH'.)
Since this answer keeps getting bits of traffic, I would add even more info to this wonderful use of heredoc:
You can nest commands with this syntax, and that's the only way nesting seems to work (in a sane way)
ssh user@host <<'ENDSSH'
#commands to run on remote host
ssh user@host2 <<'END2'
# Another bunch of commands on another host
wall <<'ENDWALL'
Error: Out of cheese
ENDWALL
ftp ftp.example.com <<'ENDFTP'
test
test
ls
ENDFTP
END2
ENDSSH
You can actually have a conversation with some services like telnet, ftp, etc. But remember that heredoc just sends the stdin as text, it doesn't wait for response between lines
I just found out that you can indent the insides with tabs if you use <<-END!
ssh user@host <<-'ENDSSH'
#commands to run on remote host
ssh user@host2 <<-'END2'
# Another bunch of commands on another host
wall <<-'ENDWALL'
Error: Out of cheese
ENDWALL
ftp ftp.example.com <<-'ENDFTP'
test
test
ls
ENDFTP
END2
ENDSSH
(I think this should work)
Also see
http://tldp.org/LDP/abs/html/here-docs.html
Also, don't forget to escape variables if you want to pick them up from the destination host.
This has caught me out in the past.
For example:
user@host> ssh user2@host2 "echo \$HOME"
prints out /home/user2
while
user@host> ssh user2@host2 "echo $HOME"
prints out /home/user
Another example:
user@host> ssh user2@host2 "echo hello world | awk '{print \$1}'"
prints out "hello" correctly.
This is an extension to YarekT's answer to combine inline remote commands with passing ENV variables from the local machine to the remote host so you can parameterize your scripts on the remote side:
ssh user@host ARG1=$ARG1 ARG2=$ARG2 'bash -s' <<'ENDSSH'
# commands to run on remote host
echo $ARG1 $ARG2
ENDSSH
I found this exceptionally helpful by keeping it all in one script so it's very readable and maintainable.
Why does this work? ssh supports the following syntax:
ssh user@host remote_command
In bash we can specify environment variables to define prior to running a command on a single line like so:
ENV_VAR_1='value1' ENV_VAR_2='value2' bash -c 'echo $ENV_VAR_1 $ENV_VAR_2'
That makes it easy to define variables prior to running a command. In this case echo is our command we're running. Everything before echo defines environment variables.
So we combine those two features and YarekT's answer to get:
ssh user@host ARG1=$ARG1 ARG2=$ARG2 'bash -s' <<'ENDSSH'...
In this case we are setting ARG1 and ARG2 to local values, and sending everything after user@host as the remote_command. When the remote machine executes the command, ARG1 and ARG2 are set to the local values, thanks to local command-line evaluation, which defines the environment variables on the remote server; it then executes the bash -s command using those variables. Voila.
<hostA_shell_prompt>$ ssh user@hostB "ls -la"
That will prompt you for a password, unless you have copied your hostA user's public key to the authorized_keys file in the user's ~/.ssh directory on hostB. That allows passwordless authentication (if it is accepted as an auth method in the SSH server's configuration).
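A common way to set that up from hostA, assuming the OpenSSH client tools are installed, is ssh-copy-id, which appends your public key to ~/.ssh/authorized_keys on hostB:
<hostA_shell_prompt>$ ssh-copy-id user@hostB
After that, the ssh command above should run without a password prompt.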
I've started using Fabric for more sophisticated operations. Fabric requires Python and a couple of other dependencies, but only on the client machine. The server need only be an SSH server. I find this tool to be much more powerful than shell scripts handed off to SSH, and well worth the trouble of getting set up (particularly if you enjoy programming in Python). Fabric handles running scripts on multiple hosts (or hosts of certain roles), helps facilitate idempotent operations (such as adding a line to a config script, but not if it's already there), and allows construction of more complex logic (such as the Python language can provide).
cat ./script.sh | ssh <user>@<host>
chmod +x script.sh
ssh -i key-file root@111.222.3.444 < ./script.sh
Try running ssh user@remote sh ./script.unx.
Assuming you mean you want to do this automatically from a "local" machine, without manually logging into the "remote" machine, you should look into a TCL extension known as Expect, it is designed precisely for this sort of situation. I've also provided a link to a script for logging-in/interacting via SSH.
https://www.nist.gov/services-resources/software/expect
http://bash.cyberciti.biz/security/expect-ssh-login-script/
ssh user@hostname ". ~/.bashrc; cd /path-to-file/; . filename.sh"
It is highly recommended to source the environment file (.bashrc/.bash_profile/.profile) before running something on the remote host, because the source and target hosts' environment variables may differ.
I use this one to run a shell script on a remote machine (tested on /bin/bash):
ssh deploy@host . /home/deploy/path/to/script.sh
If you want to execute a command like this:
temp=`ls -a`
echo $temp
the command in backticks will cause errors.
The command below will solve this problem:
ssh user@host '''
temp=`ls -a`
echo $temp
'''
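A quoted heredoc (as shown in the earlier answers) achieves the same thing, since nothing inside it is expanded by the local shell:
ssh user@host <<'EOF'
temp=`ls -a`
echo $temp
EOF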
If the script is short and is meant to be embedded inside your script and you are running under bash shell and also bash shell is available on the remote side, you may use declare to transfer local context to remote. Define variables and functions containing the state that will be transferred to the remote. Define a function that will be executed on the remote side. Then inside a here document read by bash -s you can use declare -p to transfer the variable values and use declare -f to transfer function definitions to the remote.
Because declare takes care of the quoting and will be parsed by the remote bash, the variables are properly quoted and functions are properly transferred. You may just write the script locally, usually I do one long function with the work I need to do on the remote side. The context has to be hand-picked, but the following method is "good enough" for any short scripts and is safe - should properly handle all corner cases.
somevar="spaces or other special characters"
somevar2="!##$%^"
another_func() {
mkdir -p "$1"
}
work() {
another_func "$somevar"
touch "$somevar"/"$somevar2"
}
ssh user@server 'bash -s' <<EOT
$(declare -p somevar somevar2) # transfer variables values
$(declare -f work another_func) # transfer function definitions
work # call the function
EOT
The answer here (https://stackoverflow.com/a/2732991/4752883) works great if
you're trying to run a script on a remote linux machine using plink or ssh.
It will work if the script has multiple lines on linux.
However, if you are trying to run a batch script located on a local
Linux/Windows machine, your remote machine is Windows, and the script consists
of multiple lines, using
plink root@MachineB -m local_script.bat
won't work.
Only the first line of the script will be executed. This is probably a
limitation of plink.
Solution 1:
To run a multiline batch script (especially if it's relatively simple,
consisting of a few lines):
If your original batch script is as follows
cd C:\Users\ipython_user\Desktop
python filename.py
you can combine the lines together using the "&&" separator as follows in your
local_script.bat file:
https://stackoverflow.com/a/8055390/4752883:
cd C:\Users\ipython_user\Desktop && python filename.py
After this change, you can then run the script as pointed out here by
@JasonR.Coombs: https://stackoverflow.com/a/2732991/4752883 with:
`plink root@MachineB -m local_script.bat`
Solution 2:
If your batch script is relatively complicated, it may be better to use a batch
script which encapsulates the plink command as well, as follows, as pointed out
here by @Martin: https://stackoverflow.com/a/32196999/4752883:
rem Open tunnel in the background
start plink.exe -ssh [username]@[hostname] -L 3307:127.0.0.1:3306 -i "[SSH key]" -N
rem Wait a second to let Plink establish the tunnel
timeout /t 1
rem Run the task using the tunnel
"C:\Program Files\R\R-3.2.1\bin\x64\R.exe" CMD BATCH qidash.R
rem Kill the tunnel
taskkill /im plink.exe
This expect script does SSH into a target remote machine and runs some command on the remote machine. Do not forget to install expect before running it (on a Mac: brew install expect).
#!/usr/bin/expect
set username "enterusenamehere"
set password "enterpasswordhere"
set hosts "enteripaddressofhosthere"
spawn ssh $username@$hosts
expect "$username@$hosts's password:"
send -- "$password\n"
expect "$"
send -- "somecommand on target remote machine here\n"
sleep 5
expect "$"
send -- "exit\n"
You can use runoverssh:
sudo apt install runoverssh
runoverssh -s localscript.sh user host1 host2 host3...
-s runs a local script remotely
Useful flags:
-g use a global password for all hosts (single password prompt)
-n use SSH instead of sshpass, useful for public-key authentication
If it's just one script, the above solutions are fine.
I would set up Ansible to do the job. It works in the same way (Ansible connects over SSH to execute the scripts on the remote machine, whether Unix or Windows).
It will be more structured and maintainable.
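A minimal sketch of that approach, assuming a hypothetical inventory file hosts.ini listing the target machines, and using Ansible's built-in script module (which copies a local script to each remote host and runs it there):
# hosts.ini (hypothetical inventory):
# [targets]
# machineB ansible_user=user
ansible targets -i hosts.ini -m ansible.builtin.script -a "./local_script.sh"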
It is unclear if the local script uses locally set variables, functions, or aliases.
If it does, this should work:
myscript.sh:
#!/bin/bash
myalias $myvar
myfunction $myvar
It uses $myvar, myfunction, and myalias. Let us assume they are set locally and not on the remote machine.
Make a bash function that contains the script:
eval "myfun() { `cat myscript.sh`; }"
Set variable, function, and alias:
myvar=works
alias myalias='echo This alias'
myfunction() { echo This function "$#"; }
And "export" myfun, myfunction, myvar, and myalias to server using env_parallel from GNU Parallel:
env_parallel -S server -N0 --nonall myfun ::: dummy
Extending the answer from @cglotr: to write an inline command, use printf. It is useful for simple commands, and it supports multiple lines using the newline escape '\n'.
Example:
printf "cd /to/path/your/remote/machine/log \n tail -n 100 Server.log" | ssh <user>@<host> 'bash -s'
Note: don't forget to add bash -s.
I created a solution that works better for me by combining the use of a heredoc from Yarek T's answer with the piped cat method from cglotr's answer along with some other tricks for non-interactive login (using sshpass), using variables from the local and remote host in the script, and enabling sudo commands. The code is longer just because it includes some additional tricks that are likely desired, but the original questioner didn't ask for them.
The problem I have with Yarek's answer is that all the warnings and commands in the heredoc print to the screen. The problem I have with cglotr's answer is that it requires a script file and a complex command with additional interaction to execute the script. With my solution, I write a script that does everything by simply calling the script with the remote host IP address as the first argument like this:
./MYSCRIPT REMOTE_IP_ADDRESS
The script to be run on the remote host is saved to a variable within the script on the local host using a heredoc so that you don't need to do any quote escaping. Then, the variable containing the script is echo piped to sshpass. Be sure to indent the commands with tabs and not spaces (you'll get spaces instead of tabs when you copy the code). Here is an example of the remote script within the local script.
#!/bin/bash
# Input argument 1 should be the target host IP address (required)
RX_IP="/(\b25[0-5]|\b2[0-4][0-9]|\b[01]?[0-9][0-9]?)(\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}/"
IS_IP=$(echo $1 | sed -nr "${RX_IP}p" | wc -l)
if (( $IS_IP )); then
USERNAME=remoteuser
HOSTNAME=$1
# Export the SSH password to environment variable for sshpass and sudo.
# The space before the command prevents saving the command to history.
export SSHPASS=mypassword;
read -r -d '' SCRIPT <<-EOS
# Enable sudo commands with the following command.
# The space before echo prevents saving the command to history.
echo $SSHPASS | sudo -Sv
# Do stuff here. Escape variables to be accessed on the remote host.
# For example, escape print variable in an awk command:
# This command lists all USB block device partitions.
ls -l /dev /dev/mapper | awk '/^b/ && /sd[a-z][1-9]/ {print \$10}'
exit
EOS
echo "$SCRIPT" | sshpass -e ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${USERNAME}#${HOSTNAME} &>/dev/null
echo 'DONE'
else
echo "Missing IP address of target host."
echo "Usage: ./SCRIPT_NAME IP_ADDRESS
fi
You need to install sshpass on the local host like this (for Debian based distros).
sudo apt install sshpass
There is another approach: you can copy your script to the remote host with the scp command and then execute it easily.
First, copy the script over to Machine B using scp
[user@machineA]$ scp /path/to/script user@machineB:/home/user/path
Then, just run the script
[user@machineA]$ ssh user@machineB "/home/user/path/script"
This will work if you have given executable permission to the script.
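If the executable permission is missing, set it before copying (or on machineB after copying), e.g.:
[user@machineA]$ chmod +x /path/to/script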
