function expect_password {
expect -c "\
set timeout 90
set env(TERM)
spawn $1
expect \"password:\"
send \"$password\r\"
expect eof
"
}
expect_password "scp /home/kit.ho/folder/file1 root#$IP:/usr/bin"
The above expect_password works perfectly!
However, I want to transfer multiple files in that directory, so I tried:
expect_password "scp /home/kit.ho/folder/* root#$IP:/usr/bin"
But an error comes up:
/home/kit.ho/folder/*: No such file or directory
Killed by signal 1.
It seems that expect doesn't recognize *. How can I transfer files in that way?
There is a possible answer using rsync but I can't use that.
The manpage of expect says "If program cannot be spawned successfully because exec(2) fails", so I assume that expect uses exec internally. exec doesn't call any shell to do wildcard expansion and similar magic, which means that your scp sees the asterisk literally and can't handle it. Have you tried calling your shell explicitly, like
expect_password "sh -c \"scp /home/kit.ho/folder/* root#$IP:/usr/bin\""
(maybe you need to omit the single quotes)?
edit:
use \" instead of '
Expect is an extension of Tcl, and Tcl does not speak shell-filename-globbing natively. Rather than shoe-horning a Tcl solution within your framework, try
set -- /home/kit.ho/folder/*
expect_password "scp $* root#$IP:/usr/bin"
Files with spaces won't work properly with this solution.
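If you would rather stay inside expect, Tcl's own glob command can expand the wildcard before spawn runs. Here is a minimal sketch (it assumes Tcl 8.5+ for the {*} argument-expansion operator, and that $password and $IP are set in the calling shell, as in the original function):
expect -c "
    set timeout 90
    # Tcl expands the wildcard itself; {*} splits the list into separate scp arguments
    spawn scp {*}[glob /home/kit.ho/folder/*] root@$IP:/usr/bin
    expect \"password:\"
    send \"$password\r\"
    expect eof
"
Because glob returns a proper Tcl list, filenames containing spaces also survive the expansion.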
Can't you drop the password handling completely and use SSH public keys instead?
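For completeness, the key-based route would look roughly like this (a sketch; it assumes the OpenSSH ssh-copy-id helper is available and that root logins are allowed on the target):
# generate a key pair once (no passphrase here, purely for illustration)
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""
# install the public key on the remote host; this asks for the password one last time
ssh-copy-id root@$IP
# from then on scp no longer prompts, wildcard and all
scp /home/kit.ho/folder/* root@$IP:/usr/bin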
Related
I'm writing a script that will scp a tar file from my local server to a remote host. Since the script generates the file through a pre-requisite process, the name is generated dynamically. My script needs to take the name of the file and pass it to scp for transfer.
#!/usr/bin/expect -f
spawn scp test.$(date +%y%m%d_%H%M).tar user@IP-ADDRESS:/destination/folder
set pass "password"
expect "password: "
send -- "$pass\r"
expect eof
I've tried setting the filename as a variable but keep seeing the same error:
can't read "(date +%y%m%d_%H%M)": no such variable
while executing "spawn scp test.$(date +%y%m%d_%H%M).tar user#IP-ADDRESS:/destination/folder"
$(date +%y%m%d_%H%M) is not a Tcl command. If you use expect, you have to learn Tcl. To get a formatted date in Tcl, use the clock command. Also, interpolation of the result of a command in Tcl is not done with $(...), but with [...]. You can find examples of this construct here.
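A minimal sketch of what that looks like inside the expect script (the format string mirrors the one used with date above):
#!/usr/bin/expect -f
# build the timestamp with Tcl's clock command instead of the shell's $(date ...)
set stamp [clock format [clock seconds] -format %y%m%d_%H%M]
set pass "password"
spawn scp "test.$stamp.tar" user@IP-ADDRESS:/destination/folder
expect "password: "
send -- "$pass\r"
expect eof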
Decided to go another route since the team was able to provision a new Artifactory repo for this binary and the like. However, thanks to the advice provided here I was able to make a few discoveries which I used to fix my issues:
I also had a password with a $ symbol, and that also caused a world of issues.
#!/bin/bash
TEST=$(date +%y%m%d_%H%M)
/usr/bin/expect <<eof
# braces keep the \$ in the password literal for expect
set password {pas\$word}
spawn scp "$TEST" user@IP-ADDRESS:/destination/folder
expect "*password:"
# the backslash stops bash from expanding \$password; expect substitutes its own variable
send "\$password\r"
expect eof
eof
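An alternative sketch that sidesteps the bash-versus-expect expansion confusion entirely: export the values and quote the heredoc delimiter, so bash never substitutes anything in the body and expect reads both values through its env array (TARBALL and SCP_PASS are made-up names):
#!/bin/bash
export TARBALL="test.$(date +%y%m%d_%H%M).tar"
export SCP_PASS='pas$word'        # single quotes keep the $ literal in bash
/usr/bin/expect <<'eof'
# the quoted delimiter above means bash performs no substitution inside this block
spawn scp $env(TARBALL) user@IP-ADDRESS:/destination/folder
expect "*password:"
send "$env(SCP_PASS)\r"
expect eof
eof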
I'm having problems with an expect script. When I grep for a file, I need to put its name into the line below, so that it looks like:
/opt/ericsson/arne/bin/import.sh -f bla_bla_bla.xml -val:rall
but I don't know how to get the filename into that line. When I put the grep command in between it didn't work; maybe the problem is the -val:rall option that follows?
If someone knows how I could put the name of the file in place of File1:
#!/usr/local/bin/expect --
set env(TERM) vt100
set env(SHELL) /bin/sh
set env(HOME) /usr/local/bin
set PASSWORD ter
set DUL [lindex $argv 0]
set VAR _cus_ipsec
match_max 1000
spawn ssh mashost
expect {
"assword" {send "$PASSWORD\r"}
}
expect "ran#rn23"
send -- "cd /tih/opt/bla/tih/ \r"
expect "ran#rn23"
send -- "grep -il $DUL * \r*"
expect "ran#rn23"
send -- "/opt/bl/arne/bin/imp.sh -f File1 -val:rall\r"
expect "ran#rn03"
send -- "/opt/bl/arne/b/imp.sh -f File1 -import\r"
expect "ran#rn23"
interact
Ok, thanks for the clarification, I believe I do understand what you're trying to do now.
What you need to do is change the expect statement that follows the grep command to one that will capture your filename. You will probably benefit from using the regexp mode of the expect command (-re), and possibly from using parentheses to capture the filename (not used in my sample below). I do not know what filenames your grep can return, so you will probably need to tweak the snippet quite a bit, but assuming your grep gives you a single .xml file beginning with "NAME", you could do something like the following:
send -- "grep -il $DUL * \r*"
expect -re "NAME.*\.xml"
send -- "/opt/bl/arne/bin/imp.sh -f $expect_out(0,string) -val:rall\r"
As a suggestion, you should really include timeout handling for your expect statements, and some error checking; otherwise this script will not stop if anything does not go as expected (e.g. only send if you have found what you expected).
Your regexp will probably be more complicated than the one I showed you, but you get the idea. Also, include exp_internal 1 somewhere near the top of your script to get good, solid debugging info on what your script is matching (or not matching). It will be very useful as you test it.
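Putting those suggestions together, a rough sketch of that part of the script might look like this (the filename pattern is only a guess and will need adjusting):
exp_internal 1   ;# verbose debugging of what does and does not match
set timeout 30
send -- "grep -il $DUL *\r"
expect {
    -re {(NAME\S*\.xml)} {
        send -- "/opt/bl/arne/bin/imp.sh -f $expect_out(1,string) -val:rall\r"
    }
    timeout {
        puts stderr "grep did not return a matching file"
        exit 1
    }
}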
Let me know how that goes.
So, I've established a connection via ssh to a remote machine; now what I would like to do is execute a few commands, grab some files, and copy them back to my host machine.
I am aware that I can run
ssh user#host "command1; command2;....command_n"
and then close the connection, but how can I do the same without using the aforementioned syntax? I have a lot of complex commands that have a bunch of quotes and characters that would be a mess to escape.
Thanks!
My immediate thought is: why not put the commands in a script (a plain text file), push it over to the remote machine, and run it there? If you can't for whatever reason, I fiddled around with this and I think you could probably do well with a heredoc:
ssh -t jane@stackoverflow.com bash << 'EOF'
command 1 ...
command 2 ...
command 3 ...
EOF
and it seems to do the right thing. Play with your heredoc to keep your quotes safe, but it will get tricky. The only other thing I can offer (and I totally don't recommend this) is that you could use a toy like Perl to read and write to the ssh process like so:
open S, "| ssh -i ~/.ssh/host_dsa -t jane#stackoverflow.com bash";
print S "date\n"; # and so on
but this is a really crummy way to go about things. Note that you can do this in other languages.
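On keeping quotes safe with the heredoc approach above: the main knob is whether the delimiter is quoted, which decides where variables get expanded. A small sketch (the variable names are made up):
# quoted delimiter: the body goes to the remote shell verbatim, so $HOSTNAME expands remotely
ssh jane@stackoverflow.com bash << 'EOF'
echo "running on $HOSTNAME"
EOF
# unquoted delimiter: $localdir is expanded by the local shell before ssh ever runs
localdir=/tmp/results
ssh jane@stackoverflow.com bash << EOF
mkdir -p "$localdir" && echo "created $localdir on the remote host"
EOF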
Instead of the shell use some scripting language (Perl, Python, Ruby, etc.) and some module that takes care of the ugly work. For example:
#!/usr/bin/perl
use Net::OpenSSH;
my $ssh = Net::OpenSSH->new($host, user => $user);
$ssh->system('echo', 'Net::Open$$H', 'Quot%$', 'Th|s', '>For', 'You!');
$ssh->system({stdout_file => '/tmp/ls.out'}, 'ls');
$ssh->scp_put($local_path, $remote_path);
my $out = $ssh->capture("find /etc");
From here: Can I ssh somewhere, run some commands, and then leave myself a prompt?
The use of an expect script seems pretty straightforward... Copied from the above link for convenience, not mine, but I found it very useful.
#!/usr/bin/expect -f
spawn ssh $argv
send "export V=hello\n"
send "export W=world\n"
send "echo \$V \$W\n"
interact
I'm guessing a line like
send "scp -Cpvr someLocalFileOrDirectory you#10.10.10.10/home/you
would get you your files back...
and then:
send "exit"
would terminate the session, or you could end with interact and type the exit yourself.
I want to start a script remotely via ssh like this:
ssh user@remote.org -t 'cd my/dir && ./myscript data my@email.com'
The script does various things which work fine until it comes to a line with nohup:
nohup time ./myprog $1 >my.log && mutt -a ${1%.*}/`basename $1` -a ${1%.*}/`basename ${1%.*}`.plt $2 < my.log 2>&1 &
It is supposed to start the program myprog, redirect its output to my.log, and send an email with some data files created by myprog as attachments and the log as the body. But when the script reaches this line, ssh outputs:
Connection to remote.org closed.
What is the problem here?
Thanks for any help
Your command runs a pipeline of processes in the background, so the calling script will exit straight away (or very soon afterwards). This will cause ssh to close the connection. That in turn will cause a SIGHUP to be sent to any process attached to the terminal that the -t option caused to be created.
Your time ./myprog process is protected by a nohup, so it should carry on running. But your mutt isn't, and that is likely to be the issue here. I suggest you change your command line to:
nohup sh -c "time ./myprog $1 >my.log && mutt -a ${1%.*}/`basename $1` -a ${1%.*}/`basename ${1%.*}`.plt $2 < my.log 2>&1 " &
so the entire pipeline gets protected. (If that doesn't fix it, it may be necessary to do something with file descriptors; for instance, mutt may have other issues with the terminal not being around, or the quoting may need tweaking depending on the parameters, but give that a try for now...)
This answer may be helpful. In summary, to achieve the desired effect, you have to do the following things:
Redirect all I/O on the remote nohup'ed command
Tell your local SSH command to exit as soon as it's done starting the remote process(es).
Quoting the answer I already mentioned, which in turn quotes Wikipedia:
Nohuping backgrounded jobs is for example useful when logged in via SSH, since backgrounded jobs can cause the shell to hang on logout due to a race condition [2]. This problem can also be overcome by redirecting all three I/O streams:
nohup myprogram > foo.out 2> foo.err < /dev/null &
UPDATE
I've just had success with this pattern:
ssh -f user@host 'sh -c "( (nohup command-to-nohup 2>&1 >output.file </dev/null) & )"'
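Applied to the script from the question, one way to use that pattern is to detach the whole script at the ssh level rather than relying on the nohup inside it; something along these lines (the log file name is made up and the arguments are abbreviated):
ssh -f user@remote.org 'cd my/dir && sh -c "( (nohup ./myscript data my@email.com >myscript.log 2>&1 </dev/null) & )"'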
Managed to solve this for a use case where I needed to start backgrounded scripts remotely via ssh, using a technique similar to the other answers here but in a way I feel is simpler and cleaner (at least, it makes my code shorter and, I believe, better-looking): explicitly close all three streams using the stream-close redirection syntax, as discussed at the following locations:
https://unix.stackexchange.com/questions/131801/closing-a-file-descriptor-vs
https://unix.stackexchange.com/questions/70963/difference-between-2-2-dev-null-dev-null-and-dev-null-21
http://www.tldp.org/LDP/abs/html/io-redirection.html#CFD
https://www.gnu.org/software/bash/manual/html_node/Redirections.html
This is in place of the more widely used but (IMHO) hackier redirect to/from /dev/null, and it results in the deceptively simple:
nohup script.sh >&- 2>&- <&-&
2>&1 works just as well as 2>&-, but I feel the latter is ever-so-slightly more clear. ;) Most people might have a space preceding the final "background job" ampersand, but since it is not required (as the ampersand itself functions like a semicolon in normal usage), I prefer to omit it. :)
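In the context of the original question, that one-liner simply gets wrapped in the ssh call, along the lines of (the script path is hypothetical):
ssh user@host 'nohup /path/to/script.sh >&- 2>&- <&- &'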
I've been trying to automate some configuration backups on my Cisco devices. I've already managed to write a script that accomplishes the task, but I'm trying to improve it to handle errors too.
I think it's necessary to catch errors at two steps: first just after the 'send \"$pass\r\"', to catch login errors (access denied messages), and again at the 'expect \": end\"' line, to be sure that the commands issued were able to pull the configuration from the device.
I've seen some ways to do it within a pure expect script, but I want to use a bash script so that I can supply a list of devices from a .txt file.
#!/bin/bash
data=$(date +%d-%m-%Y)
dataOntem=$(date +%d-%m-%Y -d "-1 day")
hora=$(date +%d-%m-%Y-%H:%M:%S)
log=/firewall/log/bkpCisco.$data.log
user=MYUSER
pass=MYPASS
for firewall in `cat /firewall/script/firewall.cisco`
do
VAR=$(expect -c "
spawn ssh $user@$firewall
expect \"assword:\"
send \"$pass\r\"
expect \">\"
send \"ena\r\"
expect \"assword:\"
send \"$pass\r\"
expect \"#\"
send \"conf t\r\"
expect \"conf\"
send \"no pager\r\"
send \"sh run\r\"
log_file -noappend /firewall/backup/$firewall.$data.cfg.tmp
expect \": end\"
log_file
send \"pager 24\r\"
send \"exit\r\"
send \"exit\r\"
")
echo "$VAR"
done
You need alternative patterns in the expect statements where you want to catch errors. If you're looking for a specific error message you can specify that; alternatively, just specify a timeout handler which will eventually trigger when the normal output fails to appear.
E.g. after send \"$pass\r\", instead of expect \">\", try:
expect \">\" {} timeout {puts stderr {Could not log in}; exit}
i.e. if the expected output arrives before the timeout (default 10 seconds), do nothing and continue; otherwise complain and exit from expect. You might also need an eof pattern to match the case where your ssh session ends.
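For example, the login step with an eof branch added might look like this inside the existing expect -c string (the exact wording of the access-denied message varies, so that pattern is only a guess):
send \"$pass\r\"
expect {
    \">\" {}
    -re \"(denied|incorrect)\" {puts stderr {Login failed}; exit 1}
    timeout {puts stderr {Could not log in}; exit 1}
    eof {puts stderr {Connection closed unexpectedly}; exit 1}
}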
Note that for strings where you don't do any variable substitution, you don't need \"\" around them; you can use {} or even nothing when it's a single word, e.g. expect conf and send {no pager}.
BTW I agree with bstpierre that this would be cleaner if you dropped bash and did the whole thing in expect, but if bash does the job that's ok.
If you don't use single quotes (expect -c '...'), then all the $variables will be substituted by bash, not expect. It may be easier to put the expect code in a separate file, or maybe in a heredoc.
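A sketch of the heredoc variant for the same loop body, abbreviated to the login steps (unquoted delimiter, so bash still fills in $user, $pass and $firewall, but the expect-level quoting no longer needs escaping):
VAR=$(expect <<EOF
spawn ssh $user@$firewall
expect "assword:"
send "$pass\r"
expect ">"
send "ena\r"
expect "#"
EOF
)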