Linux Expect Script terminates too early - expect

I have written an expect script that, at the end, calls scp to copy a large file from server A to server B. The issue I have (which is the same using rsync instead of scp) is that the expect script terminates before the file transfer is complete. I know that I can set the timeout in expect, but as the file size grows so will the required timeout. Has anyone come across this issue, and is there a wait function that I can use in expect?

You can set the timeout to -1 to disable the timeout feature.
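Applied to the scp scenario, a minimal expect sketch might look like this (host, path, and password are placeholders):

```tcl
#!/usr/bin/expect -f
set timeout -1         ;# disable the timeout entirely
spawn scp /path/to/largefile user@serverb:/destination/
expect "password:"
send "your_password\r"
expect eof             ;# block until scp exits, however long it takes
```

The `expect eof` at the end is what makes the script wait for the spawned scp to finish rather than exiting as soon as the password is sent.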

Related

Process gets terminated when I call system() with wait=FALSE

I'm trying to process videos on OpenCPU and because they are very big I want to call the "FFmpeg" process using "system" and allow it to keep working until it's done.
But I need to get the temporary "file directory" created by OpenCPU so I can poll that directory until the video conversion is done.
In order to do that I call the system function with the parameter wait=FALSE as shown below.
This works fine if I use library(opencpu) on my machine, but when I move this into the production environment (Ubuntu 14.x) the system call gets interrupted just after starting.
Is this something that can be fixed using opencpu.conf? Or is it a bug?
ffmpeg_exe <- "/usr/bin/ffmpeg" # Linux path
exec_convert <- paste0("(", ffmpeg_exe, " -i ", input_file, " ", convert_command, " ", output_file, " 2> PROCESS_OUTPUT.txt; ls > PROCESS_DONE.txt)")
system(exec_convert, wait=FALSE)
I just found out that on Linux, OpenCPU does not allow this behavior: it kills all child processes when the request returns. This is on purpose; OpenCPU does not want orphan processes potentially running forever on the server, and it is not designed for that purpose.

Stop SFTP/FTP connection while the active connection is not transferring data in Unix Shell Scripting

I'm having an issue but I can't find anything related to it.
While I'm transferring a file over an FTP connection to a server, the transfer sometimes gets stuck but the connection stays open. Is there a way to detect that the FTP connection is no longer transferring data and, if so, close it?
I really don't have any code because I'm not sure whether this is possible.
Any idea what I can do at this point?
If it is closing the connection while you are transferring files, then it's either your FTP/SFTP client, server, or network. First, switch to a different FTP/SFTP client; some have more analysis tools than others. I have had to do this before. If that doesn't work, check the internet connection or contact your system/network administrator.
there is a way to see if the FTP connection is not transferring, close the connection?
If you are downloading a file, you can indirectly see the FTP transferring by watching the file's size:
name=$1
size=0
while sleep 10                # poll every ten seconds
      set -- `ls -s $name`    # $1 becomes the file's size in blocks
      [ "$1" -gt $size ]      # keep looping while the file has grown
do size=$1
done
exit 1
The above script (let's call it growing) runs while the file (passed as a parameter) grows.
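The layout works because everything between `while` and `do` is a command list: every command in it runs each iteration, but only the exit status of the last one (the `[ "$1" -gt $size ]` test) decides whether the loop continues. A minimal, self-contained sketch of the same idiom (a countdown instead of a file watch):

```shell
#!/bin/sh
# The while-condition may be a list of commands; the loop body runs
# only while the LAST command in the list succeeds.
n=3
while echo "checking n=$n"     # runs every iteration, status ignored
      [ "$n" -gt 0 ]           # last command: controls the loop
do
  n=$((n - 1))
done
echo "done, n=$n"
```

This prints a "checking" line for each probe and stops once the test fails, just as the watcher above stops once the file's size stops increasing.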
In your script you could write something like
growing file || pkill ftp &
before you start the FTP. If the file stops growing for ten seconds, ftp would be killed and the connection thereby closed. If ftp terminates normally, you could kill $! or just let growing end.

Why can't I transfer a file to the remote VPS with expect?

Expect has been installed, and it_is_a_test is the VPS password.
scp /home/wpdatabase_backup.sql root@vps_ip:/tmp
This command transfers the file /home/wpdatabase_backup.sql to my vps_ip:/tmp.
Now I rewrite the process into the following code:
#!/usr/bin/expect -f
spawn scp /home/wpdatabase_backup.sql root@vps_ip:/tmp
expect "password:"
send it_is_a_test\r
Why can't I transfer my file to the remote vps_ip with expect?
Basically, expect works with two key commands, send and expect. If send is used, it is usually necessary to have an expect afterwards (while the reverse is not required).
This is because without it we miss what is happening in the spawned process: expect assumes you simply need to send one string and are not expecting anything else from the session, so the script exits, causing the failure.
So you just have to add one expect to wait for the scp command to finish, which can be done by waiting for eof (End Of File).
#!/usr/bin/expect -f
spawn scp /home/wpdatabase_backup.sql root@vps_ip:/tmp
expect "password:"
send "it_is_a_test\r"
expect eof; # Will wait till the 'scp' completes.
Note :
The default timeout in expect is 10 seconds, so if the scp completes within 10 seconds there is no problem. But if the operation takes longer, expect will time out and quit, which makes the scp transfer fail. You can increase the timeout if you want:
set timeout 60; # Timeout is 1 min
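Putting the `expect eof` fix and the timeout note together, the whole script could look like this (host, path, and password are the question's placeholders; `set timeout -1` disables the timeout entirely, so even very large files will not cause a premature exit):

```tcl
#!/usr/bin/expect -f
set timeout -1         ;# never time out, however long the copy takes
spawn scp /home/wpdatabase_backup.sql root@vps_ip:/tmp
expect "password:"
send "it_is_a_test\r"
expect eof             ;# wait until scp completes
```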

WinSCP script will not exit after error, even with "option batch abort"

For some reason when my network drops my script will not abort.
Here's the command:
"%~dp0\winscp.exe" /console /script="script.txt"
exit
and the script.txt:
option batch abort
option confirm off
open ftp://user:pass@ftp.site.com/
cd /directory/
synchronize local
exit
When I pull my network cable (to test for network drop) I get:
The requested name is valid, but no data of the requested type was found.
Connection failed.
(A)bort, (R)econnect (5 s):
..
(A)bort, (R)econnect (0 s): Reconnect
It will continue to try and reconnect indefinitely.
Why doesn't the script auto abort? I am using option batch abort. Am I missing something?
This error is recoverable, so WinSCP keeps trying to resume the transfer even with the option batch abort as documented:
...
When batch mode is set to on any choice prompt is automatically replied negatively. Unless the prompt has a different default answer (such as a default “Reconnect” answer for a reconnect prompt), in what case the default answer is used (after a short time interval). See also a reconnecttime option below.
A value abort is like the on. ...
As mentioned above, you can configure how long WinSCP tries to reconnect using the reconnecttime option. By default, WinSCP tries to reconnect for 2 minutes in the on/abort mode.
Note that older versions of WinSCP used to reconnect indefinitely by default; you must be using one of those old versions.
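For example, to make the script in the question give up after 30 seconds of failed reconnect attempts (the value 30 is just an illustration), script.txt could add an `option reconnecttime` line:

```txt
option batch abort
option confirm off
option reconnecttime 30
open ftp://user:pass@ftp.site.com/
cd /directory/
synchronize local
exit
```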

Opening an ssh connection and keeping it open on startup

I need to open an ssh connection on login and keep it open, but not actually do anything with it. It would be best if all of it ran in the background.
I created an Automator application and made it run a shell script in bash. The script looks as follows:
sshpass -p 123456 ssh 123456@123.123.123.123
If I try to run the application I keep getting an error message; however, if I execute the exact same script in a terminal it works just fine.
Is there any way I can open that connection with an Automator application and keep it in the background?
You can send a KeepAlive packet to stop the pipe from closing.
In your ~/.ssh/config, add the following:
Host *
ServerAliveInterval 300
ServerAliveCountMax 2
This says that every 300 seconds a null (keep-alive) packet is sent, and the client gives up after 2 unanswered tries.
Source: http://patrickmylund.com/blog/how-to-keep-alive-ssh-sessions/
Do you really need to involve Automator at all?
Just save the script (say, foo.sh) in a folder with the same name as the script (i.e. foo.sh as well).
Put this folder in /System/Library/StartupItems/ and it will run when you start up your machine.
