Is there a way to attach lftp to an lftp process running on another machine? - lftp

When you exit lftp it keeps running in the background, and the attach command is used to reattach to that process if you need to. I'm wondering: is there a way to attach lftp to an lftp process running on another machine to which you have ssh access? An example would be great.

It's easy to do when you have ssh access. Just do
ssh user@host "lftp -c attach"
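For example, if an lftp instance was left running in the background on the remote host, something along these lines should reattach to it interactively (the -t is my addition to get a terminal, and the PID in the second form is just a placeholder):
ssh -t user@host "lftp -c attach"
# if several backgrounded lftp instances exist, attach to a specific one by PID:
ssh -t user@host "lftp -c 'attach 12345'"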

Related

Jenkins Execute shell fails after executing ssh to a remote server

I am creating a Jenkins job in which I am running an ssh command to execute a script that compares two folders with diff on a remote server. The script runs fine and the output file is created, but after this command the Jenkins "Execute shell" build step is marked as failed.
Command:
ssh -T user@dtest.com "bash /tmp/sample.sh" >> result.txt
Log:
ssh -T user@dtest.com "bash /tmp/sample.sh" >> result.txt
stdin: is not a tty
"Execute shell" is marked as failure
I am not sure what sample.sh is supposed to do, but I understand that you are trying to capture what is logged by this script.
I would try several solutions:
ssh -T user@dtest.com "bash /tmp/sample.sh >> result.txt"
This saves the output on the remote server. You can then copy the file from remote to local using:
scp user@dtest.com:/remote/dir/result.txt /local/dir/
More context: Copying files from server to local computer using ssh
If you choose this solution, you could also consider writing result.txt directly from your script, and keep the console output for important logging purposes.
Another solution I can think of would be to use
ssh user@dtest.com "bash /tmp/sample.sh" > result.txt
With this solution you redirect the output directly to your local machine.
But you will need to drop the ssh "-T" option, and you may run into other problems with Jenkins, so this might not suit your case.
ssh -T disables pseudo-tty allocation, which sounds like your problem's root cause. (https://docs.oracle.com/cd/E36784_01/html/E36870/ssh-1.html)
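Putting the first suggestion together, a Jenkins "Execute shell" step along these lines should work (the remote path /tmp/result.txt and the use of Jenkins' $WORKSPACE variable are only illustrative):
# run the script remotely and keep its output on the remote host
ssh -T user@dtest.com "bash /tmp/sample.sh >> /tmp/result.txt"
# copy the result back into the Jenkins workspace
scp user@dtest.com:/tmp/result.txt "$WORKSPACE/result.txt"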

Downloading large file on remote host. Then close ssh connection

I am trying to download a Stack Overflow dump of all posts to a remote server (actually a container on a remote host). As you can imagine, the dump is large (11G). I want to start the download and then be able to exit my SSH connection to the remote host.
I have looked at tmux but it's confusing.
I know wget https://archive.org/download/stackexchange/stackoverflow.com-Posts.7z will work but I will have to stay connected for the duration of the download.
Does anyone know how I can use tmux to solve this problem?
If I've correctly understood your situation, using nohup to launch the command will do the trick.
nohup wget https://archive.org/download/stackexchange/stackoverflow.com-Posts.7z
This will prevent the killing of the wget process when the shell terminates.
You can connect via SSH, execute the above command, and exit. It will keep downloading by itself.
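A slightly fuller variant (the redirection and trailing & are my additions, not part of the original suggestion) also pushes the process to the background and keeps wget's progress output in a file:
nohup wget https://archive.org/download/stackexchange/stackoverflow.com-Posts.7z > wget.log 2>&1 &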
By the way: Tmux stands for Terminal Multiplexer and it's not related to the life cycle of a process.
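If you still want to do it with tmux, as the question asks, a minimal approach is to start the download in a detached session (the session name "dl" is arbitrary) and reattach later to check on it:
# start the download inside a detached tmux session, then log out
tmux new-session -d -s dl 'wget https://archive.org/download/stackexchange/stackoverflow.com-Posts.7z'
# later, reconnect over SSH and look at the progress (the session ends once wget finishes)
tmux attach -t dl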

bash - running remote script from local machine

I tried this:
#!/bin/bash
ssh user@host.com 'sudo /etc/init.d/script restart'
But I get this error:
sudo: no tty present and no askpass program specified
How can I run that script? Right now, when I need to run it, I do these steps:
ssh user@host.com
sudo /etc/init.d/script restart
But I don't want to manually log in to the remote server every time and then enter the restart command.
Can I write a local script that I could run so that I would only need to enter the password, and it would run the remote script and restart the process?
You can use the -t option to allocate a pseudo-tty for your ssh command:
ssh -t -t user@host.com 'sudo /etc/init.d/script restart'
As per man ssh:
-t Force pseudo-tty allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty.
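So a small local wrapper script (the file name and paths are just examples) could be:
#!/bin/bash
# restart-remote.sh -- restarts the service on the remote host;
# sudo on the remote side will prompt for the password on the allocated tty
ssh -t user@host.com 'sudo /etc/init.d/script restart'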

Is there a way to make rsync execute a command before beginning its transfer

I am working on a script which will be used to transfer a file (using rsync) from a remote location and then perform some basic operations on the retrieved content.
When I initially connect to the remote location (not running an rsync daemon, I'm just using rsync to retrieve the files) I am placed in a non-standard shell. In order to enter the bash shell I need to enter "run util bash". Is there a way to execute "run util bash" before rsync begins to transfer the files over?
I am open to other suggestions if there is a way to do this using scp/ftp instead of rsync.
One way is to execute rsync from the server, instead of from the client. An ssh reverse tunnel allows us to temporarily access the local machine from the remote server.
Assume the local machine has an ssh server on port 22.
SSH into the remote host while specifying a reverse tunnel that maps a port on the remote machine (in this example, 2222) to port 22 on our local machine.
Execute your rsync command, replacing any reference to your local machine with the reverse ssh tunnel address: my-local-user@localhost
Add a port option to rsync's ssh to have it use port 2222.
The command:
ssh -R 2222:localhost:22 remoteuser@remotemachine << EOF
# we are on the remote server.
# we can ssh back into the box running the ssh client via port 2222 (the reverse tunnel)
run util bash
rsync -e "ssh -p 2222" --args /path/to/remote/source my-local-user@localhost:/path/to/local/machine/dest
EOF
Reference for passing complicated commands to ssh:
What is the cleanest way to ssh and run multiple commands in Bash?
You can also achieve it using --rsync-path. E.g.: rsync --rsync-path="run util bash && rsync" -e "ssh -T -c aes128-ctr -o Compression=no -x" ./tmp root@example.com:~
--rsync-path is normally used to specify what program is to be run on the remote machine to start up rsync. It is often used when rsync is not in the default remote shell's path (e.g. --rsync-path=/usr/local/bin/rsync). Note that PROGRAM is run with the help of a shell, so it can be any program, script, or command sequence you'd care to run, so long as it does not corrupt the standard-in & standard-out that rsync is using to communicate.
For more details, refer to the rsync man page.
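Since the original question is about retrieving files from the remote side, the same idea in the pull direction would look something like this (the host name and paths are only placeholders):
rsync -av --rsync-path="run util bash && rsync" -e ssh user@remotehost:/path/to/remote/source/ /path/to/local/dest/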

Running ssh sudo asynchronously

I'm trying to run a command with sudo on a remote machine. When I do it directly with
ssh -t -t -t myserver -q "sudo otheruser<<EOF
remotescript.sh
EOF"
it works fine, but if I add & at the end of the last line then it doesn't work. Why? How can I make it work?
In fact, I'm running several such commands (to different servers) from a local script, saving each output in a different file, and I would like them to run asynchronously.
Note: running ssh with otheruser@myserver is not an option. I really need to run sudo after I have logged in.
Remove requiretty from sudo config (/etc/sudoers) on the remote machine.
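In /etc/sudoers (always edit it with visudo) that usually means removing or commenting out a line like:
Defaults    requiretty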
Also add the -f option to ssh, which puts the command in the background (man: "must be used when ssh is run in the background").
The "&" should not be needed when using -f.
E.g.:
ssh -f -t -t -t myserver -q "sudo otheruser<<EOF
remotescript.sh
EOF"
Use expect to control your ssh session. It can be used to give automated responses to the remote shell. Most processes, when run asynchronously, suspend themselves or become suspended when they try to read input from the terminal, since another foreground process (the main shell) is using it.
There's a recent post about ssh and expect here: https://superuser.com/questions/509545/how-to-run-a-local-script-in-remote-server-using-expect-and-bash-script
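A bare-bones expect sketch of that idea (the prompt pattern and the password are placeholders, and keeping a password in a script has obvious drawbacks) could look like:
#!/usr/bin/expect -f
# spawn the same ssh command and answer the sudo password prompt automatically
spawn ssh -t -t myserver -q "sudo otheruser<<EOF
remotescript.sh
EOF"
expect "assword"
send "your-password\r"
expect eof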
Also try to disown your process after placing it in the background, to remove it from job control. E.g.:
whole_command &
disown
Redirecting its input to /dev/null might also help, but it could hang forever if it really needs input from the user.
whole_command <&- < /dev/null &
disown
