Downloading files with lftp seems not to notice a full disk on my side. How can I fix that? - lftp

Using lftp I download files from a remote server (mget -E) to my local server. When my local disk is full, I would expect lftp to get an error from the OS (CentOS 7) and retry or resume the download later. Instead, lftp just keeps writing 0-byte files on my side. Is there anything I can do to make lftp stop with an error when my local disk has 0 bytes free?

Maybe:
Settings: On startup, lftp executes ~/.lftprc and ~/.lftp/rc (or ~/.config/lftp/rc if ~/.lftp does not exist). You can place aliases and 'set' commands there. Some people prefer to see the full protocol debug: use the 'debug' command to turn on debugging.
Under Settings, xfer:disk-full-fatal (boolean): when true, lftp aborts a transfer if it cannot write the target file to disk because of a full disk or quota; when false, lftp waits for disk space to be freed.
This is from the (lengthy) lftp manual; one version is here: https://linux.die.net/man/1/lftp
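To make that behavior permanent, the setting can go into the rc file mentioned above (a config sketch; the path assumes the classic ~/.lftprc location):

```
# ~/.lftprc -- abort transfers instead of silently writing 0-byte files
set xfer:disk-full-fatal true
```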

Related

How to download automatically all newer files which are in remote ftp folder in shell script?

For example, I have two servers: 1. Server A and 2. Server B.
Server A has a directory called /testdir containing some files. I need a shell script, run on Server B, that downloads (via FTP) the files from Server A's /testdir. The download should happen automatically whenever a new file is added to /testdir on Server A, and files that were already transferred should be skipped.
Consider using lftp's incremental transfer (mirror). As an alternative, wget has similar mirroring functionality:
With wget:
wget --mirror -nH ftp://serverA/testdir
With lftp:
lftp
open ftp://serverA/
mirror /testdir .
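A sketch of a non-interactive version that could be run from cron on Server B (the server name, credentials, and paths below are placeholders, not from the original post); `mirror --only-newer` transfers only files that are missing or newer on the local side:

```shell
#!/bin/sh
# Build the lftp command for an incremental pull; all names are placeholders.
REMOTE_DIR=/testdir
LOCAL_DIR=.
LFTP_SCRIPT="mirror --only-newer --verbose $REMOTE_DIR $LOCAL_DIR; quit"
# Real invocation (commented out here; needs a reachable server and credentials):
# lftp -u user,password ftp://serverA -e "$LFTP_SCRIPT"
echo "$LFTP_SCRIPT"
```

A crontab entry such as `*/5 * * * * /usr/local/bin/pull-testdir.sh` would then rerun the pull every five minutes.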

Transferring large files over SFTP using Linux bash scripts

I intend to send a huge file, around 1+ GB, to the remote side using SFTP. It seems to work fine in interactive mode (when I run sftp user@xx.xx.xx.xx, enter the password manually, then key in the put command). But when I run it from a shell script, it always times out.
I have set the client and server ClientAliveTimeout settings in /etc/ssh/sshd_config, but it still occurs.
Below is the Linux script code:
sshpass -p "password" sftp user@xx.xx.xx.xx << END
put <local file path> <remote file path>
exit
END
The transfer takes about 10 minutes in interactive mode; when run from the script, the file is incomplete, based on its size.
Update: during interactive mode, the small files went through, but the big file stalled halfway through the transfer.
I prefer lftp for such things:
lftp -u user,passwd domain.tld -e "put /path/file; quit"
lftp can handle sftp too:
open sftp://username:password@server.address.com
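Combining the two, a single non-interactive sftp upload could look like this (the host and paths are hypothetical placeholders):

```shell
#!/bin/sh
# One-shot sftp upload via lftp; all names below are made up.
HOST=sftp://server.address.com
PUT_SCRIPT="put /path/to/bigfile -o /remote/path/bigfile; quit"
# Real invocation (commented out here; needs a reachable server):
# lftp -u username,password "$HOST" -e "$PUT_SCRIPT"
echo "$PUT_SCRIPT"
```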

Ansible task command with credentials

I am wondering whether, during an Ansible task, it is safe to send credentials (password, API key) on the command line?
No one on the remote server should see the command line (much less the credentials).
Thank you.
If you do not trust the remote server, you should never expose sensitive credentials to it, since anyone with root access on that server can easily intercept traffic, files, and memory you allocate there. The easiest way for someone to get your secrets would be to dump the temporary files Ansible creates to do its job on the remote server, since that requires only the privileges of the user you connect as!
There is a special environment variable, ANSIBLE_KEEP_REMOTE_FILES=1, used to troubleshoot problems. It should give you an idea of what information Ansible actually stores on remote disks, even if only for a brief moment. I executed the
ANSIBLE_KEEP_REMOTE_FILES=1 ansible -m command -a "echo 'SUPER_SECRET_INFO'" -i 127.0.0.1, all
command on my machine to see the files Ansible creates on the remote machine. After it executed, I found a temporary file in my home directory, named ~/.ansible/tmp/ansible-tmp-1492114067.19-55553396244878/command.py
So let's grep for the secret info:
grep SUPER_SECRET ~/.ansible/tmp/ansible-tmp-1492114067.19-55553396244878/command.py
Result:
ANSIBALLZ_PARAMS = '{"ANSIBLE_MODULE_ARGS": {"_ansible_version": "2.2.2.0", "_ansible_selinux_special_fs": ["fuse", "nfs", "vboxsf", "ramfs"], "_ansible_no_log": false, "_ansible_module_name": "command", "_raw_params": "echo \'SUPER_SECRET_INFO\'", "_ansible_verbosity": 0, "_ansible_syslog_facility": "LOG_USER", "_ansible_diff": false, "_ansible_debug": false, "_ansible_check_mode": false}}'
As you can see, nothing is safe from prying eyes! So if you are really concerned about your secrets, don't use anything critical on suspect hosts; use one-time passwords, keys, or revocable tokens to mitigate this issue.
It depends on how paranoid you are about these credentials. In general: no, it is not safe.
I guess the root user on the remote host can see anything.
For example, run strace -f -p$(pidof -s sshd) on the remote host and try to execute any command via ssh.
By default, Ansible writes all invocations to syslog on the remote host; you can set no_log: true on a task to avoid this.
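To suppress the syslog record for a single task, set `no_log: true` on it. A sketch (the command and variable name are illustrative, not from the original question):

```yaml
# Illustrative task; the deploy tool and api_token variable are made up.
- name: call deploy tool with a token
  command: "/usr/local/bin/deploy --token {{ api_token }}"
  no_log: true   # suppress logging of the invocation (and the secret)
```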

SCP file from ssh session to localhost

I have a headless file server on which I store and manage downloads and media, but occasionally I have to transfer small files back to my computer (Mac, using bash shell). The problem is that some files have more user-friendly names and commonly have spaces in them, and they are buried in the file directory hierarchy I have set up on my server.
When I'm using scp from my local machine, I don't have tab completion, so I have to manually type out the entire path and name with spaces escaped. When I ssh into the server first, the command:
scp /home/me/files/file\ name\ with\ spaces.png Me@localhost:/Users/Me/MyDirectory
fails with the error "Permission denied, please try again" even though I'm entering my local machine user password properly.
I've learned a little bit of sftp since I've been told it may be a better tool for file transfer. However, the utility seems outdated, and I still don't have tab completion after establishing a connection to the server (in my Terminal, pressing Tab just inserts a tab character).
My question is this: what can I do to allow tab completion while using scp from my Mac? Or am I using incorrect syntax for scp while in an ssh session, and is there something in that command I should fix? Or, is there a (better? newer?) tool other than sftp that would offer tab completion on a server?
Finally, if none of these problems have simple solutions, is there some package I could install (e.g. a completion package from Homebrew or the like) that would facilitate better tab-completion with any of these commands?
Looks to me like this is just some incorrect scp usage.
This is the format of the command
scp ./localFile.txt remoteUser@remoteHost:/remoteFile.txt
You were so close, but you have localhost where you should have your remoteHost.
localhost is the name that resolves to the machine you are currently on; so in your workflow, you are sshing to a machine, and then trying to scp that file to the same machine you are already sshd into.
What you need to do is figure out the IP address, or the hostname, of the computer you are trying to connect to, and use that instead.
scp ./localFile.txt remoteUser@192.168.1.100:/remoteFile.txt
# where 192.168.1.100 would be the IP of your Mac
I assume the reason you were getting permission denied is that you were using the login credentials for your Mac, but unknowingly trying to log in again to your headless machine.
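As for the escaping chore when running scp from the Mac, bash's `printf %q` can produce the escaped form of a path for you, instead of typing the backslashes by hand (a small illustration; the path is made up):

```shell
#!/bin/bash
# Let the shell do the escaping instead of typing backslashes manually.
FILE="/home/me/files/file name with spaces.png"
ESCAPED=$(printf '%q' "$FILE")
echo "$ESCAPED"
# Then, from the Mac:  scp "me@server:$ESCAPED" ~/MyDirectory/
```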

using lftp to upload to ftp site got 501 Insufficient disk space

I'm new to using FTP, and recently I came across this really weird situation.
I was trying to upload a file to someone else's FTP site, and I tried to use this command:
lftp -e "set ftp:passive-mode true; put /dir/to/myfile -o dest_folder/`basename /dir/to/myfile`; bye" ftp://userName:passWord@ftp.site.com
but I got the error:
put: Access failed: 501 Insufficient disk space : only 0 bytes available. (To dest_folder/myfile)
and when I logged on to their site and checked, a 0-byte file with myfile's name had been uploaded.
At first I thought the FTP site was out of disk space, but I then tried logging on to the site using
lftp userName:passWord@ftp.site.com
and then setting passive mode
set ftp:passive-mode true
and then uploading the file (using another name)
put /dir/to/myfile_1 -o dest_folder/`basename /dir/to/myfile_1`
This time the file was successfully uploaded, without the 501 insufficient disk space error.
Does anyone know why this happens? Thanks!
You might try using lftp -d to enable debug/verbose mode. Some FTP clients use the ALLO FTP command to tell the FTP server to "allocate" some number of bytes in advance; the FTP server can then accept or reject that. I suspect that lftp is sending ALLO to your FTP server, and it is the FTP server responding to that ALLO command with a 501 response code, causing your issue.
Per updates/comments, the OP confirmed that lftp's use of ALLO was indeed causing the initially reported behavior. Subsequent errors happened because lftp was attempting to update the timestamp of the uploaded file; those attempts were also rejected by the FTP server. lftp had tried the MFMT and SITE UTIME FTP commands.
To disable those, and to get lftp to succeed for the OP, the following lftp settings were needed:
ftp:trust-feat no
ftp:use-allo no
ftp:use-feat no
ftp:use-site-utime no
ftp:use-site-utime2 no
With these settings, you should be able to have lftp upload a file without sending the ALLO command beforehand, and without trying to modify the server-side timestamp of the uploaded file using MFMT or SITE UTIME.
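In rc-file form, the same settings could be made permanent (a config sketch; same list as above, path assumes the classic ~/.lftprc location):

```
# ~/.lftprc -- work around servers that reject ALLO / timestamp commands
set ftp:trust-feat no
set ftp:use-allo no
set ftp:use-feat no
set ftp:use-site-utime no
set ftp:use-site-utime2 no
```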
Hope this helps!
