Bash: ncftpls is not working

So, I'm trying to get the list of files and folders in the uppermost directory of my server and set it as a variable.
I'm using ncftpls to get the list of files and folders. It runs, but it doesn't display any files or folders.
LIST=$(ncftpls -u $USER -p $PASSWORD -P $PORT ftp://$HOST/)
echo $LIST
I tried not setting it as a variable and just running the command ncftpls, but it still won't display any of the files or folders.
The strange thing is, though, when I run this script
ncftp -u $USER -p $PASSWORD -P $PORT ftp://$HOST/ <<EOF
ls
EOF
it outputs all the files and folders just fine. But then I can't capture the output in a variable (I don't think).
If anyone has any ideas on what is going on, that'd be much appreciated!

The only acceptable time to use FTP was the 1970s, when you could trust a host by the fact that it was allowed onto the internet.
Do not try to use it today. Use sftp, rsync, ssh or another suitable alternative.
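If you can switch, here is a minimal sketch of the same listing over sftp, assuming key-based authentication is already set up for $USER@$HOST (-b - makes sftp read batch commands from stdin):
# a sketch, not the poster's setup: list the top-level directory over sftp
list=$(sftp -P "$PORT" -b - "$USER@$HOST" <<EOF
ls
EOF
)
echo "$list"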
You can capture the output of any command with $(...):
In your case,
list=$(
ncftp -u $USER -p $PASSWORD -P $PORT ftp://$HOST/ <<EOF
ls
EOF
)
This happens to be equivalent to
list=$(ncftp -u $USER -p $PASSWORD -P $PORT ftp://$HOST/ <<< "ls")
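When you print the result, quote the variable so the newlines in the listing are preserved:
echo "$list"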

Related

Postgres database backup not working locally (Crontab + Shell script using expect)

I am having issues on my Ubuntu server: I have two scripts which perform a pg_dump of two databases (a remote one and a local one). However, the backup file for the local one always ends up empty.
When I run the script manually, no problem.
The issue is when the script is run via crontab while I am NOT logged into the machine. If I'm in an SSH session there is no problem: it works with crontab. But when I'm not connected, it does not work.
Check out my full scripts/setup below, and feel free to suggest any improvements. For now I just want it to work, but if my method is insecure/inefficient I would gladly hear about alternatives :)
So far I've tried:
Using the postgres user for the local database (instead of another user I use to access the DB with my applications)
Switching pg_dump for /usr/bin/pg_dump
Here's my setup:
Crontab entry:
0 2 * * * path/to/my/script/local_databasesBackup.sh ; path/to/my/script/remote_databasesBackup.sh
scriptInitialization.sh (Tcl syntax, since it is sourced by the expect scripts):
set LOCAL_PWD "password_goes_here"
set REMOTE_PWD "password_goes_here"
Expect script, called by crontab (local/remote_databaseBackup.sh):
#!/usr/bin/expect -f
source path/to/my/script/scriptInitialization.sh
spawn path/to/my/script/localBackup.sh
expect "Password: "
send "${LOCAL_PWD}\r"
expect eof
exit
Actual backup script (local/remoteBackup.sh):
#!/bin/bash
DATE=$(date +"%Y-%m-%d_%H%M")
delete_yesterday_backup_and_perform_backup () {
/usr/bin/pg_dump -U postgres -W -F t localDatabaseName > /path/to/local/backup/folder/$DATE.tar
YESTERDAY_2_AM=$(date --date="02:00 yesterday" +"%Y-%m-%d_%H%M")
YESTERDAY_BACKUP_FILE=/path/to/local/backup/folder/$YESTERDAY_2_AM.tar
if [ -f "$YESTERDAY_BACKUP_FILE" ]; then
echo "$YESTERDAY_BACKUP_FILE exists. Deleting"
rm $YESTERDAY_BACKUP_FILE
else
echo "$YESTERDAY_BACKUP_FILE does not exist."
fi
}
CURRENT_DAY_NUMBER=$(date +"%d")
FIRST_DAY_OF_THE_MONTH="01"
if [ "$CURRENT_DAY_NUMBER" = "$FIRST_DAY_OF_THE_MONTH" ]; then
echo "First day of the month: Backup without deleting the previous backup"
/usr/bin/pg_dump -U postgres -W -F t localDatabaseName > /path/to/local/backup/folder/$DATE.tar
else
echo "Not the first day of the month: Delete backup from yesterday and backup"
delete_yesterday_backup_and_perform_backup
fi
The only difference between my local and remote scripts is the pg_dump parameters:
Local looks like this /usr/bin/pg_dump -U postgres -W -F t localDatabaseName > /path/to/local/backup/folder/$DATE.tar
Remote looks like this: pg_dump -U remote_account -p 5432 -h remote.address.com -W -F t remoteDatabase > /path/to/local/backup/folder/$DATE.tar
I ended up making two scripts because I thought combining them may have been the cause of the issue. However, I'm now pretty sure it's not.
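Worth noting: pg_dump's -W flag forces a password prompt, which is what the expect wrapper exists to answer. A hedged alternative, not part of the original setup, is a ~/.pgpass file; libpq reads it automatically, so pg_dump can authenticate without a prompt and the expect layer becomes unnecessary:
# ~/.pgpass format: hostname:port:database:username:password
echo 'localhost:5432:localDatabaseName:postgres:password_goes_here' >> ~/.pgpass
chmod 600 ~/.pgpass  # libpq ignores the file unless it is private
# then drop -W from the dump command:
/usr/bin/pg_dump -U postgres -F t localDatabaseName > /path/to/local/backup/folder/$DATE.tar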

Bash: get output of sudo command on remote using SSH

I'm getting incredibly frustrated here. I simply want to run a sudo command on a remote SSH connection and perform operations on the results I get locally in my script. I've looked around for close to an hour now and not seen anything related to that issue.
When I do:
#!/usr/bin/env bash
OUT=$(ssh username@host "command" 2>&1 )
echo $OUT
Then, I get the expected output in OUT.
Now, when I try to do a sudo command:
#!/usr/bin/env bash
OUT=$(ssh username@host "sudo command" 2>&1 )
echo $OUT
I get "sudo: no tty present and no askpass program specified". Fair enough, I'll use ssh -t.
#!/usr/bin/env bash
OUT=$(ssh -t username@host "sudo command" 2>&1 )
echo $OUT
Then, nothing happens. It hangs, never asking for the sudo password in my terminal. Note that this happens whether I send a sudo command or not; the ssh -t hangs, period.
Alright, let's forget the variable for now and just issue the ssh -t command.
#!/usr/bin/env bash
ssh -t username@host "sudo command" 2>&1
Then, well, it works no problem.
So the issue is that ssh -t inside a variable just doesn't do anything, but I can't figure out why or how to make it work for the life of me. Anyone with a suggestion?
If your script is rather concise, you could consider this:
#!/usr/bin/env bash
ssh -t username@host "sudo command" 2>&1 \
  | (
      read output
      # do something with $output, e.g.
      echo "$output"
    )
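One caveat: read consumes only the first line of the pipeline's output. If the remote command prints several lines, a while loop handles them all:
ssh -t username@host "sudo command" 2>&1 \
  | while read -r line; do
      echo "$line"   # process each line here
    done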
For more information, consider this: https://stackoverflow.com/a/15170225/10470287
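Separately, a hedged workaround, not from this answer: doubling the flag to -tt forces pseudo-terminal allocation even when ssh's standard streams are not a terminal, which may avoid the hang inside $(...):
#!/usr/bin/env bash
# -tt insists on a remote pty even though command substitution captures stdout
OUT=$(ssh -tt username@host "sudo command" 2>&1)
echo "$OUT"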

Syntax error when calling variable in bash

Here is my code:
#!/bin/bash
id=$(sshpass -p password ssh -tt username@ipaddress -p PORT "grep --include=\*.cr -rlw '/usr/local/bin/' -e '$1' | cut -c16-")
echo $id
sshpass -p password rsync -avHPe 'ssh -p PORT' username@ipaddress:/usr/local/bin/"$id" /usr/local/bin/
id echoes correctly, but I get an rsync error when trying to use the variable.
If I manually populate and run rsync, the command works, so I'm not sure what is going on.
Rsync gives me the following output on error.
rsync: link_stat "/usr/local/bin/match.cr\#015" failed: No such file or directory (2)
It seems to be grabbing extra characters? Any help is appreciated :)
Looks like your file contains Windows-specific "CR LF" line endings (\#015 is rsync's escape for the carriage return, octal 015). You need to convert these to Unix-style "LF" endings, using a tool like dos2unix or Notepad++.
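Alternatively, a small sketch of stripping the carriage return inside the script itself, using plain bash parameter expansion instead of converting files:
id=${id//$'\r'/}   # delete every carriage return captured from the remote command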

Bash scp several files password issue

I am trying to copy several files from a remote server into local drive in Bash using scp.
Here's the part of the code
scp -r -q $USR@$IP:/home/file1.txt $PWD
scp -r -q $USR@$IP:/home/file2.txt $PWD
scp -r -q $USR@$IP:/root/file3.txt $PWD
However, the problem is that EVERY time that it wants to copy a file, it keeps asking for the password of the server, which is the same. I want it to ask only once and then copy all my files.
And please, do not suggest rsync or setting up key authentication, since I do not want to do that.
Are there any other ways...?
Any help would be appreciated
You can use an expect script or sshpass:
sshpass -p 'password' scp ...
#!/usr/bin/expect -f
spawn scp ...
expect "password:"
send "ur_password\r"
expect eof
A disadvantage is that your password is now in plaintext.
I'm assuming that if you can scp files from the remote server, you can also ssh in and create a tarball of the remote files.
The -r flag is recursive, for copying entire directories, but you're listing distinct files in your command, so -r is superfluous.
Try this from the bash shell on the remote system:
$ mkdir /home/file_mover
$ cp /home/file1.txt /home/file_mover/
$ cp /home/file2.txt /home/file_mover/
$ cp /root/file3.txt /home/file_mover/
$ tar -cvf /home/myTarball.tar /home/file_mover/
$ scp -q $USR@$IP:/home/myTarball.tar $PWD
Well, in this particular case, you can write...
scp -q $USR@$IP:/home/file[1-3].txt $PWD
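And if the files don't share a pattern, a hedged alternative that still avoids keys: OpenSSH connection multiplexing authenticates once and reuses the connection for every following scp (the -o options are standard ssh_config keywords):
# prompt for the password once and keep a master connection alive
ssh -o ControlMaster=yes -o ControlPath=/tmp/mux_%r_%h_%p -o ControlPersist=10m -Nf $USR@$IP
# these reuse the master connection, so no further prompts
scp -o ControlPath=/tmp/mux_%r_%h_%p -q $USR@$IP:/home/file1.txt $PWD
scp -o ControlPath=/tmp/mux_%r_%h_%p -q $USR@$IP:/home/file2.txt $PWD
scp -o ControlPath=/tmp/mux_%r_%h_%p -q $USR@$IP:/root/file3.txt $PWD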

Shell: repeatedly execute ssh and scp commands

I have two lines that I need to repeat in a for loop:
ssh tam@192.168.174.43 mkdir -p $location
scp -r $i tam@192.168.174.43:$location
but each time they run I need to input the password. How can I change the code so that I only need to enter it once, or make this faster?
You can use the public/private key method, generating a key pair with ssh-keygen (https://help.ubuntu.com/community/SSH/OpenSSH/Keys),
and then use the script below.
for VARIABLE in dir1 dir2 dir3
do
ssh tam@192.168.174.43 mkdir -p $location
scp -r $i tam@192.168.174.43:$location
done
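A sketch of the one-time key setup this refers to; ssh-copy-id installs your public key on the remote host so later connections skip the password:
ssh-keygen -t ed25519               # accept the defaults; empty passphrase for unattended use
ssh-copy-id tam@192.168.174.43      # enter the password one last time here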
Alternative solution:
You can use sshpass
for VARIABLE in dir1 dir2 dir3
do
sshpass -p '<password>' ssh tam@192.168.174.43 mkdir -p $location
sshpass -p '<password>' scp -r $i tam@192.168.174.43:$location
done
While public/private keys are the easiest option, requiring no change to the existing script, there is another option: using sshfs. sshfs may not be installed by default.
With this approach, you basically mount the remote file system to a local directory, over ssh protocol. Then you can simply use commands like mkdir / cp etc.
NOTE: These commands run on YOUR system, not on the REMOTE system.
Mounting over ssh is a one-time job which will require your manual intervention. Do this before running the script, e.g. for your case:
mkdir /tmp/tam_192.168.174.43
sshfs tam@192.168.174.43:/ /tmp/tam_192.168.174.43
tam@192.168.174.43's password: <ENTER PASSWORD HERE>
& then, in your script, use simple commands like:
mkdir -p /tmp/tam_192.168.174.43/$location
cp -r $i /tmp/tam_192.168.174.43/$location
& to unmount:
fusermount -u /tmp/tam_192.168.174.43
