SFTP: return number of files in remote directory? - bash

I sent a batch of files to a remote server via SFTP. If it were a local directory I could do something like ls -l | wc -l to get the total number of files. However, with SFTP, I get the error Can't ls: "/|" not found.

echo ls -l | sftp server | grep -v '^sftp' | wc -l
If you want to count the files in a particular directory, put the directory path after the ls -l command, like this:
echo ls -l /my/directory/ | sftp server | grep -v '^sftp' | wc -l
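If directories or other entry types might inflate the count, a hedged refinement (assuming the remote ls -l output uses the usual permission-string prefix) is to count only lines that begin with -, i.e. regular files:
echo ls -l /my/directory/ | sftp server | grep -c '^-'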

Use a batch file to run commands remotely and get the data back to work with in bash:
Make a batch file called mybatch.txt with these sftp commands:
cd your_directory/your_sub_directory
ls -l
Save it out and give it 777 permissions.
chmod 777 mybatch.txt
Then run it like this:
sftp your_username@your_server.com < mybatch.txt
It will prompt you for the password; enter it.
Then you get the output dumped to the bash terminal, so you can pipe it to wc -l like this:
sftp your_user@your_server.com < mybatch.txt | wc -l
Connecting to your_server.com...
your_user@your_server.com's password:
8842
The 8842 is the number of lines returned by ls -l in that directory.
Instead of piping it to wc, you could dump it to a file for parsing to determine how many files/folders.
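As a rough sketch of that parsing step (listing.txt is a hypothetical file name), you can save the dump once and count regular files and directories separately by their leading permission character:
sftp your_user@your_server.com < mybatch.txt > listing.txt
grep -c '^-' listing.txt   # regular files
grep -c '^d' listing.txt   # directories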

I would use an sftp batch file.
Create a file called batchfile and enter "ls -l" in it.
Then run
sftp -b batchfile user@sftpHost | wc -l
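With -b, sftp echoes each batch command to stdout (prefixed with sftp>), so those lines may be included in the count; a hedged variant strips them first, mirroring the grep -v trick above:
sftp -b batchfile user@sftpHost | grep -v '^sftp>' | wc -l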

The easiest way I have found is to use the lftp client, which supports a shell-like syntax to transfer the output of remote ftp commands to local processes.
For example using the pipe character:
lftp -c 'connect sftp://user_name:password@host_name/directory; ls -l | wc -l'
This will make lftp spawn a local wc -l and give it the output of the remote ls -l ftp command on its stdin.
Shell redirection syntax is also supported and will write directly to local files:
lftp -c 'connect sftp://user_name:password@host_name/directory; ls -l >list.txt'
Thus a file named list.txt containing the remote file listing will be created in the current folder on the local machine. Use >> to append instead.
Works perfectly for me.
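If you only want regular files rather than every listing line, a hedged variation of the same idea filters the listing in the local pipeline before counting:
lftp -c 'connect sftp://user_name:password@host_name/directory; ls -l | grep "^-" | wc -l'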

Related

How to run a command like xargs on a grep output of a pipe of a previous xargs from a command in Bash

I'm trying to understand what's happening here out of curiosity, even though I can just copy and paste the output of the terminal to do what I need to do. The following command does not print anything.
ls /opt/local/var/macports/registry/portfiles -1 | sed 's/-.*//g' | sort -u | parallel "sudo port -N install" {} 2>&1 | grep -Po "Use '\K.*(?=')" | parallel "{}"
The directory I call ls on contains a bunch of filenames starting with the string I want to extract, which ends at the first dash (so stringexample-4.2009 becomes stringexample), and pipes each one into parallel (like xargs, but running each line separately). After running the command sudo port install <stringexample>, I get error outputs like so:
Unable to activate port <stringexample>. Use 'port -f activate <stringexample>' to force the activation.
Now, I wish to run port -f activate <stringexample>. However, I cannot seem to do anything with the output (port -f activate gettext, for example) that I see in the terminal.
I cannot even do ... | grep -Po "Use '\K.*(?=')" | xargs echo or ... | grep -Po "Use '\K.*(?=')" >> commands_to_run.txt (redirecting to the file only creates an empty file), despite the shorter part of the command:
ls /opt/local/var/macports/registry/portfiles -1 | sed 's/-.*//g' | sort -u | parallel "sudo port -N install {}" 2>&1 | grep -Po "Use '\K.*(?=')"
printing the commands to the terminal. Why does the pipe operator not work here? If the commands I wish to run are outputting to the terminal, surely there's got to be a way to capture them.
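Without claiming to know the cause, one hedged way to narrow it down is to capture stdout and stderr of the install stage separately (stdout.log and stderr.log are hypothetical names) and see which stream actually carries the Use '...' lines:
ls /opt/local/var/macports/registry/portfiles -1 | sed 's/-.*//g' | sort -u \
  | parallel "sudo port -N install {}" >stdout.log 2>stderr.log
grep -c "Use '" stdout.log stderr.log   # prints a per-file match count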

bash script to access a file in a remote host three layers deep

So in the terminal I access the remote host through ssh -p, and once I'm in I have to cd /directory1/directory2/. Then I find the latest directory using ls -td -- */ | head -n 1, cd into it, and run tail -n 1 file1.
All these commands work in the terminal, but I want to automate it so that I can just type ./tailer.sh and have that be the output.
Any ideas would be appreciated.
The shell script tailer.sh can look something like this
#!/bin/bash
ssh -p <PORT> <HOST_NAME> '( cd /directory1/directory2/ && LATEST_DIR=$(ls -td -- */ | head -n 1) && cd "${LATEST_DIR}" && tail -n 1 file1 )'
Then give execute permissions to tailer.sh using chmod u+x tailer.sh
Run the script using ./tailer.sh

tar & split remote files saving output locally remove "tar: Removing leading `/' from member names" message from output

This is a two-part question.
I've made a bash script that logs into a remote server, makes a list.txt, and saves that locally.
#!/bin/bash
sshpass -p "xxxx" ssh user#pass ls /path/to/files/ | grep "^.*iso" > list.txt
It then starts a for loop using the list.txt
for f in $(cat list.txt); do
The next command splits the target file and saves it locally
sshpass -p "xxxx" ssh user#pass tar --no-same-owner -czf - /path/to/files/$f | split -b 10M - "$f.tar.bz2.part"
Question 1
I need help understanding the above command: why is it saving the *.part files locally? Even though that is what I intend to do, I would like to understand it better. How would I do this the other way round, i.e. tar and split the files but save the output to a remote directory (flipping around what happens in the above command, using the same tools; sshpass is a requirement)?
Question 2
When running the above command, even though I have not made it verbose, it still prints this message:
tar: Removing leading `/' from member names
How do I get rid of it, since I have my own echo output as part of the script? I have tried the following after searching online, but I think piping a few commands together confuses tar and breaks the operation.
I have tried these with no luck:
sshpass -p "xxxx" ssh user#pass tar --no-same-owner -czfP - /path/to/files/$f | split -b 10M - "$f.tar.bz2.part
sshpass -p "xxxx" ssh user#pass tar --no-same-owner -czf -C /path/to/files/$f | split -b 10M - "$f.tar.bz2.part
sshpass -p "xxxx" ssh user#pass tar --no-same-owner -czf - /path/to/files/$f | split -b 10M - "$f.tar.bz2.part > /dev/null 2>&1
sshpass -p "xxxx" ssh user#pass tar --no-same-owner -czf - /path/to/files/$f > /dev/null 2>&1 | split -b 10M - "$f.tar.bz2.part
All of the above break the operation, and I would like it to not display any messages at all. I suspect it has something to do with regex and how the pipe passes arguments through. Any input is appreciated.
Anyway, this is just part of the script; the other part uploads the processed file after tarring and splitting it, but I've had to break it up into a few commands: a tar | split locally, then uploading via rclone. It would be far more efficient if I could pipe the output of split and save it remotely via ssh.
First and foremost, you must consider the security vulnerabilities when using sshpass.
About question 1:
Using tar with the -f - option creates the tar on the fly and sends it to stdout.
The | separates the commands.
sshpass -p "xxxx" ssh user#pass tar --no-same-owner -czf - /path/to/files/$f - Runs remotely
split -b 10M - "$f.tar.bz2.part" - Runs in local shell
The second command reads its stdin from the first command (the tar output) and creates the part files locally.
If you want to perform all the operations on the remote machine, you could enclose the rest of the commands in quotes like this (read up on shell quoting; double quotes are used here so the local $f still expands before the command is sent to the remote shell):
sshpass -p "xxxx" ssh user@pass "tar --no-same-owner -czf - /path/to/files/$f | split -b 10M - '$f.tar.bz2.part'"
About question 2.
tar: Removing leading '/' from member names is generated by tar command which sends errors/warnings to STDERR which in the terminal, STDERR defaults to the user's screen.
So you can suppress tar errors by adding 2>/dev/null:
sshpass -p "xxxx" ssh user#pass tar --no-same-owner -czf - /path/to/files/$f 2 > /dev/null | split -b 10M - "$f.tar.bz2.part

How to use a source file while executing a script on a remote machine over SSH

I am running a bash script on a remote machine over SSH.
ssh -T $DBHOST2 'bash -s' < $DIR/script.sh <arguments>
Within the script I am using a source file to define the functions used in script.sh.
DIR=`dirname $0` # to get the location where the script is located
echo "Directory is $DIR"
. $DIR/source.bashrc # source file
But since the source file is not present in the remote machine it results in an error.
Directory is .
./source.bashrc: No such file or directory
I can always define the functions in the main script rather than using a source file, but I was wondering whether there is any way to use a separate source file.
Edit: Neither the source file nor the script is located on the remote machine.
Here are two ways to do this, both requiring only one ssh session.
Option 1: Use tar to copy your scripts to the server
tar cf - $DIR/script.sh $DIR/source.bashrc | ssh $DBHOST2 "tar xf -; bash $DIR/script.sh <arguments>"
This 'copies' your scripts to $DBHOST2 and executes them there.
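One thing to watch, as a hedged tweak: if $DIR is an absolute path, tar strips the leading / while archiving, so bash $DIR/script.sh on the remote side may not find the extracted copy. Archiving with -C and relative names sidesteps that:
tar cf - -C "$DIR" script.sh source.bashrc | ssh $DBHOST2 "tar xf -; bash script.sh <arguments>"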
Option 2: Use bashpp to include all code in one script
If copying files onto $DBHOST2 is not an option, use bashpp.
Replace your . calls with #include and then run it through bashpp:
bashpp $DIR/script.sh | ssh $DBHOST2 bash -s
Alternatively, you can feed the concatenation of the source file and the script straight to the remote bash:
ssh -T $DBHOST2 'bash -s' <<< $(cat source_file $DIR/script.sh)
The following achieves what I am trying to do.
1. Copy the source file to the remote machine:
scp $DIR/source.bashrc $DBHOST2:./
2. Execute the local script with arguments on the remote machine via SSH:
ssh $DBHOST2 "bash -s" -- < $DIR/script.sh <arguments>
3. Copy the remote logfile logfile.log to the local file dbhost2.log, and remove the source file and logfile from the remote machine:
ssh $DBHOST2 "cat logfile.log; rm -f source.bashrc logfile.log" > dbhost2.log

Bash scp several files password issue

I am trying to copy several files from a remote server into local drive in Bash using scp.
Here's the part of the code
scp -r -q $USR@$IP:/home/file1.txt $PWD
scp -r -q $USR@$IP:/home/file2.txt $PWD
scp -r -q $USR@$IP:/root/file3.txt $PWD
However, the problem is that every time it copies a file, it asks for the server's password, which is the same each time. I want it to ask only once and then copy all my files.
And please do not suggest rsync or setting up key-based authentication, since I do not want to do that.
Are there any other ways...?
Any help would be appreciated
You can use an expect script or sshpass.
sshpass -p 'password' scp ...
#!/usr/bin/expect -f
spawn scp ...
expect "password:"
send "ur_password\r"
expect eof
A disadvantage is that your password is now in plaintext.
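If sshpass is acceptable but you would rather not hard-code the password, a sketch that asks for it once and reuses it for each copy (assumes sshpass is installed and $USR/$IP are set as above):
# read the password once, without echoing it
read -rs -p "Password: " PASS; echo
for f in /home/file1.txt /home/file2.txt /root/file3.txt; do
    sshpass -p "$PASS" scp -q "$USR@$IP:$f" "$PWD"
done
Note that the password is still visible to other local processes via the sshpass command line, so this only avoids retyping it, not the plaintext exposure.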
I'm assuming that if you can scp files from the remote server, you can also ssh in and create a tarball of the remote files.
The -r flag is recursive, for copying entire directories, but you're listing distinct files in your command, so -r is superfluous.
Try this from the bash shell on the remote system (and run the final scp from your local machine):
$ mkdir /home/file_mover
$ cp /home/file1.txt /home/file_mover/
$ cp /home/file2.txt /home/file_mover/
$ cp /root/file3.txt /home/file_mover/
$ tar -cvf /home/myTarball.tar /home/file_mover/
$ scp -q $USR@$IP:/home/myTarball.tar $PWD
Well, in this particular case (noting that file3.txt lives under /root, not /home), you can fetch all three in a single scp call, and a single password prompt, by quoting the remote paths together:
scp -q $USR@$IP:"/home/file[1-2].txt /root/file3.txt" $PWD
