How to get the list of files downloaded with scp -r - bash

Is it possible to get the list of files that were downloaded using scp -r?
Example:
$ scp -r $USERNAME@HOSTNAME:~/backups/ .
3.tar 100% 5 0.0KB/s 00:00
2.tar 100% 5 0.0KB/s 00:00
1.tar 100% 4 0.0KB/s 00:00
Expected result:
3.tar
2.tar
1.tar

The progress output that scp generates does not seem to come out on either of the standard streams (stdout or stderr), so capturing it directly is difficult. One way around this is to make scp print verbose information (using the -v switch) and then capture and process that. The verbose information is written to stderr, so you will need to capture it with the 2> redirection operator.
For example, to capture the verbose output do:
scp -rv $USERNAME@HOSTNAME:~/backups/ . 2> scp.output
Then you will be able to filter this output with something like this:
awk '/Sending file/ {print $NF}' scp.output
The awk command simply prints the last word on the relevant line. If you have spaces in your filenames then you may need to come up with a more robust filter.
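If spaces are a concern, a more robust variant (a sketch, assuming the verbose lines keep the "Sending file modes: C<mode> <size> <name>" layout) is to strip the fixed leading fields with sed instead of splitting on whitespace:
# print only what follows the mode and size fields, so names containing spaces survive intact
sed -n 's/^Sending file modes: C[0-7]\{4\} [0-9]\{1,\} //p' scp.output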

I realise that you asked about scp, but I will give you an alternative solution to the underlying problem: copy files recursively from a server using ssh, and get the names of the files that are copied.
The scp solution has at least one problem: if you copy lots of files, it takes a while as each file generates a transaction. Instead of scp, I use ssh and tar:
ssh $USERNAME@HOSTNAME "cd ~/backups/ && tar -cf - ." | tar -xf -
With that, adding a tee and a tar -t gives you what you need:
ssh $USERNAME@HOSTNAME "cd ~/backups/ && tar -cf - ." | tee >(tar -xf -) | tar -tf - > file_list
Note that it might not work in all shells (bash is fine), as the >(...) construct (process substitution) is not universally available. If your shell does not have it, you can use a fifo instead (which does by hand what process substitution does for you):
mkfifo tmp4tar
(tar -xf tmp4tar ; rm tmp4tar;) &
ssh $USERNAME@HOSTNAME "cd ~/backups/ && tar -cf - ." | tee -a tmp4tar | tar -tf - > file_list

scp -v -r yourdir orczhou@targethost:/home/orczhou/ \
2> >(awk '{if($0 ~ "Sending file modes:")print $6}')
With -v, lines such as "Sending file modes: C0644 7864 a.sql" are output to stderr; the awk command then picks the file name out of each of those lines to build the file list.

Related

Bash input problem after computing size of folder with du for pv when gpg prompts user

I'm working on a script to encrypt a bunch of folders using tar, gzip and gpg, plus pv with du and awk to keep track of progress. Here is the line that causes problems:
tar cf - "$f" | pv -s $(($(du -sk "$f" | awk '{print $1}') * 1024)) | gzip | gpg -e -o "$output/$(basename "$f").tar.gz.gpg"
This works well most of the time. However, if the output file already exists, gpg prompts the user, asking if we want to override the file or not. And in this case, when the script exits, the console kind of breaks: what I type does not appear anymore, pressing Enter does not create a new line, and so on.
The problem does not appear if the output file does not exist yet, nor if the -s option of pv is omitted or computed without du and awk (e.g. $((500 * 500)); this won't break the console, but obviously the progress bar would be completely off).
The problem is reproducible even when running this command line outside of a script and replacing $f and $output with the desired values.
Perhaps one or a combination of these changes will help.
Change the gpg command to write to stdout, redirected to the file you want: gpg -e -o - > "$output/$(basename "$f").tar.gz.gpg".
Calculate the file size with stat: stat -c "%s" "$f".
The whole line might then look like this:
tar cf - "$f" | pv -s $(stat -c "%s" "$f") | gzip | gpg -e -o - > "$output/$(basename "$f").tar.gz.gpg"

Pipe to another command bash

I cannot seem to get my bash script to work: I want to pipe the output from the gunzip command to another command, but it is not working. Can anyone help me?
The gunzip command outputs a tar file that I then want to extract with the tar command to restore the original files.
# let the user choose what they want to Restore
echo -n "Select the file or directory you want to Restore"
read Chosendata
echo -e "Starting Restore"
# unziping files
gunzip ${Chosendata} | tar xvf - #Here
# end the restore.
echo -e "Restore complete"
Use gunzip -c.
-c, --stdout write on standard output, keep original files unchanged
Or tar only: tar -xzf ${Chosendata}.
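Applied to the script above, the restore line would then read (with the tar-only variant, the gunzip stage disappears entirely):
gunzip -c ${Chosendata} | tar xvf -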

Pipe script and binary data to stdin via ssh

I want to execute a bash script remotely which consumes a tarball and performs some logic to it. The trick is that I want to use only one ssh command to do it (rather than scp for the tarball followed by ssh for the script).
The bash script looks like this:
cd /tmp
tar -zx
./archive/some_script.sh
rm -r archive
I realize that I can simply reformat this script into a one-liner and use
tar -cz ./archive | ssh $HOST bash -c '<commands>'
but my actual script is complicated enough that I must pipe it to bash via stdin. The challenge here is that ssh provides only one input pipe (stdin) which I want to use for both the bash script and the tarball.
I came up with two solutions, both of which include the bash script and the tarball in stdin.
1. Embed base64-encoded tarball in a heredoc
In this case the server receives a bash script with the tarball embedded inside a heredoc:
base64 -d <<'EOF_TAR' | tar -zx
<base64_tarball>
EOF_TAR
Here's the complete example:
ssh $HOST bash -s < <(
# Feed script header
cat <<'EOF'
cd /tmp
base64 -d <<'EOF_TAR' | tar -zx
EOF
# Create local tarball, and pipe base64-encoded version
tar -cz ./archive | base64
# Feed rest of script
cat <<'EOF'
EOF_TAR
./archive/some_script.sh
rm -r archive
EOF
)
In this approach, however, tar does not start extracting the tarball until it has been fully transferred over the network.
2. Feed tar binary data after the script
In this case the bash script is piped into stdin followed by the raw tarball data. bash passes control to tar which processes the tar portion of stdin:
ssh $HOST bash -s < <(
# Feed script.
cat <<'EOF'
function main() {
    cd /tmp
    tar -zx
    ./archive/some_script.sh
    rm -r archive
}
main
EOF
# Create local tarball and pipe it
tar -cz ./archive
)
Unlike the first approach, this one allows tar to start extracting the tarball as it is being transferred over the network.
Side note
Why do we need the main function, you ask? Why feed the entire bash script first, followed by the binary tar data? Well, bash reads its script from stdin as it executes, so if the binary data were put in the middle of the script, tar would read past the end of the tar file and eat up part of the bash script that follows it. Wrapping everything in main forces bash to read (and thus consume from stdin) the whole script before any of it runs, so only the tar data is left on stdin when tar starts.

Self-extracting script in sh shell

How would I go about making a self extracting archive that can be executed on sh?
The closest I have come to is:
extract_archive () {
printf '<archive_contents>' | tar -C "$extract_dir" -xvf -
}
Where <archive_contents> contains a tarball with null characters, %, ' and \ characters escaped and enclosed between single quotes.
Is there any better way to do this so that no escaping is required?
(Please don't point me to shar, makeself etc. I want to write it from scratch.)
An alternative is to use a marker for the end of the shell script and then use sed to cut out the shell script itself.
Script selfextract.sh:
#!/bin/bash
sed '0,/^#EOF#$/d' $0 | tar zx; exit 0
#EOF#
How to use:
# create sfx
cat selfextract.sh data.tar.gz >example_sfx.sh
# unpack sfx
bash example_sfx.sh
Since shell scripts are not compiled, but executed statement by statement, you can mix binary and text content using a pattern like this (untested):
#!/bin/sh
sed -e '1,/^exit$/d' "$0" | tar -C "${1-.}" -zxvf -
exit
<binary tar gzipped content here>
You can add those two lines to the top of pretty much any tar+gzip file to make it self extractable.
To test:
$ cat header.sh
#!/bin/sh
sed -e '1,/^exit$/d' "$0" | tar -C "${1-.}" -zxvf -
exit
$ tar -czf header.tgz header.sh
$ cat header.sh header.tgz > header.tgz.sh
$ sh header.tgz.sh
header.sh
Some good articles on how to do exactly that could be found at:
http://www.linuxjournal.com/node/1005818.
https://community.linuxmint.com/tutorial/view/1998
Yes, you can do it natively with xtar.
Build the xtar elf64 tar self-extractor header (you are free to modify it to support elf32, PE and other executable formats); it is based on a lightweight bsdtar untar and the standard elf library.
cc contrib/xtar.c -o ./xtar
Copy the xtar binary to yourTar.xtar:
cp ./xtar yourTar.xtar
Append the yourTar.tar archive to the end of yourTar.xtar and make the result executable:
cat yourTar.tar >> yourTar.xtar
chmod +x yourTar.xtar

lftp: how to recursively set permissions; first by directory, then by file

When securing a Drupal or WordPress installation on a shared host that does not expose SSH access (a lousy situation, fwiw), lftp seems like the right approach for batch-setting permissions on directories and files. The find command boasts that you can redirect its output, so one should be able to run find, grep to match only lines ending in "/" (meaning a directory), set the permissions on those matches to 755, do the inverse for file matches and set them to 644, and then fine-tune specific files such as settings.php.
lftp prompt> find . | grep "/$" | xargs chmod -v 755
This isn't working, and I'm sure I have failed to chain these commands in the correct sequence and format.
How do I get this to work?
Update: by "isn't working" I mean that the above command produces no output to the console, nor to the lftp error log. It isn't running these commands locally, fwiw. I'll reduce the command as a demonstration:
find . | grep "/$"
will take the output of find and return the matches (here, directories) by nature of the string match:
./daily/
./ffmpeg-installer/
./hourly/
./includes/
./includes/database/
./includes/database/mysql/
./and_so_forth_on_down
Which is cool, since I wish to perform a chmod (an internal command for lftp, with support varying by ftp server). So I expand the command like this:
find . | grep "/$" | xargs echo
Which outputs nothing. No error output, either. The pipe from grep to xargs isn't happening.
My goal is to form the equivalent of:
chmod 755 ./daily/
chmod 755 ./ffmpeg-installer/
In lftp, the chmod command is performing an ftp-server-permissions change, not a local perms change.
For an explanation of why this does not work as expected, read on - for a solution to the given problem, scroll down.
The answer can be found in the manpage for lftp, which states that
"[s]ome commands allow redirecting their output (cat, ls, ...) to file or via pipe to external command."
So, when you use a pipe like this on a command that does support redirection in lftp, you are piping its output to your local tools, which will eventually result in chmod trying to change the permissions of a file/directory on your local machine, and most likely failing unless you coincidentally have the same directory layout in your current local directory - which is probably the problem you encountered.
The grep + xargs pipe does work, I just tested the following:
lftp> find -d 2 | grep "/$"
./
./applications/
./lost+found/
./netinfo/
./packages/
./security/
./systems/
lftp> find -d 2 | grep "/$" | xargs echo
./ ./applications/ ./lost+found/ ./netinfo/ ./packages/ ./security/ ./systems/
My wild guess is that it did not appear to work for you because you did not specify a max-depth to find and the network connection + buffering in the pipe got in the way. When I try the same on a directory containing many files/subfolders it takes really long to finish and print. Did the command actually finish for you without output?
But still, what you are trying to do is not possible. As I stated, the right-hand side of the pipe works with external commands (even if a built-in of the same name exists), as explained by the manual, so
lftp> chmod 644 foobar
and
lftp> echo "foobar" | xargs chmod 644
are not equivalent.
Yes, chmod is a built-in, but when used in a pipe in the client it will not execute the built-in - the manpage clearly states this and you can easily test it yourself. Try the following commands and check their output:
lftp> echo foo | uname -a
lftp> echo foo | ls -al
lftp> echo foo | chmod --help
lftp> chmod --help
Solution
As far as a solution to your problem is concerned, you can try something along the lines of:
#!/bin/bash
server="ftp.foo.bar"
root_folder="/my/path"
{
{
lftp "${server}" <<EOF
cd "${root_folder}"
find | grep "/$"
quit
EOF
} | awk '{ printf "chmod 755 \"%s\"\n", $0 }'
{
lftp "${server}" <<EOF
cd "${root_folder}"
find | grep -v "/$"
quit
EOF
} | awk '{ printf "chmod 644 \"%s\"\n", $0 }'
} | lftp "${server}"
This logs in to your server, cds to the folder where you want to recursively start changing the permissions, uses find + grep to find all directories, logs out, pipes this file list into awk to build chmod commands around it, repeats the whole process for files and then pipes the whole list of commands into a new lftp invocation to actually run the generated chmod commands.
You will also have to add your credentials to the lftp invocations and you might want to comment out the final | lftp "${server}" to check if it produces the desired output before you actually run the whole thing. Please report back if this works for you!
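For the credentials, one approach (a sketch; the variable names and values are illustrative, and lftp can also read credentials from ~/.netrc or a bookmark if you prefer not to hard-code a password) is to define them at the top of the script and pass them with lftp's -u option in each of the three invocations, for example:
user="ftpuser"      # illustrative account name
pass="secret"       # illustrative password
lftp -u "${user},${pass}" "${server}" <<EOF
cd "${root_folder}"
find | grep "/$"
quit
EOF
The same -u flag would also go on the second invocation and on the final one that runs the generated chmod commands.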
