How would I go about making a self extracting archive that can be executed on sh?
The closest I have come to is:
extract_archive () {
    printf '<archive_contents>' | tar -C "$extract_dir" -xvf -
}
Where <archive_contents> contains a tarball with null characters, %, ' and \ characters escaped and enclosed between single quotes.
Is there any better way to do this so that no escaping is required?
(Please don't point me to shar, makeself etc. I want to write it from scratch.)
An alternative variant is to use a marker for the end of the shell script and to use sed to cut out the shell script itself.
Script selfextract.sh:
#!/bin/bash
sed '0,/^#EOF#$/d' "$0" | tar zx; exit 0
#EOF#
How to use:
# create sfx
cat selfextract.sh data.tar.gz >example_sfx.sh
# unpack sfx
bash example_sfx.sh
Since shell scripts are not compiled, but executed statement by statement, you can mix binary and text content using a pattern like this (untested):
#!/bin/sh
sed -e '1,/^exit$/d' "$0" | tar -C "${1-.}" -zxvf -
exit
<binary tar gzipped content here>
You can add those two lines to the top of pretty much any tar+gzip file to make it self extractable.
To test:
$ cat header.sh
#!/bin/sh
sed -e '1,/^exit$/d' "$0" | tar -C "${1-.}" -zxvf -
exit
$ tar -czf header.tgz header.sh
$ cat header.sh header.tgz > header.tgz.sh
$ sh header.tgz.sh
header.sh
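Since the header passes "${1-.}" to tar -C, you can also point the extraction at another directory by giving it as the first argument:
$ sh header.tgz.sh /tmp
header.sh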
Some good articles on how to do exactly that can be found at:
http://www.linuxjournal.com/node/1005818.
https://community.linuxmint.com/tutorial/view/1998
Yes, you can do it natively with xtar.
Build the xtar ELF64 tar self-extractor header (you are free to modify it to support ELF32, PE and other executable formats); it is based on a lightweight bsdtar untar and the standard ELF library.
cc contrib/xtar.c -o ./xtar
Copy the xtar binary to yourTar.xtar:
cp ./xtar yourTar.xtar
Append the yourTar.tar archive to the end of yourTar.xtar and make the result executable:
cat yourTar.tar >> yourTar.xtar
chmod +x yourTar.xtar
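The result should then run as a self-extracting archive. I would expect it to unpack into the current working directory, though that depends on how xtar is implemented:
./yourTar.xtar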
Related
I have attempted to write a shell script that creates another self-extracting tar archive that is zipped and encoded in base64. I don't know where to go from here and have little to no experience in shell scripting.
As is, this script creates a tar archive that is zipped and encoded, but the self-extraction does not work when I try to run ./tarName from the terminal. Any advice is appreciated.
#!/bin/sh
tarName=$1;
if [ -e $tarName.tar.gz ]
then /bin/echo "$tarName already exists"
exit 0
fi
shift;
for files;
do
tar -czvf tmpTarBall.tar.gz $files;
done
echo "#!/bin/sh" >> $tarName.tar.gz;
echo "base64 -d $tarName.tar.gz" >> $tarName.tar.gz;
echo "tar -xzvf $tarName.tar.gz" >> $tarName.tar.gz;
chmod +x ./$tarName.tar.gz;
base64 tmpTarBall.tar.gz >> $tarName.tar.gz;
rm tmpTarBall.tar.gz;
UPDATE
Did some looking around and this is what I have now, still doesn't work. Can anyone explain to me why?
#!/bin/sh
tarName=$1;
if [ -e $tarName.tar.gz ]
then /bin/echo "$tarName already exists"
exit 0
fi
shift;
for files;
do
tar -czvf tmpTarBall.tar.gz $files;
done
cat > extract.sh;
echo "#!/bin/sh" >> extract.sh;
echo "sed '0,/^#TARBALL#$/d' $0 | $tarName.tar.gz | base64 -d | tar -xzv; exit 0" >> extract.sh;
echo "#TARBALL#" >> extract.sh;
cat extract.sh tmpTarBall.tar.gz > $tarName.tar.gz;
chmod +x ./$tarName.tar.gz;
rm extract.sh tmpTarBall.tar.gz;
When I try to run tarName.tar.gz I get errors:
./tarName.tar.gz: 2: ./tarName.tar.gz: tarName.tar.gz: not found
gzip: stdin: unexpected end of file
tar: Child returned status 1
tar: Error is not recoverable: exiting now
Desired output
In outline, the script you want to generate should look like:
base64 -d <<'EOF' | tar -xzf -
…base-64 encoded data…
EOF
The base64 command decodes its standard input, which is provided as a here document terminated by a line containing just EOF. The output is written to
tar with options to extract gzipped data read from standard input.
Minimal script
So, a minimal generator script looks like:
echo "base64 -d <<'EOF' | tar -czf -"
tar -czf - "$#" | base64 -w 72
echo "EOF"
This echoes the base64 … | tar … line, then uses tar to generate on standard output a zipped tar file containing the files or directories named on the command line. That output is piped to the GNU coreutils version of base64 with the option specifying that output lines should be 72 characters wide (plus the newline). This is all followed by EOF to mark the end of the here document.
You can add shebang lines (#!/bin/sh) to either or both scripts. There's no need to choose a more specific shell; this uses only core shell scripting constructs that would work back to the days of yore — before POSIX was a gleam in anyone's eye.
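Putting that together, a complete generator could look like this. This is only a sketch: the script name makesfx.sh is a placeholder, and the -w 72 option assumes GNU coreutils base64 (see the complications below):
#!/bin/sh
# makesfx.sh -- write a self-extracting script to standard output
# usage: sh makesfx.sh file... > unpack.sh
echo '#!/bin/sh'
echo "base64 -d <<'EOF' | tar -xzf -"
tar -czf - "$@" | base64 -w 72
echo 'EOF'
Running sh makesfx.sh mydir > unpack.sh then yields a script that recreates mydir wherever it is executed.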
Possible complications
Possible complications include supporting the Mac OS X base64, which has a usage message like this:
Usage: base64 [-dhvD] [-b num] [-i in_file] [-o out_file]
-h, --help display this message
-D, --decode decodes input
-b, --break break encoded string into num character lines
-i, --input input file (default: "-" for stdin)
-o, --output output file (default: "-" for stdout)
The -v option and the -d option both generate base64: invalid option -- v (with the appropriate letter), plus the usage message. There doesn't seem to be a way to get version information from it. However, GNU's base64 does generate a useful message when you request base64 --version. The first line of standard output will contain something like:
base64 (GNU coreutils) 8.22
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Written by Simon Josefsson.
This is written to standard output. So, you could auto-detect whether you have the GNU base64 and adapt accordingly. You'd need one test in the generator script, and a copy of the test in the generated script. That's definitely a more refined program.
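For instance, the detection might be sketched like this, assuming only the two variants described above exist:
# Pick encode/decode flags based on whether base64 is GNU coreutils.
if base64 --version 2>/dev/null | grep -q 'GNU coreutils'; then
    encode() { base64 -w 72; }   # GNU: wrap output at 72 columns
    decode() { base64 -d; }
else
    encode() { base64 -b 72; }   # assume the Mac OS X flags shown above
    decode() { base64 -D; }
fi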
Is it necessary to do this yourself? There is an existing tool called makeself that can do this for you. If you do need to write this yourself, here are some thoughts:
Your output file is an archive with a shell script stuck to the front of it. The extract process runs the entire output file through base64 and tar, not just the archive. The base64 call turns the script portion into garbage, which then confuses tar. What you need to do is to add some code that will separate the script from the archive, then run the remaining commands on just the archive portion. One possible way to do this is to tweak your extract script to something like this:
#!/bin/sh
linenum=$(grep -n "__END_OF_SCRIPT_MARKER__" "$0" | tail -1 | sed -e 's/:.*//')
tail -n +$((linenum + 1)) "$0" | base64 -d | tar -xzv
exit 0
__END_OF_SCRIPT_MARKER__
Make sure there is nothing in the script portion following the marker text except a newline character (which the markup on this website doesn't make visible). With this, you're using grep to find the line number that contains the marker, then stripping off that many lines with tail. What remains will be the archive portion, which is processed normally by the rest of your code. The exit line ensures that the shell doesn't try to execute the marker text or the archive contents as code. You can keep the extract code in a less compressed format if you'd rather, but you'll end up having to create a temporary file for the archive portion and ensure that it gets deleted.
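A generator to match that extractor could be sketched as follows; mksfx.sh is a hypothetical name, and base64 -d in the emitted script assumes the GNU variant:
#!/bin/sh
# mksfx.sh -- build a self-extracting archive (hypothetical helper)
# usage: sh mksfx.sh output.sh file...
out=$1; shift
{
    echo '#!/bin/sh'
    echo 'linenum=$(grep -n "__END_OF_SCRIPT_MARKER__" "$0" | tail -1 | sed -e "s/:.*//")'
    echo 'tail -n +$((linenum + 1)) "$0" | base64 -d | tar -xzv'
    echo 'exit 0'
    echo '__END_OF_SCRIPT_MARKER__'
    tar -czf - "$@" | base64
} > "$out"
chmod +x "$out"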
I want to execute a bash script remotely which consumes a tarball and performs some logic to it. The trick is that I want to use only one ssh command to do it (rather than scp for the tarball followed by ssh for the script).
The bash script looks like this:
cd /tmp
tar -zx
./archive/some_script.sh
rm -r archive
I realize that I can simply reformat this script into a one-liner and use
tar -cz ./archive | ssh $HOST bash -c '<commands>'
but my actual script is complicated enough that I must pipe it to bash via stdin. The challenge here is that ssh provides only one input pipe (stdin) which I want to use for both the bash script and the tarball.
I came up with two solutions, both of which include the bash script and the tarball in stdin.
1. Embed base64-encoded tarball in a heredoc
In this case the server receives a bash script with the tarball embedded inside a heredoc:
base64 -d <<'EOF_TAR' | tar -zx
<base64_tarball>
EOF_TAR
Here's the complete example:
ssh $HOST bash -s < <(
    # Feed script header
    cat <<'EOF'
cd /tmp
base64 -d <<'EOF_TAR' | tar -zx
EOF
    # Create local tarball, and pipe base64-encoded version
    tar -cz ./archive | base64
    # Feed rest of script
    cat <<'EOF'
EOF_TAR
./archive/some_script.sh
rm -r archive
EOF
)
In this approach, however, tar does not start extracting the tarball until it has been fully transferred over the network.
2. Feed tar binary data after the script
In this case the bash script is piped into stdin followed by the raw tarball data. bash passes control to tar which processes the tar portion of stdin:
ssh $HOST bash -s < <(
    # Feed script.
    cat <<'EOF'
function main() {
    cd /tmp
    tar -zx
    ./archive/some_script.sh
    rm -r archive
}
main
EOF
    # Create local tarball and pipe it
    tar -cz ./archive
)
Unlike the first approach, this one allows tar to start extracting the tarball as it is being transferred over the network.
Side note
Why do we need the main function, you ask? Why feed the entire bash script first, followed by binary tar data? Well, if the binary data were put in the middle of the bash script, there would be an error since tar consumes past the end of the tarfile, which in this case would eat up some of the bash script. So, the main function is used to force the whole bash script to come before the tar data.
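The same mechanism can be tested locally without ssh. A sketch, assuming an ./archive directory exists in the current directory:
# Script text first, then raw tar bytes: bash reads its script input
# incrementally, so when main runs, tar inherits the rest of stdin.
{
    cat <<'EOF'
main() {
    cd /tmp
    tar -zx
    ls archive
}
main
EOF
    tar -cz ./archive
} | bash -s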
I have many tar.bz2 files in a directory, and would like to extract them to another directory.
Here is my bash script:
for i in *.tar.bz2 do;
sudo tar -xvjf $i.tar.bz2 -C ~/myfiles/
done
It doesn't work. How can I make it work? Thanks!
Your variable $i contains the entire file name (as you have applied the glob pattern *.tar.bz2, not a regex), so inside your for loop you don't need to append the extension.
Try:
for i in *.tar.bz2; do
sudo tar -xvjf "$i" -C ~/myfiles/
done
You also have the ; misplaced: it should come before do, as in for i in *.tar.bz2; do, not after it.
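If the directory might contain no .tar.bz2 files at all, you could also enable nullglob (bash-specific) so that the loop body is skipped instead of tar being run on the literal pattern:
shopt -s nullglob    # unmatched globs expand to nothing
for i in *.tar.bz2; do
    sudo tar -xvjf "$i" -C ~/myfiles/
done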
Is it possible to get files list that were downloaded using scp -r ?
Example:
$ scp -r $USERNAME@HOSTNAME:~/backups/ .
3.tar 100% 5 0.0KB/s 00:00
2.tar 100% 5 0.0KB/s 00:00
1.tar 100% 4 0.0KB/s 00:00
Expected result:
3.tar
2.tar
1.tar
The output that scp generates does not seem to come out on any of the standard streams (stdout or stderr), so capturing it directly may be difficult. One way you could do this would be to make scp output verbose information (by using the -v switch) and then capture and process this information. The verbose information is output on stderr, so you will need to capture it using the 2> redirection operator.
For example, to capture the verbose output do:
scp -rv $USERNAME@HOSTNAME:~/backups/ . 2> scp.output
Then you will be able to filter this output with something like this:
awk '/Sending file/ {print $NF}' scp.output
The awk command simply prints the last word on the relevant line. If you have spaces in your filenames then you may need to come up with a more robust filter.
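For example, assuming the verbose lines take the form Sending file modes: C0644 1234 name, a sed filter that preserves spaces in names might look like:
sed -n 's/^.*Sending file modes: [^ ]* [^ ]* //p' scp.output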
I realise that you asked the question about scp, but I will give you an alternative solution to the same problem: copying files recursively from a server using ssh, and getting the names of the files that are copied.
The scp solution has at least one problem: if you copy lots of files, it takes a while as each file generates a transaction. Instead of scp, I use ssh and tar:
ssh $USERNAME@HOSTNAME "cd ~/backups/ && tar -cf - ." | tar -xf -
With that, adding a tee and a tar -t gives you what you need:
ssh $USERNAME@HOSTNAME "cd ~/backups/ && tar -cf - ." | tee >(tar -xf -) | tar -tf - > file_list
Note that it might not work in all shells (bash is OK), as the >(...) construct (process substitution) is not universally available. If you do not have it in your shell, you can use a fifo instead (basically what process substitution gives you, just more verbose):
mkfifo tmp4tar
(tar -xf tmp4tar ; rm tmp4tar;) &
ssh $USERNAME@HOSTNAME "cd ~/backups/ && tar -cf - ." | tee -a tmp4tar | tar -tf - > file_list
scp -v -r yourdir orczhou@targethost:/home/orczhou/ \
    2> >(awk '{if($0 ~ "Sending file modes:")print $6}')
With -v, a line like "Sending file modes: C0644 7864 a.sql" is output to stderr for each file; awk then picks the file name out of those lines.
Bash tab completion adds an extra space after the first completion, which stops further completion if the completion target is a file in multi-level folders.
For example, I have a file in the path ~/Documents/foo/bar.txt, and I want to list it.
I face the following problem, when input
a@b:~$ ls Docu<TAB>
I get
a@b:~$ ls Documents |(<- this is the cursor, so there is an extra space after Documents)
So I cannot further tab complete. I have to backspace to delete the extra space.
Normally I want to get:
a@b:~$ ls Docu<TAB>
a@b:~$ ls Documents/<TAB>
a@b:~$ ls Documents/foo/<TAB>
a@b:~$ ls Documents/foo/bar.txt
Just for the record: There is also a bug in the adobereader-enu (acroread) package that breaks bash completion. In this case you can just delete the symlink:
rm /etc/bash_completion.d/acroread.sh
See also: https://bugs.launchpad.net/ubuntu/+source/acroread/+bug/769866
I have had this same problem with my bash completion in both Ubuntu 11.10 and 12.04. I found that I was able to get many commands to start working correctly by editing /etc/bash_completion. Specifically, I commented out the following section:
####
# makeinfo and texi2dvi are defined elsewhere.
#
#for i in a2ps awk bash bc bison cat colordiff cp csplit \
# curl cut date df diff dir du enscript env expand fmt fold gperf gprof \
# grep grub head indent irb ld ldd less ln ls m4 md5sum mkdir mkfifo mknod \
# mv netstat nl nm objcopy objdump od paste patch pr ptx readelf rm rmdir \
# sed seq sha{,1,224,256,384,512}sum shar sort split strip tac tail tee \
# texindex touch tr uname unexpand uniq units vdir wc wget who; do
# have $i && complete -F _longopt -o default $i
#done
Now ls works well again. I have not yet figured out why mv is still misbehaving.
This has been answered here at askubuntu. It is related to the bug here
Relevant answer from the above thread:
edit /etc/bash_completion line 1587, change default to filenames (make a backup first).
I also got around the problem by changing
_filedir to _filedir_pdf
in /etc/bash_completion.d/acroread.sh
(Ubuntu 12.04)
The acroread bash completion changes the _filedir function, thereby altering the behaviour of a lot of other bash completion functions as well.
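If you would rather script that edit than do it by hand, something along these lines should work (a sketch; it backs up the file first, and the path is Ubuntu-specific):
sudo cp /etc/bash_completion.d/acroread.sh /etc/bash_completion.d/acroread.sh.bak
sudo sed -i 's/\b_filedir\b/_filedir_pdf/g' /etc/bash_completion.d/acroread.sh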