Self-extracting tar archive (shell scripting)

I have attempted to write a shell script that creates another, self-extracting tar archive that is zipped and encoded in base64. I don't know where to go from here and have little to no experience in shell scripting.
As is, this script creates a tar archive that is zipped and encoded, but the self-extracting part does not work when I try to run ./tarName from the terminal. Any advice is appreciated.
#!/bin/sh
tarName=$1;
if [ -e $tarName.tar.gz ]
then /bin/echo "$tarName already exists"
exit 0
fi
shift;
for files;
do
tar -czvf tmpTarBall.tar.gz $files;
done
echo "#!/bin/sh" >> $tarName.tar.gz;
echo "base64 -d $tarName.tar.gz" >> $tarName.tar.gz;
echo "tar -xzvf $tarName.tar.gz" >> $tarName.tar.gz;
chmod +x ./$tarName.tar.gz;
base64 tmpTarBall.tar.gz >> $tarName.tar.gz;
rm tmpTarBall.tar.gz;
----------UPDATE
Did some looking around and this is what I have now, still doesn't work. Can anyone explain to me why?
#!/bin/sh
tarName=$1;
if [ -e $tarName.tar.gz ]
then /bin/echo "$tarName already exists"
exit 0
fi
shift;
for files;
do
tar -czvf tmpTarBall.tar.gz $files;
done
cat > extract.sh;
echo "#!/bin/sh" >> extract.sh;
echo "sed '0,/^#TARBALL#$/d' $0 | $tarName.tar.gz | base64 -d | tar -xzv; exit 0" >> extract.sh;
echo "#TARBALL#" >> extract.sh;
cat extract.sh tmpTarBall.tar.gz > $tarName.tar.gz;
chmod +x ./$tarName.tar.gz;
rm extract.sh tmpTarBall.tar.gz;
When I try to run tarName.tar.gz I get errors:
./tarName.tar.gz: 2: ./tarName.tar.gz: tarName.tar.gz: not found
gzip: stdin: unexpected end of file
tar: Child returned status 1
tar: Error is not recoverable: exiting now

Desired output
In outline, the script you want to generate should look like:
base64 -d <<'EOF' | tar -xzf -
…base-64 encoded data…
EOF
The base64 command decodes its standard input, which is provided as a here document terminated by a line containing just EOF. The output is written to
tar with options to extract gzipped data read from standard input.
Minimal script
So, a minimal generator script looks like:
echo "base64 -d <<'EOF' | tar -czf -"
tar -czf - "$#" | base64 -w 72
echo "EOF"
This echoes the base64 … | tar … line, then uses tar to generate on standard output a zipped tar file containing the files or directories named on the command line, and the output is piped to the GNU coreutils version of base64 with the option to specify that output lines should be 72 characters wide (plus the newline). This is all followed by EOF to mark the end of the here document.
You can add shebang lines (#!/bin/sh) to either or both scripts. There's no need to choose a more specific shell; this uses only core shell scripting constructs that would work back to the days of yore — before POSIX was a gleam in anyone's eye.
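For instance, assuming the three lines above are saved as make_sfx.sh (a name made up for this example), generating and using an archive might look like:

sh make_sfx.sh mydir notes.txt > extractor.sh   # write the self-extracting script
sh extractor.sh                                 # decode and unpack mydir and notes.txt

Because the generated extractor.sh is plain text, it survives channels that mangle binary data (mail, copy-and-paste, and so on).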
Possible complications
Complications that are possible include support for Mac OS X base64 which has a usage message like this:
Usage: base64 [-dhvD] [-b num] [-i in_file] [-o out_file]
-h, --help display this message
-D, --decode decodes input
-b, --break break encoded string into num character lines
-i, --input input file (default: "-" for stdin)
-o, --output output file (default: "-" for stdout)
The -v option and the -d option both generate base64: invalid option -- v (for the appropriate letter), plus the usage. There doesn't seem to be a way to get version information from it. However, GNU's base64 does generate a useful message when you request base64 --version. The first line of standard output will contain something like:
base64 (GNU coreutils) 8.22
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Written by Simon Josefsson.
This is written to standard output. So, you could auto-detect whether you have the GNU base64 and adapt accordingly. You'd need one test in the generator script, and a copy of the test in the generated script. That's definitely a more refined program.
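A rough sketch of such a test, assuming the only two variants in play are GNU coreutils and the BSD/macOS one (the helper names b64_encode and b64_decode are made up for this illustration):

# GNU base64 exits 0 and prints a version banner for --version;
# the BSD/macOS one rejects the option, so the exit status tells them apart.
if base64 --version >/dev/null 2>&1
then
    b64_encode() { base64 -w 72; }   # GNU: wrap output at 72 columns
    b64_decode() { base64 -d; }
else
    b64_encode() { base64 -b 72; }   # BSD/macOS: break output at 72 columns
    b64_decode() { base64 -D; }      # BSD/macOS: decode with -D
fi

The same test would have to appear in both the generator and the generated script, as noted above.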

Is it necessary to do this yourself? There is an existing tool called makeself that can do this for you. If you do need to write this yourself, here are some thoughts:
Your output file is an archive with a shell script stuck to the front of it. The extract process runs the entire output file through base64 and tar, not just the archive. The base64 call turns the script portion into garbage, which then confuses tar. What you need to do is to add some code that will separate the script from the archive, then run the remaining commands on just the archive portion. One possible way to do this is to tweak your extract script to something like this:
#!/bin/sh
linenum=$(grep -n "__END_OF_SCRIPT_MARKER__" $tarName.tar.gz | tail -1 | sed -e 's/:.*//')
tail -n +$(($linenum + 1)) $tarName.tar.gz | base64 -d | tar -xzv
exit 0
__END_OF_SCRIPT_MARKER__
Make sure there is nothing in the script portion following the marker text except a newline character (which the markup on this website doesn't make visible). With this, you're using grep to find the line number that contains the marker, then stripping off that many lines with tail. What remains will be the archive portion, which is processed normally by the rest of your code. The exit line ensures that the shell doesn't try to execute the marker text or the archive contents as code. You can keep the extract code in a less compressed format if you'd rather, but you'll end up having to create a temporary file for the archive portion and ensure that it gets deleted.
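Putting that together, here is an untested sketch of a generator built around the marker idea; the file names are illustrative, and the stub reads itself via $0 rather than hard-coding the archive name, which is a small departure from the snippet above:

#!/bin/sh
# Sketch: write an extract stub, a marker line, and the base64-encoded
# tarball into "$1.run", which then unpacks itself when executed.
out=$1.run
shift
tar -czf tmpTarBall.tar.gz "$@"
{
    echo '#!/bin/sh'
    echo 'linenum=$(grep -n "__END_OF_SCRIPT_MARKER__" "$0" | tail -1 | sed -e "s/:.*//")'
    echo 'tail -n +$((linenum + 1)) "$0" | base64 -d | tar -xzvf -'
    echo 'exit 0'
    echo '__END_OF_SCRIPT_MARKER__'
    base64 tmpTarBall.tar.gz
} > "$out"
chmod +x "$out"
rm tmpTarBall.tar.gz

The tail -1 matters: the grep line inside the stub also contains the marker text, so the last match is the real marker line, and everything after it is the base64 payload.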

Related

/bin/sh Need to identify file content type

I'm working on busybox and have only /bin/sh available.
I would like to understand if the files I'm processing with my script are to be treated as ASCII (just read and do what I need to do) or gzip (so unzip first, then do what I need to do).
The "file" command here would be perfect, but unfortunately it's just not available, hence I don't know what procedure to call as the input file I'm processing can be either format.
I'm wondering if there's a simple workaround I'm missing here to find this out...
Implicit in your question is that you have a gunzip command, and are trying to figure out whether you need to invoke it.
One command that can tell you that... is gzip.
contents_of_file() {
local file="$1"
if gzip -t <"$file" >/dev/null 2>&1; then
gunzip -c <"$file"
else
cat <"$file"
fi
}
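A hypothetical call site, just to show the intent (the path is made up):

# The rest of the script reads lines the same way whether or not the input was gzipped
contents_of_file /tmp/input.dat | while IFS= read -r line; do
    echo "$line"    # ...do what you need to do
done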
That said, you can also ask grep if a file has no non-printable, non-whitespace characters:
is_plain_text() {
if grep -q -e '[^[:graph:][:space:]]' <"$1"; then
echo "$1 has non-ASCII characters"
else
echo "$1 is plain text"
fi
}

zgrep tar.gz file with file location results & match [duplicate]

I am trying to grep a pattern from a dozen .tar.gz files but it's very slow.
I am using
tar -ztf file.tar.gz | while read FILENAME
do
if tar -zxf file.tar.gz "$FILENAME" -O | grep "string" > /dev/null
then
echo "$FILENAME contains string"
fi
done
If you have zgrep you can use
zgrep -a string file.tar.gz
You can use the --to-command option to pipe files to an arbitrary script. Using this you can process the archive in a single pass (and without a temporary file). See also this question, and the manual.
Armed with the above information, you could try something like:
$ tar xf file.tar.gz --to-command "awk '/bar/ { print ENVIRON[\"TAR_FILENAME\"]; exit }'"
bfe2/.bferc
bfe2/CHANGELOG
bfe2/README.bferc
I know this question is 4 years old, but I have a couple different options:
Option 1: Using tar --to-command grep
The following line will look in example.tgz for PATTERN. This is similar to #Jester's example, but I couldn't get his pattern matching to work.
tar xzf example.tgz --to-command 'grep --label="$TAR_FILENAME" -H PATTERN ; true'
Option 2: Using tar -tzf
The second option is using tar -tzf to list the files, then go through them with grep. You can create a function to use it over and over:
targrep () {
for i in $(tar -tzf "$1"); do
results=$(tar -Oxzf "$1" "$i" | grep --label="$i" -H "$2")
echo "$results"
done
}
Usage:
targrep example.tar.gz "pattern"
Both of the options below work well.
$ zgrep -ai 'CDF_FEED' FeedService.log.1.05-31-2019-150003.tar.gz | more
2019-05-30 19:20:14.568 ERROR 281 --- [http-nio-8007-exec-360] DrupalFeedService : CDF_FEED_SERVICE::CLASSIFICATION_ERROR:408: Classification failed even after maximum retries for url : abcd.html
$ zcat FeedService.log.1.05-31-2019-150003.tar.gz | grep -ai 'CDF_FEED'
2019-05-30 19:20:14.568 ERROR 281 --- [http-nio-8007-exec-360] DrupalFeedService : CDF_FEED_SERVICE::CLASSIFICATION_ERROR:408: Classification failed even after maximum retries for url : abcd.html
If this is really slow, I suspect you're dealing with a large archive file. It's going to uncompress it once to extract the file list, and then uncompress it N times--where N is the number of files in the archive--for the grep. In addition to all the uncompressing, it's going to have to scan a fair bit into the archive each time to extract each file. One of tar's biggest drawbacks is that there is no table of contents at the beginning. There's no efficient way to get information about all the files in the archive and only read that portion of the file. It essentially has to read all of the file up to the thing you're extracting every time; it can't just jump to a filename's location right away.
The easiest thing you can do to speed this up would be to uncompress the file first (gunzip file.tar.gz) and then work on the .tar file. That might help enough by itself. It's still going to loop through the entire archive N times, though.
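A sketch of that intermediate step, reusing your loop against the uncompressed .tar:

gunzip file.tar.gz              # leaves file.tar in place of file.tar.gz
tar -tf file.tar | while read FILENAME
do
    if tar -xf file.tar "$FILENAME" -O | grep -q "string"
    then
        echo "$FILENAME contains string"
    fi
done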
If you really want this to be efficient, your only option is to completely extract everything in the archive before processing it. Since your problem is speed, I suspect this is a giant file that you don't want to extract first, but if you can, this will speed things up a lot:
tar zxf file.tar.gz
for f in hopefullySomeSubdir/*; do
grep -l "string" $f
done
Note that grep -l prints the name of any matching file, quits after the first match, and is silent if there's no match. That alone will speed up the grepping portion of your command, so even if you don't have the space to extract the entire archive, grep -l will help. If the files are huge, it will help a lot.
For starters, you could start more than one process:
tar -ztf file.tar.gz | while read FILENAME
do
(if tar -zxf file.tar.gz "$FILENAME" -O | grep -l "string"
then
echo "$FILENAME contains string"
fi) &
done
The ( ... ) & creates a new detached (read: the parent shell does not wait for the child)
process.
After that, you should optimize the extracting of your archive. The read is no problem,
as the OS should have cached the file access already. However, tar needs to unpack
the archive every time the loop runs, which can be slow. Unpacking the archive once
and iterating over the result may help here:
tempPath=$(mktemp -d) &&
tar -zxf file.tar.gz -C "$tempPath" &&
find $tempPath -type f | while read FILENAME
do
(if grep -l "string" "$FILENAME"
then
echo "$FILENAME contains string"
fi) &
done && rm -r $tempPath
find is used here to get a list of files in tar's target directory; we iterate over that list, searching each file for the string.
Edit: Use grep -l to speed up things, as Jim pointed out. From man grep:
-l, --files-with-matches
Suppress normal output; instead print the name of each input file from which output would
normally have been printed. The scanning will stop on the first match. (-l is specified
by POSIX.)
I am trying to grep a pattern from a dozen .tar.gz files but it's very slow
tar -ztf file.tar.gz | while read FILENAME
do
if tar -zxf file.tar.gz "$FILENAME" -O | grep "string" > /dev/null
then
echo "$FILENAME contains string"
fi
done
That's actually very easy with ugrep option -z:
-z, --decompress
Decompress files to search, when compressed. Archives (.cpio,
.pax, .tar, and .zip) and compressed archives (e.g. .taz, .tgz,
.tpz, .tbz, .tbz2, .tb2, .tz2, .tlz, and .txz) are searched and
matching pathnames of files in archives are output in braces. If
-g, -O, -M, or -t is specified, searches files within archives
whose name matches globs, matches file name extensions, matches
file signature magic bytes, or matches file types, respectively.
Supported compression formats: gzip (.gz), compress (.Z), zip,
bzip2 (requires suffix .bz, .bz2, .bzip2, .tbz, .tbz2, .tb2, .tz2),
lzma and xz (requires suffix .lzma, .tlz, .xz, .txz).
Which requires just one command to search file.tar.gz as follows:
ugrep -z "string" file.tar.gz
This greps each of the archived files to display matches. Archived filenames are shown in braces to distinguish them from ordinary filenames. For example:
$ ugrep -z "Hello" archive.tgz
{Hello.bat}:echo "Hello World!"
Binary file archive.tgz{Hello.class} matches
{Hello.java}:public class Hello // prints a Hello World! greeting
{Hello.java}: { System.out.println("Hello World!");
{Hello.pdf}:(Hello)
{Hello.sh}:echo "Hello World!"
{Hello.txt}:Hello
If you just want the file names, use option -l (--files-with-matches) and customize the filename output with option --format="%z%~" to get rid of the braces:
$ ugrep -z Hello -l --format="%z%~" archive.tgz
Hello.bat
Hello.class
Hello.java
Hello.pdf
Hello.sh
Hello.txt
All of the code above was really helpful, but none of it quite answered my own need: grep all *.tar.gz files in the current directory to find a pattern that is specified as an argument in a reusable script to output:
The name of both the archive file and the extracted file
The line number where the pattern was found
The contents of the matching line
It's what I was really hoping that zgrep could do for me and it just can't.
Here's my solution:
pattern=$1
for f in *.tar.gz; do
echo "$f:"
tar -xzf "$f" --to-command 'grep --label="`basename $TAR_FILENAME`" -Hin '"$pattern ; true";
done
You can also replace the tar line with the following if you'd like to test that all variables are expanding properly with a basic echo statement:
tar -xzf "$f" --to-command 'echo "f:`basename $TAR_FILENAME` s:'"$pattern\""
Let me explain what's going on. Hopefully, the for loop and the echo of the archive filename in question are obvious.
tar -xzf: x extract, z filter through gzip, f based on the following archive file...
"$f": The archive file provided by the for loop (such as what you'd get by doing an ls) in double-quotes to allow the variable to expand and ensure that the script is not broken by any file names with spaces, etc.
--to-command: Pass the output of the tar command to another command rather than actually extracting files to the filesystem. Everything after this specifies what the command is (grep) and what arguments we're passing to that command.
Let's break that part down by itself, since it's the "secret sauce" here.
'grep --label="`basename $TAR_FILENAME`" -Hin '"$pattern ; true"
First, we use a single-quote to start this chunk so that the executed sub-command (basename $TAR_FILENAME) is not immediately expanded/resolved. More on that in a moment.
grep: The command to be run on the (not actually) extracted files
--label=: The label to prepend the results, the value of which is enclosed in double-quotes because we do want to have the grep command resolve the $TAR_FILENAME environment variable passed in by the tar command.
basename $TAR_FILENAME: Runs as a command (surrounded by backticks) and removes directory path and outputs only the name of the file
-Hin: H Display filename (provided by the label), i Case insensitive search, n Display line number of match
Then we "end" the first part of the command string with a single quote and start up the next part with a double quote so that the $pattern, passed in as the first argument, can be resolved.
Realizing which quotes I needed to use where was the part that tripped me up the longest. Hopefully, this all makes sense to you and helps someone else out. Also, I hope I can find this in a year when I need it again (and I've forgotten about the script I made for it already!)
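To make the quoting concrete: if the script is invoked with a pattern of ERROR (a made-up value), the single string that follows --to-command, and that tar hands to a shell for each archived file, is conceptually:

grep --label="`basename $TAR_FILENAME`" -Hin ERROR ; true

The backticks and $TAR_FILENAME are still intact at that point; they are only expanded when tar runs the command, with TAR_FILENAME set to the member currently being extracted.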
It's been a couple of weeks since I wrote the above and it's still super useful... but it wasn't quite good enough, as files have piled up and searching for things has gotten messier. I needed a way to limit what I looked at by the date of the file (only looking at more recent files). So here's that code. Hopefully it's fairly self-explanatory.
if [ -z "$1" ]; then
echo "Look within all tar.gz files for a string pattern, optionally only in recent files"
echo "Usage: targrep <string to search for> [start date]"
fi
pattern=$1
startdatein=$2
startdate=$(date -d "$startdatein" +%s)
for f in *.tar.gz; do
filedate=$(date -r "$f" +%s)
if [[ -z "$startdatein" ]] || [[ $filedate -ge $startdate ]]; then
echo "$f:"
tar -xzf "$f" --to-command 'grep --label="`basename $TAR_FILENAME`" -Hin '"$pattern ; true"
fi
done
And I can't stop tweaking this thing. I added an argument to filter by the name of the output files in the tar file. Wildcards work, too.
Usage:
targrep.sh [-d <start date>] [-f <filename to include>] <string to search for>
Example:
targrep.sh -d "1/1/2019" -f "*vehicle_models.csv" ford
while getopts "d:f:" opt; do
case $opt in
d) startdatein=$OPTARG;;
f) targetfile=$OPTARG;;
esac
done
shift "$((OPTIND-1))" # Discard options and bring forward remaining arguments
pattern=$1
echo "Searching for: $pattern"
if [[ -n $targetfile ]]; then
echo "in filenames: $targetfile"
fi
startdate=$(date -d "$startdatein" +%s)
for f in *.tar.gz; do
filedate=$(date -r "$f" +%s)
if [[ -z "$startdatein" ]] || [[ $filedate -ge $startdate ]]; then
echo "$f:"
if [[ -z "$targetfile" ]]; then
tar -xzf "$f" --to-command 'grep --label="`basename $TAR_FILENAME`" -Hin '"$pattern ; true"
else
tar -xzf "$f" --no-anchored "$targetfile" --to-command 'grep --label="`basename $TAR_FILENAME`" -Hin '"$pattern ; true"
fi
fi
done
zgrep works fine for me, but only if all the files inside are plain text.
It seems nothing works if the tgz file contains gzip files.
You can mount the TAR archive with ratarmount and then simply search for the pattern in the mounted view:
pip install --user ratarmount
ratarmount large-archive.tar mountpoint
grep -r '<pattern>' mountpoint/
This is much faster than iterating over each file and piping it to grep separately, especially for compressed TARs. Here are benchmark results in seconds for a 55 MiB uncompressed and 42 MiB compressed TAR archive containing 40 files:
Compression   Ratarmount      Bash loop over tar -O
none          0.31 +- 0.01    0.55 +- 0.02
gzip          1.1 +- 0.1      13.5 +- 0.1
bzip2         1.2 +- 0.1      97.8 +- 0.2
Of course, these results are highly dependent on the archive size and how many files the archive contains. These test examples are pretty small because I didn't want to wait too long. But, they already exemplify the problem well enough. The more files there are, the longer it takes for tar -O to jump to the correct file. And for compressed archives, it will be quadratically slower the larger the archive size is because everything before the requested file has to be decompressed and each file is requested separately. Both of these problems are solved by ratarmount.
This is the code for benchmarking:
function checkFilesWithRatarmount()
{
local pattern=$1
local archive=$2
ratarmount "$archive" "$archive.mountpoint"
'grep' -r -l "$pattern" "$archive.mountpoint/"
}
function checkEachFileViaStdOut()
{
local pattern=$1
local archive=$2
tar --list --file "$archive" | while read -r file; do
if tar -x --file "$archive" -O -- "$file" | grep -q "$pattern"; then
echo "Found pattern in: $file"
fi
done
}
function createSampleTar()
{
for i in $( seq 40 ); do
head -c $(( 1024 * 1024 )) /dev/urandom | base64 > $i.dat
done
tar -czf "$1" [0-9]*.dat
}
createSampleTar myarchive.tar.gz
time checkEachFileViaStdOut ABCD myarchive.tar.gz
time checkFilesWithRatarmount ABCD myarchive.tar.gz
sleep 0.5s
fusermount -u myarchive.tar.gz.mountpoint
In my case the tarballs have a lot of tiny files and I want to know which archived file inside the tarball matches. zgrep is fast (less than one second) but doesn't provide the info I want, and tar --to-command grep is much, much slower (many minutes) [1].
So I went the other direction and had zgrep tell me the byte offsets of the matches in the tarball and put that together with the list of offsets in the tarball of all archived files to find the matching archived files.
#!/bin/bash
set -e
set -o pipefail
function tar_offsets() {
# Get the byte offsets of all the files in a given tarball
# based on https://stackoverflow.com/a/49865044/60422
[ $# -eq 1 ]
tar -tvf "$1" -R | awk '
BEGIN{
getline;
f=$8;
s=$5;
}
{
offset = int($2) * 512 - and((s+511), compl(512)+1)
print offset,s,f;
f=$8;
s=$5;
}'
}
function tar_byte_offsets_to_files() {
[ $# -eq 1 ]
# Convert the search results of a tarball with byte offsets
# to search results with archived file name and offset, using
# the provided tar_offsets output (single pass, suitable for
# process substitution)
offsets_file="$1"
prev_offset=0
prev_offset_filename=""
IFS=' ' read -r last_offset last_len last_offset_filename < "$offsets_file"
while IFS=':' read -r search_result_offset match_text
do
while [ $last_offset -lt $search_result_offset ]; do
prev_offset=$last_offset
prev_offset_filename="$last_offset_filename"
IFS=' ' read -r last_offset last_len last_offset_filename < "$offsets_file"
# offsets increasing safeguard
[ $prev_offset -le $last_offset ]
done
# now last offset is the first file strictly after search result offset so prev offset is
# the one at or before it, and must be the one it is in
result_file_offset=$(( $search_result_offset - $prev_offset ))
echo "$prev_offset_filename:$result_file_offset:$match_text"
done
}
# Putting it together e.g.
zgrep -a --byte-offset "your search here" some.tgz | tar_byte_offsets_to_files <(tar_offsets some.tgz)
[1] I'm running this in Git for Windows' minimal MSYS2-based Unix-like environment, so it's possible that the launch overhead of grep is much higher than on any real Unix machine and would make tar --to-command grep good enough there; benchmark solutions for your own needs and platform before choosing.

Redirect file to command, whose stdout is redirected to another command

In a bash script, I want to redirect a file called test to gzip -f, then redirect STDOUT to tar -xfzv, and possibly STDERR to echo. This is what I tried:
$ gzip -f <> ./test | tar -xfzv
Here, tar just complains that no file was given. How would I do what I'm attempting?
EDIT: the test file IS NOT a .tar.gz file, sorry about that
EDIT: I should be unzipping, then zipping, not like I had it written here
tar's -f switch tells it that it will be given a filename to read from. Use - as the filename to make tar read from stdin, or omit the -f switch. Please read man tar for further information.
I'm not really sure what you're trying to achieve in general, to be honest. The purpose of the read-write redirection and the -f gzip switch here is unclear. If the task is to unpack a .tar.gz, better use tar xvzf ./test.tar.gz.
As a side note, you cannot 'redirect stderr to echo', echo is just a built-in, and if we're talking about interactive terminal session, stderr will end up visible on your terminal anyway. You can redirect it to file with 2>$filename construct.
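For example (some_command is just a placeholder):

some_command 2>errors.log        # stderr to a file
some_command >all.log 2>&1       # stdout and stderr to the same file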
EDIT: So for the clarified version of the question, if you want to decompress a gzipped file, run it through bcrypt, then compress it back, you may use something like
gzip -dc $orig_file.tar.gz | bcrypt [your-switches-here] | gzip -c > $modified_file.tar.gz
where gzip's -d stands for decompression, and -c stands for 'output to stdout'.
If you want to encrypt each file individually instead of encrypting the whole tar archive, things get funnier because tar won't read input from stdin. So you'll need to extract your files somewhere, encrypt them and then tgz them back. This is not the shortest way to do that, but in general it works like this:
mkdir tmp ; cd tmp
tar xzf ../$orig_file.tar.gz
bcrypt [your-switches-here] *
tar czf ../$modified_file.tar.gz *
Please note that I'm not familiar with bcrypt switches and workflow at all.

Pipe script and binary data to stdin via ssh

I want to execute a bash script remotely which consumes a tarball and performs some logic to it. The trick is that I want to use only one ssh command to do it (rather than scp for the tarball followed by ssh for the script).
The bash script looks like this:
cd /tmp
tar -zx
./archive/some_script.sh
rm -r archive
I realize that I can simply reformat this script into a one-liner and use
tar -cz ./archive | ssh $HOST bash -c '<commands>'
but my actual script is complicated enough that I must pipe it to bash via stdin. The challenge here is that ssh provides only one input pipe (stdin) which I want to use for both the bash script and the tarball.
I came up with two solutions, both of which include the bash script and the tarball in stdin.
1. Embed base64-encoded tarball in a heredoc
In this case the server receives a bash script with the tarball is embedded inside a heredoc:
base64 -d <<'EOF_TAR' | tar -zx
<base64_tarball>
EOF_TAR
Here's the complete example:
ssh $HOST bash -s < <(
# Feed script header
cat <<'EOF'
cd /tmp
base64 -d <<'EOF_TAR' | tar -zx
EOF
# Create local tarball, and pipe base64-encoded version
tar -cz ./archive | base64
# Feed rest of script
cat <<'EOF'
EOF_TAR
./archive/some_script.sh
rm -r archive
EOF
)
In this approach however, tar does not start extracting the tarball until it is fully transferred over the network.
2. Feed tar binary data after the script
In this case the bash script is piped into stdin followed by the raw tarball data. bash passes control to tar which processes the tar portion of stdin:
ssh $HOST bash -s < <(
# Feed script.
cat <<'EOF'
function main() {
cd /tmp
tar -zx
./archive/some_script.sh
rm -r archive
}
main
EOF
# Create local tarball and pipe it
tar -cz ./archive
)
Unlike the first approach, this one allows tar to start extracting the tarball as it is being transferred over the network.
Side note
Why do we need the main function, you ask? Why feed the entire bash script first, followed by binary tar data? Well, if the binary data were put in the middle of the bash script, there would be an error since tar consumes past the end of the tarfile, which in this case would eat up some of the bash script. So, the main function is used to force the whole bash script to come before the tar data.
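A comment-only illustration of that ordering problem, not meant to be run:

# Without the main wrapper, bash and tar would be competing for the same stdin:
#
#   cd /tmp
#   tar -zx
#   <raw gzipped tar bytes>     <- tar reads ahead past the end of the archive and
#   ./archive/some_script.sh       swallows these trailing lines before bash sees them
#   rm -r archive
#
# Wrapping the commands in main and calling it on the last line means bash has already
# read the whole script before tar touches stdin, so only the tar data remains.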

Self-extracting script in sh shell

How would I go about making a self extracting archive that can be executed on sh?
The closest I have come to is:
extract_archive () {
printf '<archive_contents>' | tar -C "$extract_dir" -xvf -
}
Where <archive_contents> contains a tarball with null characters, %, ' and \ characters escaped and enclosed between single quotes.
Is there any better way to do this so that no escaping is required?
(Please don't point me to shar, makeself etc. I want to write it from scratch.)
An alternative variant is to use a marker for the end of the shell script and use sed to cut out the shell script itself.
Script selfextract.sh:
#!/bin/bash
sed '0,/^#EOF#$/d' $0 | tar zx; exit 0
#EOF#
How to use:
# create sfx
cat selfextract.sh data.tar.gz >example_sfx.sh
# unpack sfx
bash example_sfx.sh
Since shell scripts are not compiled, but executed statement by statement, you can mix binary and text content using a pattern like this (untested):
#!/bin/sh
sed -e '1,/^exit$/d' "$0" | tar -C "${1-.}" -zxvf -
exit
<binary tar gzipped content here>
You can add those lines to the top of pretty much any tar+gzip file to make it self-extracting.
To test:
$ cat header.sh
#!/bin/sh
sed -e '1,/^exit$/d' "$0" | tar -C "${1-.}" -zxvf -
exit
$ tar -czf header.tgz header.sh
$ cat header.sh header.tgz > header.tgz.sh
$ sh header.tgz.sh
header.sh
Some good articles on how to do exactly that can be found at:
http://www.linuxjournal.com/node/1005818.
https://community.linuxmint.com/tutorial/view/1998
Yes, you can do it natively with xtar.
Build the xtar ELF64 tar self-extractor header (you are free to modify it to support ELF32, PE and other executable formats); it is based on the lightweight bsdtar untar and the std elf lib.
cc contrib/xtar.c -o ./xtar
Copy xtar binary to yourTar.xtar
cp ./xtar yourTar.xtar
Append yourTar.tar archive to the end of yourTar.xtar
cat yourTar.tar >> yourTar.xtar
chmod +x yourTar.xtar
