I have a pkg file created by Install Maker for Mac.
I want to replace one file inside the pkg, but I have to do this on a Linux system because it is part of the download process: when a user starts a download, the server must replace one file in the pkg.
I have a solution for unpacking the pkg and replacing the file, but I don't know how to pack it back into a pkg.
http://emresaglam.com/blog/1035
http://ilostmynotes.blogspot.com/2012/06/mac-os-x-pkg-bom-files-package.html
Packages are just .xar archives with a different extension and a specified file hierarchy. Unfortunately, part of that file hierarchy is a cpio.gz archive of the actual installables, and usually that's what you want to edit. And there's also a Bom file that includes information on the files inside that cpio archive, and a PackageInfo file that includes summary information.
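For orientation, you can list the top-level archive members before touching anything (xar assumed installed; Foo.pkg is a placeholder name):
xar -tf Foo.pkg   # typically shows a Distribution file plus one or more component .pkg dirs containing Payload, Bom and PackageInfo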
If you really do just need to edit one of the info files, that's simple:
mkdir Foo
cd Foo
xar -xf ../Foo.pkg
# edit stuff
xar -cf ../Foo-new.pkg *
But if you need to edit the installable files:
mkdir Foo
cd Foo
xar -xf ../Foo.pkg
cd foo.pkg
cat Payload | gunzip -dc | cpio -i
# edit Foo.app/*
rm Payload
find ./Foo.app | cpio -o | gzip -c > Payload
mkbom Foo.app Bom # or edit Bom
# edit PackageInfo
rm -rf Foo.app
cd ..
xar -cf ../Foo-new.pkg *
I believe you can get mkbom (and lsbom) for most Linux distros. (If you can get ditto, that makes things even easier, but I'm not sure if that's nearly as ubiquitously available.)
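If you do rebuild the Bom with mkbom, a quick sanity check is to list it back (lsbom assumed available, e.g. from the bomutils project on Linux):
lsbom Bom | head   # paths, modes and uid/gid recorded in the Bom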
Here is a bash script inspired by abarnert's answer which will unpack a package named MyPackage.pkg into a subfolder named MyPackage_pkg and then open the folder in Finder.
#!/usr/bin/env bash
filename="$*"
dirname="${filename/\./_}"
pkgutil --expand "$filename" "$dirname"
cd "$dirname"
tar xvf Payload
open .
Usage:
pkg-upack.sh MyPackage.pkg
Warning: This will not work in all cases, and will fail with certain files, e.g. the PKGs inside the OSX system installer. If you want to peek inside the pkg file and see what's inside, you can try SuspiciousPackage (free app), and if you need more options such as selectively unpacking specific files, then have a look at Pacifist (nagware).
You might want to look into my fork of pbzx here: https://github.com/NiklasRosenstein/pbzx
It allows you to stream pbzx files that are not wrapped in a XAR archive. I've seen this with recent Xcode Command Line Tools disk images (e.g. the 10.12 / Xcode 8 ones).
pbzx -n Payload | cpio -i
In addition to what @abarnert said, I found out today that the default cpio utility on Mountain Lion uses a different archive format by default (not sure which), even though the man page states it uses the old cpio/odc format. So, if anyone stumbles upon the cpio read error: bad file format message while trying to install their modified packages, be sure to include the format in the re-pack step:
find ./Foo.app | cpio -o --format odc | gzip -c > Payload
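A quick, optional sanity check that the rebuilt Payload reads back cleanly before re-packing:
gunzip -dc Payload | cpio -it | head   # list the archive contents without extracting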
@shrx I succeeded in unpacking the BSD.pkg (part of the Yosemite installer) by using the "pbzx" command.
pbzx <pkg> | cpio -idmu
The "pbzx" command can be downloaded from the following link:
pbzx Stream Parser
If you are experiencing errors during PKG installation after following the accepted answer, here is another procedure that worked for me (note the small changes to the xar, cpio and mkbom commands):
mkdir Foo
cd Foo
xar -xf ../Foo.pkg
cd foo.pkg
cat Payload | gunzip -dc | cpio -i
# edit Foo.app/*
rm Payload
find ./Foo.app | cpio -o --format odc --owner 0:80 | gzip -c > Payload
mkbom -u 0 -g 80 Foo.app Bom # or edit Bom
# edit PackageInfo
rm -rf Foo.app
cd ..
xar --compression none -cf ../Foo-new.pkg *
The resulting PKG has no compression, cpio now uses the odc format and specifies the file ownership, and mkbom does the same.
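If you want to double-check the result before shipping it, a quick sanity check (using the same tools introduced above) is:
xar -tf ../Foo-new.pkg    # confirm Payload, Bom and PackageInfo made it into the new archive
lsbom foo.pkg/Bom | head  # confirm the 0:80 owner/group shows up in the rebuilt Bom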
Bash script to extract a pkg (inspired by this answer: https://stackoverflow.com/a/23950738/16923394):
Save the following code to a file named pkg-upack.sh in the $HOME/Downloads folder
#!/usr/bin/env bash
filename="$*"
dirname="${filename/\./_}"
mkdir "$dirname"
# pkgutil --expand "$filename" "$dirname"
xar -xf "$filename" -C "$dirname"
cd "$dirname"/*.pkg
pwd
# tar xvf Payload
cat Payload | gunzip -dc | cpio -i
# cd usr/local/bin
# pwd
# ls -lt
# cp -i * $HOME/Downloads/
Uncomment the last four lines if you are using a Rudix package.
Usage:
cd $HOME/Downloads
chmod +x ./pkg-upack.sh
./pkg-upack.sh MyPackage.pkg
This was tested with the ffmpeg and mawk packages from rudix.org (https://rudix.org); search for the ffmpeg and mawk packages on that site.
Source : My open source projects : https://sourceforge.net/u/nathan-sr/profile/
Related
I'm working on a Python script that verifies the integrity of some downloaded projects.
On my NAS, I have all my compressed folders: folder1.tar.gz, folder2.tar.gz, …
On my Linux computer, the equivalent uncompressed folders: folder1, folder2, …
So I want to compare the integrity of my files without untarring or downloading anything!
I think I can do it on the NAS with something like this (with md5sum):
sshpass -p 'password' ssh login@my.nas.ip tar -xvf /path/to/my/folder.tar.gz | md5sum | awk '{ print $1 }'
This gives me a hash, but I don't know how to get an equivalent hash to compare with the normal folder on my computer. Maybe the way I'm doing it is wrong.
I need one command for the NAS and one for the Linux computer that output the same hash (if the folders are the same, of course).
If you did that, tar xf would actually extract the files. md5sum would only see the file listing, and not the file content.
However, if you have GNU tar on the server and the standard utility paste, you could create checksums this way:
mksums:
#!/bin/bash
data=/path/to/data.tar.gz
sums=/path/to/data.md5
paste \
<(tar xzf "$data" --to-command=md5sum) \
<(tar tzf "$data" | grep -v '/$') \
| sed 's/-\t//' > "$sums"
Run mksums above on the machine with the tar file.
Copy the sums file it creates to the computer with the folders and run:
cd /top/level/matching/tar/contents
md5sum -c "$sums"
paste joins lines of files given as arguments
<( ...) runs a command, making its output appear in a fifo
--to-command is a GNU tar extension which allows running commands which will receive their data from stdin
grep filters out directories from the tar listing
sed removes the extraneous -\t so the checksum file can be understood by md5sum
The above assumes you don't have any very-oddly named files (for example, the names can't contain newlines)
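For reference, each line in the generated sums file should look like normal md5sum output: the checksum, two spaces, then the path exactly as stored in the tar file. The line below is purely illustrative (the hash is the well-known MD5 of empty input, and the path is made up):
d41d8cd98f00b204e9800998ecf8e27e  folder1/docs/empty.txt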
My script runs perfectly on my co-workers' devices (macOS with Docker Desktop, the same setup as mine), but it always gives me the same error and moves either none or only half of the libraries into the deps directory:
OSError: [Errno 18] Invalid cross-device link: '/tmp/pip-target-dzwe_2kc/lib/python/numpy' ->
'/foo/python/numpy'
My script :
#!/bin/bash
export PKG_DIR='python'
export SIDE_DEPS_DIR='deps'
rm -rf ${PKG_DIR} && mkdir -p ${PKG_DIR}
rm -rf ${SIDE_DEPS_DIR} && mkdir -p ${SIDE_DEPS_DIR}
docker run --rm -v $(pwd):/foo -w /foo lambci/lambda:build-python3.8 \
pip3 install -r requirements.txt -t ${PKG_DIR}
# move stuff to deps
find /${PKG_DIR} -maxdepth 1 -type d \
\( -name "pandas*" -o -name "numpy*" -o -name "numpy.libs*" -o -name "scipy*" -o -name "scipy.libs*" \) -exec mv '{}' ${SIDE_DEPS_DIR} \;
# zip side dependencies
zip -r ge_deps.zip deps
# zip layer
zip -r layers-python38-great-expectations.zip python
It's a script that uses a public Lambda Docker image to create a Lambda layer (basically a zip that contains libraries) and moves the unwanted libraries into another folder, deps.
The code above uses the public Docker image lambci/lambda and installs, into the empty python directory, the libraries that come from a Python package called great-expectations, which helps test data pipelines (it is specified in requirements.txt as great-expectations==0.12.7).
I have been stuck with this problem for a while and have not found a solution.
Had this exact problem just now.
/tmp and /foo are on different devices: /tmp is inside the Docker OS and /foo is mapped to your local OS.
pip seems to use shutil.rename() to move the built package from /tmp to the final output location (/foo). This fails because they are different devices. Ideally pip would use shutil.move() instead, which handles a cross-device move.
As a workaround, you can change the temp folder used by PIP by setting TMPDIR before invoking the pip command. i.e. export TMPDIR=/foo/tmp before calling pip in the docker image. So, the whole command might be something like
docker run --rm -v $(pwd):/foo -w /foo lambci/lambda:build-python3.8 \
/bin/bash -c "export TMPDIR=/foo/tmp && pip3 install -r requirements.txt -t ${PKG_DIR}"
(The multiple-commands solution is taken from https://www.edureka.co/community/10736/how-to-run-multiple-commands-in-docker-at-once; open to better suggestions!)
This will likely be slower because it's using the local OS for temp files, but it avoids the attempted 'rename' across devices from the temp folder to the final output folder.
I installed pigz via homebrew on my Macbook Air (OS X 10.10.5) to get better performance for compress/decompress.
To compress, I use tar --use-compress-program=pigz -cf test.tgz test and it's ok.
But the command to uncompress, tar --use-compress-program=pigz -xf test.tgz, outputs an error:
tar: Unrecognized archive format
tar: Error exit delayed from previous errors.
Or sometimes it outputs:
tar: Unrecognized archive format
pigz: abort: write error on <stdout> (Broken pipe)
tar: Child process exited with status 32
tar: Error exit delayed from previous errors.
I read the manual of tar, and have no clue why it doesn't work.
I noticed that even tar --use-compress-program=gzip -xf test.tgz generates the same error. So is this a bug in OS X's tar implementation?
Note: I know the pipe style pigz -dc test.tgz | tar -xf - works, and in this case I could also just use tar -xf test.tgz, which calls the built-in gzip. But I just want to confirm whether it is a bug.
The program works as designed: there is no provision in its command-line to pass along the options needed to use gzip for decompressing. Instead of "gzip" for decompressing, you should use the wrapped gzcat, e.g.,
tar --use-compress-program gzip -cf foo.compressed foo
tar --use-compress-program gzcat -tf foo.compressed
A quick check shows that this does not work:
tar --use-compress-program 'gzip -d' -tf foo.compress
although that could change at some point (it is doable, but not done).
According to pigz's manual page, it has unpigz, which is what you can use for that program.
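So the decompression side, assuming the unpigz wrapper was installed alongside pigz, would look like:
tar --use-compress-program unpigz -xf test.tgz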
This is an issue between BSD tar and GNU tar.
I fixed this by installing gnu-tar from Homebrew and putting it on the PATH.
See this similar question on superuser: https://superuser.com/questions/318809/linux-os-x-tar-incompatibility-tarballs-created-on-os-x-give-errors-when-unt
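A minimal sketch of that fix, assuming a standard Homebrew setup (brew --prefix is used because the install prefix differs between Intel and Apple Silicon Macs):
brew install gnu-tar
export PATH="$(brew --prefix gnu-tar)/libexec/gnubin:$PATH"   # GNU tar now shadows BSD tar as plain "tar"
tar --use-compress-program=pigz -xf test.tgz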
On Linux, I renamed gzip to gzip.sav and made a soft link to pigz, so tar, Dolphin, yum, and any other program that calls gzip actually calls pigz.
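A hedged sketch of that swap (the /usr/bin/gzip path is an assumption; keep the .sav backup so you can revert):
sudo mv /usr/bin/gzip /usr/bin/gzip.sav
sudo ln -s "$(command -v pigz)" /usr/bin/gzip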
In Git Bash I've tried to use this command:
$ git archive -o test.tar.gz master
gzip: compressed data not written to a terminal. Use -f to force compression.
For help, type: gzip -h
The file test.tar.gz is empty, but my repository is not empty and creating a zip file works fine (contains all my source files)! Why does the tarball format fail to produce an archive?
This appears to be a compatibility problem between the way git archive wants to pipe content from tar to gzip and the way Windows handles pipes. You can generate the same error message by piping tar into gzip manually:
$ tar -c file.txt | gzip
gzip: compressed data not written to a terminal. Use -f to force compression.
For help, type: gzip -h
These two commands work for me on Windows 7, and should be functionally identical to the one you're trying:
$ git archive -o test.tar master
$ gzip test.tar
Pipe it to gzip:
git archive master | gzip > test.tar.gz
Even if you are not using Git Bash, from regular Command Prompt you can just:
git archive --format=tar.gz master > test.tar.gz
#!/bin/bash
mkdir /tmp
curl -O http://www.mucommander.com/download/nightly/mucommander-current.app.tar.gz /tmp/mucommander.tgz
tar -xvzf /tmp/mucommander.tgz */mucommander.app/*
cp -r /tmp/mucommander.app /Applications
rm -r /tmp
I'm trying to create a shell script to download and extract muCommander to my applications directory on a Mac.
I tried to cd into the tmp dir, but then the script stops when I do that.
I can extract all using the -C argument, but the current tgz path is muCommander-0_9_0/mucommander.app, which could change on later builds, so I'm trying to keep it generic.
Can anyone give me pointers where I'm going wrong?
Thanks in advance.
Strip the first path component when you untar the archive, from tar(1):
--strip-components count
(x mode only) Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after checking inclusion/exclusion patterns but before security checks.
Update
Here is a working bash example of how to, fairly generically, copy the contents of the tgz file to /Applications.
shopt -s nocaseglob
TMPDIR=/tmp
APP=mucommander
TMPAPPDIR=$TMPDIR/$APP
mkdir -p $TMPAPPDIR
curl -o $TMPDIR/$APP.tgz http://www.mucommander.com/download/nightly/mucommander-current.app.tar.gz
tar --strip-components=1 -xvzf $TMPDIR/$APP.tgz -C $TMPAPPDIR
mv $TMPAPPDIR/${APP}* /Applications
# rm -rf $TMPAPPDIR $TMPDIR/$APP
The rm command is commented out for now, verify that it does no harm before you use it.
The following will update your muCommander.
# for safety, remove any old temporary extraction from /tmp
rm -rf /tmp/muCommander.app
# kill the running muCommander - you don't want to replace the running app
ps -ef | grep ' /Applications/muCommander.app/' | grep -v grep | awk '{print $2}' | xargs kill
# download, extract, remove old, move new, open
# each command runs only when the previous one succeeded
curl http://www.mucommander.com/download/nightly/mucommander-current.app.tar.gz |\
tar -xzf - -C /tmp --strip-components=1 '*/muCommander.app' && \
rm -rf /Applications/muCommander.app && \
mv /tmp/muCommander.app /Applications && \
open /Applications/muCommander.app
Beware: each '\' must be followed immediately by a newline, not by any spaces.