How to delete a few files from a zip archive in bash?

In my zip archive I have many txt files. Some of them have names ending in _temp_file.txt.
I know I can delete files from my zip archive with the zip -d command, but how do I remove all files with that ending? Is that even possible?

Try using the command:
zip -d archive.zip "*_temp_file.txt"
That should remove every entry ending in _temp_file.txt from the archive. Quote the pattern so the shell doesn't expand it and zip matches it against the archive's contents itself.
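If you want to double-check what will be removed first, you can list the matching entries before deleting them; a quick sketch, assuming the archive is named archive.zip as above:
# preview which entries match the pattern, then delete them
unzip -l archive.zip "*_temp_file.txt"
zip -d archive.zip "*_temp_file.txt"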

Related

How to remove the content of folders before unzipping many zips? (bash script)

I have many folders on FTP, each with its own content, and some of them I must update from time to time. I update them by unzipping zip files I receive. The names of the zips vary, but each zip always contains one main folder with exactly the same name as the folder that should be updated on FTP. There is nothing else in the zips besides that main folder and its content. So I wrote the simple script below to update them:
unzip -o \*.zip
rm -f *.zip
The problem is that sometimes files in these folders should be deleted because they no longer exist in the zips with updates, and I realized that when I unzip and overwrite, nothing that should be deleted actually gets removed. Is it possible to modify this script to remove the whole folder before unzipping, just to be sure? The proper name of the folder to update is not the name of the zip but the name of the main folder inside the zip, and because of that I don't know how to solve this. I couldn't find an existing solution. Also, I sometimes upload many zips at once, and there are thousands of folders on FTP, so it would be hard to write a separate command for every single folder.
You can use the unzip companion program zipinfo to list the contents of the zip files. Add the pattern */ to list only directories. Then pipe to xargs to remove them.
zipinfo -1 '*.zip' '*/' | xargs rm -rf 2>/dev/null
This removes, in one pass, every existing directory whose name matches a directory entry in one of the zip files. You can then run the rest of your script to extract the new ones.
You could add cut -d / -f 1 | sort -u | before xargs to filter out any subdirectories for rm, but it shouldn't matter even if there are some.
xargs splits lines by whitespace, so a directory name containing whitespace could result in a different directory being removed. For GNU xargs, you can add --delimiter='\n' to stop that (there's also --null, but zip truncates new lines in file names anyway). You can also just exclude directories containing spaces by piping through grep -v '[[:space:]]'.
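Putting those two suggestions together, a sketch of the full pipeline might look like this (the --delimiter option is GNU xargs only):
# keep only top-level directory names, dedupe them, and split on newlines instead of whitespace
zipinfo -1 '*.zip' '*/' | cut -d / -f 1 | sort -u | xargs --delimiter='\n' rm -rf 2>/dev/null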
Another approach which may be useful is to process one zip file at a time:
for zip in *.zip; do
    dirs=$(zipinfo -1 "$zip" '*/') || continue
    IFS=$'\n' read -rd '' -a dirs <<< "$dirs"
    rm -rf "${dirs[@]}"
    unzip -o "$zip"
done
This method is also fine with whitespace. Splitting dirs into an array just means rm will still succeed if there is more than one directory in an archive. If zipinfo fails, it probably means the archive is corrupt or unreadable, hence || continue. You can remove that if you want to attempt extraction regardless.

View several lines from a file in a tar.gz file

I have a very large .tar file that contains several .gz files. I would like to view a few lines from any of the individual files without untarring. I can list the files using:
tar -tzf TarFile # doesn't actually end in .tar
I get:
TarFile/
FileA.gz
FileB.gz
FileC.gz
FileD.gz
I would like to view just a few lines from any of the individual files. Normally I would use:
zless MyFile
Is there a way to combine the two commands so I can view a few lines from any of the individual files?
tar -xOf TarFile FileB.gz | zless
Explanation:
tar          invoke tar
-x           extract
-O           extract to standard output
-f TarFile   the tar archive to read from
FileB.gz     the file in the tar archive to extract
| zless      pipe the extracted file data to zless
This will be expensive to do more than once as it requires tar to scan the archive each time you run the command. If the tar archive is large (and the file you want is early in the tarball) you might also benefit from using --occurrence=1 on that command line to get tar to stop processing the tar file immediately when it finds a file that matches the file you told it to extract.
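For example, mirroring the command above, something like this should show just the first few lines and (with GNU tar) stop scanning as soon as the file has been found:
# --occurrence=1 is GNU tar only; gunzip -c decompresses to stdout and head limits the output
tar --occurrence=1 -xOf TarFile FileB.gz | gunzip -c | head -n 20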

How to unzip to the same directory in bash

I have hundreds of directories, each containing several zip files. I would like to iterate over each directory and unzip all zip files, placing the contents of the zip files into the same directory as the zip files themselves (without creating new sub-directories). Here's the bash script I have:
#!/bin/bash
src="/path/to/directories"
for dir in `ls "$src/"`
do
unzip "$src/$dir/*"
done
This script does the unzipping, but it creates thousands of sub-directories and dumps them on my desktop! How can I get the desired behavior? I'm on Mac OSX if that makes a difference.
#!/bin/bash
src=/path/to/directories
for dir in "$src"/*
do
    # cd in a subshell so the working-directory change doesn't leak out of the loop
    (cd "$dir" && unzip '*')
done
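If you'd rather not change directories at all, a rough alternative sketch (assuming every zip sits directly inside one of the directories under $src) is to tell unzip where to extract with -d:
#!/bin/bash
src=/path/to/directories
for zipfile in "$src"/*/*.zip
do
    # -o overwrites without prompting; -d extracts into the zip's own directory
    unzip -o "$zipfile" -d "$(dirname "$zipfile")"
done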

Add zip files from one archive to another using command line

I have two zip archives. Say set1 has 10 csv files created using the Mac OS X 10.5.8 Compress option, and set2 has 4 csv files created the same way. I want to take the 4 files from the zipped archive set2 and add them to the list of files in archive set1. Is there a way I can do that?
I tried the following in Terminal:
zip set1.zip set2.zip
This adds the whole archive set2.zip to set1.zip, i.e., in set1.zip now I have:
file1.csv, file2.csv,..., file10.csv, set2.zip
What I instead want is:
file1.csv, file2.csv,..., file10.csv, file11.csv, ..., file14.csv
where, set2.zip is the archive containing file11.csv, ..., file14.csv.
Thanks.
I don't know of a built-in OS X tool, but there's a zipmerge utility as part of the libzip package (hg repository available).
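Assuming the libzip package you install actually ships the zipmerge binary (it's available through MacPorts and Homebrew), merging becomes a one-liner that should copy the entries of set2.zip straight into set1.zip without extracting anything to disk:
zipmerge set1.zip set2.zip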
unzip set2.zip -d .tmpdir; cd .tmpdir; zip ../set1.zip *; cd ..; rm -r .tmpdir;
This script here should do it.
zipjoin.sh
#!/bin/bash
# Example: ./zipjoin.sh merge_into.zip merge_from.zip
mkdir .tmp
unzip "$2" -d .tmp
# -j junks the .tmp/ path prefix so the entries land at the top level of the target archive
zip -j "$1" .tmp/*
rm -r .tmp
Hope that helps!

Automated unzipping of files

I have a folder full of zipped files (about 200). I would like to transform this into a folder consisting only of unzipped files. What would be the easiest and quickest way to do this?
Please note that I would like to remove the zipped file from the folder once it is unzipped.
Also, I'm on a Mac.
Thanks!
You can do something like:
for file in *.zip; do unzip -o "$file"; rm "$file"; done
We are looping through all the zip files in the directory, unzipping each one and then deleting it.
Note that the -o option of unzip will overwrite any existing file without prompting if it finds a duplicate.
You need to run the above one-line command on the command line from the directory that has all the zip files. That one line is equivalent to:
for file in *.zip    # loop over every zip file in the current directory, one by one.
do                   # for each file in the list do the following:
    unzip -o "$file" # unzip the file (-o overwrites existing files without prompting).
    rm "$file"       # delete it.
done
I found this answer, which is a simple one-liner to gunzip all the .gz compressed files in a folder.
Basically you cd to the folder and then run
gunzip *.gz
If you want to unzip only the files with a certain prefix, put the prefix before the *:
gunzip example*.gz
Easy as cake!
