zip all files and folders recursively in bash

I am working on a project where compilation involves zipping up various files, folders and subfolders (html/css/js) selectively. Working on the Windows platform, I could continue to just use CTRL+A and then SHIFT-click to unselect, but it does get a little tedious. I am working with Cygwin, so I was wondering if it is possible to issue a command to zip selected files/folders recursively whilst excluding others, in one command? I already have the zip command installed, but I seem to be zipping up the current zip file too, and the .svn directory as well.
I would like this to be incorporated into a shell script if possible, so the simpler the better.

After reading the man pages, I think the solution that I was looking for is as follows:
it needs to recurse directories (-r),
it needs to exclude certain files/directories (-x).
It works in the current directory, but the . can be replaced with the path of any directory:
zip -x directories_to_exclude -r codebase_latest.zip .
I have incorporated this into a short shell script that deletes files, tidies up some code, and then zips up all of the files as needed.
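For the scenario in the question (excluding the .svn directory and the archive itself), the concrete command might look something like this (a sketch; the exclusion patterns are assumptions based on the question):
zip -r codebase_latest.zip . -x "*.svn*" "codebase_latest.zip"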

You should read the man page of the zip command:
-R
--recurse-patterns
Travel the directory structure recursively starting at the current directory; for example:
zip -R foo "*.c"
In this case, all the files matching *.c in the tree starting at the current directory are stored into a zip archive named foo.zip. Note that *.c will match file.c, a/file.c and a/b/file.c. More than one pattern can be listed as separate arguments. Note for PKZIP users: the equivalent command is
pkzip -rP foo *.c
Patterns are relative file paths as they appear in the archive, or will after zipping, and can have optional wildcards in them. For example, given the current directory is foo and under it are directories foo1 and foo2 and in foo1 is the file bar.c,
zip -R foo/*
will zip up foo, foo/foo1, foo/foo1/bar.c, and foo/foo2.
zip -R */bar.c
will zip up foo/foo1/bar.c. See the note for -r on escaping wildcards.
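Applied to the original question, -R could select just the web assets in one pass (a sketch; the patterns are assumptions based on the file types mentioned above):
zip -R codebase_latest "*.html" "*.css" "*.js" -x "*.svn*"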

Related

Copy whole directory but exclude all folders and subfolders with certain name

I'm not allowed to use rsync on the cluster I'm working on, so I need to use cp. I want to copy a large directory including all files and subfolders etc., but without any folders that have the name "outdir".
I tried cp -r -v ./!(outdir) ../target-directory/, but it still copies all folders and contents in deeper directories with the name outdir; it only excluded the outdir folders at the top level.
I also tried cp -r ./*/!(outdir) ../target-directory/, but that one copied all files into the folder without keeping any hierarchy or folders etc.
I also tried certain find commands but it didn't work, but maybe I was just doing something stupid. I'm a beginner with bash, so if you could explain your answer and what the flags etc. do, that would really be helpful. I've been trying forever now, on what I think shouldn't be that hard to do.
Instead of cp, you can use tar with the --exclude option to control what you want copied or not.
The full command is:
tar --exclude="outdir" -cvpf - . | (cd TARGET_DIRECTORY; tar -xpf -)
So any path that contains the "outdir" pattern will be excluded.
Without the --exclude option, it will copy the entire structure of your current directory under TARGET_DIRECTORY.
You can replace the . in the first tar by your desired source directory.
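If the source is somewhere other than the current directory, tar's -C option keeps the archived paths relative (a sketch, with SOURCE_DIRECTORY standing in for your actual path):
tar --exclude="outdir" -cpf - -C SOURCE_DIRECTORY . | tar -xpf - -C TARGET_DIRECTORY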

Copying multiple files with same name in the same folder terminal script

I have a lot of files named the same, with a directory structure (simplified) like this:
../foo1/bar1/dir/file_1.ps
../foo1/bar2/dir/file_1.ps
../foo2/bar1/dir/file_1.ps
.... and many more
As it is extremely inefficient to view all of those ps files by going to the respective directory, I'd like to copy all of them into another directory, but include the name of the first two directories (which are those relevant to my purpose) in the file name.
I have previously tried like this, but I cannot tell which file is from where, as they are all named consecutively:
#!/bin/bash -xv
cp -v --backup=numbered {} */*/dir/file* ../plots/;
Where ../plots is the folder where I copy them. However, they are now of the form file.ps.~x~ (x is a number) so I get rid of the ".ps.~*~" and leave only the ps extension with:
rename 's/\.ps.~*~//g' *;
rename 's/\~/.ps/g' *;
Then, as the ps files have hundreds of points sometimes and take a long time to open, I just transform them into jpg.
for file in * ; do convert -density 150 -quality 70 "$file" "${file/.ps/}".jpg; done;
This is not really a working bash script as I have to change the directory manually.
I guess the best way to do it is to copy the files from the beginning with the names of the first two directories incorporated in the copied filename.
How can I do this last thing?
If you just have two levels of directories, you can use
for file in */*/*.ps
do
    ln "$file" "${file//\//_}"
done
This goes over each ps file and hard-links it into the current directory, with the /s replaced by _s. Use cp instead of ln if you intend to edit the files but don't want to update the originals.
For arbitrary directory levels, you can use the bash specific
shopt -s globstar
for file in **/*.ps
do
    ln "$file" "${file//\//_}"
done
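Note that globstar requires bash 4.0 or later; once set, ** matches any number of directory levels, so this loop also picks up ps files nested more than two directories deep.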
But are you sure you need to copy them all to one directory? You might be able to open them all with yourreader */*/*.ps (substituting your actual viewer), which depending on your reader may let you browse through them one by one while still seeing the full path.
You could run a find command and print the names first, like
find . -name "file_1.ps" -print
Then iterate over each of them and do a string replacement of every / with '-' (or any other character), like
${filename//\//-}
The general syntax is ${string/substring/replacement} for the first match and ${string//substring/replacement} for all matches. Then you can copy it to the required directory. The complete script can be written as follows. Haven't tested it (not on linux at the moment), so you might need to tweak the code if you get any syntax error ;)
for filename in $(find . -name "file_1.ps" -print)
do
    newFileName=${filename//\//-}
    cp "$filename" YourNewDirectory/"$newFileName"
done
You will need to place the script in the same root directory or change the find command to look for the particular directory if you are placing the above script in some other directory.
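If any of the paths contain spaces, a null-delimited variant avoids the word-splitting in the loop above (a sketch; YourNewDirectory is the same placeholder as before):
find . -name "file_1.ps" -print0 | while IFS= read -r -d '' filename
do
    newFileName=${filename//\//-}
    cp "$filename" YourNewDirectory/"$newFileName"
done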
References
string manipulation in bash
find man page

sync/copy folders from source dir only if folders exist in target dir

I'm trying to write a bash script in Ubuntu to do a copy of some files. I'm working on a small Android project, where I'm translating the apps each week. I'm VERY new to bash scripting, so please bear with me ;)
I want my script to check the target directory and see if my source directory contains the same folders. If it does, it should copy (and overwrite if needed) my source folders to the target dir, preserving the structure, but also adding whatever extra files and folders I might have within those source folders.
Let's say I have folder1, folder2, folder3 in my source dir, but only folder1 and folder2 in the target dir. Then I only need folder1 and folder2 from the source dir copied to the target dir.
The content of the target dir changes often, that's why I need the check before it copies the folders/files over.
Btw, the folders in both source and target dir are named like folder1.apk - each has an extension, so it looks like a file..
Hope I provided enough info ;)
EDIT:
I ended up doing this:
for dir in `find * -maxdepth 0 -type d`; do
    cp -r -f /source/$dir /destination
done
Don't know if it's the best way, but seems to do the job ;)
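A slightly safer spelling of that loop quotes the variable and uses a glob instead of parsing find output (a sketch, assuming as above that it runs from inside the target directory and that /source and /destination stand in for the real paths):
for dir in */; do
    # "$dir" is a top-level folder that already exists in the target,
    # so copy the matching folder over from the source
    cp -r -f "/source/$dir" /destination
done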
You should probably take a look at the rsync tool, which has lots of options and is easy to use (no need for your own scripts). For example, one of the options that will be useful in your case:
--existing    skip creating new files on receiver
So the following should do the job:
rsync -vur --existing ~/project/source /mnt/target/
And one of the possible benefits is that you can sync files the same way over a network if you ever need to, or even use rsync as a daemon to sync files automatically.
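One rsync detail worth noting (an addition, not part of the original answer): without a trailing slash the source directory itself is created inside the target, so to merge just its contents you would write:
rsync -vur --existing ~/project/source/ /mnt/target/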

Batch script to move files into a zip

Is anybody able to point me in the right direction for writing a batch script for a UNIX shell to move files into a zip one at a time and then delete the originals?
I can't use the standard zip command because I don't have enough space to fit the zip being created.
So, any suggestions please?
Try this:
zip -r -m source.zip *
Not a great solution, but simple: I ended up finding a Python script that recursively zips a folder, and just added a line to delete each file after it is added to the zip.
You can achieve this using find as
find . -type f -print0 | xargs -0 -n1 zip -m archive
This will move every file into the zip, preserving the directory structure. You are then left with empty directories that you can easily remove. Moreover, using find gives you a lot of freedom in choosing which files you want to compress.
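Once the files have been moved into the archive, the leftover empty directories can also be cleaned up with find (an added note, not part of the original answer):
find . -type d -empty -delete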
I use:
zip --move destination.zip src_file1 src_file2
Here is the detail of the "--move" option from the man pages:
--move
Move the specified files into the zip archive; actually, this deletes the target directories/files after making the specified zip archive. If a directory becomes empty after removal of the files, the directory is also removed. No deletions are done until zip has created the archive without error. This is useful for conserving disk space, but is potentially dangerous so it is recommended to use it in combination with -T to test the archive before removing all input files.
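Following that advice, combining --move with -T might look like this (a sketch based on the command above):
zip --move -T destination.zip src_file1 src_file2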

extract specific folder in shell command using unzip

My backup.zip has the following structure:
OverallFolder
    lots of files and subfolders inside
I used this: unzip backup.zip -d ~/public_html/demo
So I end up with ~/public_html/demo/OverallFolder/my other files.
How do I extract so that I end up with all my files INSIDE OverallFolder GOING DIRECTLY into ~/public_html/demo?
~/public_html/demo/my other files
like this?
If you can't find any options to do that, this is the last resort:
mv ~/public_html/demo/OverallFolder/* ~/public_html/demo/
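One caveat worth adding (not from the original answer): a bare * skips hidden files, so to move everything and then drop the emptied folder:
shopt -s dotglob    # make * match dotfiles too
mv ~/public_html/demo/OverallFolder/* ~/public_html/demo/
rmdir ~/public_html/demo/OverallFolder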
(cd ~/public_html/demo; unzip "$OLDPWD/backup.zip")
This, in a subshell, changes to your destination directory, unzips the file from your source directory, and when the subshell exits, leaves you back in your source directory.
That, or something similar, should work in most shells.
