I have a directory in which I have been storing a lot of files, so I'm working on a script that watches the disk space; if usage goes above 80%, it will compress the files.
All of the files end with file.#, where # is a number.
My question is: how do I zip all the files that end with a number without re-zipping the files that are already zipped?
I have done most of the script, but I'm stuck at this point.
I'd appreciate your help.
You can zip the files that are output by this command: find . -not -name "*.zip"
find is a command that is used to, well, "find" files based on various criteria.
You can read more about it using man find or the online version of its man page.
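One hedged way to wire that output into zip (a sketch assuming Info-ZIP's zip, whose -@ option reads the list of files to add from standard input; archive.zip is a placeholder name, and -type f is added here so directories are not passed along):
find . -type f -not -name "*.zip" -print | zip archive.zip -@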
Simply run the zip command with the -x argument to exclude already-zipped files from being added to the compressed archive. The command will look like:
zip -r compressed.zip . -x "*.zip"
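To tie this back to the 80% disk-usage trigger from the question, a minimal sketch (assuming a POSIX shell; the /data path and the 80 threshold are placeholder assumptions):
#!/bin/sh
# Hypothetical sketch: zip the files when the filesystem holding
# /data is more than 80% full. Paths and threshold are assumptions.
usage=$(df -P /data | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
if [ "$usage" -gt 80 ]; then
    zip -r compressed.zip /data -x "*.zip"
fi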
I am creating a zip file of some files (image files), but need to limit it such that only the latest files are added to the zip file. I need the files that are no more than 2 days old; the exact time is not relevant.
This is what I have been doing, but how do I limit it based on date? This is on Linux, run from a .sh batch file.
zip -r /destination_path/media_backup.zip /from_path/media
Use find in conjunction with zip.
As indicated in man zip, use the -@ option, which tells zip to read the names of the files to archive from standard input:
find . -mtime -2 -print | zip source -@
-x and -i can be used to read the files to exclude or include from a list file as well.
See the man page for other details.
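Putting it together with the paths from the question (a hedged sketch; -mtime -2 selects files modified within the last 2 days, and running from /from_path keeps the stored paths relative):
cd /from_path && find media -type f -mtime -2 -print | zip /destination_path/media_backup.zip -@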
I have stored many files in the .doc format, which I can only open with LibreOffice on my Mac.
The command:
./soffice --headless --convert-to docx --outdir /home/user ~/Downloads/*.doc
does exactly what it should: it converts all the *.doc files to LibreOffice *.docx files.
My problem is that I have folders and subfolders with these files.
Is there any way to search through all the folders from a starting directory and let soffice do its job in each of these folders, storing the new version (*.docx) exactly where the original (*.doc) was found?
Unfortunately, I am not well-versed enough in AppleScript or the Terminal to make this work, yet there are 8,000 .doc files in hundreds and hundreds of folders that require the update to .docx.
Thanks for your help.
Is there any way to search through all the folders from a starting directory and let soffice do its job in each of these folders, storing the new version (.docx) exactly where the original (.doc) was found?
This can actually be done in Terminal using a find command:
find '/path/to/directory' -type f -iname '*.doc' -execdir /Applications/LibreOffice.app/Contents/MacOS/soffice --headless --convert-to docx '{}' \;
In the example command above, change '/path/to/directory' to the actual pathname of the target directory that contains the subdirectories containing the .doc files.
In brief, this find command locates all the .doc files within the directory hierarchy under '/path/to/directory', executes the soffice command in each subdirectory containing .doc files, and converts each one in place in its subdirectory.
Notes:
Always ensure you have proper backups before running commands in Terminal, and that the commands as typed will produce the wanted behavior!
This command assumes no .docx files already exist with the same names as the .doc files in the subdirectories, as it automatically overwrites any existing .docx file that has the same name as a .doc file. As far as I can tell, the soffice command does not provide an option not to overwrite existing files. If this is going to be an issue, then a different solution will be necessary.
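If overwriting is a concern, a hedged variant (a sketch, not a tested solution) can skip any .doc file that already has a sibling .docx; the case-insensitive match and the path below are assumptions:
find '/path/to/directory' -type f -iname '*.doc' -exec sh -c '
  for doc in "$@"; do
    docx="${doc%.*}.docx"
    # Skip files that already have a converted sibling
    if [ ! -e "$docx" ]; then
      /Applications/LibreOffice.app/Contents/MacOS/soffice --headless \
        --convert-to docx --outdir "$(dirname "$doc")" "$doc"
    fi
  done
' sh {} +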
I would like to do some file-name comparison in a bash script to determine whether a file should go through a compression routine or not.
Here is what I want to do: look through the UPLOAD folder and all of its sub-folders (a couple hundred folders in total). If filenameA.jpg and filenameA.orig both exist in the same folder, that means the file was compressed before and there is no need to compress it again; otherwise, compress the filenameA.jpg file.
This way, only newly added files are compressed, not files that were already compressed before.
Can someone tell me how to write the if/loop statements in a bash script? I plan to run it as a cron job.
Thank you for your help.
Use find to recursively search for all files named *.jpg.
For each file returned, you would check for a corresponding ".orig" file and, based on the result, compress or not.
Something like this should perhaps get you started:
find UPLOAD -type f -name '*.jpg' | while IFS= read -r JPG
do
    # Derive the marker file name: strip .jpg, append .orig
    ORIG="${JPG%.jpg}.orig"
    if [ -s "${ORIG}" ]
    then
        echo "File ${JPG} already compressed to ${ORIG}"
    else
        echo "File ${JPG} needs compressing ..."
        gzip -c "${JPG}" > "${ORIG}"
    fi
done
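To run it from cron, a hypothetical crontab entry (the script path, log path, and schedule are all assumptions) could look like:
# Run the compression check every night at 02:30
30 2 * * * /home/user/bin/compress_uploads.sh >> /tmp/compress_uploads.log 2>&1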
I have a folder in which there are many, many folders, and in each of these I have lots and lots of files. I have no idea which folder each file might be located in. I will periodically receive a list of files I need to copy to a predefined destination.
The script will run on a Unix machine.
So, my little script should:
read received list
find all files in the list
copy each file to a predefined destination via SCP
Steps 1 and 3 I think I'll manage on my own, but how do I do step 2?
I was thinking about using find to locate each file and, when one is found, write its location into a string array. When all the files are found, I would loop through the string array, running the scp command for each file location.
I think this should work, but I've never written a bash script before, so could anyone help me a little to get started? I just need a basic find command that finds a file by name and returns its location if the file is found.
find "$dir" -name "$name" -exec scp {} "$destination" \;
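To cover the list-reading part as well, a minimal sketch (files.txt, /some/folder, and the scp target are all placeholder assumptions):
#!/bin/sh
# Hypothetical sketch: read one filename per line from files.txt,
# locate each file under $dir, and scp every match to $destination.
dir=/some/folder            # assumption: search root
destination=user@host:/path # assumption: scp target
while IFS= read -r name; do
    find "$dir" -type f -name "$name" -exec scp {} "$destination" \;
done < files.txt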
Is anybody able to point me in the right direction for writing a batch script for a UNIX shell to move files into a zip one at a time and then delete the originals?
I can't use the standard zip invocation because I don't have enough disk space to fit the zip being created alongside the originals.
So, any suggestions, please?
Try this:
zip -r -m source.zip *
Not a great solution, but simple: I ended up finding a Python script that recursively zips a folder, and I just added a line to delete each file after it is added to the zip.
You can achieve this using find like so:
find . -type f -print0 | xargs -0 -n1 zip -m archive
This will move every file into the zip, preserving the directory structure. You are then left with empty directories that you can easily remove (see the one-liner below). Moreover, using find gives you a lot of freedom over which files you compress.
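For instance, with GNU or BSD find (a hedged aside: -empty and -delete are common extensions, not POSIX), the leftover empty directories can be removed with:
find . -type d -empty -delete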
I use:
zip --move destination.zip src_file1 src_file2
Here is the detail of the "--move" option from the man page:
--move
    Move the specified files into the zip archive; actually, this deletes the target directories/files after making the specified zip archive. If a directory becomes empty after removal of the files, the directory is also removed. No deletions are done until zip has created the archive without error. This is useful for conserving disk space, but is potentially dangerous, so it is recommended to use it in combination with -T to test the archive before removing all input files.