On a Windows box, I need to extract a RAR archive so that individual files in it go into specific directories. I can provide, say, a text file that lists each file and its target directory. I then need help creating a batch file that will actually extract these files into their target locations.
E.g.
RAR archive x.rar contains
a.a
b.b
c.c
Text file x.txt says
a.a C:\foo
b.b C:\bar
c.c C:\foo
Result of running the batch file on x.rar and x.txt should be:
in C:\foo we have a.a and c.c
in C:\bar we have b.b
You can pass rar a list of file names to extract with -n@<listfile>. So if you create a single list file for each directory you want to extract to, this should be a viable option. However, the file you describe doesn't quite match that format; you'd need to group it by target directory (much more fun in PowerShell, by the way).
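For example, with the x.txt above regrouped into one list file per target directory (foo.lst holding a.a and c.c, bar.lst holding b.b), a rough batch sketch could be (it assumes rar.exe is on the PATH and that your rar build supports the -n@<listfile> switch):
@echo off
rem extract the files destined for C:\foo (names listed in foo.lst)
rar x -n@foo.lst x.rar C:\foo\
rem extract the files destined for C:\bar (names listed in bar.lst)
rar x -n@bar.lst x.rar C:\bar\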
However, if the archive you're extracting is a solid archive this will take far longer since you essentially have to decompress the whole archive over and over again.
The best and probably easiest method would then be to extract the archive once and then sort all files into their respective directories.
Suppose I have 100 .ps files in a directory, called doc1.ps, doc2.ps, ... , doc100.ps. I would like to write a Makefile to do the following: (1) when I run "make", all the files matching the pattern doc*.ps should be converted to the pdf format (without deleting the original copy) using the command line program ps2pdf. (2) Any .ps files not matching the name pattern doc*.ps should be left untouched. (3) Whenever a file doc*.ps is updated, running "make" again should only update the PDF copy of this specific file, without converting all of them again. How can this be done?
P.S. I don't want to type the names of the .ps files explicitly into the Makefile, because this is tedious when there are many files. I'd like to have Makefile handle the matching of wildcard filenames.
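A minimal GNU Make sketch of this (it assumes ps2pdf is on the PATH; the variable names are just illustrative):
# pick up every doc*.ps in the current directory and derive the matching .pdf names
PS  := $(wildcard doc*.ps)
PDF := $(PS:.ps=.pdf)
# default target: build every PDF that is missing or older than its .ps source
all: $(PDF)
# pattern rule: only files matching doc%.ps are ever converted
doc%.pdf: doc%.ps
	ps2pdf $< $@
.PHONY: all
Note that the ps2pdf recipe line must be indented with a tab, as make requires.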
zip.UnzipMatching("qa_output","*.xml",true)
With this syntax I can unzip every XML file in every directory of my zip file and recreate the same directory structure.
But how can I unzip only the XML files in the root directory?
I cannot understand how to write the filter.
I tried "/*.xml" but nothing is extracted.
If I write "*/*.xml" I only extract XML files from the subdirectories (and I skip the XML files in the root directory!).
Can anyone help me?
Example of the zip file's contents:
a1.xml
b1.xml
c1.xml
dir1\a2.xml
dir1\c2.xml
dir2\dir3\c3.xml
With UnzipMatching("qa_output", "*.xml", true) I extract all these files with the original directory structure, but I want to extract only a1.xml, b1.xml and c1.xml.
Is there a way to write a filter to achieve this result, or a different command, or a different approach?
I think what you want is to call UnzipMatchingInto: All files (matching the pattern) in the Zip are unzipped into the specified dirPath regardless of the directory path information contained in the Zip. This has the effect of collapsing all files into a single directory. If several files in the Zip have the same name, the files unzipped last will overwrite the files already unzipped.
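For reference, a minimal sketch of that call, assuming UnzipMatchingInto takes the same (dirPath, pattern, verbose) arguments as the UnzipMatching call shown in the question (check the Chilkat documentation for the exact signature in your language binding):
zip.UnzipMatchingInto("qa_output", "*.xml", true)
Since the pattern still matches XML files in subdirectories, files with duplicate names would overwrite each other, as noted above.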
I have files in a directory, with a pattern in the filename ("IMP-"). I need to copy the files from directory A to directory B, but I also keep the files in directory A. So in order to copy only the new files to directory B, I need to first record, each time I do a copy, the copied filenames in a text file (liste.txt), and then copy only the files that aren't listed in that text file.
Example
Directory A (/home/ftp/recep/)
Files, for example, can be:
/home/recep/IMP-avis2018.txt
/home/recep/IMP-avis2018.pdf
/home/recep/IMP-avis2017.jpg
/home/recep/IMP-avis2017.pdf
Directory B (/home/ftp/transfert/)
I need to copy all files matching IMP* to directory B (/home/ftp/transfert/).
And when a new file is received in directory A, I need this file, and only this file, to be copied to directory B (where files only stay 2 hours max).
I thought maybe I could do something with rsync, but I couldn't find an adequate option.
So maybe it could be a bash script.
Actions would be:
have a simple text file listing the already-processed files (for example liste.txt)
find files in directory A whose names contain the pattern IMP
for each of these files, check liste.txt and, if the file is not listed there, copy it to directory B (see the sketch after this list)
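A minimal bash sketch of those three steps, using the directory names from the description above (the location of liste.txt is an assumption, and files are compared by name only):
#!/bin/bash
# copy new IMP-* files from directory A to directory B, remembering what was already copied
src=/home/ftp/recep
dst=/home/ftp/transfert
list=/home/ftp/liste.txt
touch "$list"
for f in "$src"/IMP-*; do
    [ -e "$f" ] || continue   # no matching files at all
    name=$(basename "$f")
    # skip files already recorded in liste.txt (exact, whole-line match)
    if ! grep -qxF "$name" "$list"; then
        cp "$f" "$dst"/ && echo "$name" >> "$list"
    fi
done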
You could try the option -n. The man page says:
-n, --no-clobber
do not overwrite an existing file (overrides a previous -i option)
So
cp -n A/* B/
should copy all files from A to B, except those that are already in B.
Another way would be rsync:
rsync -vu A/* B/
This syncs the files from A to B and prints the files that were actually copied.
What I am currently struggling with is creating multiple files and storing them in each directory.
I have made 100 directories like this:
mkdir ./dir.{01..100}
What I want is to create 100 text files in each directory, so that the result looks like this:
clicking on dir.01 in the home directory shows files named 01.txt to 100.txt
Also, I want to compress all 100 directories, each containing 100 text files, into one archive using tar.
I am struggling with:
creating 100 text files in each of the 100 directories
using tar to combine all 100 directories into one archive.
I am more interested in creating 100 text files IN 100 directories. Also, I am MUCH MORE interested in how to use tar to join all 100 directories together into a specific file (fdtar, for instance).
If you are fine with empty files,
touch ./dir.{01..100}/{01..100}.txt
If you need each file to contain something, use that as the driver in a loop:
for file in ./dir.{01..100}/{01..100}.txt; do
  printf "This is the file %s\n" "$file" > "$file"
done
This could bump into ARG_MAX ("argument list too long") on some platforms, but it works fine on macOS and should work fine on any reasonably standard Linux.
Splitting the loop into an inner and an outer loop could work around that problem:
for dir in ./dir.{01..100}; do
  for file in {01..100}.txt; do
    # pass both names to printf so the file content records its own path
    printf "This is file %s/%s\n" "$dir" "$file" > "$dir/$file"
  done
done
If I understand correctly, you need two things. First, you have 100 directories and need to create a file in each. Use a for loop in bash, run from the parent directory that contains all the directories you have created:
for n in dir.*
do
  # strip the "dir." prefix to get the number, e.g. dir.01 -> 01
  f=$(echo "$n" | sed 's/^dir\.//')
  echo "This is file $n" > "$n/$f.txt"
done
Regarding tar, that is even easier because tar will take multiple directories and glue them together. From the parent directory, try:
tar cvf fd.tar dir.*
The c option will create the archive. v tells tar to print everything it is doing so you know what is happening. f fd.tar writes the archive to a file with that name.
When you undo the tar operation, you will use:
tar xvf fd.tar
In this case x will extract the contents of the tar archive and recreate all 100 directories for you in the directory from which you invoke it.
Note that I have used fd.tar and not fdtar as the .tar extension is the customary way to signal that the file is a tar archive.
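If you want to check what went into the archive without extracting it, you can list its contents:
tar tvf fd.tar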
I used 7-Zip to compress a multi-gigabyte folder, which contained many folders each with many files, using the split-to-volumes (9 MB) option. 7-Zip created files of type .zip.001, .zip.002, etc. When I extract .001 it appears to work correctly, but I get an 'unexpected end of data' error and 7-Zip does not automatically continue with .002. When I extract .002, it also gives the same error and does not continue the original folder/file structure; instead it extracts a zip file into the same folder as the previously extracted files. How do I properly extract split files to obtain the original folder/file structure? Thank you.
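For what it's worth, a split archive is normally opened from the first volume only, with all volumes sitting in the same folder; a minimal sketch with the 7-Zip command line (it assumes 7z is on the PATH and the archive name is illustrative):
rem test the whole volume set, then extract; 7-Zip picks up .002, .003, ... automatically
7z t backup.zip.001
7z x backup.zip.001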