How to extract a specific folder using IZArc (IZARCe) - Windows

I want to extract a specific directory from a huge zip file (>5 GB) that is somewhat corrupted, because of a badly maintained build system that creates the zip.
GUI tools such as WinRAR and 7-Zip have no issues extracting the files, but some command-line tools such as MKS unzip and 7za fail to extract from the corrupted archive.
After a lot of digging around and trying out many such command-line utilities, I found that IZArc successfully extracts files from the archive.
I am running the following command:
IZARCe.exe -e -d -o D:\aHugeZipFile.zip -pD:\temp @"source.txt"
The listing file source.txt contains just one entry:
source/lib/*
which is the only directory in the archive, from where the contents are to be extracted.
But, it is resulting in:
IZArc Command Line Extraction Add-On Version 1.1 (Build: 130)
Copyright(c) 2007 Ivan Zahariev, All Rights Reserved.
http://www.izarc.org contact@izarc.org
Archive File: aHugeZipFile.zip
WARNING: Nothing to do!
I have tried specifying:
/source/lib/*
source/lib/*
source/lib/
source/lib
*source/lib/*
in the listing file, all to no avail! :(
Any pointers on where the error is occurring, and how to fix the issue will be of great help. Thank you in advance!

Using relative or absolute paths in listfiles doesn't appear to work with IZArc. Try using wildcards such as *, *.doc, etc. instead of paths in the listfile. Be aware that IZArc appears to have a limitation on the folder depth it will extract to, as well as a tendency to generate CRC errors when files with the same name are present in the same archive, even if they are in different directories.
I would suggest using 7-Zip command-line instead. It can recurse deeply through a file structure without error and can use relative directories and wildcards in its listfiles.
The following 7-Zip command was tested and worked perfectly.
7za x somearchive.zip -o"C:\Documents and Settings\me\desktop\temp_folder\test2" -ir@source.txt -aoa -scsWIN
The source.txt listfile may contain a combination of relative paths and/or wildcards on separate lines, such as Output/, Folder2/, *, or *.doc.
In the command above: x (extract with full paths), -ir (include filenames, recurse subdirectories), -aoa (overwrite existing files without prompting), -scsWIN (set the charset for list files). You may need to adjust these switches for your situation.
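For the original question, the listfile would contain just the one directory pattern from the archive. Assuming the same source.txt and archive as in the question, the invocation would look like:
7za x D:\aHugeZipFile.zip -oD:\temp -ir@source.txt -aoa -scsWIN
where source.txt contains the single line:
source/lib/*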

Related

Downloading files as a single .zip on Windows server

A client has a download area where users can download or browse single files. The files are divided into folders, so there are documents, catalogues, newsletters and so on, and their extensions can vary: they can be .pdf, .ai or simple .jpeg. He asked me if I can provide a link to download every item in a specific folder as one big compressed file. The problem is that I'm on a Windows server, so I'm a bit clueless as to whether there's a way. I can edit the pages of this area, so I can include jQuery and scripts with a little freedom. Any hint?
Windows' built-in archiver is tar, so you need to build a tarball (historically, all related files in one Tape ARchive).
I have a file server which is mapped as S:\ (the server itself does not have a tar command, and tar cannot use a URL, but it can use a mapped drive letter).
For any folder's contents (including subfolders) it is easy to remotely save all current files in a zip with a single command (for multiple root locations you need a loop or a list; see the sketch after the command below).
It will build the tape archive as a Windows .zip using the -a (auto) switch, but you need to consider the desired level of nesting by collecting all contents at the desired root location.
TAR -a[other options] file.zip [folder / files]
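A minimal sketch of both the single-folder case and a loop over several root folders (the drive and folder names are illustrative; this relies on the bsdtar that ships with Windows 10 and later):
:: one folder tree, subfolders included, into one zip
tar -a -c -f S:\downloads\all.zip -C S:\files\documents .
:: several root folders, one zip each
for %d in (documents catalogues newsletters) do tar -a -c -f S:\downloads\%d.zip -C S:\files\%d .
Inside a batch file, double the percent signs (%%d).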
Points to watch out for:
ensure there is not an older archive already at the destination
it may print errors/warnings while it runs, but it should complete without failing.
Once you have the zip file you can post it as a web asset, such as:
<a href="\\server\folder\all.zip" download="all.zip">Get All</a>
for other notes see https://stackoverflow.com/a/68728992/10802527

sql loader without .dat extension

Oracle's sqlldr defaults to a .dat extension, which I want to override. I don't want to rename the file. Googling turned up a few answers suggesting a trailing period, like data='fileName.', but that is not working. Please share your ideas.
The error message is: fileName.dat is not found.
SQL*Loader has a default extension for each of its input files (data, log, control, ...):
data = .dat
log = .log
control = .ctl
bad = .bad
parfile = .par
But you have to pass the filename without the apostrophes and without the dot:
sqlldr user/pass@db control=control data=data
SQL*Loader will then add the extensions itself: control.ctl, data.dat.
Nevertheless, I do not understand why you do not want to specify the extension.
You can't, at least in Unix/Linux environments. In Windows you can use the trailing-period trick, specifying either INFILE 'filename.' in the control file or DATA=filename. on the command line. Windows file-name handling allows that; you can, for instance, do DIR filename. at a command prompt and it will list the file with no extension (as will DIR filename). But you can't do that with *nix, from a shell prompt or anywhere else.
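A minimal illustration of the Windows variant (the user, database and file names here are made up):
REM data file is just "refdata", no extension - note the trailing period
sqlldr scott/tiger@orcl control=load.ctl data=refdata.
or, equivalently, in load.ctl:
INFILE 'refdata.'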
You said you don't want to copy or rename the file. Temporarily renaming it might be the simplest solution, but as you may have a reason not to do that even briefly, you could instead create a hard or soft link to the file which does have an extension, and use that link as the target instead. You could wrap that in a shell script that takes the file name as an argument:
# set variable from the correct positional parameter; if you pass in the control
# file name or other options, this might not be $1, so adjust as needed
# if the temporary file won't be in the same directory, this needs to be the full path
filename=$1
# optionally check the file exists, is readable, etc. - overkill for a demo
# could also check that the temporary link does not already exist - stop or remove it
# create the soft link somewhere it won't impact any other processes
ln -s "${filename}" "/tmp/${filename##*/}.dat"
# run SQL*Loader with the soft link as the target
sqlldr user/password@db control=file.ctl data="/tmp/${filename##*/}.dat"
# clean up
rm -f "/tmp/${filename##*/}.dat"
You can then call that as:
./scriptfile.sh /path/to/filename
If you can create the link in the same directory then you only need to pass the file, but if it's somewhere else - which may be necessary depending on why renaming isn't an option, and desirable either way - then you need to pass the full path of the data file so the link works. (If the temporary file will be in the same filesystem you could use a hard link, and you wouldn't have to pass the full path then either, but it's still cleaner to do so.)
As you haven't shown your current command line options you may have to adjust that to take into account anything else you currently specify there rather than in the control file, particularly which positional argument is actually the data file path.
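For completeness, a sketch of the hard-link variant mentioned above (the link is created next to the data file, so it stays on the same filesystem; remove it afterwards as before):
# hard link in the same directory as the data file
ln "${filename}" "${filename}.dat"
sqlldr user/password@db control=file.ctl data="${filename}.dat"
rm -f "${filename}.dat"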
I have the same issue. I get a monthly download of reference data used in a medical application, and the 485 downloaded files (about 2 GB) don't have file extensions. Unless I can load without file extensions I have to copy the files with .dat and load from there.

(OS X Shell) Copying files based on text file containing file names without extensions

Preface: I’m not much of a shell-scripter, in fact not a shell-scripter at all.
I have a folder (folder/files/) with many thousand files in it, with varying extensions and random names. None of the file names have spaces in them. There are no subfolders.
I have a plain text file (filelist.txt) with a few hundred file names, all of them without extensions. All the file names have corresponding files in folder/files/, but with varying extensions. Some may have more than one corresponding file in folder/files/ with different extensions.
An example from filelist.txt:
WP_20160115_15_11_20_Pro
P1192685
100-1373
HPIM2836
These might, for example, correspond to the following files in folder/files/:
WP_20160115_15_11_20_Pro.xml
P1192685.jpeg
100-1373.php
100-1373.docx
HPIM2836.avi
(Note the two files named 100-1373 with different extensions.)
I am working on an OS X (10.11) machine. What I need to do is copy all the files in folder/files/ that match a file name in filelist.txt into folder/copiedfiles/.1
I’ve been searching and Googling like mad for a bit now, and I’ve found bucketloads of people explaining how to extract file names without extensions, find and copy all files that have no extension, and various tangentially related issues, but I can’t find anything that really helps me figure out how to do this in particular. Doing a cp `cat filelist.txt` folder/copiedfiles/ would work (as far as I can tell) if the file names in the text file included extensions; but they don’t, so it doesn’t.
What is the simplest (and preferably fastest) way to do this?
1 What I need to do is exactly the same as in this question, but that one is specifically asking about batch-file, which is a very different kettle of sea-dwellers.
This should do it:
while read -r filename
do
find /path/to/folder/files/ -maxdepth 1 -type f \
-name "$filename.*" -exec cp {} /path/to/folder/copiedfiles/ \;
done < /path/to/filelist.txt
Using "$filename.*" rather than "$filename*" stops a name like 100-1373 from also matching something like 100-13730.jpg, and -r keeps read from mangling backslashes; since there are no subfolders, -maxdepth 1 is enough.

Recursively copy file-types from directory tree

I'm trying to find a way to copy all *.exe files (and more, *.dtd, *.obj, etc.) from a directory structure to another path.
For example I might have:
Code
\classdirA
\bin
\classA.exe
\classdirB
\bin
\classB.exe
\classdirC
\bin
\classC.exe
\classdirD
\bin
\classD.exe
And I want to copy all *.exe files into a single directory, say c:\bins
What would be the best way to do this?
Constraints for my system are:
Windows
Can be Perl, Ruby, or .cmd
Anyone know what I should be looking at here?
Just do it in Ruby, using the method Dir::glob:
# this will give you all the ".exe" files recursively from the directory "Code".
Dir.glob("c:/Code/**/*.exe")
** - Matches directories recursively. This is used to descend into the directory tree and find files in subdirectories of the current directory, rather than just files in the current directory.
* - Matches zero or more characters. A glob consisting of only an asterisk and no other characters or wildcards will match all files in the current directory. The asterisk is usually combined with a file extension, if not more characters, to narrow down the search.
There is a nice blog post on this: Using Glob with Directories.
Now, to copy the files to your required directory, look at the method FileUtils.cp_r:
require 'fileutils'
FileUtils.cp_r Dir.glob("c:/Code/**/*.exe"), "c:\\bins"
I have just tested that the FileUtils.cp method will also work in this case:
require 'fileutils'
FileUtils.cp Dir.glob("c:/Code/**/*.exe"), "c:\\bins"
My preference here is the ::cp method, because Dir::glob already collects all the files having the .exe extension recursively and returns them as an array, and ::cp is then enough to take each file from the array and copy it to the target.
Why do I not like ::cp_r in such a situation? As the method name suggests, it copies all files recursively from the source to the target directory. If you need to copy only specific files recursively, ::cp_r can't make that selection by its own power (which ::glob can), so you have to hand it the specific file list anyway, and all it then does is copy them to the target directory. If that is the only task, I think we should go with ::cp rather than ::cp_r.
Hope my explanation helps.
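Since the question also mentions other extensions (*.dtd, *.obj, etc.), brace expansion in the glob collects them all in one pass; a small sketch using the same paths as above:
require 'fileutils'
# gather .exe, .dtd and .obj files anywhere under c:/Code and copy them flat into c:/bins
FileUtils.cp Dir.glob("c:/Code/**/*.{exe,dtd,obj}"), "c:/bins"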
From the cmd command line:
for /r "c:\code" %f in (*.exe) do copy "%~ff" "c:\bins"
For usage inside a batch file, double the percent signs (%% instead of %)
Windows shell (cmd) command:
for /r code %q in (*.exe) do copy "%q" c:\bin
Double the % characters if you place this in a batch file.
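And a batch-file sketch covering the other extensions from the question in one pass (percent signs doubled as noted above; paths as in the question):
@echo off
rem copy every .exe, .dtd and .obj found under c:\code into c:\bins
for %%e in (exe dtd obj) do (
for /r "c:\code" %%f in (*.%%e) do copy "%%~ff" "c:\bins"
)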

How to copy only new files using bash scripting

I have to use bash scripting to copy files from one folder to another. If the destination folder has a file with the same name but an older timestamp, it should not be copied; only newer files should be copied. I could have used cp -u, but I was asked not to use it. Essentially I have to use the test command, testing with -ot. Please let me know how this could be done. I believe two for loops could be used, one to read the files in the source directory and one for the destination, and the timestamps compared. The problem is that both for loops produce absolute path names along with the file name, so I am not sure how to compare them.
Thanks
You can take advantage of parameter substitution:
for file in "$folder1"/* ; do
filename=${file##*/}  # remove everything up to the last slash
Or, you can change into the directory first:
cd "$folder1"
for file in * ; do
## you then have to use a full or relative path to $folder2 here
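Putting it together, a minimal sketch using test with -ot as required (the folder paths are placeholders); it copies a file when the destination copy is missing or older than the source:
#!/bin/bash
src=/path/to/source
dst=/path/to/destination

for file in "$src"/* ; do
filename=${file##*/}  # strip the directory part
dest="$dst/$filename"
# copy if there is no destination file yet, or if the existing
# destination file is older than the source ( dest -ot source )
if [ ! -e "$dest" ] || [ "$dest" -ot "$file" ]; then
cp "$file" "$dest"
fi
done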
