I have a problem concatenating compressed files and extracting them in the Windows PowerShell environment.
My machine runs Windows 10, and I have a compressed dataset split into the following parts:
lr_v1_paa
lr_v1_pab
lr_v1_pac
I concatenated the above files into lr_v1.tar using the Windows PowerShell command:
cat lr_v1_pa* > lr_v1.tar
and ran tar -xf lr_v1.tar to extract the concatenated archive. But I got the following error message:
tar.exe: Error opening archive: Unrecognized archive format
Why can't tar.exe in PowerShell recognize the .tar format?
As you've found, cat lrs2_v1_parta* > lrs2_v1.tar doesn't work well with binary files in PowerShell: cat is an alias for Get-Content, which reads the input as text, and the > redirection re-encodes that text on output, corrupting binary data.
To concatenate multiple binary files into one target file, here's how I would do it:
# discover input files
$inputFiles = Get-Item lrs2_v1_parta*

# create new target file
$destFile = New-Item -Name lrs2_v1.tar -ItemType File

# open writable file stream to target file
$outStream = $destFile.OpenWrite()
try {
    foreach ($file in $inputFiles) {
        # open file stream for reading from input file
        $inStream = $file.OpenRead()
        try {
            # copy input file stream to destination file stream
            $inStream.CopyTo($outStream)
        }
        finally {
            # clean up input file handle
            $inStream.Dispose()
        }
    }
}
finally {
    # clean up output file stream
    $outStream.Dispose()
}
You should expect to find a non-corrupt lrs2_v1.tar in the current folder after this operation :)
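For comparison, outside PowerShell the original command is fine: in a Unix-like shell (including Git Bash on Windows), cat copies bytes exactly, so split archives can be rejoined as-is. Here is a round-trip sketch with a toy archive (file and directory names are illustrative):

```shell
#!/bin/sh
set -e
# build a small tar archive and split it into chunks named like the question's
mkdir -p data
echo 'hello' > data/a.txt
tar -cf lr_v1.tar data
split -b 512 lr_v1.tar lr_v1_pa    # produces lr_v1_paa, lr_v1_pab, ...

# byte-exact concatenation, then extraction
cat lr_v1_pa* > rejoined.tar
mkdir -p out
tar -xf rejoined.tar -C out
```

On Windows without PowerShell, cmd's copy /b lr_v1_paa+lr_v1_pab+lr_v1_pac lr_v1.tar also concatenates in binary mode.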
This is what I am trying to do. I have a JSON file containing a set of files like this:
files = ["test1.txt",
         "test2.txt",
         "new/destination/test3.txt",
         "test4.txt"
        ]
But when a file path includes a directory, I get an error.
Inside a Groovy script I have a sh step. In a for loop I am trying to copy those files to a different directory:
for (file in files) {
    sh("cp -pr source_directory/" + file + " destination_directory/" + file + " || true")
    echo 'file to copy ' + file
}
The result I am looking for:
destination_directory/test1.txt
destination_directory/test2.txt
destination_directory/new/destination/test3.txt
I know the cp command won't create new directories; is there any way I can solve this?
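One way around this (a sketch assuming a POSIX shell; the directory names and file list mirror the question, and the source tree is created here only as a fixture): create each destination subdirectory with mkdir -p before copying.

```shell
#!/bin/sh
set -e
# toy source tree for demonstration (stands in for the real source_directory)
mkdir -p source_directory/new/destination
for f in test1.txt test2.txt new/destination/test3.txt test4.txt; do
    echo "content of $f" > "source_directory/$f"
done

# the actual fix: create each destination subdirectory before copying
files="test1.txt test2.txt new/destination/test3.txt test4.txt"
for file in $files; do
    mkdir -p "destination_directory/$(dirname "$file")"
    cp -p "source_directory/$file" "destination_directory/$file"
done
```

Inside the Jenkins sh step, the same idea applies: run mkdir -p on "destination_directory/$(dirname "$file")" before the cp.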
I'm trying to copy XML files from one directory to another, adding a suffix along the way.
XML file now: OV_1e4f58d6.xml
After copy: OV_1e4f58d6_Regression.xml
I have tried this code, but I'm getting the error "cp: cannot stat ‘–suffix=_REGRESSION’: No such file or directory":
cp --backup=simple –suffix=_Regression /u80/OMS/inputXML/BSL_KUPO_D1/*.xml /u80/OMS/DocumentService/importBSL
Thank you for your help.
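The error message holds the clue: cp treats ‘–suffix=_Regression’ as a file name because it begins with a single en dash (–) rather than two ASCII hyphens (--), a substitution word processors and web pages often make. With plain ASCII dashes the command works; note, though, that GNU cp appends the suffix after the whole file name (OV_1e4f58d6.xml_Regression), not before the extension. A minimal sketch (assumes GNU coreutils; the file names and toy directories are illustrative):

```shell
#!/bin/sh
set -e
# toy fixture: a source file and an existing copy in the destination
mkdir -p src dst
echo 'new version' > src/OV_1e4f58d6.xml
echo 'old version' > dst/OV_1e4f58d6.xml

# ASCII "--" on both options; the existing destination file is
# renamed with the _Regression suffix before being overwritten
cp --backup=simple --suffix=_Regression src/OV_1e4f58d6.xml dst/
```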
I have a folder of images in jpg format called "finalpics" and also another folder ("sourcepics") which has several subfolders containing RAW files in various formats.
I need a script (batch file?) that will copy all the files from "sourcepics" and its subfolders to another folder ("sourcefinal") only if that file exists in "finalpics".
As an example:
"finalpics" contains files called mypic1.jpg, mypic2.jpg, mypic3.jpg.
"sourcepics" contains files called mypic1.dng, mypic2.psd, mypic3.cr2, yourpic1.dng, yourpic2.psd, yourpic3.cr2.
I'd want the script to copy the 'mypic' files but not the 'yourpic' files to "sourcefinal".
There's over a thousand jpgs in "finalpics" but probably 40,000 files in the various subfolders of "sourcepics".
Hope that makes sense.
Thanks for looking.
I think this PowerShell code will do what you're after; it will copy files of the same name (ignoring file extension) from "SourcePics" to "SourceFinal" if they exist in FinalPics:
# Define your folder locations:
$SourcePicsFolder = 'C:\SourcePics'
$FinalPicsFolder = 'C:\FinalPics'
$SourceFinalFolder = 'C:\SourceFinal'

# Get existing files into arrays:
$SourcePics = Get-ChildItem -Path $SourcePicsFolder -Recurse
$FinalPics = Get-ChildItem -Path $FinalPicsFolder -Recurse

# Loop all files in the source folder:
foreach ($file in $SourcePics)
{
    # Using the basename property (which ignores file extension), if the $FinalPics
    # array contains a basename equal to the basename of $file, then copy it:
    if ($FinalPics.BaseName -contains $file.BaseName)
    {
        Copy-Item -Path $file.FullName -Destination $SourceFinalFolder
    }
}
Note: there is no filtering based on file type (it will copy all matching files). Also, if your 'SourcePics' folder has two images with the same filename in different subfolders, and a file of that name also exists in 'FinalPics', you may get a file-already-exists error when the second copy is attempted; add the -Force parameter to the Copy-Item command to overwrite.
I tested the above code with some .dng files in 'SourcePics' and .jpg files in 'FinalPics' and it worked (ignoring the yourpic files).
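For reference, the same basename-matching idea in a POSIX shell (a sketch with toy folder names standing in for the real ones; it assumes stems can be matched with a FinalPics/&lt;stem&gt;.* glob):

```shell
#!/bin/sh
set -e
# toy fixture mirroring the example in the question
mkdir -p FinalPics SourcePics/sub SourceFinal
touch FinalPics/mypic1.jpg FinalPics/mypic2.jpg
touch SourcePics/mypic1.dng SourcePics/sub/mypic2.psd SourcePics/yourpic1.dng

# copy a file from SourcePics (recursively) only if FinalPics
# has a file with the same basename, whatever its extension
find SourcePics -type f | while IFS= read -r f; do
    stem=$(basename "$f")
    stem=${stem%.*}
    for match in "FinalPics/$stem".*; do
        # the glob stays literal when nothing matches, so test it
        if [ -e "$match" ]; then
            cp "$f" SourceFinal/
            break
        fi
    done
done
```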
I have an IDML file that I unzipped. I now want to compress the expanded folder back into an IDML file. I have access to a Mac or Linux machine.
What are the ways I can do this?
Zipping the folder using zip (command line) or with Keka, BetterZip or Archive Utility doesn't work. InDesign issues the error:
Cannot open the file. Adobe InDesign may not support the file format, a plug-in that supports the file format may be missing, or the file may be open in another application.
The problem with regular zip is that the zip archive contains a “mimetype” file that shouldn’t be compressed if you want InDesign to identify the newly-created IDML. So the way you have to re-zip the file (and the way the ePub scripts work) is like this:
They first create a zip archive which contains only the mimetype file, uncompressed:
zip -X0 'myfile.idml' mimetype
Then they add the rest of the files/folders into the zip archive, this time with full compression:
zip -rDX9 "myfile.idml" * -x "*.DS_Store" -x mimetype
In shell script terms, the ePub scripts do this (assuming the current directory is the one containing all the IDML contents):
zip -X0 'myfile.idml' mimetype # create the zip archive 'myfile.idml', containing only the 'mimetype' file with no compression
zip -rDX9 "myfile.idml" * -x "*.DS_Store" -x mimetype # add everything else to the ‘myfile.idml’ archive, EXCEPT .DS_Store files and the ‘mimetype’ file (which is already there from the previous step)
To save you time reading the zip man page, here’s what all these options mean:
-X = “no extra” — do not save extra file attributes like user/group ID for each file
-0 = “compression level zero” — no compression
-r = “recurse paths” — go through everything in the directory, including nested subfolders
-D = “no directory entries” — don’t put special stuff in the zip archive for directories
-9 = “compression level 9 (optimal)”
-x = “exclude these files”
Follow this voodoo, and you should be able to create legal IDML files.
Source: http://indesignsecrets.com/topic/how-do-i-re-zipcompress-an-expanded-idml-file
A big thanks to Chuck Weger and David Blatner at http://indesignsecrets.com
From within InDesign
Use this .jsx script to expand an IDML:
//DESCRIPTION: Expands an IDML file into folder format
ExpandIDML();

function ExpandIDML() {
    var fromIDMLFile = File.openDialog("Please Select The IDML File to unpackage");
    if (!fromIDMLFile) { return; }
    var fullName = fromIDMLFile.fullName;
    fullName = fullName.replace(/\.idml/, "");
    var toFolder = new Folder(fullName);
    app.unpackageUCF(fromIDMLFile, toFolder);
}
And this one to produce an IDML package:
//DESCRIPTION: Produces an IDML package from the contents of a directory
CreateIDML();

function CreateIDML() {
    var fromFolder = Folder.selectDialog("Please Select The Folder to package as IDML");
    if (!fromFolder) { return; }
    var fullName = fromFolder.fullName;
    // var name = fromFolder.name;
    // var path = fromFolder.path;
    var toIDMLFile = new File(fullName + ".idml");
    app.packageUCF(fromFolder, toIDMLFile);
}
I have a function in a powershell script that is supposed to untar my CppUnit.tar.bz2 file. I have installed 7-zip, and in my function I have the following:
Function untar ($targetFile) {
    $z = "7z.exe"
    $defaultDestinationFolder = 'C:\Program Files\'
    $destinationFolder = (Get-Item $defaultDestinationFolder).FullName
    $tarbz2Source = $targetFile
    # first pass: decompress the .bz2, leaving a plain .tar
    & $z x -y $tarbz2Source
    $tarSource = (Get-Item $targetFile).BaseName
    # second pass: extract the .tar (7-Zip's -o switch takes no space before the path)
    & $z x -y $tarSource "-o$destinationFolder"
    Remove-Item $tarSource
}
Running this extracts all the files where I want them, BUT all the files get ",v" as their ending:
...
Extracting cppunit-cvs-repo-archive\cppunit\cppunit\Attic
Extracting cppunit-cvs-repo-archive\cppunit\cppunit\Attic\estring.h,v
Extracting cppunit-cvs-repo-archive\cppunit\cppunit\Attic\TestSuite.h,v
Extracting cppunit-cvs-repo-archive\cppunit\cppunit\Attic\Test.h,v
Extracting cppunit-cvs-repo-archive\cppunit\cppunit\Attic\TestCase.h,v
Extracting cppunit-cvs-repo-archive\cppunit\cppunit\Attic\TextTestResult.h,v
Extracting cppunit-cvs-repo-archive\cppunit\cppunit\Attic\Makefile.am,v
Extracting cppunit-cvs-repo-archive\cppunit\cppunit\Attic\TestSuite.cpp,v
Extracting cppunit-cvs-repo-archive\cppunit\cppunit\Attic\Exception.cpp,v
Extracting cppunit-cvs-repo-archive\cppunit\cppunit\Attic\cppunit.dsw,v
Extracting cppunit-cvs-repo-archive\cppunit\cppunit\Attic\TestFailure.h,v
Extracting cppunit-cvs-repo-archive\cppunit\cppunit\Attic\TestCaller.h,v
Extracting cppunit-cvs-repo-archive\cppunit\cppunit\Attic\TestResult.h,v
Extracting cppunit-cvs-repo-archive\cppunit\cppunit\Attic\TextTestResult.cpp,v
Extracting cppunit-cvs-repo-archive\cppunit\cppunit\Attic\TestRegistry.h,v
Extracting cppunit-cvs-repo-archive\cppunit\cppunit\Attic\TestFailure.cpp,v
Extracting cppunit-cvs-repo-archive\cppunit\cppunit\Attic\Exception.h,v
Extracting cppunit-cvs-repo-archive\cppunit\cppunit\Attic\TestRegistry.cpp,v
Extracting cppunit-cvs-repo-archive\cppunit\cppunit\Attic\cppunit.dsp,v
Extracting cppunit-cvs-repo-archive\cppunit\cppunit\Attic\TestResult.cpp,v
Extracting cppunit-cvs-repo-archive\cppunit\cppunit\Attic\TestCase.cpp,v
Everything is Ok
Folders: 149
Files: 1128
Size: 20671974
Compressed: 21626880
Can anyone tell me how I can fix this?
The ,v suffix indicates that these are not the files themselves but version-history files maintained by CVS: each ,v file contains the latest version of the file plus the deltas needed to reconstruct any previous version. The fact that they are all in an Attic subdirectory indicates that they were removed via cvs remove at some point. This, together with the base directory being named cppunit-cvs-repo-archive, means you should treat the unpacked archive as a CVS repository and use the appropriate tools to check out the files you want to work with, rather than just "fixing" what look like wrong names.