Is it possible to convert a solid 7z file to a non-solid 7z file? - 7zip

As per the link Converting Zip to 7z, 7-Zip can't convert from one archive type to another.
But that discussion is almost 6 years old.
What I want to know is: does the latest version support converting a solid archive into a non-solid archive? If yes, how can it be done?
Thanks in advance

You'd have to extract the archive and re-create it with 7z with the non-solid option turned on. There's no real option to just "convert" it without redoing the entire archive.
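A minimal sketch of that extract-and-repack round trip, assuming the 7z command-line tool and the placeholder names old.7z, extracted/ and new.7z:
7z x old.7z -oextracted
(cd extracted && 7z a -ms=off ../new.7z *)
The -ms=off switch turns solid mode off when the new archive is created, so each file ends up in its own compressed block.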

Related

Is there a way to convert a code file to an image with syntax highlighting

I am trying to convert Pascal code files to an image (jpg, png) and find pango-view to be a good solution. Is there a way to add syntax highlighting to the output files?
I am happy about any hints :)
Thanks
I found an old repo that adds syntax highlighting with Pango markup (https://github.com/LinuxJedi/pango-syntax-highlighter/). The updated fork below can now convert Pascal files to images with syntax highlighting.
https://github.com/thiemol/pango-syntax-highlighter
python3 pangosyntaxhighlight.py cpp myfile.cpp output.txt && pango-view --markup --font=mono -qo image.png output.txt
Works with: C, C++, Java, Go, Python, Scala, GLShaderLexer, XML.
Now also with Delphi/Pascal and PHP. You can simply add the right lexer.
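For the Pascal case, the lexer name would presumably be delphi (assuming the script passes its first argument straight through to Pygments; the file names here are made up):
python3 pangosyntaxhighlight.py delphi unit1.pas output.txt && pango-view --markup --font=mono -qo unit1.png output.txt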

Using wildcards with "convert". Or "convert"ing a group of files

I regularly scan in my homework for class. My scanner exports raw JPG files to USB, and from there I can use GIMP to edit and save the files as a PDF. One time-saver I've found is to export my multi-page homework as a .mng file and then use the convert function to turn it into a PDF. I do it this way because GIMP automatically merges all layers when exporting to a PDF.
convert HW.mng HW.pdf
This works well for individual files, but at the end of every week I can have dozens of files to convert.
I have tried using wildcards in the filenames for convert:
convert *.mng *.pdf
This always runs successfully and never throws an error, but never produces any pdfs.
Both
convert HW*.mng HW*.pdf
and
convert "HW*.mng" "HW*.pdf"
yield the error
convert: unable to open image `HW*.pdf': Invalid argument # error/blob.c/OpenBlob/2712.
which I think means the error lies in exporting with a wildcard.
Is there any way to convert all of a specific file type to another using convert? Or should I try using a different program?
You can see this StackExchange post. The accepted answer basically does what you want.
for file in *.mng; do convert "$file" "${file/%mng/pdf}"; done
For convert in particular, use mogrify (which is part of ImageMagick as well) as suggested by Mark Setchell in a comment. mogrify can be used to edit/convert files in batches. The command for your case would be
mogrify -format pdf -- *.mng

shellscript to convert .TIF to a .PDF

I'm wanting to progress through a directory's subdirectories and either convert or place .TIF images into a pdf. I have a directory structure like this:
folder
    item_one
        file1.TIF
        file2.TIF
        ...
        fileN.TIF
    item_two
        file1.TIF
        file2.TIF
        ...
    ...
I'm working on a Mac and considered using sips to change my .TIF files to .PNG files and then use pdfjoin to join all the .PNG files into a single .PDF file per folder.
I have used:
for filename in *; do sips -s format png "$filename" --out "$filename.png"; done
but this only works for the .TIF files in a single directory. How would one write a shell script to progress through a series of directories as well?
Once the .PNG files were created, I'd do essentially the same thing but using:
pdfjoin --a4paper --fitpaper false --rotateoversize false *.png
Is this a valid way of doing this? Is there a better, more efficient way of performing such an action? Or am I being an idiot and should be doing this with some sort of software, like ImageMagick or something?
Try using the find command with the exec switch to call your image conversion solution. Alternatively, instead of using the exec switch, you could pipe the output of find to xargs. There is lots of information online about using find. Here's one example from StackOverflow.
As far as the image conversion, I think that really depends on your requirements for speed and efficiency. If you've verified the process you described, and this is a one-time process, and it only takes seconds or minutes to run, then you're probably fine. On the other hand, if you need to do this frequently, then it might be worth investing the time to find a one-step conversion solution that takes less time than your current, two-pass solution.
Note that, instead of two passes, you may be able to pipe the output of sips to pdfjoin; however, that would require some investigation to verify.
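As a rough sketch of the two-pass version (assuming the folder/item_one/... layout above, and that each item directory should become one PDF):
find folder -name '*.TIF' -exec sips -s format png '{}' --out '{}.png' \;
for dir in folder/*/; do (cd "$dir" && pdfjoin --a4paper --fitpaper false --rotateoversize false *.png); done
The find pass converts every .TIF under folder/ to a .png next to it, and the loop then joins the PNGs in each item directory into one PDF with the same pdfjoin options you were already using.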

Importing EXIF from filename

OK, so there are a plethora of examples and apps using ExifTool to convert EXIF data into .jpg names. But what if you want to go the other way around? I have a number of files that use an ODBC date in the filename but contain no EXIF date metadata. How can I - with what app or ExifTool command line - update the EXIF date from a filename?
2012-02-24_1330073217.jpg
I found a Windows app, EXIFDate by filenamepattern, that does this, but I'm a Mac user. :/
This command will do what you want:
exiftool "-alldates<filename" DIR
where DIR is one or more directory and/or file names.
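For example, assuming the images sit in a hypothetical scans/ directory, you could write the tags and then verify what was stored:
exiftool "-alldates<filename" scans/
exiftool -AllDates scans/2012-02-24_1330073217.jpg
The second command prints DateTimeOriginal, CreateDate and ModifyDate, so you can confirm the values ExifTool parsed out of the filename.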

Listing the contents of a LZMA compressed file?

Is it possible to list the contents of an LZMA-compressed file (.7z) without uncompressing the whole file? Also, can I extract a single file from it?
My problem: I have a 30GB .7z file that uncompresses to >5TB. I would like to manipulate the original .7z file without needing to do a full uncompress.
Yes. Start with XZ Utils. There are Perl and Python APIs.
You can find the file you want from the headers. Each file is compressed separately, so you can extract just the one you want.
Download lzma922.tar.bz2 from the LZMA SDK files page on SourceForge, then extract the files and open up C/Util/7z/7zMain.c. There you will find routines to extract a specific file from a .7z archive. You don't need to extract all the data from all the entries; the example code shows how to extract just the one you are interested in. The same code has logic to list the entries without extracting all the compressed data.
I solved this problem by installing 7-Zip (https://www.7-zip.org/) and using the l (list) parameter. For example:
7z l file.7z
The output has some descriptive information and the list of files in the compressed archive. Then I call this inside Python using the subprocess library:
import subprocess

# run "7z l" and capture its stdout as text
proc = subprocess.Popen(["7z", "l", "file.7z"], stdout=subprocess.PIPE)
output = proc.stdout.read().decode("utf-8")
Don't forget to make sure the 7z program is accessible via your PATH variable. I had to set this up manually on Windows.
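To extract a single file without unpacking everything, the same 7z binary accepts archive-internal paths; the path below is a made-up placeholder:
7z x file.7z -oout docs/report.txt
Note that if the archive was created in solid mode, 7z may still have to decompress the solid block up to that file internally, but only the requested file is written out.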
