OK, so there is a plethora of examples and apps that use ExifTool to turn EXIF data into .jpg filenames. But what if you want to go the other way around? I have a number of files whose names contain an ODBC-style date but which have no EXIF date in their metadata. How can I, with what app or ExifTool command line, set the EXIF date from a filename like this?
2012-02-24_1330073217.jpg
I found a Windows app, EXIFDate by filenamepattern, that does this, but I'm a Mac user. :/
This command will do what you want:
exiftool "-alldates<filename" DIR
where DIR is one or more directory and/or file names.
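If you want to double-check what was written, you can have ExifTool print all the date/time tags afterwards, for example:
exiftool -time:all -G1 -s 2012-02-24_1330073217.jpg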
I'm working on my master's thesis and I have to extract images from about 500 PDF files; some people recommended hexapdf to me for this. I was able to install Ruby and hexapdf, and now I'm kind of stuck getting the images out of the PDFs since I don't have a coding background. Any tips?
Thanks in advance.
I tried the basic command on a single PDF to see what happened, using 'hexapdf images' followed by the PDF name, but the result was 'No such file or directory @ rb_sysopen'.
If you're getting No such file or directory @ rb_sysopen, that signals that the file you are trying to open does not exist. It sounds like this is probably the PDF you are trying to extract images from.
I would check that you are following the help provided in the hexapdf documentation and that the path to your PDF is correct. If the file with your code and the PDF are in the same directory, and you are running your code from that file, then you would do something like:
require 'hexapdf'
doc = HexaPDF::Document.open('my_pdf_document_filename.pdf')
If the file is somewhere else on the machine, it may be easiest to use a full file path instead of a relative one; what that looks like will depend on your system (e.g. /Users/username/thesis/image_processing/files/my_pdf_document_filename.pdf).
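Once you can open a document, pulling the images out is a short Ruby loop. Here is a rough, untested sketch; it assumes hexapdf exposes doc.images.each, image.write, and image.info.extension as described in its API documentation, so double-check those method names against the docs. Run it from the directory that holds your PDFs and it should write each image out next to them:
require 'hexapdf'
Dir.glob('*.pdf').each do |pdf_path|
  doc = HexaPDF::Document.open(pdf_path)
  index = 0
  doc.images.each do |image|
    # e.g. writes my_pdf_document_filename_image_0.jpg (or .png, depending on how the image is stored)
    image.write("#{File.basename(pdf_path, '.pdf')}_image_#{index}.#{image.info.extension}")
    index += 1
  end
end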
I regularly scan in my homework for class. My scanner exports raw JPG files to USB, and from there I can use GIMP to edit and save the files as a PDF. One time-saver I've found is to export my multi-page homework as a .mng file and then use ImageMagick's convert to turn it into a PDF. I do it this way because GIMP automatically merges all layers when exporting directly to a PDF.
convert HW.mng HW.pdf
This works well for individual files, but at the end of every week I can have dozens of files to convert.
I have tried using wildcards in the filenames for convert:
convert *.mng *.pdf
This always runs successfully and never throws an error, but it never produces any PDFs.
Both
convert HW*.mng HW*.pdf
and
convert "HW*.mng" "HW*.pdf"
yield the error
convert: unable to open image `HW*.pdf': Invalid argument @ error/blob.c/OpenBlob/2712.
which I think means the error lies in exporting with a wildcard.
Is there any way to convert all of a specific file type to another using convert? Or should I try using a different program?
You can see this Stack Exchange post; the accepted answer basically does what you want:
for file in *.mng; do convert "$file" "${file/%mng/pdf}"; done
The ${file/%mng/pdf} expansion replaces the trailing mng in each filename with pdf, so HW1.mng becomes HW1.pdf.
For convert in particular, use mogrify (which is part of ImageMagick as well) as suggested by Mark Setchell in a comment. mogrify can be used to edit/convert files in batches. The command for your case would be
mogrify -format pdf -- *.mng
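If you'd rather not have the PDFs dropped next to the .mng files, mogrify also accepts an output directory via -path (a small, untested variation; create the pdfs folder first, its name here is just an example):
mogrify -format pdf -path pdfs -- *.mng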
I'm currently using tags such as exiftool "-FileModifyDate<datetimeoriginal", etc. in Terminal/cmd...
I'm switching from iCloud, and the dates in the metadata are EXIF only (meaning Finder and Windows Explorer just see the date the files were downloaded)..
It's working, but any slo-mo videos that are M4V don't change. I have the originals, which do have the right dates, and was wondering if there is a way to match file names (123.mp4 = 123.m4v) and copy the metadata over... But I also want to do it in batches (since I will be offloading my iPhone every month or so). Thanks!
It will depend upon your directory structure, but your command should be something like this:
exiftool -TagsFromFile %d%f.mp4 "-FileModifyDate<datetimeoriginal" -ext m4v DIR
This assumes the m4v files are in the same directory as the mp4 files. If not, change the %d to the directory path to the mp4 files.
Breakdown:
-TagsFromFile: Instructs exiftool that it will be copying tags from one file to another.
%d%f.mp4: This is the source file for the copy. %d is an exiftool variable for the directory of the current m4v file being processed. %f is the filename of the current m4v file being processed, not including the extension. The thing to remember is that you are processing m4v files that are in DIR, and this argument tells exiftool how to find the source mp4 file for the tag copy. A common mistake is to think that exiftool is finding the source files (mp4 in this case) to copy to the target files (m4v), when exiftool is doing the reverse.
"-FileModifyDate<datetimeoriginal": The tag copy operation you want to do. Copies the DateTimeOriginal tag in the file to the system FileModifyDate.
-ext m4v: Process only m4v files.
Replace DIR with the filenames/directory paths you want to process. Add -r to recurse into sub-directories. If this command is run under Unix/Mac, swap the double quotes for single quotes to avoid shell interpretation (see the example below).
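For example, on a Mac with the mp4 and m4v files sitting together in one folder, a run might look something like this (the folder path is only a placeholder):
exiftool -r -TagsFromFile %d%f.mp4 '-FileModifyDate<datetimeoriginal' -ext m4v /Users/you/Videos/iphone-offload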
I want to extract a specific directory from a huge zip file (>5 GB) that is somewhat corrupted because of a badly maintained (and, for now, unavoidable) build system that creates the zip.
GUI tools such as WinRAR and 7-Zip have no issues extracting the files, but some command-line tools, such as MKS unzip and 7za, fail to extract from the corrupted archive.
After a lot of digging around and trying out many such command line utilities I found out that IZARC successfully extracts files from the archive.
I am running the following command:
IZARCe.exe -e -d -o D:\aHugeZipFile.zip -pD:\temp @"source.txt"
The listing file source.txt contains just one entry:
source/lib/*
which is the only directory in the archive, from where the contents are to be extracted.
But, it is resulting in:
IZArc Command Line Extraction Add-On Version 1.1 (Build: 130)
Copyright(c) 2007 Ivan Zahariev, All Rights Reserved.
http://www.izarc.org contact@izarc.org
Archive File: aHugeZipFile.zip
WARNING: Nothing to do!
I have tried specifying:
/source/lib/*
source/lib/*
source/lib/
source/lib
*source/lib/*
in the listing file, all to no avail! :(
Any pointers on where the error is occurring, and how to fix the issue will be of great help. Thank you in advance!
Using relative or absolute paths in listfiles doesn't appear to work with IZArc. Try using wildcards such as *.*, *.doc, etc. instead of paths in the listfile. Be aware that there appears to be a limit on the folder depth that IZArc will extract to, as well as a tendency to generate CRC errors when files with the same name are present in the same archive, even if they are in different directories.
I would suggest using 7-Zip command-line instead. It can recurse deeply through a file structure without error and can use relative directories and wildcards in its listfiles.
The following 7-Zip command was tested and worked perfectly.
7za x somearchive.zip -o"C:\Documents and Settings\me\desktop\temp_folder\test2" -ir@source.txt -aoa -scsWIN
The source.txt file may contain a combination of relative paths and/or wildcards on separate lines, such as Output/, Folder2/, *, or *.doc.
In the command above: x (extract with full paths), -ir (include filenames, recurse subdirectories), -aoa (overwrite existing files without prompting), -scsWIN (set the charset for list files). You may need to adjust these switches for your situation.
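Untested, but adapted to the paths from the question, that would look something like:
7za x D:\aHugeZipFile.zip -oD:\temp -ir@source.txt -aoa -scsWIN
with source.txt containing just the single line source/lib/*.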
Is it possible to list the contents of an LZMA file (.7z) without uncompressing the whole file? Also, can I extract a single file from it?
My problem: I have a 30 GB .7z file that uncompresses to more than 5 TB. I would like to manipulate the original .7z file without needing to do a full decompression.
Yes. Start with XZ Utils. There are Perl and Python APIs.
You can find the file you want from the headers. Each file is compressed separately, so you can extract just the one you want.
Download lzma922.tar.bz2 from the LZMA SDK files page on SourceForge, then extract the files and open up C/Util/7z/7zMain.c. There you will find routines to extract a specific file from a .7z archive. You don't need to extract all the data from all the entries; the example code shows how to extract just the one you are interested in. The same code has logic to list the entries without extracting all the compressed data.
I solved this problem by installing 7-Zip (https://www.7-zip.org/) and using the l parameter. For example:
7z l file.7z
The output has some descriptive information and the list of files in the compressed archive. Then I call this from inside Python using the subprocess library:
import subprocess
# run "7z l" and capture its standard output as text
process = subprocess.Popen(["7z", "l", "file.7z"], stdout=subprocess.PIPE)
listing = process.stdout.read().decode("utf-8")
Don't forget to make sure the 7z program is accessible through your PATH variable. I had to set this up manually on Windows.
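To pull out just one file, 7z's x command also accepts the path of the file inside the archive, so something along these lines should work (untested; the inner path and output folder are only examples):
7z x file.7z path/inside/archive/wanted_file.dat -oextracted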