How to change Media Created Date with ExifTool? - Windows

I have some MP4 videos that don't have Media Created date and time because they were recorded with Instagram camera. I want to set the date and time and I found out that I can do that with Exiftool software. I know that the software works by typing command lines, but I don't know where I should type what. I searched on Google and didn't find any helpful results.
I downloaded the software and got the "exiftool(-k).exe" file. What should I do now? I have no idea how it works. I hope somebody can write simple steps just to set that Media Created date.

First, you will probably want to rename exiftool(-k).exe to just exiftool.exe and place it somewhere in your PATH (see install exiftool - Windows). There is also Oliver Betz's alternative ExifTool build for Windows, which includes an installer and is a bit more security-friendly; see this post on the exiftool forums. For those that use Chocolatey, the Chocolatey exiftool package will add exiftool to the PATH and is a well-maintained package.
Then you will want to use one of these commands. In the case of your example filename, the file appears to have been named for the time it was taken, i.e. YearMonthDay_HourMinutesSeconds. In that case you can simply use this command (see exiftool FAQ #5):
exiftool -api QuickTimeUTC "-CreateDate<Filename" 20181223_000542.mp4
This will work correctly as long as the video was taken in the same time zone as the computer you are currently using. If not, you will have to add the time zone like this:
exiftool -api QuickTimeUTC "-CreateDate<${Filename}-04:00" 20181223_000542.mp4
This is because the CreateDate tag for MP4 files is supposed to be UTC and Windows properties will read it as such. Mac Finder will also correctly adjust from UTC. With the -api QuickTimeUTC option, exiftool will automatically adjust the time to UTC.
If you need to set the time to something different from what the filename gives, then you would use this, adding the time zone if needed:
exiftool "-CreateDate=2018:12:23 00:05:42" 20181223_000542.mp4
These commands create backup files. Add the -overwrite_original option to suppress the creation of backup files. If on a Mac, the slower -overwrite_original_in_place option can be used to preserve any MDItem/XAttr data. Add the -r (-recurse) option to recurse into subdirectories. If these commands are run under Unix/Mac/PowerShell, swap the double/single quotes to avoid shell variable interpretation.
You can process as many files and/or directories as you can fit on the command line, so if you wanted to process all the files in c:\Dir1 and C:\Dir2, you would just list both of them at the end of the command: c:\Dir1 C:\Dir2
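For example, a single command combining these options might look like this, assuming the files in both directories are named for their timestamps as above (the directory names are just placeholders):
exiftool -api QuickTimeUTC -r -overwrite_original "-CreateDate<Filename" c:\Dir1 C:\Dir2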

Related

Change photo date taken to value in date modified

I came across an old set of photos taken in the '90s.
These photos have two dates in the EXIF: a date modified (the correct date) and a date taken (a date somewhere last year). When importing them into my photo database, the wrong date gets associated.
I'd like to run an apple automator or applescript to copy the modified date into the taken date.
Anyone willing to help me with this?
Please make a backup first, and then just run the following commands in a directory that is a COPY of your files.
You can do that with exiftool, which you can install using Homebrew with:
brew install exiftool
I believe the command you want in Terminal is:
exiftool "-EXIF:DateTimeOriginal<FileModifyDate" image.jpg
Just do one file for now, and check before and after with:
exiftool image.jpg
If it all looks good, you can do all the files with:
exiftool "-EXIF:DateTimeOriginal<FileModifyDate" *.jpg

How to extract date from filename in batch, and unzip to multiple directories using batch/7z

I am trying to code a script to automatically process some of our daily ftp files.
I have already coded the files to download from the source ftp using WinSCP and calling it in a .bat file, and would ideally like to call it within the same bat. Scripting Language does not matter, as long as I can run/call it from the original batch.
I need a script that will extract the date from a filename and unzip the contents into corresponding folders. The source file is delivered automatically daily via FTP, and the filename is:
SOFL_CLAIM_TC201702270720000075.zip
The 20170227 section of the filename is the date that I would like to extract.
The contents of the .zip include two types of content, multiple PDFs and a .dat file.
For the supplied date of 20170227, the pdfs need to get extracted to a folder following the format:
\%root%\FNOIs\2017\02-Feb\02-27-2017
At the same time, the .dat file needs to get extracted to multiple folders following the format:
\%root%\Claim Add\2017 Claim Add\02-2017
\%root2%\vendorFTP\VendorFolder
After extracting, I need to move the source zip to
\%root%\Claim Add\2017 Claim Add\02-2017
What is the best way of accomplishing all of this?
I am assuming it would be the for /f batch command, but I am new to batch coding and cannot figure out how to start it from scratch.
I also have 7zip installed, but do not understand how to use the command-line options.
You have asked for a lot in one question, and not shown any code or demonstrated effort on your part.
For the first part, once you have the filename in a variable:
set FILENAME=SOFL_CLAIM_TC201702270720000075.zip
You can get the date part with:
echo %FILENAME:~13,-14%
The syntax ~13,-14 means "skip the first 13 characters and drop the last 14 characters." That should leave you with just the date.
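To get you started, here is a minimal, untested sketch that splits the extracted date into parts and hands one of your target folders to 7-Zip. It assumes 7z.exe is on your PATH and that %root% is already set; the month-name folder for the FNOIs path is left for you to build:
@echo off
set "FILENAME=SOFL_CLAIM_TC201702270720000075.zip"
set "FDATE=%FILENAME:~13,-14%"
set "YYYY=%FDATE:~0,4%"
set "MM=%FDATE:~4,2%"
set "DD=%FDATE:~6,2%"
rem extract only the .dat file into the Claim Add folder
7z x "%FILENAME%" -o"%root%\Claim Add\%YYYY% Claim Add\%MM%-%YYYY%" *.dat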
When you integrate that into your script, Show Your Code

How to convert images from TIFF to JPG preserving the comments and tags

I am using the Preview feature (that comes with OS X El Capitan) to convert files from TIFF format into JPG, for example. I expected the export process to include the original comments, but it doesn't (the same applies to the tag fields).
The generated JPG file has no comment.
The compression and the image-format change work, but metadata such as comments or tags is not exported.
Any suggestion or workaround for how to include that information? I need to convert about 500 images, so manual copy/paste doesn't work for me.
Updated Answer
In the light of your comments, I think the best way forward is to try and identify how/where the comments are stored for each platform (Windows vs macOS) and then to decide which method you want to use going forward.
macOS Finder/Spotlight comments will not be legible on Windows, so if you want Windows compatibility, you need to standardise on JPEG or EXIF comments.
I recommend using exiftool which you can install with homebrew, using:
brew install exiftool
Then I suggest you try extracting the comments from your files to see how/where they are stored:
exiftool -a image.jpg
will show you all tags in image.jpg. Your comments may be under:
comment - which is the JPEG comment, or
EXIF:UserComment - which is the EXIF comment
If you find your comments in the JPEG or the EXIF section, you can extract just the comments with:
exiftool -comment image.jpg # extract JPEG comment
exiftool -EXIF:UserComment image.jpg # extract EXIF UserComment
Add the option -s3 to suppress the field-names in the above to save having to parse them out.
Likewise, you can set the comments with:
exiftool -comment="FUNKY JPEG COMMENT" image.jpg # set JPEG comment
exiftool -EXIF:UserComment="FUNKY EXIF USER COMMENT" image.jpg # set EXIF UserComment
You can also extract the EXIF user comments to a CSV with:
exiftool -EXIF:UserComment -csv *.jpg
SourceFile,UserComment
a.jpg,FUNKY EXIF:UserComment
b.jpg,b FUNKY EXIF:UserComment
You can also apply comments from a CSV.
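The import direction uses the same -csv option, this time given a filename; the CSV needs a SourceFile column like the output above (comments.csv is just an example name):
exiftool -csv=comments.csv -overwrite_original *.jpg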
You should also be able to extract macOS/Spotlight/Finder comments using the script in my main answer:
$HOME/macOSGetFinderComment "/Users/someone/someFile.tif"
Original Answer
I would suggest you try the following using ImageMagick.
First, use the Finder, or any other tool you are familiar with, to make a copy of your photos including the entire directory structure to some new place where we cannot damage your existing photos. So, let's say you copy (NOT move) the entire tree of TIFs to a subdirectory called "NEW" inside your HOME directory.
Then start the Terminal and change directory to "NEW":
cd NEW
Easy Method
If all the TIFs are in a single directory or two, just use mogrify:
mogrify -format jpg *.tif
Harder Method
If the TIF files are in multiple directories, you will need to work a bit harder. Inside Terminal copy and paste this:
find . -name '*.tif' -exec sh -c 'new="${1%.tif}.jpg"; convert "$1" "$new"' _ {} \;
That starts looking in the current directory (i.e. "NEW") for files named "*.tif". When it finds one, it starts a new shell (sh), passing it the filename of the TIF as $1. The shell works out the new filename by replacing the trailing "tif" with "jpg" and invokes ImageMagick convert to do the conversion.
As regards the Finder/Spotlight comments, here is a little script to get the Finder comment of a file:
#!/bin/bash
# macOSGetFinderComment
# Pass an absolute path to the file!
file=$1
osascript<<EOF
tell application "Finder" to get comment of item POSIX file "$file"
EOF
And here is one to set the Finder/Spotlight comment:
#!/bin/bash
# macOSSetFinderComment
# Pass an absolute path to the file!
file=$1
comm=$2
osascript<<EOF
tell application "Finder" to set comment of item POSIX file "$file" to "$comm"
EOF
So, I would save those 2 scripts in your HOME directory and then make them executable with:
cd
chmod +x macOS*FinderComment
Then save this file in your HOME directory under $HOME/CopyComments:
#!/bin/bash
shopt -s nullglob
for f in "$PWD"/*.tif; do
    comment=$($HOME/macOSGetFinderComment "$f")
    new="${f%.tif}.jpg"
    echo "Setting comment of $new to $comment"
    $HOME/macOSSetFinderComment "$new" "$comment"
done
and make it executable with:
chmod +x $HOME/CopyComments
and run it with:
cd NEW
$HOME/CopyComments
I have also posted this problem in the Apple Community; here is the solution proposed by VikingOSX. It is a big piece of code, so it is better to download it from here or directly from the Apple Community link mentioned. Here is a description of the solution, as given in the original post:
Prompts for a source folder, and a destination folder.
Duplicates folder hierarchy from source to destination folder.
Selects all TIFF images in the folder hierarchy and converts them to JPEG.
For sub-folders and their files, transfers the original Finder comments, color tags and tag name(s) to the destination hierarchy.
The compression level for the JPG file is high; it can be changed to medium or low in the line: save this_img as JPEG in outfile_name with compression level medium with icon
Limitation: the source folder can only contain one level of sub-folders. Ignoring this will produce unexpected results.
Additional Comments
Uses a with timeout clause to allow for a large number of files. AppleScript does not yet support Finder tag names, so this script uses AppleScript/Objective-C to get and set those tag name(s). Due to this extension, the script requires AppleScript 2.4 and must be run on OS X 10.10 or later.
Due to the AppleScript/Objective-C code, the script cannot be run interactively as a script/script bundle without using the control+command+R keyboard shortcut. A test is made when the script starts, and will warn appropriately. It is best to save the script as an application to avoid this keyboard shortcut altogether.
Usage
Save the script, then copy and paste the file contents into Script Editor (you can find the application in the Utilities folder under the name Script Editor), compile, and save the file with the format Application; then double-click it to run the script application.
I have tested the script on a MacBook Air (2010) running OS X El Capitan, on a folder with 884 TIFF files totaling 2.25 GB; it takes about 18 minutes to convert them into JPG files with medium compression level. The generated files contain the tags and comments from the original TIFF files.
Disclaimer
Comments and tags generated on one platform (for example Windows or macOS) are not visible on the other. Tags created in Windows are treated in macOS as keywords (Command+i to view them), but comments generated in Windows are not visible in macOS. This is a general incompatibility problem that applies to photos in any format (for example TIFF or JPG).
EDIT (updated solution for solving cross-platform problem with comments)
Taking the idea from @MarkSetchell, I adapted the original script to at least solve the cross-platform problem from macOS to Windows, i.e. a comment made on macOS can be seen on Windows. The idea is to use EXIF metadata. The AppleScript then invokes exiftool via a shell script:
set uxFilepath to POSIX path of NewIMG
do shell script "/usr/local/bin/exiftool -overwrite_original -EXIF:UserComment=\"" & cmtstr & "\" " & uxFilepath
Windows treats the EXIF UserComment metadata as a regular file comment. Now the same comment on the TIF file will be on the JPG, and because the comments were copied into EXIF metadata, the same information is visible under Windows. The same idea can be used for other file properties, as long as Windows/macOS read them.
The EXIF metadata on macOS can be viewed from the command line as @MarkSetchell suggests, but also from the Finder: Command+O (to launch the Preview app), then Command+I (to launch the inspector), then click on the "More Info" tab, then the EXIF tab.
The opposite direction requires a script that does the reverse, i.e. copies the EXIF comment (read with exiftool) into the macOS comment. I have verified that in that case the Windows comment appears under the label XPComment. The script uses UserComment, but it works using XPComment as the label in both directions.
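For that reverse direction, a minimal, untested sketch along the lines of the CopyComments script above, reusing the macOSSetFinderComment helper:
#!/bin/bash
# copy the EXIF UserComment of each JPG into its macOS Finder comment
for f in "$PWD"/*.jpg; do
    comment=$(exiftool -s3 -EXIF:UserComment "$f")
    $HOME/macOSSetFinderComment "$f" "$comment"
done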

using output template error via youtube-dl

I admit, I'm new to Linux, but I pieced together the following in the Ubuntu terminal to download all of the videos from a YouTube channel:
youtube-dl -o "/media/ubuntu/3A3A9F353A9EED5F/%(uploader)s/%(autonumber)s.%(title)s.%(ext)s" --download-archive ~/.mydownloads -citw ytuser:DirectFix
However, I keep getting this error:
youtube-dl: error: using output template conflicts with using title, video ID or auto number
What do I need to do so I can download the files straight to a separate internal drive, rename the files, and keep track of the videos I've already downloaded?
Your options -citw do not make any sense. Simply remove them (maybe leave -i) and the download will work.
In detail:
-c forces youtube-dl to always resume downloads. By default, youtube-dl already resumes downloads, so at best this option is superfluous. At worst, you may be forcing youtube-dl to resume a download in another quality, which will result in a broken video file.
-i makes youtube-dl continue if it cannot download a video from the playlist. Unlike the other options, it is regularly useful. Be aware that you may miss errors though, if you need a complete download.
-t is the equivalent of -o "%(title)s-%(id)s.%(ext)s". As such, it is causing the immediate error at hand, since you are passing in two different output templates and youtube-dl doesn't know which one to pick.
-w forces youtube-dl to never overwrite an existing file. This is useful for metadata files, which you don't use in the first place. Even then, most users will want the updated information.
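Putting that together, your original command with the conflicting options removed (keeping only -i) becomes:
youtube-dl -i -o "/media/ubuntu/3A3A9F353A9EED5F/%(uploader)s/%(autonumber)s.%(title)s.%(ext)s" --download-archive ~/.mydownloads ytuser:DirectFix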

Retrieving latest file in a directory from a remote server

I was hoping to crack this myself, but it seems I have fallen at the first hurdle, because I can't make head nor tail of the other options I've read about.
I wish to access a database file hosted as follows (hhsuite_dbs is a folder containing several databases):
http://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/pdb70_08Oct15.tgz
Periodically, they update these databases, so I want to download the latest version. My plan is to run a bash script via cron, most likely monthly (though I've yet to tackle the scheduling aspect of the task).
I believe the database is refreshed fortnightly, so if my script runs monthly I can expect there to be a new version. I'll then be running downstream programs that require the database.
My question, then, is how do I go about retrieving this (and for a little more finesse I'd perhaps like to be able to check whether the remote file has changed in name or content, to avoid a large download if unnecessary)? Is the best approach to query the name of the file, or its last-modified date (given that they may change the naming syntax of the file too)? To my naive brain, some kind of globbing on "pdb70" (something I think I can rely on to be in the filename), pulled down with wget, was all I had come up with so far.
EDIT: Another confounding issue that has just occurred to me is that the file I want won't necessarily be the newest in the folder (as there are other types of databases there too); rather, I need the newest version of, in this case, the pdb70 database.
Solutions I've looked at so far have mentioned weex, lftp, and curlftpls, but all of these seem to require logins/passwords for the server, which I don't have/need if I just download it via the web. I've also seen mention of rsync, but on a cursory read it seems like people are steering clear of it for FTP uses.
Quite a few barriers in your way for this.
My first suggestion is that, rather than getting at the filename itself, you simply mirror the directory using wget (which should already be installed on your Ubuntu system) and let wget figure out what to download.
base="http://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/"
cd /some/place/safe/
wget --mirror -nd "$base"
And new files will be created in the "safe" directory.
But that just gets you your mirror. What you're still after is the "newest" file.
Luckily, wget sets the datestamp of files it downloads, if it can. So after mirroring, you might be able to do something like:
newestfile=$(ls -t /some/place/safe/pdb70*gz | head -1)
Note that this fails if ever there are newlines in the filename.
Another possibility might be to check the difference between the current file list and the last one. Something like this:
#!/bin/bash
base="http://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/"
cd /some/place/safe/
wget --mirror -nd "$base"
rm index.html* *.gif # remove debris from mirroring an index
ls > /tmp/filelist.txt.$$
if [ -f /tmp/filelist.txt ]; then
    echo "Difference since last check:"
    diff /tmp/filelist.txt /tmp/filelist.txt.$$
fi
mv /tmp/filelist.txt.$$ /tmp/filelist.txt
You can parse the diff output (man diff for more options) to determine what file has been added.
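For instance, added lines in the default diff output are prefixed with "> ", so inside the script you could pipe the same diff through sed/grep to pick out a new pdb70 archive:
diff /tmp/filelist.txt /tmp/filelist.txt.$$ | sed -n 's/^> //p' | grep '^pdb70'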
Of course, with a solution like this, you could run your script every day and hopefully download a new update within a day of it being ready, rather than a fortnight later. The nice thing about --mirror is that it won't download files that are already on hand.
Oh, and I haven't tested what I've written here. That's one monstrously large file.
