Mac OS Terminal Find Specific Keyword image and Copy - terminal

I have tons of photos, and I assign a keyword called "background" to the photos I want to use as my background.
My photos are located in a folder called "Photos"; that folder has lots of subfolders.
Is there a terminal command that finds all photos in the "Photos" folder that have the keyword "background" and copies those photos to, let's say, "Folder B"?
I do have ExifTool, by the way; that might help.
Ralph
Edit:
'Achtergrond' means background.
I have now tried: exiftool -o ~/test/MapA -if '$Subject=Achtergrond' ~/test/MapB
Also tried with this:
-if '$Subject eq "Achtergrond"'
exiftool -G1 -a -s -api MDItemTags=1 File.jpg | grep Achtergrond
[MacOS] MDItemKeywords : Achtergrond
[XMP-dc] Subject : Achtergrond
exiftool File.JPG | grep Achtergrond
Subject : Achtergrond
and I tried:
exiftool -o ~/test/MapA -if '$XMP-dc:Subject eq "Achtergrond"' ~/test/MapB
1 directories scanned
0 image files read
What am I missing here?

The basic command to do this with exiftool would be
exiftool -o '/path/to/Folder B/' -if '$Keywords=~/background/i' /path/to/Photos/
You do need to check where your keywords are actually stored. Depending upon which program you used to tag them, the background tag might be stored in XMP:Subject, IPTC:Keywords, or MDItemKeywords. Maybe even MDItemUserTags; I'm not overly familiar with how the Mac system tags work.
I'd suggest running
exiftool -G1 -a -s -api MDItemTags=1 FILE.JPG
on a file that you know contains the "background" tag, and looking in the output for the tag whose value contains "background". If it's something other than Keywords, then replace Keywords in the above command with that tag name.
Breakdown of the above command:
-o '/path/to/Folder B/': This tells exiftool to copy files to the path '/path/to/Folder B/'. The trailing slash is needed if the output directory doesn't already exist, as otherwise exiftool will just create a file named "Folder B". Quotes are needed around the path if it contains spaces; otherwise the spaces need to be escaped with backslashes.
-if '$Keywords=~/background/i': This performs a case-insensitive regex check on the Keywords tag to see if it contains "background". If it does, the command is executed on that file; otherwise the file is skipped.
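Since your diagnostic output above shows the keyword in XMP-dc:Subject, here's a sketch adapted to that tag (the added -r makes exiftool recurse into the subfolders of Photos; the paths are placeholders):
exiftool -r -o '/path/to/Folder B/' -if '$XMP-dc:Subject=~/Achtergrond/i' /path/to/Photos/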

Related

pandoc to make each directory a chapter

I have a lot of markdown files in various directories, each with the same format (# title, then ## sub-title).
Can I make the --toc respect the folder layout, so that each folder name becomes a chapter name and each markdown file becomes the content of that chapter?
So far pandoc totally ignores my folder names; it works the same as putting all the markdown files in a single folder.
My approach to this is to create index files in each folder with first level heading and downgrade headings in other files by one level.
I keep the default structure in Git, with first-level headings in the files, but when I want to generate an ebook using pandoc, I modify the files via an automated Linux shell script. Afterwards, I revert the changed files via Git.
Here's the script:
find ./docs/*/ -name "*.md" ! -name "*index.md" -exec perl -pi -e "s/^(#)+\s/#$&/g" {} \;
./docs/*/ means I'm looking only for files inside subfolders of the docs directory, like docs/foo/file1.md and docs/bar/file2.md.
I'm also interested only in *.md files, excluding *index.md files.
In the index.md files (which I usually name 00-index.md so they appear first), I put a first-level heading #, and because those files are excluded by the find portion of the script, their headings aren't downgraded.
Next, there's a Perl search-and-replace command with the regular expression s/^(#)+\s/#$&/g, which finds every line starting with one or more # and adds another # to it.
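To see what the substitution does to a single heading, here's a quick hypothetical one-liner (not part of the script):
$ echo '## Sub-title' | perl -pe 's/^(#)+\s/#$&/g'
### Sub-title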
In the end, I run pandoc with --toc-depth=2 so the table of contents contains only first- and second-level headings.
pandoc ./docs/**/*.md --verbose --fail-if-warnings --toc-depth=2 --table-of-contents -o ./ebook.epub
To revert all changes made to files, I restore changes in the Git repo.
git restore .
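Putting the whole cycle together, a minimal sketch (assumes bash, and the docs layout described above):
#!/bin/bash
shopt -s globstar   # let ./docs/**/*.md match recursively
# downgrade headings in all non-index chapter files
find ./docs/*/ -name "*.md" ! -name "*index.md" -exec perl -pi -e 's/^(#)+\s/#$&/g' {} \;
# build the epub
pandoc ./docs/**/*.md --verbose --fail-if-warnings --toc-depth=2 --table-of-contents -o ./ebook.epub
# revert the heading downgrade
git restore .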

Mac - List All Image Paths (Names) in a Directory and Its Subdirectories

I'd like to list all image names in a directory and its subdirectories using the terminal on a Mac. I used the command below; it listed everything, including folder names, so it doesn't solve my problem.
ls -R /Users/samuel/Apps/assets/images > file_names.txt
Thanks for your time and help.
If you know the exact extension of what constitutes an "image" file, then use this example below for jpg files:
ls -R /Users/samuel/Apps/assets/images | grep "\.jpg$" > file_names.txt
For a broader definition of "image" file try this:
mdfind image -onlyin /Users/samuel/Apps/assets/images > file_names.txt
The first has to be run once for each known image type. The second may include any file with "image" in its Spotlight metadata.
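If you want full paths for several known extensions in one pass, here's a find sketch (the extension list is only an assumption; extend it as needed):
find /Users/samuel/Apps/assets/images -type f \( -iname '*.jpg' -o -iname '*.png' -o -iname '*.gif' \) > file_names.txt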

How to exclude files using ls?

I'm using a python script I wrote to take some standard input from ls and load the data from the files described by those paths. It looks something like this:
ls -d /path/to/files/* | python read_files.py
The files have a certain name structure based on what data they have in them but are all stored in the same directory. The files I want to use have the name structure A<fileID>_###.txt (where ### is always some 3 digit number). I can accomplish getting only the files that start with A by just changing what I have above slightly to ls -d /path/to/files/A*. HOWEVER, some files have a suffix flag called B (so the file looks like A<fileID>_###B.txt) and I DO NOT want to include those.
So, my question is, is there a way to exclude those files that end in ...B.txt (or a way to only include files that end in a number)? I thought about something to the effect of:
ls -d /path/to/files/A*%d.txt
to only include files that end in a number followed by the file extension, but couldn't find any documentation on anything of the sort.
You could try this: ls A*[^B].txt
With extended globbing:
shopt -s extglob
ls A!(*B).txt
(Note that !(*B) has to cover the whole stem: with ls A*!(B).txt, the * can swallow the B and !(B) can then match the empty string, so the B files would slip through.)
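Either form can feed your original pipeline directly. Here's a sketch using a character class, which also matches your "ends in a number" requirement:
ls -d /path/to/files/A*[0-9].txt | python read_files.py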

Finding and Removing Unused Files Through Command Line

My website's file structure has gotten very messy over the years from uploading random files to test different things out. I have a list of all my files, such as this:
file1.html
another.html
otherstuff.php
cool.jpg
whatsthisdo.js
hmmmm.js
Is there any way I can input my list of files via the command line, search the contents of all the other files on my website, and output a list of the files that aren't mentioned anywhere in my other files?
For example, if cool.jpg and hmmmm.js weren't mentioned in any of my other files then it could output them in a list like this:
cool.jpg
hmmmm.js
And then any of those other files mentioned above aren't listed because they are mentioned somewhere in another file. Note: I don't want it to just automatically delete the unused files, I'll do that manually.
Also, of course I have multiple folders so it will need to search recursively from my current location and output all the unused (unreferenced) files.
I'm thinking the command line would be the fastest/easiest way, unless someone knows of another. Thanks in advance for any help you guys can give!
Yep! This is pretty easy to do with grep. In this case, you would run a command like:
$ while read -r orphan; do
    echo "Checking for presence of ${orphan} in present directory..."
    grep -rl "$orphan" .
  done < orphans.txt
And orphans.txt would look like your list of files above, one file per line. You can add -i to the grep above if you want to match case-insensitively. And you would want to run that command in /var/www or wherever your distribution keeps its webroots. If a "Checking for..." line has no matches printed below it, then nothing references that file.
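If you'd rather get just the list of unused files instead of scanning the output by eye, here's a small sketch along the same lines (assumes orphans.txt holds one name per line; -F treats the names as fixed strings rather than regexes):
while read -r f; do
  # print the name only when no file in the tree mentions it
  grep -rqF "$f" --exclude=orphans.txt . || echo "$f"
done < orphans.txt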

Convert all image links in a website directory (on mac)

My task is to take a directory of website files, including html, css, and non-text image files, and change all image paths from relative paths to a universal CDN path. This is on a Mac.
My starting point is something like this:
sed -i '' 's/old_link/new_link/g' *
but I want it to act only on css,html files, and to recurse through any subdirectories.
thanks
Try using find. Something like:
find . -name '*.css' -exec sed -i '' 's/old_link/new_link/g' {} ';'
will find all the css files in your current directory and below it and pass each one to sed. The {} stands in for the name (and location) of each file that find finds. Don't omit the quote marks around the -name pattern or around the final ;
Then repeat for html files.
As ever, for the finer points of syntax consult man, or google for the find documentation.
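To cover both file types in one pass, here's a sketch (the escaped parentheses group the two -name tests; the empty '' after -i is the BSD sed syntax used on macOS):
find . \( -name '*.css' -o -name '*.html' \) -exec sed -i '' 's/old_link/new_link/g' {} ';'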
