Can files in a folder be deleted if their names don't contain a specific word, using Power Automate?

I currently have a folder into which photos are dumped. I am looking to delete all files whose names do not contain a specific word (a word that is present in the names of all the files I want to keep).
I am hoping this can be done with Power Automate, as there are hundreds of photos and I want to make the process more efficient.
I look forward to learning from somebody!
Update: the flow seems to have run successfully.

You could use a Get files (properties only) action and a Filter array action afterwards. In the Filter array you can check whether the Name field does not contain your keyword.
After that you can loop through the results of the Filter array and delete the files based on the {Identifier} field.
Below is an example of that approach.
Test it properly, because you are deleting files. If something goes wrong, you can restore the files from the first or second stage recycle bin ;)
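For the Filter array condition specifically, the advanced-mode expression would look something like the line below. This is only a sketch: KEYWORD is a placeholder, and depending on your connector output the field may be {Name} or {FilenameWithExtension}.

    @not(contains(item()?['{Name}'], 'KEYWORD'))

Everything the Filter array lets through (i.e. every file whose name does not contain the keyword) then goes into the Apply to each loop with the Delete file action.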

Related

Find and copy multiple pictures using PowerShell

I have a list in Excel with picture names I have to find. Is it in any way possible to feed the list to PowerShell, find the pictures, and copy them out into one folder?
The list contains about 1310 pictures, and there is a total of 44k pictures spread across a huge number of folders; I think it was around 500k folders.
(Screenshot: how the image software has built the folder structure.)
(Screenshot: the exact number of files and folders; the last 14k pictures are in another main folder and are not relevant to the list.)
Your question is very broad, and I can only give a very general answer. This is clearly scriptable, but it might take a lot of learning and a lot of effort.
First, you might want to consider what the relationship is between the way pictures are named in the Excel sheet and the way the picture files are named in the folders.
If they follow the same naming rules, that gets one big problem out of the way.
Next, you need to learn how to export an Excel table to a CSV file.
Then you need to learn how to use Import-Csv and feed the stream into a pipeline.
Then you need to feed the output of the pipeline into a foreach loop that contains a Copy-Item cmdlet.
If there is a single master folder that contains all the other folders that contain pictures, then you are in luck. Learn the -Path, -Recurse, and -Include parameters.
Perhaps someone who has already dealt with the same problem can provide you with code. But it may not do what you really want.
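Put together, those steps might look roughly like the sketch below. It is only an outline under a few assumptions: the CSV has a column called Name holding each picture's file name, all picture folders live under one master folder, and the paths are placeholders.

    # Sketch: copy every picture named in the CSV list into one destination folder.
    $masterFolder = 'D:\Pictures'           # root containing all the picture folders (placeholder)
    $destination  = 'D:\FoundPictures'      # folder to copy the matches into (placeholder)
    $list         = Import-Csv 'C:\temp\picture-list.csv'   # exported from the Excel sheet

    # Index every file under the master folder once, instead of rescanning
    # hundreds of thousands of folders for each of the 1310 names.
    $allFiles = Get-ChildItem -Path $masterFolder -Recurse -File

    foreach ($row in $list) {
        # Compare against the file name without extension; use .Name instead
        # if the list already includes extensions.
        $hits = $allFiles | Where-Object { $_.BaseName -eq $row.Name }
        if ($hits) {
            $hits | Copy-Item -Destination $destination
        }
        else {
            Write-Warning "Not found: $($row.Name)"
        }
    }

Indexing the tree once and filtering in memory is what makes this workable at 44k files; calling Get-ChildItem per list entry would be far slower.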

Is it possible to automatically add a prefix taken from the folder name to file names? (Windows)

I work as a technical photographer and take a lot of photos of particular parts. Each part gets a folder assigned, and then I copy the photos to that folder.
I would like the file names of the photos to get a prefix, which is the folder name. Example:
I take 20 photos of part A1. I copy those 20 photos from the SD card to my PC, into a previously created folder named "A1". I would like those 20 files to have names as follows:
A1(1)
A1(2)
A1(3)
[...]
A1(20)
Is it possible to make this automatic, or to do it with one click?
Thanks in advance
If you don't need to preserve the original numbering, it's as simple as selecting all the files in Explorer, pressing F2 (for rename) and typing in the new name. The files will automatically get non-colliding names in the form of "Name (number)".
This respects the ordering you have selected in Explorer, so if you want the index to increment from older to newer files, for example, just sort the files by date ascending.
This can also be used to preserve the original numbering, but only if there are no gaps and the numbers start from 1. If you sort the files by name and do the rename trick, they will still be ordered the same as before. Any gaps in the old numbering will disappear in the new file names, though.
One more gotcha is that this only works if all of the files have the same extension. If some are jpg and others png, for example, each extension will get its own numbering.
If this isn't good enough, you'll either have to use a script, which is a bit more advanced, or some tool that helps with batch renaming. My favourite has been Total Commander for a long time - in TC, this is as simple as selecting the files you want to rename, pressing Ctrl+M, and changing the file name to something like A1 ([N]).
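If you do end up taking the script route, a minimal PowerShell sketch of the same rename could look like this (the folder path is a placeholder; try it on a copy of the folder first):

    # Sketch: rename every file in the folder to "<FolderName> (<n>).<ext>",
    # numbering in name order, with one sequence shared across all extensions.
    $folder = 'C:\Photos\A1'                  # the part folder (placeholder)
    $prefix = Split-Path -Path $folder -Leaf  # "A1"

    $i = 0
    Get-ChildItem -Path $folder -File | Sort-Object Name | ForEach-Object {
        $i++
        Rename-Item -Path $_.FullName -NewName ("{0} ({1}){2}" -f $prefix, $i, $_.Extension)
    }

Unlike the Explorer trick, this keeps one running number even when the folder mixes jpg and png files.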

How can I find duplicate file names in Windows?

I am organizing a large Windows folder with many subfolders (which have subfolders of their own, etc.), in which files have been saved multiple times in different locations. Can anyone figure out how to identify all files with duplicate names across multiple directories? Some ways I am thinking about include:
A command, or series of commands, that could be run in the command line (cmd). Perhaps DIR could be a start...
Possibly a tool that comes with Windows
Possibly a way to specify in search to find duplicate filenames
NOT a separate downloadable tool (those could carry unwanted security risks).
I would like to be able to know the directory paths and filename to the duplicate file(s).
Not yet a full solution, but I think I am on the right track; further comments would be appreciated:
From CMD (start, type cmd):
DIR "C:\mypath" /S > filemap.txt
This should generate a recursive list of files within the directories.
TODO: Find a way to have filenames on the left side of the list
From outside cmd:
Open filemap.txt
Copy and paste the results into Excel
From Excel:
Sort the data
In the next column, add logic to compare whether the current filename equals the previous one
Filter on that column to identify all duplicates
To see where the duplicates are located:
Search filemap.txt for the duplicate filenames identified above and note their directory location.
Note: I plan to update this as I get further along, or if a better solution is found.
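As a possible shortcut, PowerShell also ships with Windows (so it is not a separate download) and can do the grouping and comparison in one pass. A rough sketch, with the path as a placeholder:

    # Sketch: list every file name that occurs more than once under C:\mypath,
    # together with the directories it appears in.
    Get-ChildItem -Path 'C:\mypath' -Recurse -File |
        Group-Object -Property Name |
        Where-Object { $_.Count -gt 1 } |
        ForEach-Object { $_.Group } |
        Select-Object Name, DirectoryName |
        Sort-Object Name |
        Out-File duplicates.txt

This produces roughly the same result as the DIR/Excel route: file names on the left, their folders next to them, duplicates grouped together.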

Process 100K image files with bash

Here is the script to optimize JPG images: https://github.com/kormoc/imgopt/blob/master/imgopt
There is a CMS with image files (not mine).
I assume there is a complicated structure of subdirectories, and the script just recursively finds all image files in a given folder.
The question is how to mark already-processed files so that on the next run the script won't touch them and will just skip them.
I don't know when the owners will want to add new files and have them processed. I also think renaming the files is not a good choice.
I was thinking about a hash table or associative array that would be filled from a text file at startup. But is it OK to have an array of 100K items in bash? That seems complicated for a script.
Any other ideas about optimization are also welcome.
I think the easiest thing to do is just to output a marker file with a similar name for each processed image file.
For example, after image1.jpg has been processed, it would have an empty companion file with a similar name, e.g. .image1.jpg.processed.
Then, when your script runs, for the current image NAME.EXT it just checks whether a file .NAME.EXT.processed exists. If the file doesn't exist, you know the image still needs to be processed. There are no memory issues and no hash table needed, granted you will end up with 100K extra empty files.
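A minimal bash sketch of that idea is below. The folder path and the optimizer invocation are assumptions (adjust the imgopt call to however the script is actually run), and it assumes bash 4+ for globstar.

    #!/usr/bin/env bash
    # Sketch: skip images that already have a ".<name>.processed" marker next to them.
    shopt -s globstar nullglob nocaseglob

    for img in /path/to/cms/**/*.jpg; do
        dir=$(dirname "$img")
        base=$(basename "$img")
        marker="$dir/.$base.processed"

        # Already optimized on a previous run? Skip it.
        [ -e "$marker" ] && continue

        # Placeholder for the real optimizer call (assumption: imgopt takes a file path).
        imgopt "$img" && touch "$marker"
    done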

Is there a part of a Windows file that can't be modified?

I'm trying to accomplish something that will let a user download a file from a web application onto their system. The file will contain a unique five-digit code. Using this unique five-digit code, the users can search for the file in their file system.
I'm wondering where the best place is to put this five-digit code in a file so that users can easily search for the file. The simplest approach would be to put it in the name of the file; however, users can easily change the name of the file.
I'm looking for a field where I can put the code so that users won't be able to modify it but will still be able to search for it. Is this possible?
When you say "file", what kind of file format do you mean? I'm asking because a file is just a pile of bytes, and you can put your 5-digit code anywhere in the file if it is your own file format. But if you tell us which file format you use, there are probably some fields that can be used to search for it. For example, TIFF has many tags, images have other metadata, etc.
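On the search side, as a rough illustration only and assuming the code ends up stored somewhere in the file as plain text, a user could locate the file from PowerShell regardless of what it has been renamed to:

    # Sketch: find files under the user's profile that contain the code "12345" as text.
    Get-ChildItem -Path $HOME -Recurse -File -ErrorAction SilentlyContinue |
        Select-String -Pattern '12345' -List |
        Select-Object -ExpandProperty Path

If the code is stored in a binary metadata field instead, the search would need a tool that understands that specific format.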
