PowerShell scripting: file clean-up activity - Windows

I have a scenario here: I am working on a file clean-up activity.
I have folders, and within each folder I have Excel and PDF files. I want to retain the two most recently modified files and delete all the others.
Please help me write the script.
Regards
NKS

Doing something with all but the two most recently modified files is a matter of sorting and then skipping two (broken over lines for ease of reading):
dir *.xlsx, *.pdf |
  sort LastWriteTimeUtc -desc |
  select -skip 2 |
  remove-item -WhatIf
(Using UTC avoids problems when entering/leaving DST. The -WhatIf switch previews the deletions; drop it once the output looks right. Note that the original pipeline must not end in a bare pipe followed by a comment - that is a parse error.)
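If a pure PowerShell solution isn't a hard requirement, the same keep-the-two-newest idea can be sketched in Python, applied per folder as the question describes. The folder path and extensions below are illustrative assumptions, not part of the original question:

```python
from pathlib import Path

def files_to_delete(folder, patterns=("*.xlsx", "*.pdf"), keep=2):
    """Return the files in `folder` matching `patterns`, except the
    `keep` most recently modified ones."""
    files = [p for pat in patterns for p in Path(folder).glob(pat)]
    files.sort(key=lambda p: p.stat().st_mtime, reverse=True)
    return files[keep:]  # everything older than the `keep` newest

# Preview first; swap print() for victim.unlink() once satisfied:
# for victim in files_to_delete("C:/Reports"):
#     print("would delete", victim)
```

Running it once with the print loop before enabling deletion serves the same purpose as PowerShell's -WhatIf.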

Related

How to copy files and file structure with a batch script

I am writing a batch script that does a myriad of things. The one part I am stuck on is copying files/file structure from a location to my final image. The file structure looks something like this
Foo
|->Bar
| |->Snafu
| | |-><FILES>
| |-><FILES>
|->Bar2
| |->Snafu
| | |-><FILES>
| |-><FILES>
|->Bar3
| |->Snafu
| | |-><FILES>
| |-><FILES>
etc...
I want to copy the whole contents of the Folder Foo while maintaining the file structure. Here is the rub...this has to be able to run on a clean copy of Windows, so I cannot use any third party programs (which leaves out XCOPY, etc.).
I have tried using the "copy" command with various parameters, but the closest I get is getting the files with no folder structure. I am not always sure what is in the folders, so I can't even hard code it.
I would appreciate any help. Thanks!
You can use XCOPY, which is far more capable than COPY.
XCOPY is NOT a third-party command; no extra software is needed. It was added back in 1986 (MS-DOS 3.2, correct me if I'm wrong), so every Windows OS has it.
The command would be: xcopy /y /h /i /e foo bar
Which will:
copy the whole directory structure, including empty directories (/e)
suppress confirmation prompts when overwriting (/y)
also copy hidden and system files (/h)
assume the destination is a directory and create it if missing (/i)
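For what it's worth, on a machine that happens to have Python available, the standard library can do the same structure-preserving copy; this is only a rough stand-in for the xcopy command above, and `src`/`dst` are placeholders:

```python
import shutil

def mirror_tree(src, dst):
    """Rough stand-in for `xcopy /y /h /i /e src dst`: copy the whole
    tree, create the destination, and keep empty subdirectories."""
    shutil.copytree(src, dst, dirs_exist_ok=True)
```

(`dirs_exist_ok=True` needs Python 3.8+; it plays the role of /i by not failing when the destination already exists.)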

Script to compare two different folder contents and rename them based on minimum similarity

Story:
I have multiple folders with 1000+ files in each that are named similarly to one another; they differ slightly but relate to the same content.
For example, in one folder I have files named quite simply "Jobs to do.doc" and in another folder "Jobs to do (UK) (Europe).doc" etc.
This is on Windows 10, not Linux.
Question:
Is there a script to compare each folder's contents and rename the files based on minimum similarity? The end result would be to remove all the jargon and have each file in each folder (there are multiple) named the same as its counterparts, while STILL remaining in its respective folder.
*Basically: compare multiple folders' contents against one folder's contents and rename the files so that each file in each folder is named the same?
Example:
D:/Folder1/Name_Of_File1.jpeg
D:/Folder2/Name_Of_File1 (Europe).jpeg
D:/Folder3/Name_of_File1_(Random).jpeg
D:/folder1/another_file.doc
D:/Folder2/another_file_(date_month_year).txt
D:/Folder3/another_file(UK).XML
I have used different file extensions in the above example in the hope that someone can write a script that ignores file extensions.
I hope this makes sense. So: either a script to remove the content in brackets while keeping the files' integrity, or one to rename ALL files across all folders based on minimum similarity.
The problem is that it's 1000+ files in each folder, so I want to run it as an automated job.
Thanks in advance.
If the stuff you want to get rid of is always in parentheses, then you could write a regex like
(.*?)([\s_]*\(.*\))
where group 1 lazily captures the name you want to keep and group 2 swallows any separators plus the parenthesized chunk. Try something like this:
$folder = Get-ChildItem 'C:\TestFolder' -File
$regex = '(.*?)([\s_]*\(.*\))'
foreach ($file in $folder) {
    if ($file.BaseName -match $regex) {
        Rename-Item -Path $file.FullName -NewName "$($matches[1])$($file.Extension)" -Verbose #-WhatIf
    }
}
Regarding consistency, you could run a precheck using the same regex:
#change each filename if it matches the regex and store only its new basename
$folder1 = Get-ChildItem 'D:\T1' | ForEach-Object { if ($_.BaseName -match $regex) { $matches[1] } else { $_.BaseName } }
$folder2 = Get-ChildItem 'D:\T2' | ForEach-Object { if ($_.BaseName -match $regex) { $matches[1] } else { $_.BaseName } }
#compare basenames in the two folders - if all are the same, nothing will be returned
Compare-Object $folder1 $folder2
Compare-Object $folder1 $folder2
Maybe you could build on that idea.
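For a quick sanity check of the pattern against sample names before touching real files, here is the same strip-the-parenthesized-suffix idea in Python; the sample names come from the question:

```python
import re

# Group 1 keeps the real name; group 2 swallows separators plus "(...)".
PATTERN = re.compile(r"(.*?)([\s_]*\(.*\))")

def cleaned(basename):
    """Strip a trailing parenthesized chunk and the separators before it."""
    m = PATTERN.match(basename)
    return m.group(1) if m else basename

print(cleaned("Name_Of_File1 (Europe)"))         # Name_Of_File1
print(cleaned("another_file_(date_month_year)")) # another_file
print(cleaned("plain_file"))                     # plain_file (no match, unchanged)
```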

mac: compare two folders, extract non-identical photos only

I have been looking in S.O. for ways to compare two folders to extract only the "new" or the "non-identical" photos. I have two large folders and I only want the new files, I need to identify them. Is there a way to do it? or an application to do it?
Situation: with iOS 8, I gave it a try, and then I reverted back to iOS 7.1.1; but my most recent backup (the one I made before upgrading to iOS 8) was corrupt, due to the downgrade I did.
Now I have a copy of my iOS photos from the recent backup (I had to use a backup extractor to get the photos out), but I also have the photos from a month-old backup that I restored onto my phone after I gave up on the corrupt recent backup.
I now have two sets of photo libraries: one with the up-to-date photos (which cannot be restored to the iPhone through iTunes), and one with the month-old photo library (which was restored to my iPhone through iTunes easily).
I extracted photos from both backups, and I ended up with two directories. I only need the new photos (the difference between the two folders).
I hope it's now clearer, and more detailed.
Thanks a lot!
Can you install duff via homebrew or macports? If so, the following should give you a list of files that occur only once:
$ duff -r -f '' folder1 folder2 | sort > duplicate_files.txt
$ find folder1 folder2 -print | sort > all_files.txt
$ diff all_files.txt duplicate_files.txt | grep '^< ' | cut -c 3-
If you don't want to install additional packages, this would also work:
sort <(ls dir1) <(ls dir2) | uniq -u
That'll sort the list of files in both directories and then return only the items that appear only once. If you want to also return the locations of those files, you could then search for them.
This compares files by name, which might not be desirable. If you want to compare them by something else (e.g. size), then the answer gets a little more complicated.
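If installing nothing at all is preferred, a small Python script can do the comparison by content hash instead of by name, which also catches identical photos that were renamed between backups. The directory layout here is an assumption (flat folders of photos):

```python
import hashlib
from pathlib import Path

def digest(path):
    """SHA-256 of a file's bytes, so identical photos match even if renamed."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def new_files(old_dir, new_dir):
    """Files in new_dir whose contents appear nowhere in old_dir."""
    old_hashes = {digest(p) for p in Path(old_dir).iterdir() if p.is_file()}
    return [p for p in sorted(Path(new_dir).iterdir())
            if p.is_file() and digest(p) not in old_hashes]
```

The returned paths are the "new" photos to copy out; everything else already exists in the older library.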

Finding and Removing Unused Files Through Command Line

My website's file structure has gotten very messy over the years from uploading random files to test different things out. I have a list of all my files, such as this:
file1.html
another.html
otherstuff.php
cool.jpg
whatsthisdo.js
hmmmm.js
Is there any way I can feed my list of files in via the command line, search the contents of all the other files on my website, and output a list of the files that aren't mentioned anywhere in my other files?
For example, if cool.jpg and hmmmm.js weren't mentioned in any of my other files then it could output them in a list like this:
cool.jpg
hmmmm.js
And then any of those other files mentioned above aren't listed because they are mentioned somewhere in another file. Note: I don't want it to just automatically delete the unused files, I'll do that manually.
Also, of course I have multiple folders so it will need to search recursively from my current location and output all the unused (unreferenced) files.
I'm thinking command line would be the fastest/easiest way, unless someone knows of another. Thanks in advance for any help that you guys can be!
Yep! This is pretty easy to do with grep. In this case, you would run a command like:
$ for orphan in $(cat orphans.txt); do
    echo "Checking for presence of ${orphan} in present directory..."
    grep -rl "$orphan" .
done
And orphans.txt would look like your list of files above, one file per line. You can add -i to the grep above if you want to match case-insensitively. And you would want to run that command in /var/www or wherever your distribution keeps its webroot. If a "Checking for..." line is followed by no matches, no other file mentions that name, so it is unused.
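A Python variant of the same grep loop, which collects the unreferenced names into a list instead of making you eyeball the output. Like grep -rl, a file that happens to contain its own name counts as referenced; the candidate list and web root are placeholders:

```python
from pathlib import Path

def unreferenced(candidates, webroot):
    """Return the names from `candidates` that no file under `webroot`
    mentions anywhere in its text."""
    texts = [p.read_text(errors="ignore")
             for p in Path(webroot).rglob("*") if p.is_file()]
    return [name for name in candidates
            if not any(name in text for text in texts)]
```

Print the result (or write it to a file) and delete by hand, as the question asks.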

Set svn:ignore recursively for a directory structure

I have imported a huge hierarchy of Maven projects into IntelliJ IDEA, and IDEA has now created .iml files at all the different levels. I would like to svn:ignore these files.
There are several similar questions, for example this one: svn (subversion) ignore specific file types in all sub-directories, configured on root?
However: the answer is always to apply svn:ignore with the --recursive flag. That's not what I am looking for, I just want to ignore the files I created, not set a property on the hundreds of directories underneath.
Basically, what I have is the output of
svn status | grep .iml
which looks like this:
? foo/bar/bar.iml
? foo/baz/phleem.iml
? flapp/flapp.iml
etc.
What I would like to do is for each entry dir/project.iml to add svn:ignore *.iml to dir. I am guessing I have to pipe the above grep to sed or awk and from there to svn, but I am at a total loss as to the exact sed or awk command on one hand and the exact svn command (with which I won't override existing svn:ignore values) on the other hand.
Update: what I am also not looking for are solutions based on find, because possibly there are projects in this hierarchy where .iml files are in fact committed and I wouldn't want to interfere with those projects, so I'm looking to add the property only for .iml files my IDE has created.
You can set up your client to globally ignore given file extensions. Just add
global-ignores = *.iml
into your Subversion config file.
Update: If you want to only ignore iml files in the directories involved, you can try
svn status | grep '^\?.*\.iml' | sed 's=^? *=./=;s=/[^/]*$==' | xargs svn propset svn:ignore '*.iml'
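That pipeline is dense, so here is a rough Python sketch of just the parsing step, useful for checking which directories would be touched before running any propset. The sample input mirrors the question's svn status output; deduplication is added since several .iml files can share a directory:

```python
def iml_dirs(svn_status_output):
    """From `svn status` text, return the directories that hold
    unversioned (?) .iml files, deduplicated, in order seen."""
    dirs = []
    for line in svn_status_output.splitlines():
        if line.startswith("?") and line.rstrip().endswith(".iml"):
            path = line[1:].strip()  # drop the "?" status column
            parent = path.rsplit("/", 1)[0] if "/" in path else "."
            if parent not in dirs:
                dirs.append(parent)
    return dirs

# Each directory would then get: svn propset svn:ignore '*.iml' <dir>
```

Note that, as with the sed/xargs version, propset replaces any existing svn:ignore value on those directories, so check them first if some already carry ignore lists.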
