@echo off
setlocal
set _RunOnceValue=%~d0%\Windows10Upgrade\Windows10UpgraderApp.exe /SkipSelfUpdate
set _RunOnceKey=Windows10UpgraderApp.exe
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce" /V "%_RunOnceKey%" /t REG_SZ /F /D "%_RunOnceValue%"
PowerShell -Command "&{ Get-PSDrive -PSProvider FileSystem | Where-Object { $_.Used -gt 0 } | ForEach-Object { $esdOriginalFilePath = 'C:\Windows10Upgrade\\*.esd'; $driveName = $_.Name; $esdFilePath = $esdOriginalFilePath -replace '^\w',$driveName; if (Test-Path $esdFilePath) { Remove-Item $esdFilePath } } }"
Found this batch script hidden somewhere on my C: drive and want to know what this script does.
This is the PowerShell portion of the script:
Get-PSDrive -PSProvider FileSystem |
Where-Object { $_.Used -gt 0 } |
ForEach-Object {
$esdOriginalFilePath = 'C:\Windows10Upgrade\\*.esd';
$driveName = $_.Name;
$esdFilePath = $esdOriginalFilePath -replace '^\w',$driveName;
if (Test-Path $esdFilePath)
{ Remove-Item $esdFilePath }
}
The PowerShell part:
Gets your drive mappings
Checks which of them have used space (probably all of them)
Loops through those drives and removes any files with the .esd extension from their Windows10Upgrade folders
e.g.
M:\Windows10Upgrade\*.esd
S:\Windows10Upgrade\*.esd
T:\Windows10Upgrade\*.esd
See below for a commented version of the code, explaining each line:
# Get filesystem drives
Get-PSDrive -PSProvider FileSystem |
# Filter just those with used space
Where-Object { $_.Used -gt 0 } |
# Go through each of those drives
ForEach-Object {
# Set a variable with the Windows10Upgrade folder and *.esd wildcard
$esdOriginalFilePath = 'C:\Windows10Upgrade\\*.esd';
# Get the drive name for the one currently in the loop ($_)
$driveName = $_.Name;
# Use a regex replace of the first 'word' character with drivename (e.g. C -> E, C -> W)
$esdFilePath = $esdOriginalFilePath -replace '^\w',$driveName;
# Check if the path exists
if (Test-Path $esdFilePath)
# If so, remove the file
{ Remove-Item $esdFilePath }
}
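You can see the -replace trick in isolation by running it against a single drive letter (the 'E' here is just an example):
# swap the leading drive letter; output: E:\Windows10Upgrade\\*.esd
'C:\Windows10Upgrade\\*.esd' -replace '^\w','E'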
I would suggest running this in PowerShell:
Get-PSDrive -PSProvider FileSystem |
Where-Object { $_.Used -gt 0 }
To get an idea of how this works.
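If you want to preview what the cleanup would touch without actually deleting anything, a -WhatIf variant of the same pipeline (a sketch, not the script as found) would be:
Get-PSDrive -PSProvider FileSystem |
Where-Object { $_.Used -gt 0 } |
ForEach-Object {
    # rebuild the wildcard path for this drive, then do a dry run
    $esdFilePath = 'C:\Windows10Upgrade\*.esd' -replace '^\w', $_.Name
    if (Test-Path $esdFilePath) { Remove-Item $esdFilePath -WhatIf }
}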
I have the following problem and I would really appreciate some help on this front. I am getting a constant flow of XML files into a folder. An XML file name can look like this (the numeric prefix only goes up to 1005):
1001.order-asdf1234.xml
1002.order-asdf4321.xml
I want to sort the files into uniquely named folders that are not based on the file names. An example would be:
C:\Directory Path...\Peter (All files starting with 1001 go in there)
C:\Directory Path...\John (All files starting with 1002 go there)
How can I create a batch or PowerShell script that continuously sorts the files into the specified folders? Since I only have 5 folders, I would like to simply specify the target folder for each rather than build elaborate loops, but I don't know how to do that.
The easiest way is to create a lookup Hashtable where you define which prefix ('1001' .. '1005') maps to which destination folder:
# create a Hashtable to map the digits to a foldername
$folderMap = @{
'1001' = 'Peter'
'1002' = 'John'
'1003' = 'Lucretia'
'1004' = 'Matilda'
'1005' = 'Henry'
}
# set source and destination paths
$rootFolder = 'X:\Where\the\files\are'
$destination = 'Y:\Where\the\files\should\go'
# loop over the files in the root path
Get-ChildItem -Path $rootFolder -Filter '*.xml' -File |
Where-Object { $_.BaseName -match '^\d{4}\.' } |
ForEach-Object {
$prefix = ($_.Name -split '\.')[0]
$targetPath = Join-Path -Path $destination -ChildPath $folderMap[$prefix]
$_ | Move-Item -Destination $targetPath -WhatIf
}
Remove the -WhatIf safety switch once you are satisfied with the results shown on screen.
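Note that Move-Item will not create a missing destination folder, so if the five folders might not exist yet, you could create them up front with something like this (a sketch reusing $destination and $folderMap from above):
# make sure every mapped destination folder exists before moving anything
$folderMap.Values | ForEach-Object {
    $null = New-Item -Path (Join-Path -Path $destination -ChildPath $_) -ItemType Directory -Force
}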
You could use a switch statement to decide on the target folder based on the first part of the file name:
$files = Get-ChildItem path\to\folder\with\xml\files -Filter *.xml
switch($files)
{
{$_.Name -like '1001*'} {
$_ |Move-Item -Destination 'C:\path\to\Peter'
}
{$_.Name -like '1002*'} {
$_ |Move-Item -Destination 'C:\path\to\John'
}
{$_.Name -like '1003*'} {
# etc...
}
default {
Write-Warning "No matching destination folder for file '$($_.Name)'"
}
}
If you change your mind about loops, my preference would be to store the mapping in a hashtable and loop over the entries for each file:
$files = Get-ChildItem path\to\folder\with\xml\files -Filter *.xml
$targetFolders = @{
'1001' = 'C:\path\to\Peter'
'1002' = 'C:\path\to\John'
'1003' = 'C:\path\to\Paul'
'1004' = 'C:\path\to\George'
'1005' = 'C:\path\to\Ringo'
}
foreach($file in $files){
    # .Where() with 'First' returns the first matching key; look the folder path up in the map
    $targetKey = $targetFolders.Keys.Where({$file.Name -like "${_}*"}, 'First')
    if ($targetKey) { $file | Move-Item -Destination $targetFolders["$targetKey"] }
}
The goal is to reset the current cmd.exe shell to have the original set of environment variables. This should include deleting current environment variables that were created after the cmd.exe shell started.
The System and User environment variables can be read from the registry. But, the dynamic variables such as ALLUSERSPROFILE, APPDATA, LOGONSERVER, etc. are not in those locations. Where can those be found?
Because of this, the code below cannot delete a variable created after the cmd.exe shell was started, since it may be one of the dynamic variables.
Put both of these files in the same directory.
=== Do-Environment.bat
@ECHO OFF
SET "TEMPFILE=%TEMP%\do-environment-script.bat"
powershell -NoLogo -NoProfile -File "%~dp0Do-Environment.ps1" >"%TEMPFILE%"
type "%TEMPFILE%"
CALL "%TEMPFILE%"
EXIT /B %ERRORLEVEL%
=== Do-Environment.ps1
$Vars = @{}
$UserVars = 'HKCU:\Environment'
$SystemVars = 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment'
(Get-Item -Path $SystemVars).Property |
ForEach-Object {
$KeyName = $_
$KeyValue = (Get-Item -Path $SystemVars).GetValue($KeyName, $null)
$Vars[$KeyName] = $KeyValue
}
(Get-Item -Path $UserVars).Property |
ForEach-Object {
$KeyName = $_
$KeyValue = (Get-Item -Path $UserVars).GetValue($KeyName, $null)
if (($null -eq $Vars[$KeyName]) -or ($KeyName -ne 'Path')) {
$Vars[$KeyName] = $KeyValue
} else {
$Vars[$KeyName] = $Vars[$KeyName] + ';' + $KeyValue
}
#'SET "{0}={1}"' -f @($KeyName, $KeyValue)
}
$Vars.Keys | Sort-Object | ForEach-Object { 'SET "{0}={1}"' -f @($_, $Vars[$_]) }
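For reference, the .ps1 just prints one SET line per variable; the temp file that the batch file types and CALLs looks something like this (values illustrative, not from a real machine):
SET "ComSpec=C:\Windows\system32\cmd.exe"
SET "Path=C:\Windows\system32;C:\Windows"
SET "windir=C:\Windows"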
I'm trying to get a listing of all permissions on some network folders using PowerShell. Unfortunately I'm encountering the dreaded PathTooLongException so I'm attempting to use Robocopy as a work around. However I'm a complete novice with PowerShell so was hoping for a little help. The easiest command I've come up with is
Get-ChildItem "S:\StartingDir" -Recurse | Get-Acl | Select-Object path,accesstostring | Export-Csv "C:\export.csv"
That works and does what I want except the exception I'm getting. How would I insert Robocopy into this statement to bypass the exception? Any thoughts?
First, create a batch file, such as getShortFilename.bat, with the following lines:
@ECHO OFF
echo %~s1
This will return the short filename of the long filename passed to it.
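For example (the 8.3 short names shown are illustrative; actual values depend on the volume):
C:\temp>getShortFilename.bat "C:\Program Files\Some Very Long Folder Name"
C:\PROGRA~1\SOMEVE~1
The following script uses getShortFilename.bat to get the short filename whenever Get-Acl fails due to a long path, and then feeds the short path to cacls to return the permissions.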
$files = robocopy c:\temp NULL /L /S /NJH /NJS /NDL /NS /NC
remove-item "c:\temp\acls.txt" -ErrorAction SilentlyContinue
foreach($file in $files){
$filename = $file.Trim()
# Skip any blank lines
if($filename -eq ""){ continue }
Try{
Get-Acl "$filename" -ErrorAction Stop | Select-Object path, accesstostring | Export-Csv "C:\temp\acls.txt" -Append
}
Catch{
$shortName = &C:\temp\getShortFilename.bat "$filename"
$acls = &cacls $shortName
$acls = $acls -split '[\r\n]'
#create an object to hold the filename and ACLs so that export-csv will work with it
$outObject = new-object PSObject
$aclString = ""  # accumulates the ACL entries for this file
$firstPass = $true
# Loop through the lines of the cacls.exe output
foreach($acl in $acls){
$trimmedAcl = $acl.Trim()
# Skip any blank lines
if($trimmedAcl -eq "" ){continue}
#The first entry has the filename and an ACL, so requires extra processing
if($firstPass -eq $true){
$firstPass = $false
# Add the long filename to the $exportArray
$outObject | add-member -MemberType NoteProperty -name "path" -Value "$filename"
#$acl
# Add the first ACL to $aclString
$firstSpace = $trimmedAcl.IndexOf(" ") + 1
$aclString = $trimmedAcl.Substring($firstSpace, $trimmedAcl.Length - $firstSpace)
} else {
$aclString += " :: $trimmedAcl"
}
}
$outObject | add-member -MemberType NoteProperty -name "accesstostring" -Value "$aclString"
$outObject | Export-Csv "C:\temp\acls.txt" -Append
}
}
Notes:
The string of ACLs that Get-Acl creates is formatted differently from the one created by cacls, so whether that's an issue or not is for you to judge.
If you want the ACL string to be in the same format for all files, you could just use cacls on all files, not just the ones with long filenames. It wouldn't be very difficult to modify this script accordingly.
You may want to add extra error checking. Get-Acl could of course fail for any number of reasons, and you may not want to run the catch block when it fails for some reason other than the path being too long; see the sketch below.
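For instance, assuming the failure surfaces as System.IO.PathTooLongException, you could reserve the cacls fallback for that case and warn on everything else (a sketch of just the Try/Catch skeleton):
Try {
    Get-Acl "$filename" -ErrorAction Stop |
        Select-Object path, accesstostring |
        Export-Csv "C:\temp\acls.txt" -Append
}
Catch [System.IO.PathTooLongException] {
    # fall back to the short-name/cacls handling shown above
}
Catch {
    Write-Warning "Get-Acl failed on '$filename': $_"
}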
I'm working on a script that checks the folders in a specific directory. The first time I run the script, it generates a txt file containing the folders in the directory.
I need the script to add any new directories that are found to the previously created txt file when the script is run again.
Does anyone have any suggestions on how to make that happen?
Here is my code so far:
$LogFolders = Get-ChildItem -Directory mydirectory ;
If (-Not (Test-Path -path "txtfilelocated"))
{
Add-Content txtfilelocated -Value $LogFolders
break;
}else{
$File = Get-Content "txtfilelocated"
$File | ForEach-Object {
$_ -match $LogFolders
}
}
$File
Something like this?
You can specify which directory to check by adding a path to the Get-ChildItem cmdlet in the first line:
$a = get-childitem | ? { $_.psiscontainer } | select -expand fullname #for V2.0 and above
$a = get-childitem -Directory | select -expand fullname #for V3.0 and above
if ( test-path .\list.txt )
{
compare-object (gc list.txt) ($a) -PassThru | Add-Content .\list.txt
}
else
{
$a | set-content .\list.txt
}
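One caveat: Compare-Object reports differences from both sides, so a folder that was deleted from disk (present in list.txt only) would get appended again. If you only want additions, you could filter on the SideIndicator that Compare-Object attaches to pass-through objects ('=>' means present in $a only); a sketch:
compare-object (gc .\list.txt) ($a) -PassThru |
    where-object { $_.SideIndicator -eq '=>' } |
    Add-Content .\list.txt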
I have a PowerShell 2.0 script that I use to delete folders that have no files in them:
dir 'P:\path\to\wherever' -recurse | Where-Object { $_.PSIsContainer } | Where-Object { $_.GetFiles().Count -eq 0 } | foreach-object { remove-item $_.fullname -recurse}
However, I noticed that there were a ton of errors when running the script. Namely:
Remove-Item : Directory P:\path\to\wherever cannot be removed because it is not empty.
"WHAT?!" I panicked. They should all be empty! I filter for only empty folders! Apparently that's not quite how the script works. In this scenario, a folder that has only folders as children but files as grandchildren is considered empty of files:
Folder1 (no files - 1 folder) \ Folder 2 (one file)
In that case, PowerShell sees Folder1 as being empty and tries to delete it. This puzzles me because if I right-click Folder1 in Windows Explorer, it says that Folder1 contains 1 folder and 1 file. Whatever Explorer uses to count the child objects underneath Folder1 sees grandchild objects ad infinitum.
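You can reproduce the mismatch with the Folder1/Folder2 layout above (paths illustrative):
(Get-Item C:\Folder1).GetFiles().Count                                          # 0 - looks "empty" to the filter
(Get-ChildItem C:\Folder1 -Recurse | Where-Object { !$_.PSIsContainer }).Count  # 1 - counts grandchildren, like Explorer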
Question:
How can I make my script not consider a folder empty if it has files as grandchildren or beyond?
Here's a recursive function I used in a recent script; it recurses into each child directory before testing the parent, so a folder whose only contents were empty folders gets removed as well...
function DeleteEmptyDirectories {
param([string] $root)
[System.IO.Directory]::GetDirectories("$root") |
% {
DeleteEmptyDirectories "$_";
if ([System.IO.Directory]::GetFileSystemEntries("$_").Length -eq 0) {
Write-Output "Removing $_";
Remove-Item -Force "$_";
}
};
}
DeleteEmptyDirectories "P:\Path\to\wherever";
Updating for recursive deletion:
You can use a nested pipeline like below:
dir -recurse | Where {$_.PSIsContainer -and `
@(dir -Lit $_.Fullname -r | Where {!$_.PSIsContainer}).Length -eq 0} |
Remove-Item -recurse -whatif
(from here - How to delete empty subfolders with PowerShell?)
Add a ($_.GetDirectories().Count -eq 0) condition too:
dir path -recurse | Where-Object { $_.PSIsContainer } | Where-Object { ($_.GetFiles().Count -eq 0) -and ($_.GetDirectories().Count -eq 0) } | Remove-Item
Here is a more succinct way of doing this though:
dir path -recurse | where {!@(dir -force $_.fullname)} | rm -whatif
Note that you do not need the Foreach-Object while doing remove item. Also add a -whatif to the Remove-Item to see if it is going to do what you expect it to.
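Strictly speaking, that succinct version also runs the inner dir against every file it encounters; filtering for containers first keeps it to directories (a minor tweak):
dir path -recurse | where { $_.PSIsContainer -and !@(dir -force $_.fullname) } | rm -whatif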
There were some issues in making this script, one of them being using this to check if a folder is empty:
{!$_.PSIsContainer}).Length -eq 0
However, I discovered that empty folders are not sized at 0 but rather NULL. The following is the PowerShell script that I will be using. It is not my own; rather, it is from PowerShell MVP Richard Siddaway. You can see the thread that this function comes from over at PowerShell.com.
function remove-emptyfolder {
param ($folder)
foreach ($subfolder in $folder.SubFolders){
$notempty = $false
if (($subfolder.Files | Measure-Object).Count -gt 0){$notempty = $true}
if (($subfolder.SubFolders | Measure-Object).Count -gt 0){$notempty = $true}
if ($subfolder.Size -eq 0 -and !$notempty){
Remove-Item -Path $($subfolder.Path) -Force -WhatIf
}
else {
remove-emptyfolder $subfolder
}
}
}
$path = "c:\test"
$fso = New-Object -ComObject "Scripting.FileSystemObject"
$folder = $fso.GetFolder($path)
remove-emptyfolder $folder
You can use a recursive function for this. I actually have already written one:
cls
$dir = "C:\MyFolder"
Function RecurseDelete()
{
param (
[string]$MyDir
)
IF (!(Get-ChildItem -Recurse $mydir | Where-Object {$_.length -ne $null}))
{
Write-Host "Deleting $mydir"
Remove-Item -Recurse $mydir
}
ELSEIF (Get-ChildItem $mydir | Where-Object {$_.length -eq $null})
{
ForEach ($sub in (Get-ChildItem $mydir | Where-Object {$_.length -eq $null}))
{
Write-Host "Checking $($sub.fullname)"
RecurseDelete $sub.fullname
}
}
ELSE
{
IF (!(Get-ChildItem $mydir))
{
Write-Host "Deleting $mydir"
Remove-Item $mydir
}
}
}
IF (Test-Path $dir) {RecurseDelete $dir}