How can I count Get-SFTPChildItem folder sizes together? - windows

I need help with my project.
I'm trying to display the size of a folder called sh-modules, but the only thing I can achieve is displaying individual subfolder sizes...
Get-SFTPChildItem -SessionId 0 -Path sh-modules -Recursive | select FullName, @{N='Size (kB)';E={$_.Length/1kb}}
This is what I get.
Is there any way to combine all those sizes into one number?
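One way to get a single number would be to pipe the same listing through Measure-Object and sum the Length property. This is only a sketch based on your snippet; depending on the Posh-SSH version you may also want to filter out directory entries before summing:

# Sum the Length of every item returned, then express the total in kB
Get-SFTPChildItem -SessionId 0 -Path sh-modules -Recursive |
    Measure-Object -Property Length -Sum |
    Select-Object @{N='Total size (kB)'; E={ $_.Sum / 1kb }}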

Related

How to find and delete duplicate Outlook emails in file explorer with PowerShell?

Please forgive me, I have little experience with PowerShell, but I know in theory what I need to do.
I have been given a list of 21,000 Outlook emails and told to delete the duplicates. The server uploaded these incorrectly somehow. The subject of each email is a randomly generated string, so it is unique. I need to delete duplicates based on email size, and by manually opening the emails to check that the content is the same. Rather than eyeballing and comparing them manually, which would take me approximately 25 years lol, does anyone know how to do this in PowerShell?
E.g. traverse through the Outlook files one by one. IF one file's size matches the previous one's, open both emails AND compare the first line of each. IF they match in both file size and content, delete one of the emails. Surely this wouldn't be too difficult to do? Please help me, I cannot fathom looking at 21k emails!!
I've got PowerShell open and I've navigated to the directory which hosts the 21k Outlook emails. Please can someone help me? I know I am going to need some sort of loop in there, and for 21k files it isn't going to be quick, but eyeballing them and doing it manually would take MUCH MUCH longer; the thought of doing it manually gives me shivers...
Thanks! Much appreciated!
I am in PowerShell and have navigated to the directory which hosts the 21k emails. I now need to find out how to traverse them one by one, find matching sizes, and compare content; IF both are true, then delete one file. I am not a programmer and I don't want to mess things up by randomly winging it.
I'm not sure I understood everything but this might help you:
# Connect to Outlook via COM and grab the Inbox
$Outlook = New-Object -ComObject Outlook.Application
$Namespace = $Outlook.GetNamespace("MAPI")
$Folder = $Namespace.GetDefaultFolder(6)   # 6 = olFolderInbox
$Emails = $Folder.Items

# Sort by received time (descending) so duplicates end up next to each other
$Emails.Sort("[ReceivedTime]", $true)

$PreviousEmail = $null
foreach ($Email in $Emails) {
    # Treat an item as a duplicate if it has the same subject and received time as the previous one
    if ($null -ne $PreviousEmail -and
        $Email.Subject -eq $PreviousEmail.Subject -and
        $Email.ReceivedTime -eq $PreviousEmail.ReceivedTime) {
        Write-Host "Deleting duplicate email with Subject:" $Email.Subject "and Received Time:" $Email.ReceivedTime
        $Email.Delete()
    }
    $PreviousEmail = $Email
}
To run this, though, you need to change the directory to the Outlook file location. You can use "cd C:\PATH" or "Set-Location C:\PATH".
Eg:
cd C:\Users\MyUserName\Documents\Outlook\
Give it a try and let me know if it works or errors out. You might need to adjust some lines; I don't have Outlook on my PC to test with.
Also, please make a backup/copy of the folder before running the script, in case it deletes emails it shouldn't.
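Alternatively, if the 21k emails are actually individual files sitting in that directory (e.g. .msg exports) rather than items inside an Outlook mailbox, you could do the size-then-content comparison the question describes directly on the files, using hashes instead of opening each mail. A rough sketch; the folder path is a placeholder, and Remove-Item is left commented out until you've checked the output on a copy:

# Group files by size first (cheap), then hash only the groups that could contain duplicates
$dir = 'C:\path\to\emails'    # placeholder - point this at the folder with the 21k files
Get-ChildItem -Path $dir -File |
    Group-Object Length |
    Where-Object Count -gt 1 |
    ForEach-Object {
        $_.Group |
            Get-FileHash -Algorithm SHA256 |
            Group-Object Hash |
            Where-Object Count -gt 1 |
            ForEach-Object {
                # Keep the first file of each identical group, report the rest as duplicates
                $_.Group | Select-Object -Skip 1 | ForEach-Object {
                    Write-Host "Duplicate: $($_.Path)"
                    # Remove-Item $_.Path        # uncomment only after verifying the output
                }
            }
    }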

Keras - using predefined training / validation split

I'm working with Tensorflow/Keras. I have two text files (train_{modality_name}.txt and val_{modality_name}.txt). They contain the split I want to use for the images I'm processing.
The format of these files is the following:
example_0_path category_id
example_1_path category_id
...
example_N_path category_id
and my folder structure is like this:
/labels
    train_X.txt
    val_X.txt
/data
    /modality_1
    ...
    /modality_M
(e.g. data/sketch/abbey/id)
How can I make use of the files?
'flow_from_dataframe' did the job; additionally, it was necessary to preprocess the txt files with pandas. This tutorial was very helpful: https://medium.com/@vijayabhaskar96/tutorial-on-keras-imagedatagenerator-with-flow-from-dataframe-8bd5776e45c1
I'm still having problems matching the target size of the arrays (the labels seem to have the wrong format).

Auto sequentially rename files based on last numbered file in folder and move to that location

I am looking for suggestions on how to automatically monitor a folder for uploads (pictures only) and move them into another folder, with the specific filename "MyLife" as the prefix and a numbering sequence as the suffix. I store all my pictures in \Documents\Photos\MyLife and they are numbered with digits, e.g. MyLife5523.jpg. I would like to monitor a folder \Documents\picsupload for images and have a script or program check \Documents\Photos\MyLife for the last existing file name (number), rename each image to the next sequential number, and move it to \Documents\Photos\MyLife.
i.e.
\Documents\picsupload has IMG_20170313_123003.jpg, IMG_20170318_123003.jpg, and IMG_20170313_123030.jpg
The program/script checks \Documents\Photos\MyLife and finds the last file is MyLife5523.jpg
IMG_20170313_123003.jpg, IMG_20170318_123003.jpg, and IMG_20170313_123030.jpg are renamed to MyLife5524.jpg, MyLife5525.jpg, and MyLife5526.jpg and moved to \Documents\Photos\MyLife
I currently have AutoHotKey and DropIt installed. However, I'm open to other suggestions.
Respectfully,
JA
Use SetTimer to poll the source directory for files using Loop, FilePattern. When found, use FileMove to move and rename them into your destination folder. Use IniWrite and IniRead to save the numeric suffix so you don't have to scan the destination folder every time.
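Since you mentioned being open to other suggestions, here is a rough PowerShell sketch of the same idea: find the highest existing MyLife number in the destination, then rename and move each upload to the next number. Paths and the prefix come straight from your description; treat it as a starting point rather than a finished tool.

# Source (upload) and destination folders, per the question
$source = "$HOME\Documents\picsupload"
$dest   = "$HOME\Documents\Photos\MyLife"

# Find the highest number used so far among MyLifeNNNN.jpg files in the destination
$last = Get-ChildItem -Path $dest -Filter 'MyLife*.jpg' |
    ForEach-Object { [int]($_.BaseName -replace '^MyLife', '') } |
    Sort-Object |
    Select-Object -Last 1
if (-not $last) { $last = 0 }

# Rename each uploaded picture to the next sequential name and move it
# (handles .jpg only; extend the filter for other picture types)
Get-ChildItem -Path $source -Filter '*.jpg' | Sort-Object Name | ForEach-Object {
    $last++
    Move-Item -Path $_.FullName -Destination (Join-Path $dest ("MyLife{0}.jpg" -f $last))
}

To keep it watching the folder you could run this from Task Scheduler every few minutes, wrap it in a loop with Start-Sleep, or hook it up to a FileSystemWatcher.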

Change Chrome Settings via Powershell

I want to make a script to change the default page zoom in Chrome; however, I do not know where these options are stored.
I guess that I have to find the appropriate options text file, parse it, and make a textual replacement there using PowerShell in order to apply the changes.
I need to do this every time I plug my laptop into an external monitor, and it is kind of annoying to do by hand.
Any ideas?
The default zoom level is stored in a huge JSON config file called Preferences, which can be found in the local appdata folder:
$LocalAppData = [Environment]::GetFolderPath( [Environment+SpecialFolder]::LocalApplicationData )
$ChromeDefaults = Join-Path $LocalAppData "Google\Chrome\User Data\Default"
$ChromePrefFile = Join-Path $ChromeDefaults "Preferences"
# -Raw so ConvertFrom-Json receives the whole file as a single string
$Settings = Get-Content $ChromePrefFile -Raw | ConvertFrom-Json
The default page zoom level is stored under the partition object, although it seems to be keyed by some sort of unique identifier with a ratio-like value. This is what it looks like with 100% zoom:
PS C:\> $Settings.partition.default_zoom_level | Format-List
2166136261 : 0.0
Other than that, I have no idea. I don't expect modifying it to be a good idea either: Chrome seems to update a number of binary values every time the default files are updated, so you might end up with a corrupt Preferences file.
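If you do want to experiment with writing a value back despite that warning (with Chrome closed, and a backup of the file), the round trip would look roughly like this. The property name comes from the output above and the value is just a placeholder, since the stored scale isn't a plain percentage; there is no guarantee Chrome will accept the rewritten file:

# EXPERIMENTAL: modify the zoom entry shown above and write the JSON back (keep a backup!)
$Settings.partition.default_zoom_level.'2166136261' = 0.5   # placeholder value
$NewJson = $Settings | ConvertTo-Json -Depth 100 -Compress
[IO.File]::WriteAllText($ChromePrefFile, $NewJson)          # writes UTF-8 without a BOM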
$Env:
is a special PSDrive that contains many of the special folder paths.
$Env:LOCALAPPDATA
and
[Environment]::GetFolderPath( [Environment+SpecialFolder]::LocalApplicationData )
yield the same result, and can be used with the rest of Mathias' answer.
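For example, the same Preferences lookup from the answer above, built from the Env: drive:

# Equivalent to the Join-Path chain above, using $Env:LOCALAPPDATA instead
$ChromePrefFile = Join-Path $Env:LOCALAPPDATA 'Google\Chrome\User Data\Default\Preferences'
$Settings = Get-Content $ChromePrefFile -Raw | ConvertFrom-Json
$Settings.partition.default_zoom_level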

Compressed array of file paths and random access

I'm developing a file management Windows application. The program should keep an array of paths to all files and folders that are on the disk. For example:
0 "C:"
1 "C:\abc"
2 "C:\abc\def"
3 "C:\ghi"
4 "C:\ghi\readme.txt"
The array "as is" will be very large, so it should be compressed and stored on the disk. However, I'd like to have random access to it:
to retrieve any path in the array by index (e.g., RetrievePath(2) = "C:\abc\def")
to find index of any path in the array (e.g., IndexOf("C:\ghi") = 3)
to add a new path to the array (indexes of any existing paths should not change), e.g., AddPath("C:\ghi\xyz\file.dat")
to rename some file or folder in the database;
to delete existing path (again, any other indexes should not change).
For example, delete path 1 "C:\abc" from the database and still have 4 "C:\ghi\readme.txt".
Can someone suggest some good algorithm/data structure/ideas to do these things?
Edit:
At the moment I've come up with the following solution:
0 "C:"
1 "[0]\abc"
2 "[1]\def"
3 "[0]\ghi"
4 "[3]\readme.txt"
That is, common prefixes are compressed.
RetrievePath(2) = "[1]\def" = RetrievePath(1) + "\def" = "[0]\abc\def" = RetrievePath(0) + "\abc\def" = "C:\abc\def"
IndexOf() also works iteratively, something like that:
IndexOf("C:") = 0
IndexOf("C:\abc") = IndexOf("[0]\abc") = 1
IndexOf("C:\abc\def") = IndexOf("[1]\def") = 2
To add a new path, say AddPath("C:\ghi\xyz\file.dat"), one should first add its prefixes:
5 [3]\xyz
6 [5]\file.dat
Renaming/moving a file or folder involves just one replacement (e.g., replacing [0]\ghi with [1]\klm renames directory "ghi" to "klm" and moves it into directory "C:\abc").
DeletePath() involves setting the entry (and all sub-path entries) to empty strings. In the future, they can be replaced with new paths.
After DeletePath("C:\abc"), the array will be:
0 "C:"
1 ""
2 ""
3 "[0]\ghi"
4 "[3]\readme.txt"
The whole array still needs to be loaded into RAM to perform fast operations. With, for example, 1,000,000 files and folders in total and an average filename length of 10, the array will occupy over 10 MB.
Also, the IndexOf() function is forced to scan the array sequentially.
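To make the scheme above concrete, here is a small sketch in PowerShell (purely illustrative, since the rest of this page uses it; a real application would presumably use a compiled language, and the function names Get-StoredPath / Add-StoredPath are made up for this example). It resolves [n]\name references recursively and adds missing prefixes before a new entry; the parent lookup is still a sequential scan, matching the IndexOf limitation noted above:

# Prefix-compressed path table: each entry is either a root ("C:") or "[parentIndex]\name"
$paths = [System.Collections.Generic.List[string]]::new()
$paths.AddRange([string[]]@('C:', '[0]\abc', '[1]\def', '[0]\ghi', '[3]\readme.txt'))

function Get-StoredPath([int]$i) {
    $entry = $paths[$i]
    if ($entry -match '^\[(\d+)\](\\.*)$') {
        $parentIndex = [int]$Matches[1]
        $suffix      = $Matches[2]
        # Resolve the parent reference recursively, then append this component
        return (Get-StoredPath $parentIndex) + $suffix
    }
    return $entry   # a root entry, stored verbatim
}

function Add-StoredPath([string]$full) {
    $cut    = $full.LastIndexOf('\')
    $parent = $full.Substring(0, $cut)
    $name   = $full.Substring($cut + 1)
    # Sequential scan for the parent (the IndexOf limitation mentioned above)
    $parentIndex = 0..($paths.Count - 1) | Where-Object { (Get-StoredPath $_) -eq $parent } | Select-Object -First 1
    if ($null -eq $parentIndex) { $parentIndex = Add-StoredPath $parent }   # add missing prefixes first
    $paths.Add("[$parentIndex]\$name")
    return $paths.Count - 1
}

Get-StoredPath 2                      # -> C:\abc\def
Add-StoredPath 'C:\ghi\xyz\file.dat'  # adds [3]\xyz (index 5) and [5]\file.dat (index 6)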
Edit (2): I just realised that my question can be reformulated:
How can I assign each file and each folder on the disk a unique integer index, so that I can quickly find a file/folder by its index, find the index of a known file/folder, and perform basic file operations without changing many indices?
Edit (3): Here is a question about a similar but Linux-related problem. It is suggested to use filename and content hashing to identify files. Are there any Windows-specific improvements?
Your solution seems decent. You could also try to compress more using ad-hoc tricks, such as using only a few bits for common characters like "\", drive letters, and maybe common file extensions. You could also have a look at tries (http://en.wikipedia.org/wiki/Trie).
Regarding your second edit, this seems to match the features of a hash table, but this is for indexing, not compressed storage.
