How to calculate Google Drive folder size (space used) using rclone?

Google Drive does not give you a breakdown of each folder's size individually, so when you are getting close to your storage limit it would be nice to know which folders are taking up the most space.
How can I use rclone (rclone.org) to get the size of a folder, and how much space it is using, on Google Drive?

To get the total space used by your entire Google Drive use this:
rclone size "myGoogleDrive:/"
To get the total size (space used) of one particular folder, use this:
rclone size "myGoogleDrive:myFolderName/"
Thanks to the following post for helping figure this out: https://www.guyrutenberg.com/2017/08/23/calculate-google-drive-folder-size-using-rclone/
WINDOWS PHP SCRIPT TO FIND ALL TOP LEVEL FOLDER SIZES:
Note that the following script is written in PHP for Windows. You need to make sure the shell_exec() function is not disabled in php.ini. You can run the script from the command line or PowerShell using php filename.php.
It assumes you have rclone installed and included in your Windows PATH environment variable.
It first retrieves a list of all top-level folders from your Google Drive and breaks the returned string into an array.
It then loops through the array, pulls each directory name out, and gets its size.
<?php
echo "\r\nGOOGLE DRIVE - Total Storage Space Used (by top level directories)\r\n\r\n";

echo "Running RCLONE LSD...\r\n\r\n";
$dir_list_string = shell_exec('rclone lsd myGoogleDrive:'); // Get top level directory listing
echo "FOUND THESE...\r\n";
echo $dir_list_string."\r\n\r\n";

echo "Running RCLONE SIZE...\r\n\r\n";
$dir_list = explode("-1 ", $dir_list_string); // Split the returned string into an array
$count_list = count($dir_list);               // How many items in the array?
for ($i = 2; $i < $count_list; $i = $i + 2) {
    $dir_list[$i] = trim($dir_list[$i]);      // Get rid of white space around the name
    echo "Checking size of: ".$dir_list[$i]."\r\n";
    $size_string = shell_exec('rclone size "myGoogleDrive:'.$dir_list[$i].'/"'); // Get size of each directory
    echo $size_string."\r\n";
}

$size_all = shell_exec('rclone size "myGoogleDrive:/"');
echo "TOTAL SPACE USED FOR ENTIRE GOOGLE DRIVE:\r\n";
echo $size_all."\r\n";
echo "DONE\r\n\r\n";
?>
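If you would rather not parse the rclone lsd text output, here is a rough equivalent sketch in Python (my own illustration, not part of the original answer). It assumes the remote is called myGoogleDrive and relies on rclone's lsf --dirs-only and size --json options; check that your rclone version supports both.

import json
import subprocess

REMOTE = "myGoogleDrive:"  # assumption: change to your configured remote name

# List top-level directories (one name per line, trailing slash included).
dirs = subprocess.run(
    ["rclone", "lsf", "--dirs-only", REMOTE],
    capture_output=True, text=True, check=True
).stdout.splitlines()

total = 0
for d in dirs:
    # Ask rclone for the size of each directory as JSON, e.g. {"count": N, "bytes": N}
    out = subprocess.run(
        ["rclone", "size", "--json", REMOTE + d],
        capture_output=True, text=True, check=True
    ).stdout
    size = json.loads(out)["bytes"]
    total += size
    print(f"{d:<40} {size / 1024**3:8.2f} GiB")

print(f"{'TOTAL':<40} {total / 1024**3:8.2f} GiB")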

I don't know if this can really be considered an improvement on the previous answer, or simply another way to do it.
Anyway, this method also uses rclone and works roughly as well: you use a GUI to view the size of the folders, and although GUIs are usually slower than a pure command-line interface, you can browse all sub-folders and their sizes.
Download Rclone and set it up to use Google Drive.
Once downloaded and configured for Google Drive, type :
rclone mount GDrive: d:
Use Windows Explorer or any folder-size viewer (WinDirStat, TreeSize) to view all folder details and folder sizes.
I will explain each point in detail.
Downloading rclone is straightforward. Go here: https://rclone.org/downloads/
Setting up rclone is also very straightforward and, which is not often the case for other software, the documentation from the developers is very good: https://rclone.org/drive/
To mount it as a local drive: rclone lets you mount a 'cloud folder' as a local drive on Windows, and the rclone code seems to be very well tested, because this feature is very stable. But to do this, you must first download and install WinFsp: https://github.com/billziss-gh/winfsp
You then type rclone mount GDrive: d: to mount your cloud storage called GDrive: as the drive letter D: ("GDrive:" and "D:" CAN change according to the configuration you made earlier and the drives on your computer, so adjust them to your setup).
Finally, you can view the folders and analyse them with an adequate software, as any other files on your system.

Related

How does a physical disk read work with volume shadow copies on NTFS?

My goal is to make a backup program that reads a physical disk (with NTFS partitions) while using VSS for data consistency.
I use the Windows API function CreateFile with '\\.\PhysicalDriveN',
as described here (basically, it allows me to access a disk as one big file):
https://support.microsoft.com/en-us/help/100027/info-direct-drive-access-under-win32
For tests I create volume shadow copies with this command:
wmic shadowcopy call create Volume='C:\'
This is a temporary solution; I plan on using VSS from the program itself.
My questions are:
How are volume shadow copies stored? Does a shadow copy store the data that has been modified since it was taken, or does it store the modifications made since the previous shadow copy?
In the first case:
when I read the disk, will I get consistent data (including NTFS metadata files)?
In the other case:
can I access a volume shadow copy the same way I would access a disk/partition (in order to read hidden metadata files, etc.)?
- I am currently using Windows 7 but plan on using this on different versions of Windows Server.
- I've read a lot of Microsoft documentation about VSS, but how it works is still really unclear to me (if you answer with a quote from it, please explain a bit what it means).
- I know that volume shadow copies are stored in the folder "System Volume Information" as files with names like {3808876b-c176-4e48-b7ae-04046e6cc752}.
"how are stored Volume shadows? does it stores data that have been modified since the volume shadow or does it store modification made since the last volume shadow?"
A hardware or software shadow copy provider uses one of the following methods for creating a shadow copy:(Answer by msdn doc)
Complete copy: This method makes a complete copy (called a "full copy" or "clone") of the original volume at a given point in time. This copy is read-only.
Copy-on-write: This method does not copy the original volume. Instead, it makes a differential copy by copying all changes (completed write I/O requests) that are made to the volume after a given point in time.
Redirect-on-write: This method does not copy the original volume, and it does not make any changes to the original volume after a given point in time. Instead, it makes a differential copy by redirecting all changes to a different volume.
"when i read the disk, will i get consistent data (including ntfs metadata files)?"
Even if an application does not have its files open in exclusive mode, it is possible—because of the finite time needed to open, back up, and close a file—that files copied to storage media may not all reflect the same application state.
"can i access a volume shadow the same way i would access a disk/partition? (in order to read hidden metadata files, etc)"
Requester Access to Shadow Copied Data:
Paths on the shadow copied volume are obtained by replacing the root of the original path with the device object. For example, given a path on the original volume of "C:\DATABASE*.mdb" and a VSS_SNAPSHOT_PROP instance of snapProp, you would obtain the path on the shadow copied volume by concatenating snapProp.m_pwszSnapshotDeviceObject, "\", and "\DATABASE*.mdb".
So I did more tests, and volume shadow copies are actually made at the block level, not the file level. It means that using CreateFile with the path
\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1 works in a similar way to using CreateFile with the path \\.\C:
So yes, you can access a shadow copy's file system; it has its own boot sector, MFT, etc.
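As a quick way to see this for yourself, here is a minimal Python sketch (my own illustration, not from the answer above) that opens a shadow copy device and reads its first sector. The device index HarddiskVolumeShadowCopy1 is just an example (list existing copies with vssadmin list shadows), and the script must be run as administrator; on an NTFS volume the OEM ID "NTFS" should appear at offset 3 of the boot sector.

# Read the boot sector of a shadow copy device (run as administrator).
# The device name below is an example; adjust the index to an existing shadow copy.
DEVICE = r"\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1"

with open(DEVICE, "rb") as dev:
    boot_sector = dev.read(512)  # device reads must be sector-aligned; 512 bytes is one sector

# NTFS stores the OEM ID "NTFS    " at offset 3 of the boot sector.
print("OEM ID:", boot_sector[3:11])
print("Looks like NTFS:", boot_sector[3:7] == b"NTFS")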

Add disk space to C: with a batch script

Is it possible to add (free) disk space to C: with a batch script?
You could write a script which deletes the most common locations of temp files and caches on your system in order to free up space. Creating space out of nowhere is not possible, though, obviously. If you do not need to automate the cleanup, you are probably better off using some cleanup tool for Windows; simply google "cleanup tool windows", for example.
If you want to identify the most bloated places on your system, I found it very useful to use a tool showing a sunburst diagram of the places taking the most space on your hard drive. Such a tool would be, for instance, http://www.jgoodies.com/freeware/jdiskreport/
If you are talking about resizing a partition, see e.g. https://www.partition-tool.com/resource/expand-windows-7-partition.htm for non-automated options, or http://www.itprotoday.com/management-mobility/formatting-and-resizing-partitions-diskpart for being able to do that in a batch file.
Yes, you can extend a partition's size using a script, but with precaution.
There are many tools available for doing the same task safely, but since you asked, check a few examples before proceeding:
Diskpart Scripts and Examples - Microsoft
User input for a DISKPART batch file
Extend a Basic Volume and Increase Disk Space
For example, check manually first (use with caution):
Open cmd and enter diskpart.
Then, at the DISKPART prompt, enter the commands below:
list volume - this lists all your volumes/partitions
select volume n - this selects the volume to resize/extend; n is your volume number
extend size=10240 - enter the size (in MB) you want to add to the selected volume; 10240 MB = 10 GB
A scripted version of the same steps is sketched below.
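To run those same steps non-interactively, diskpart also accepts a script file via diskpart /s. Below is a minimal sketch in Python (a plain batch file invoking diskpart /s would work just as well); the volume number and size are placeholders, the command must be run from an elevated prompt, and extend only succeeds if unallocated space lies directly after the selected volume.

import os
import subprocess
import tempfile

# Placeholders: set your own volume number and the size (in MB) to add.
VOLUME_NUMBER = 1
EXTEND_MB = 10240  # 10240 MB = 10 GB

script = f"select volume {VOLUME_NUMBER}\nextend size={EXTEND_MB}\n"

# Write the diskpart commands to a temporary script file.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(script)
    script_path = f.name

# diskpart /s runs the commands from the file (requires an elevated prompt).
result = subprocess.run(["diskpart", "/s", script_path], capture_output=True, text=True)
print(result.stdout)
os.remove(script_path)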

Finding actual size of a folder in Windows

On my home desktop, which is a Windows machine, I right-click on the C:\Windows folder, open Properties, and it displays a size and a size on disk.
If I use the du tool provided by Microsoft Sysinternals:
du C:\Windows
This produces
Files: 77060
Directories: 21838
Size: 31,070,596,369 bytes
Size on disk: 31,151,837,184 bytes
If I run the same command as administrator
Files: 77894
Directories: 22220
Size: 32,223,507,961 bytes
Size on disk: 32,297,160,704 bytes
With PowerShell ISE running as administrator, I ran the following PowerShell snippet from this SO answer:
"{0:N2}" -f ((Get-ChildItem -path C:\InsertPathHere -recurse | Measure-Object -property length -sum ).sum /1MB) + " MB"
which output
22,486.11 MB
The C# code in the following SO answer, run from a command prompt as Administrator, returns:
35,163,662,628 bytes
Although close, it still does not display the same as Windows Explorer. None of these methods therefore returns the actual size of the directory. So my question is this:
Is there a scripted or coded method that will return the actual folder size of C:\Windows?
If there is no way of retrieving the folder size, is there a way I can programmatically retrieve the information displayed by Windows Explorer?
Windows has a strange way of storing data: for example, while a file may be 1 MB in size, stored on disk it will probably occupy about 1.1 MB, because that includes the directory entry linking to the actual file on disk, and the estimated size does not include the additional data Windows stores alongside it.
Now, you're probably thinking, that's nice and all, but how do you explain the large size change when looking at the file size as admin? Good question: that is additional header/metadata stored in conjunction with the file which only admins are allowed to see.
Coming back to your original question about telling the actual size of a file: that is quite hard to pin down on Windows because of the amount of additional data it keeps alongside the file. For readability purposes, or if you are using this in code, I'd suggest going by the size on disk reported from an admin prompt, not because the file is then at its maximum size (for me it is) but because it is usually the most reliable figure when you are looking to transfer the file: once you transfer it, some additional data will be removed or changed, and you already know what the likely swing in file size will be.
Also, you have to take the file-system format (NTFS, FAT32) into account, because how it segments files can change the reported size slightly if the file is huge, i.e. 1 GB+.
Hope that helps, mate, because we all know how wonderful Windows can be when you are trying to get information out of it (sigh).
The ambiguities and differences have a lot to do with junctions, soft links, and hard links (similar to symlinks if you come from the *nix world). The biggest issue: Almost no Windows programs handle hard links well--they look like (and indeed are) "normal" files. All files in Windows have 1+ hard links.
You can get an indication of "true" disk storage by using Sysinternals Disk Usage utility
> du64 c:\windows
Yields on my machine:
DU v1.61 - Directory disk usage reporter
Copyright (C) 2005-2016 Mark Russinovich
Sysinternals - www.sysinternals.com
Files: 204992
Directories: 57026
Size: 14,909,427,806 bytes
Size on disk: 15,631,523,840 bytes
Which is a lot smaller than what you would see if you right-click and get the size in the properties dialog. By default du64 doesn't double count files with multiple hard links--it returns true disk space used. And that's also why this command takes a while to process. You can use the -u option to have the disk usage utility naively count the size of all links.
> du64 -u c:\windows
DU v1.61 - Directory disk usage reporter
Copyright (C) 2005-2016 Mark Russinovich
Sysinternals - www.sysinternals.com
Files: 236008
Directories: 57026
Size: 21,334,850,784 bytes
Size on disk: 22,129,897,472 bytes
This is much bigger--but it double-counts files that have multiple links pointing to the same storage space. Hope this helps.
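If you want the same "count each hard-linked file once" behaviour from a script rather than from du64, here is a rough Python sketch of the idea (my own illustration, not from the answer above). It relies on Python populating st_dev/st_ino on NTFS, does not descend into symlinked directories, and silently skips files it cannot stat, so treat the result as an approximation rather than an exact match for du64.

import os

def folder_size(root):
    # Sum file sizes under root, counting each hard-linked file only once.
    seen = set()   # (st_dev, st_ino) pairs already counted
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):  # os.walk skips symlinked dirs by default
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path, follow_symlinks=False)
            except OSError:
                continue  # permission denied, file vanished, etc.
            key = (st.st_dev, st.st_ino)
            if key in seen:
                continue  # already counted via another hard link
            seen.add(key)
            total += st.st_size
    return total

if __name__ == "__main__":
    print(folder_size(r"C:\Windows"), "bytes")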

Transferring (stopping, resuming) file using rsync

I have an external hard-drive that I suspect is on its way out. At the minute, I can transfer files from it, but only for a while. Unfortunately, I have one single file that's >50GB in size. My solution to this is to use rsync to transfer this one particular file a bit at a time, leave the drive to rest (switch it off), and resume a little while later.
I'm using rsync --partial --progress --inplace --append -a /Volumes/Backup\ Drive/chris/Desktop/Recording\ Sessions/S1/Session\ 1/untitled ~/Desktop/temp to transfer it. (The file is in the untitled folder, which I'm moving into the temp folder) However, after having stopped it and resumed it, it seems to be over-writing the previous attempt at the file, meaning I don't really get any further.
Is there something I'm missing? :X
Thank you ^_^
EDIT: Still don't know :\
Well, since this is a programming site, here's a program to do it. I tested it on OS X, but you should definitely test it on some small files first to make sure it does what you want:
#!/usr/bin/env python
import os
import sys

source = sys.argv[1]      # file to copy from
target = sys.argv[2]      # file to copy to
begin = int(sys.argv[3])  # byte offset to start copying at
end = int(sys.argv[4])    # byte offset to stop at (not included)

# Write into the existing target in place, otherwise create it.
mode = 'r+b' if os.path.exists(target) else 'w+b'

with open(source, 'rb') as source_file, open(target, mode) as target_file:
    source_file.seek(begin)
    target_file.seek(begin)
    buffer = source_file.read(end - begin)
    target_file.write(buffer)
You run this with four arguments: the source file, the destination, and two numbers. The first number is the byte offset to start copying from (so on the first run you'd use 0). The second number is the byte offset to copy until (not including it). On subsequent runs you'd always use the previous fourth argument as the new third argument (new begin equals old end), and just keep going like that until it's done, using whatever chunk sizes you like along the way; an example invocation is shown below.
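For example, assuming you saved the script above as chunkcopy.py (the script name and the source file name are placeholders, not from the question), copying the first two 1 GiB chunks would look like this, powering the drive down between runs:

python chunkcopy.py /path/to/dying-drive/bigfile ~/Desktop/temp/bigfile 0 1073741824
python chunkcopy.py /path/to/dying-drive/bigfile ~/Desktop/temp/bigfile 1073741824 2147483648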
I know this is related to macOS, but the best way to get all the files off a dying drive is with GNU ddrescue. I have no idea if this runs nicely on macOS, but you can always use a Linux live-usb to do this. You'll want to open a terminal and be either root (preferred) or use sudo.
Firstly, find the disk that you want to backup. This can be done by running the following. Make note of the partition name or disk name that you want to back up. Hard drives/flash drives will typically use the format sdX, where X is the drive letter. Partitions will be listed under sdX1, sdX2... etc. NVMe drives/partitions follow a similar naming convention.
lsblk -o name,size,label,fstype,model
Mount and change directory (cd) to a writable location that is bigger than the drive/partition you want to back up.
Now we are going to do a first pass over the drive/partition, without retrying problematic sections. This ensures that ddrescue does not cause any more damage by dwelling on a bad section. Think of it like a hole in a sweater -- you wouldn't want to keep picking at the hole or it would get bigger. Run the following, with sdX replaced with the drive/partition name from earlier:
ddrescue -d /dev/sdX backup.img backup.logfile
The -d flag uses direct disk access and ignores the kernel cache, and the logfile is important in case the drive gets disconnected or the process stops somehow.
Run ddrescue again with the -r flag. This will retry bad sections 3 times. Feel free to run this a few times, but note that ddrescue cannot restore everything. From my experience it usually restores in the high 90%s, and many of the files are system files (aka not your personal files).
ddrescue -d -r3 /dev/sdX backup.img backup.logfile
Finally, you can use the image however you want. You can either mount it to copy the files off or use it in a virtual machine/burn it to a working drive with dd. Do note that the latter options will not always work if system critical files were damaged.
Good luck and remember to make backups!

Calculate the total space consumption of specific files in unix terminal

I have a folder containing .tcb and .tch files. I need to know the total size of all the .tcb files together, and likewise of all the .tch files.
I did it like this:
1) I created a temp folder and then:
mv *tch temp
2) and then:
du -sk temp
I found the command on the Internet, and Wikipedia says this: "du (abbreviated from disk usage) is a standard Unix program used to estimate the file space usage". I think the reason it says it is an estimate is that if there are links, the size of the link will be shown instead of the linked file.
But if I do
ls -l
in the temp folder (which contains all the *.tch files) and then sum up the sizes displayed in the terminal, I get a different total size. Why is that the case?
In sum, what I need is a command which shows me the real total file size of all the .tch files in a folder that can also contain other file types.
I hope anyone can help me with that. Thanks a lot!
You can use the -L option to du if you want to follow symbolic links (that is, calculate the size of the link target, not of the link itself). You can also use the -c option to display a grand total at the end.
Armed with those options, try du -skLc *.tch.
For more details on du, see this manpage.
Look at the specific man page for your version of du, as they vary considerably in how they count.
"Approximate" can be because:
Blocks used or bytes used can be reported; blocks over-state file sizes that aren't exact multiples of the block size, but more accurately represent "space used that I can't use for other stuff".
Unix files can have "holes", created by seeking a long way and then writing. The OS doesn't actually allocate space for the skipped holes.
Symbolic links may or may not be dereferenced to the real file they point to.
If you just want the byte count, use wc -c *.tcb (the last line of the output is the total).
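If you would rather avoid shell globbing and counting quirks entirely, a few lines of Python give the same "real file size" totals (a sketch of my own, not from the answers above; it sums st_size, i.e. logical bytes, not blocks on disk):

from pathlib import Path

folder = Path(".")  # the folder containing the mixed .tcb/.tch files

tch_total = sum(p.stat().st_size for p in folder.glob("*.tch"))
tcb_total = sum(p.stat().st_size for p in folder.glob("*.tcb"))

print(f".tch files: {tch_total} bytes")
print(f".tcb files: {tcb_total} bytes")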
