Temporary internet files not deleting even after Disk Cleanup - Windows

Temporary internet files are not being deleted even after I run Disk Cleanup. They occupy 9.17 GB, as shown in the attached screenshot, and after Disk Cleanup it still shows the same 9.17 GB.
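If Disk Cleanup keeps reporting the same size, one thing worth trying is clearing the cache folder directly. A minimal sketch, assuming the default Windows 10 location for Temporary Internet Files (close your browsers first; files still in use will simply be skipped):
:: Delete cached files under the default IE/Edge cache location (run in an elevated command prompt)
del /f /s /q "%LOCALAPPDATA%\Microsoft\Windows\INetCache\*"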

Related

C drive free space drops very fast with no obvious reason

My operating system is Windows 10 and I have a problem with the free space dropping for no reason.
A couple of days ago I ran some Python code in a Jupyter notebook, and in the middle of the execution my C drive ran out of space (there had been ~50 GB free). Since then, the C drive's free space changes significantly (even shrinking to a few MB) for no obvious reason.
I then found some huge files in a PyCharm temporary directory and freed 47 GB of space, but after a short time the drive ran out of space again (I am not even running any code anymore)!
When I restart, the free space gradually starts to increase, and again after some time it shrinks to a few GB or even MB.
PS. I installed WinDirStat to show me the breakdown of disk usage, and it shows 93 GB under this path: C:\ProgramData\Microsoft\Search\Data\Applications\Windows\Files\Windows.edb, but I can't open the Data folder in File Explorer, and it shows 0 bytes when I open the folder properties.
Windows.edb is the index database of the Windows Search function. It provides data to speed up searching in the file system by indexing files. There are several guides on the internet about reducing its size. The radical way would be deleting it, but I do not recommend this. You would have to stop Windows Search to do so:
net stop "Windows Search"
del %PROGRAMDATA%\Microsoft\Search\Data\Applications\Windows\Windows.edb
net start "Windows Search"
You wrote in your question that the file suddenly grew while your program was running. The program is probably creating files there that Windows Search then indexes. You should mark those files, or better the folder where they are created, as not to be content-indexed. If all this fails, you could turn indexing off entirely, which slows down Windows Search.
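For example, a hedged one-liner to mark a folder (and everything beneath it) as not content-indexed from a command prompt; the path is a placeholder for wherever your program creates its files:
:: +I sets the "not content indexed" attribute; /S recurses into files, /D applies it to folders too
attrib +I "C:\path\to\output" /S /D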

Delays in freeing disk space with Dropbox Smart Sync on macOS/APFS - what dangers lurk?

As explained by Dropbox, Smart Sync is a feature "that helps you save space on your hard drive. Access every file and folder in your Dropbox account from your computer, using virtually no hard drive space. ... With Smart Sync, content on your computer is available as either online-only, local, or in mixed state folders."
Last night and this morning, I moved a large quantity of files from an external disk into Dropbox folders on my MacBook (macOS Mojave version 10.14.4), then selected those Dropbox folders to be "online-only". The files synced with Dropbox in the cloud rather quickly -- I saw them appear in the local folders of a desktop computer that shares the Dropbox -- but the grey icons (for "online only") took a long time to display in Finder. (More than twenty hours later, two larger folders still show the blue icon, for "syncing", even though their contents have long since appeared on the other computer.)
With growing alarm, I watched as each new directory added to Dropbox ratcheted up the amount of space used on the MacBook to dangerous levels (93%) even as large directories marked as "online only" continued to sync to the Dropbox cloud. I could only restore available space by moving some content back to an external disk.
Confusingly, information about how much space really remained was inconsistent. df showed 58 GB available:
Filesystem 1G-blocks Used Available Capacity Mounted on
/dev/disk1s1 465 403 58 88% /
while About This Mac => Storage showed 232 GB available.
According to one source, "the Storage tab in About This Mac ... can be useful as it is the only guide to what types of data are taking up storage space, but when you want to know how much space is used or free on any volume or disk, use Disk Utility: it’s much more likely to be accurate." Confusingly, however, my Disk Utility displayed both results:
433.68 GB used, 3.95 GB on other volumes, 62.45 GB free
Capacity 500.07 GB, Available: 232 GB (169.55 GB purgeable), Used: 433 GB
As explained by Dropbox, "setting files to be online only will free up space on your hard drive within minutes (as long as your computer is online and able to sync to Dropbox). However: ... macOS 10.13 (High Sierra) uses ... APFS. With APFS, the operating system takes snapshots of the file system and available hard drive space. These snapshots may not update after you've used Smart Sync to set Dropbox files as online only. This means that hard drive space you freed up with Smart Sync may not be immediately reflected or available if this snapshot hasn't updated. This hard drive space should eventually be freed up by the OS, but the amount of time this will take can vary. This isn't a behavior specific to Dropbox, but instead the designed behavior of macOS." On APFS, the placeholders for "online-only files use a small amount of space on your hard drive to store information about the file, such as its name and size. This uses less space than the full file." Indeed, files marked as "online-only" continue to show their non-zero (online) sizes (e.g., with ls and os.path.getsize()) as if they were still available locally.
I gather this is a macOS (i.e., APFS) issue, not specific to Dropbox.
My question: If Disk Utility shows 232 GB "available" but only 62.45 GB "free", what are the consequences? Would bad things happen if I were to add another 100 GB of files to the disk?
I am of course reluctant to add more content than there is free space just "as an experiment", but I can see how this could happen unintentionally.
This helped me: https://www.cbackup.com/articles/dropbox-taking-up-space-on-mac-6688.hmtl.html#A1
Solution 4. Clear the Dropbox cache folder
Generally, there is a hidden folder containing the Dropbox cache stored in your Dropbox root folder, named ".dropbox.cache". You can only see this folder if viewing hidden files and folders is enabled in the operating system.
If you delete a large number of files from Dropbox but your computer's hard drive does not reflect these deletions, the deleted files may still be held in the cache folder. You can manually clear the cache to free some space on the hard drive by following the steps below:
Open the Finder and select Go to folder... from the Go menu.
A dialog box should appear. Now copy and paste the following line into the box and press the return key:
~/Dropbox/.dropbox.cache
This will take you directly to the Dropbox cache folder. Delete the files in your cache by dragging them out of the Dropbox cache folder and into your Trash.
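If you prefer the Terminal over Finder, a hedged equivalent of the steps above (same cache path as given in the article; double-check it before deleting anything):
# Empty the hidden Dropbox cache folder in one go
rm -rf ~/Dropbox/.dropbox.cache/*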

Azure file access failure (ERROR 1/Invalid MS-DOS function) from a VHD mounted from the blob

After scaling to 500 VMs, some of them (about 20 last time I tried) fail to start properly because their startup script cannot read a file from the mounted VHD. The VHD is mounted from the blob, and then the startup script copies some files from it.
The logs show the following output when trying to copy the file:
2013/06/26 12:39:55 ERROR 1 (0x00000001) Copying File F:\Folder\file.xxx
Incorrect function.
When I try copy it manually in Windows Explorer, I get an error message with the following contents: Invalid MS-DOS function. (Try Again/Cancel)
The drive is visible in Windows Explorer, and you can navigate the folders (though, I think, not all of them).
Any ideas what can be causing it?
Some additional details: The VHD is mounted read-only from the blob by creating a snapshot for each of the machines. On most of the VMs, there are no problems accessing the files, but when you scale out, some of them fail to complete the operation.
Geo-replication is enabled.
Issues found: when scaling out, there was another vaguely related bottleneck which slowed down the startup so much that the SAS (shared access signature) expired, and we lost access to the drive. I'll have to rethink the way the files are accessed so that we do not have such slowdowns (this issue must be addressed anyway).
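If SAS expiry during a slow scale-out is the root cause, one mitigation is simply to issue the signature with a more generous validity window. A hedged sketch using the modern Azure CLI (which post-dates this question); the account name, container name, and expiry date are placeholders:
# Generate a read-only SAS for the container holding the VHD snapshots, valid for several days
az storage container generate-sas --account-name myaccount --name vhds --permissions r --expiry 2013-07-03T00:00Z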

Windows Batch Filesystem Backup

Update:
Ehh -- Even though this question isn't "answered", I've just emptied my pockets and purchased an SSD. My ramdisk software was going to cost just about as much anyway. I'm not particularly interested in an answer here anymore, so I'll just mark this as "answered" and go on with my life.
Thanks for the help.
I've got a program which writes files to a ramdisk (in Windows XP), and I need to copy its data from the ramdisk to a directory on my hard drive once it has finished execution. Obviously, space on a ramdisk is limited, and I need to free up as much of it as I can between runs. The simple solution is to copy the "data" folder that my program generates on the ramdisk to a location on the hard disk and recursively delete the "data" folder from the ramdisk.
There is a problem with that solution, however: my program looks at the filesystem and filenames to ensure that it doesn't overwrite files (if the most recent data file in the directory is 006.dat, it will write 007.dat instead of overwriting anything). I can't just delete the files once I'm done writing data, because the program needs that filesystem intact to record data without overwriting the old files once I have copied the data to my hard drive.
I'd like a simple little Windows batch script which I can execute after my program has finished writing data files to the ramdisk. This batch script should copy the ramdisk "data" folder to my hard disk and delete all the files from the ramdisk, then re-create the filesystem as it was but with all zero-byte files.
How would I go about this?
Could you simply have it delete all files EXCEPT the most recent, then you would still have 006 and your logger would generate 007?
That seems safer than creating a zero-length file, because you would have to make sure it wasn't copied over the real 006 on the backup.
Edit: Sorry, I can't help with how to do this solely in batch, but there are a bunch of Unix utilities, specifically find and touch, that are perfect for this. There are numerous Windows ports of these; search on SO for options.
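For what it's worth, here is a hedged batch sketch of the zero-byte approach the asker described, with a size guard so that a truncated placeholder is never copied over a real backup (the drive letters, paths, and .dat pattern are assumptions from the question):
@echo off
rem R:\data is the assumed ramdisk folder, D:\backup\data the hard disk destination
if not exist "D:\backup\data" mkdir "D:\backup\data"
for %%F in ("R:\data\*.dat") do (
    rem Skip files already truncated to zero bytes so they never overwrite a real backup
    if %%~zF gtr 0 (
        copy /y "%%F" "D:\backup\data\" >nul
        rem Truncate in place: the filename survives, the ramdisk space is freed
        type nul > "%%F"
    )
)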
Robocopy.exe (a free download in the Windows Server Resource Kit) can copy from one directory to another AND has an option to watch a directory for new files and copy them when they are closed or changed.

How do backup apps which create a system image handle disk changes during the image creation process?

I created a backup disk image of my disk yesterday and the software told me to close all Windows programs to make sure the process finishes successfully.
I did that, but I was wondering what happens if some program writes to the disk anyway during the process. Windows 7 is a complex system, and surely various log files and the like are written continuously (the disk has one partition, which contains the Windows installation too). How does the backup software handle the disk content changing during image creation?
What is the algorithm in this case?
Snapshotting, or "Shadow Copy" as Microsoft calls it; see Shadow Copy on Wikipedia.
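As a hedged illustration of the same mechanism, you can create and inspect a shadow copy yourself from an elevated command prompt on Windows 7 (on client editions vssadmin can only list shadow copies, so creation goes through WMI):
:: Create a point-in-time snapshot of the C: volume via WMI
wmic shadowcopy call create Volume="C:\"
:: List the shadow copies that now exist
vssadmin list shadows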
