Technical difference between format and quick format on the Windows platform?

I have noticed many times on my system that when I format my 16GB pen drive by right-clicking it and selecting Format, it takes a long time, but when I select Quick Format it takes much less time. Can anyone tell me what the technical difference between these two processes is?

When you choose to run a regular format on a volume, files are removed from the volume that you are formatting and the hard disk is scanned for bad sectors. The scan for bad sectors is responsible for the majority of the time that it takes to format a volume.
If you choose the Quick format option, format removes files from the partition, but does not scan the disk for bad sectors. Only use this option if your hard disk has been previously formatted and you are sure that your hard disk is not damaged.
If you installed Windows XP on a partition that was formatted by using the Quick format option, you can also check your disk by using the chkdsk /r command after the installation of Windows XP is completed.
Source: http://support.microsoft.com/kb/302686

Full format zeroes every sector (on Windows Vista and later; on XP it only scans for bad sectors), while quick format only rewrites the file system structures.
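The same difference is visible from the command line, where the /Q switch selects a quick format (the drive letter and file system below are only placeholders for your own pen drive):
format F: /FS:FAT32 /Q
Running the same command without /Q performs the full format, including the lengthy pass over every sector.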

Related

Add disk space to C: with a batch script

Is it possible to add free disk space to C: with a batch script?
You could write a script which deletes the most common locations of temp files and caches on your system in order to free up space. Creating space out of nowhere is obviously not possible, though. If you do not need to automate the cleanup, you are probably better off using a cleanup tool for Windows; simply search for "cleanup tool windows", for example.
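As a very small sketch of that kind of cleanup (the path below is just the most common suspect, and deleting temp files is always at your own risk):
del /q /s "%TEMP%\*"
A real script would add the browser caches and whatever else tends to pile up on your particular machine.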
If you want to identify the most bloated places on your system, I found it very useful to use a tool that shows a sunburst diagram of the places taking up the most space on your hard drive. One such tool is http://www.jgoodies.com/freeware/jdiskreport/
If you are talking about resizing a partition, see e.g. https://www.partition-tool.com/resource/expand-windows-7-partition.htm for non-automated options, or http://www.itprotoday.com/management-mobility/formatting-and-resizing-partitions-diskpart for a way to do it from a batch file.
Yes, you can extend a partition's size using a script, but take precautions.
There are many tools available that do the same task safely, but since you asked, check a few examples before proceeding:
Diskpart Scripts and Examples - Microsoft
User input for a DISKPART batch file
Extend a Basic Volume and Increase Disk Space
For example, to check manually first (use with caution):
Open cmd and enter diskpart. Then, at the DISKPART prompt, enter the following commands:
list volume - lists all your volumes/partitions
select volume n - selects the volume to resize/extend, where n is its volume number
extend size=10240 - adds the desired amount, in MB, to the selected volume (10240 MB = 10 GB)
(a scripted version of the same steps follows below)
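To run the same steps unattended from a batch file, diskpart can read its commands from a text file via the /s switch. A minimal sketch (the volume number and size are placeholders - run list volume yourself first, since extending the wrong volume risks data loss). Put this in a file such as extend_c.txt:
rem extend the selected volume by 10 GB (the size argument is in MB)
select volume 2
extend size=10240
and call it from the batch file with:
diskpart /s extend_c.txt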

Undo a botched command prompt copy which concatenated all of my files

In a Windows 8 Command Prompt, I had a backup drive plugged in and I navigated to my User directory. I executed the command:
copy Documents G:/Seagate_backup/Documents
What I assumed was that copy would create the Documents directory on my backup drive and then copy the contents of the C: Documents directory into it. That is not what happened!
I proceeded to wipe my hard drive and re-install the operating system, thinking I had backed up the important files, only to find out that copy had seemingly concatenated all the C: Documents files of different types (.doc, .pdf, .txt, etc.) into one file called "Documents". This file is of course unreadable, but opening it in Notepad reveals what happened: I can see some of my documents which were plain text throughout the massively long file.
How do I undo this!!? It's terrible because I was actually helping a friend and was so sure of myself but now this has happened. The only thing I can think of doing is searching for some common separator amongst the concatenated files and write some sort of script to split the file back apart. But then I would have to guess the extensions of each of the pieces...
Merging files together in the fashion that copy does discards important file system metadata such as file sizes and file names. While the file names may not be that important, the sizes are: both are what the OS uses to tell where one file ends and the next begins.
This problem might sound familiar if you have ever damaged your file allocation table and watched all your files disappear. In both cases you end up with a binary blob (be it an actual disk or something like your file, which resembles a disk image) that lacks any size and file name information.
Fortunately, this is where a lot of file system recovery tools can help. They specialize in pattern matching: they look for giveaway clues to what type a file is, where it starts and what its size is. This is possible because many file types begin with a set of magic numbers, which are normally used to let a program check that a file really is of the type its extension claims.
In principle, then, it is possible to undo this process more or less well. You will need data recovery tools or analysis tools like binwalk to pick the individual documents out of the concatenated binary blob; essentially the same tools that are used to recover deleted files should be able to extract your documents again, without their file names of course. I recommend renaming the file to a disk image (.img) and either mounting it from within the operating system as a virtual hard disk (don't worry that it has no file system - it should simply show up as an unformatted drive), or feeding it directly to a data recovery or analysis tool that can read binary files. binwalk, for instance, can do that directly, but may not find all types of files, as it is mainly meant for unpacking firmware images, which happen to be assembled in a similar way to how your files ended up.
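As a rough sketch of the binwalk route (Documents.img is just your concatenated file renamed, and binwalk's signature database will not recognise every document format):
binwalk Documents.img
binwalk -e Documents.img
The first command lists each signature binwalk recognises inside the blob together with its offset; the second attempts to carve those pieces out into separate files, which you would then have to rename and sort through by hand.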

Writing to /dev/loop USB image?

I've got an image that I write onto a bootable USB that I need to tweak. I've managed to mount the stick as /dev/loopX including allowing for the partition start offset, and I can read files from it. However writing back 'seems to work' (no errors reported) but after writing the resulting tweaked image to a USB drive, I can no longer read the tweaked files correctly.
The file that fails is large and also a compressed tarfile.
Is writing back in this manner simply a 'no-no' or is there some way to make this work?
If possible, I don't want to reformat the partition and rewrite from scratch because that will (I assume) change the UUID and then I need to go worry about the boot partition etc.
I believe I have the answer. When using losetup to create a writable virtual device from the partition on your USB drive (or image), you must specify the --sizelimit parameter as well as the offset. If you don't, writes can run past the last defined sector of the partition (presumably this requires the drive or image to have extra space after the partition). Linux reports no errors until you later try to read the data back. The key hints/evidence for this are that when reads of the (re)written data fail, dmesg shows read errors for attempts to read past the end of the drive, and fsck tools such as dosfsck also indicate that the file system claims to be larger than the device it sits on.
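As a sketch of what that looks like (the image name, loop device and numbers are placeholders; take the partition's start sector and sector count from fdisk -l and multiply both by the 512-byte sector size):
losetup --offset $((2048*512)) --sizelimit $((31266816*512)) /dev/loop0 usb.img
mount /dev/loop0 /mnt
With only --offset, the loop device extends to the end of the backing file or stick, so writes inside the file system can silently spill past the end of the partition.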

Finding actual size of a folder in Windows

On my home desktop, which is a Windows machine, I right-click on the C:\Windows folder, open Properties, and it displays the folder's size and size on disk (screenshot omitted).
If I use the du tool provided by Microsoft Sysinternals
du C:\Windows
This produces
Files: 77060
Directories: 21838
Size: 31,070,596,369 bytes
Size on disk: 31,151,837,184 bytes
If I run the same command as administrator
Files: 77894
Directories: 22220
Size: 32,223,507,961 bytes
Size on disk: 32,297,160,704 bytes
With PowerShell ISE running as administrator, I ran the following PowerShell snippet from this SO answer
"{0:N2}" -f ((Get-ChildItem -path C:\InsertPathHere -recurse | Measure-Object -property length -sum ).sum /1MB) + " MB"
which output
22,486.11 MB
The C# code in the following SO answer, run from a command prompt as Administrator, returns:
35,163,662,628 bytes
Although close, it still does not match what Windows Explorer displays. None of these methods therefore returns the actual size of the directory. So my question is this:
Is there a scripted or coded method that will return the actual folder size of C:\Windows?
If there is no way of retrieving the folder size, is there a way I can programmatically retrieve the information displayed by Windows Explorer?
Windows stores data in a somewhat peculiar way: while a file may be 1 MB in size, on disk it will typically occupy a bit more, because space is allocated in whole clusters and the file system also keeps metadata (directory entries and so on) alongside the actual data. For example, on an NTFS volume with 4 KB clusters, a 1 KB file still takes up a full 4 KB cluster on disk.
Now you're probably thinking, that's nice and all, but how do you explain the big jump when looking at the sizes as admin? That's because an elevated run can descend into directories and count files that a normal user isn't even allowed to see, so it simply finds more of C:\Windows.
Coming back to your original question about the actual size of the folder: that is quite hard to pin down on Windows because of all the extra data it keeps alongside the files you care about. For readability, or if you are using the number in some piece of code, I'd go with the "size on disk" figure from the elevated run, not just because it is the largest (for me it is), but because when you transfer the data, that's the most reliable figure to plan with; after a transfer some of that additional data is removed or changed, and you already know what the likely swing in size will be.
Also take into account the file system format (NTFS, FAT32), because the way it segments files into clusters can change the size on disk slightly, especially for huge files (1 GB++).
Hope that helps, mate, because we all know how wonderful Windows can be when you're trying to get information out of it (sigh).
The ambiguities and differences have a lot to do with junctions, soft links, and hard links (similar to symlinks if you come from the *nix world). The biggest issue: Almost no Windows programs handle hard links well--they look like (and indeed are) "normal" files. All files in Windows have 1+ hard links.
You can get an indication of "true" disk storage by using Sysinternals Disk Usage utility
> du64 c:\windows
Yields on my machine:
DU v1.61 - Directory disk usage reporter
Copyright (C) 2005-2016 Mark Russinovich
Sysinternals - www.sysinternals.com
Files: 204992
Directories: 57026
Size: 14,909,427,806 bytes
Size on disk: 15,631,523,840 bytes
Which is a lot smaller than what you would see if you right-click and get the size in the properties dialog. By default du64 doesn't double count files with multiple hard links--it returns true disk space used. And that's also why this command takes a while to process. You can use the -u option to have the disk usage utility naively count the size of all links.
> du64 -u c:\windows
DU v1.61 - Directory disk usage reporter
Copyright (C) 2005-2016 Mark Russinovich
Sysinternals - www.sysinternals.com
Files: 236008
Directories: 57026
Size: 21,334,850,784 bytes
Size on disk: 22,129,897,472 bytes
This is much bigger - but it double-counts files that have multiple links pointing to the same storage space. Hope this helps.
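One more note on the PowerShell snippet quoted in the question: without -Force, Get-ChildItem skips hidden and system files, of which C:\Windows has plenty, so its total is bound to come out low. A variation that also counts those (it is still unaware of hard links, so like du64 -u it will over-count shared storage):
"{0:N2} MB" -f ((Get-ChildItem -Path C:\Windows -Recurse -Force -ErrorAction SilentlyContinue | Measure-Object -Property Length -Sum).Sum / 1MB)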

How does file recovery software work?

I wanted to make some simple file recovery software, where I want to try to recover files which happen to have been deleted by pressing Shift + Delete. I'm working on Windows. Can anyone show me any links or documents which can help me do this programmatically? I know C, C++ and .NET. Any pointers?
http://www.google.hu/search?q=file+recovery+theory&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a :)
As far as I know, file recovery tools mainly look for file headers and/or file names on the disk, and then try to reconstruct the whole file from the header information.
This could be a good start: http://geeksaresexy.blogspot.com/2006/02/theory-behind-deleted-files-recovery.html
The principle behind all recovery tools is that deleting a file only removes a pointer in a folder, and (quick) formatting a partition only rewrites the first sectors of the partition, which contain the file system's headers. An in-depth analysis of the partition data (at sector level) can rebuild a large part of the file system data: cluster allocation tables, folders, and file cluster chains.
Of course, if you format the partition with a surface-test tool that rewrites every sector to make sure it is correct, nothing will be recoverable - unless you use specialized hardware to look at remanent magnetism on the edges of the actual tracks.
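If you want to experiment with that principle yourself (the question mentions C), here is a deliberately minimal sketch - not recovery-grade code - that scans a raw image or device for the PDF signature "%PDF" and prints the offset of every candidate. Real tools do the same for dozens of signatures (JPEG, PNG, ZIP/OOXML, ...) and then work out each file's extent from its header and footer structures:

/* Minimal sketch: report offsets of the "%PDF" magic number in a raw image. */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <raw image or device>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    const unsigned char sig[4] = { '%', 'P', 'D', 'F' };
    static unsigned char buf[1 << 20];          /* 1 MiB read buffer */
    size_t carry = 0;                           /* bytes kept from the previous read */
    unsigned long long base = 0;                /* file offset of buf[0] */
    size_t n;

    while ((n = fread(buf + carry, 1, sizeof buf - carry, f)) > 0) {
        size_t total = carry + n;
        for (size_t i = 0; i + sizeof sig <= total; i++)
            if (memcmp(buf + i, sig, sizeof sig) == 0)
                printf("possible PDF header at offset %llu\n",
                       base + (unsigned long long)i);
        /* keep the last few bytes so a signature split across reads is not lost */
        carry = total < sizeof sig - 1 ? total : sizeof sig - 1;
        memmove(buf, buf + total - carry, carry);
        base += total - carry;
    }
    fclose(f);
    return 0;
}

Compile it with any C99 compiler and point it at a disk image; the reported offsets are where a carving tool would start cutting.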
In Windows, when a file is permanently deleted it is not actually wiped from the disk; instead the first character of its directory entry is replaced with a marker byte (0xE5 on FAT, if I remember correctly), and Windows ignores such entries when showing the folder in Explorer. Recovery tools search the disk for these orphaned entries, and whether the recovered file is intact depends on how much of its data has since been overwritten. I don't know whether Windows still uses this exact scheme, but I read about it somewhere a long time back.
