How do backup apps which create a system image handle disk changes during the image creation process? - windows

I created a backup disk image of my disk yesterday and the software told me to close all Windows programs to make sure the process finishes successfully.
I did that, but I was wondering what happens if some program writes to the disk during the process anyway. Windows 7 is a complex system, and surely various log files and the like are written continuously (the disk has one partition, which also contains the Windows installation). How does the backup software handle it when the disk contents change during image creation?
What is the algorithm in this case?

Snapshotting, or 'Shadow Copy' as Microsoft calls it; see the Shadow Copy article on Wikipedia.
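In practice the backup application acts as a VSS "requester": it asks the Volume Shadow Copy Service to freeze a point-in-time view of the volume and then images that frozen snapshot device while the live volume keeps changing (changed blocks are preserved via copy-on-write). Below is a minimal C++ sketch of that requester sequence; error handling, writer components and cleanup are largely omitted, it assumes administrator rights, and it is only an outline, not a complete backup tool.

    #include <windows.h>
    #include <vss.h>
    #include <vswriter.h>
    #include <vsbackup.h>
    #include <cstdio>

    #pragma comment(lib, "VssApi.lib")
    #pragma comment(lib, "ole32.lib")

    // Sketch of a VSS requester: snapshot C: and print the shadow-copy device
    // that a backup tool would image instead of the live volume.
    int main()
    {
        CoInitializeEx(nullptr, COINIT_MULTITHREADED);

        IVssBackupComponents* vss = nullptr;
        CreateVssBackupComponents(&vss);
        vss->InitializeForBackup();

        IVssAsync* async = nullptr;
        vss->GatherWriterMetadata(&async);
        async->Wait();
        async->Release();

        vss->SetBackupState(false, false, VSS_BT_FULL, false);

        VSS_ID snapshotSetId, snapshotId;
        VSS_ID providerId = {};                        // zero GUID = default provider
        vss->StartSnapshotSet(&snapshotSetId);
        vss->AddToSnapshotSet(const_cast<wchar_t*>(L"C:\\"), providerId, &snapshotId);

        vss->PrepareForBackup(&async);
        async->Wait();
        async->Release();

        // DoSnapshotSet is the moment writers flush and the copy-on-write
        // snapshot is taken; everything read afterwards is a frozen view.
        vss->DoSnapshotSet(&async);
        async->Wait();
        async->Release();

        VSS_SNAPSHOT_PROP prop;
        vss->GetSnapshotProperties(snapshotId, &prop);
        wprintf(L"Read the image from: %s\n", prop.m_pwszSnapshotDeviceObject);
        VssFreeSnapshotProperties(&prop);

        // A real tool would now copy the data, then call BackupComplete and
        // release the snapshot; omitted here.
        vss->Release();
        CoUninitialize();
        return 0;
    }

The key point is that everything the tool reads after DoSnapshotSet comes from the frozen snapshot, so programs that keep writing to C: during imaging change the live volume but not the image.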

Related

Unable to shrink partition size

I am trying to install Linux on a computer that has Windows 7. The first step was shrinking the Windows partition, but Windows did not allow any reduction, so I followed a number of steps to disable "unmovable" files:
I disabled the Page File
I disabled hibernation
I disabled System Protection
After that nothing seemed to have changed, so I checked the disk fragmentation: it was 11% fragmented. Since then I have run at least four defrags, and I have also defragmented the free space using Defraggler.
As of now the disk looks like this (screenshot omitted).
Right now, Windows refuses to shrink the partition by any amount (I imagine that the files at the end of the disk are the troublesome ones).
Coming from a Linux background, I am unsure what else needs to be done in order to shrink the partition.
Are you using the Windows Disk Management tool to do the shrink? Here's a link for that method:
https://www.howtogeek.com/howto/windows-vista/resize-a-partition-for-free-in-windows-vista/
Also make sure the recycle bin on that drive is empty.
I finally figured it out.
The easiest way is just to use a live USB with GParted on it, since that allows you to move Windows-protected files around (the Windows OS is not loaded under the live distro).
If only defragmenting is needed, you can use Hiren's BootCD and the included Defraggler for the same purpose.
I had the same problem on Windows 10. It turned out that antivirus software running on the machine was preventing defragmentation from completing properly. I actually had to uninstall the antivirus temporarily. After that, the Disk Management tool was able to shrink the volume correctly.

Is it possible to recover a running VBScript file, if the original file was already deleted?

I have a VBScript that runs continuously on my system to monitor a web page in Internet Explorer.
By mistake, I permanently deleted this VBScript file from its original location on the system. However, the script is still in RAM and is still running and monitoring the web page.
This script is very important to me but I have lost it :(
I want to know if there is any way I can recover the code of the VBScript file from the system's RAM or from any temporary file, since the script is still running.
I am not allowed to use any file recovery software, so please don't suggest installing any third-party data recovery software.
Try using the 'ADPlus.vbs' script from WinDbg:
1. http://msdn.microsoft.com/en-us/windows/hardware/hh852365
2. http://support.microsoft.com/kb/286350
Since the code was still running, I followed the process below to recover it:
Go to Task Manager
Select the process and create a dump
Open the online dump analyser (www.osronline.com)
Upload the dump file
Download the dump analysis
The dump analysis provided almost 95% of the correct code. Code within some loops was distorted or changed, but since I was the owner of the code I was able to correct it.
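For reference, the Task Manager step can also be done programmatically. Below is a hedged C++ sketch that uses dbghelp's MiniDumpWriteDump with MiniDumpWithFullMemory so the whole process memory (including the script text) lands in the .dmp file; the PID and output path arguments are placeholders, and you may need to run it elevated.

    #include <windows.h>
    #include <dbghelp.h>
    #include <cstdio>
    #include <cstdlib>

    #pragma comment(lib, "dbghelp.lib")

    // Sketch: write a full-memory dump of a running process (e.g. the
    // wscript.exe instance hosting the lost script) so its memory can be
    // searched for the source text.
    int main(int argc, char** argv)
    {
        if (argc < 3) {
            printf("usage: %s <pid> <output.dmp>\n", argv[0]);
            return 1;
        }

        DWORD pid = (DWORD)strtoul(argv[1], nullptr, 10);
        HANDLE process = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                                     FALSE, pid);
        if (!process) {
            printf("OpenProcess failed: %lu\n", GetLastError());
            return 1;
        }

        HANDLE file = CreateFileA(argv[2], GENERIC_WRITE, 0, nullptr,
                                  CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (file == INVALID_HANDLE_VALUE) {
            printf("CreateFile failed: %lu\n", GetLastError());
            CloseHandle(process);
            return 1;
        }

        // MiniDumpWithFullMemory captures every readable page of the process,
        // which is what makes recovering the script text possible.
        BOOL ok = MiniDumpWriteDump(process, pid, file, MiniDumpWithFullMemory,
                                    nullptr, nullptr, nullptr);
        printf(ok ? "dump written\n" : "MiniDumpWriteDump failed\n");

        CloseHandle(file);
        CloseHandle(process);
        return ok ? 0 : 1;
    }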
Use HxD; it can view all of the RAM belonging to any process on the fly. It is commonly used to hack currently running games, etc.
After locating your script, you may need to clean up the alphanumeric mess interleaved with your code; Notepad++ and some regex knowledge can be useful.
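If you go the raw-memory or dump-file route, a small search tool can help locate the script before cleaning it up. Here is a hedged C++ sketch that scans a dump file for an ASCII marker string likely to appear in your source (the marker is whatever you choose, e.g. a distinctive variable name); note that the script text may also be stored as UTF-16 in memory, in which case you would search for the wide form of the marker.

    #include <cstdio>
    #include <cstring>
    #include <vector>

    // Sketch: scan a (possibly huge) dump file chunk by chunk for an ASCII
    // marker that should appear in the lost script, e.g. "Sub Main" or a
    // distinctive variable name, and print the file offset of each hit.
    // For brevity, matches that straddle a chunk boundary are ignored.
    int main(int argc, char** argv)
    {
        if (argc < 3) {
            printf("usage: %s <dumpfile> <marker>\n", argv[0]);
            return 1;
        }
        const char* marker = argv[2];
        const size_t markerLen = strlen(marker);

        FILE* f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        std::vector<char> buf(1 << 20);   // 1 MiB per chunk
        long long base = 0;               // file offset of the current chunk
        size_t n;
        while ((n = fread(buf.data(), 1, buf.size(), f)) > 0) {
            for (size_t i = 0; i + markerLen <= n; ++i) {
                if (memcmp(buf.data() + i, marker, markerLen) == 0)
                    printf("match at offset %lld\n", base + (long long)i);
            }
            base += (long long)n;
        }
        fclose(f);
        return 0;
    }

Once an offset is known, the surrounding region can be opened in a hex editor such as HxD and cleaned up as described above.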

Locking sharable memory

Is there a way to page another process's entire image into memory? In a couple of weeks, our IT staff will be replacing all of the "core" network switches. This will bring down the network. It will be done after normal business hours. During this time, several users will still be using a program that I have written. It would be a nightmare to install local copies of my program on each user's machine. The program normally runs from a network share. The only time the program accesses the network is when it executes its executable (image) code. How can I get the Windows Memory Manager to load the entire image into memory and keep it "locked" there until the network is back online?
You can relink your program with the /swaprun:net option:
http://msdn.microsoft.com/en-us/library/w0628bwh.aspx
You could write it so that it copies itself to a local temp directory, runs that copy as a separate process, and then kills itself (the first copy). I've done this little juggling act before, but whether your program will like being run from the temp directory depends on how it works.
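A hedged C++ sketch of that copy-and-relaunch trick; the temp file name and the "--local" marker argument are just illustrative, and it assumes the program tolerates running from a different directory.

    #include <windows.h>
    #include <string>
    #include <cstring>

    // Sketch: if started from the network share, copy the EXE to %TEMP%,
    // launch the local copy with a marker argument, and exit the original
    // so nothing keeps executing code off the network.
    int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR cmdLine, int)
    {
        // "--local" is an illustrative marker so the relaunched copy knows
        // it should not copy itself again.
        if (strstr(cmdLine, "--local") == nullptr) {
            char self[MAX_PATH], tempDir[MAX_PATH];
            GetModuleFileNameA(nullptr, self, MAX_PATH);
            GetTempPathA(MAX_PATH, tempDir);              // ends with a backslash

            std::string local = std::string(tempDir) + "myapp_local.exe";
            CopyFileA(self, local.c_str(), FALSE);        // overwrite any stale copy

            std::string cmd = "\"" + local + "\" --local";
            STARTUPINFOA si = { sizeof(si) };
            PROCESS_INFORMATION pi;
            if (CreateProcessA(nullptr, &cmd[0], nullptr, nullptr, FALSE,
                               0, nullptr, nullptr, &si, &pi)) {
                CloseHandle(pi.hThread);
                CloseHandle(pi.hProcess);
            }
            return 0;   // the network-share instance exits immediately
        }

        // ... normal program logic runs here, in the local copy ...
        MessageBoxA(nullptr, "Running from the local copy.", "Demo", MB_OK);
        return 0;
    }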
Locking the image in memory isn't going to work.
Windows doesn't necessarily load a 'static' copy of the executable into memory; it's free to shuffle chunks around and page parts in and out. It often loads resources (images, strings, etc.) from the executable after the program has started running, and it often loads external libraries dynamically as well.
Edited to add:
There is no such thing as "a process's entire image". Every thread, for example, gets its own allocation.
Maybe you should explain why running from a different location (i.e., a local copy of the binary) won't work for you.

Change Journal for Blocks in Windows (NTFS)

I have written a backup tool that is able to backup files and images of volumes for Windows. To detect which files have changed I use the Windows Change Journal. I already use the shadow copy functionality to do a consistent copy of both the files and the volume images.
To detect which blocks have changed I use hashes at the moment. This means the whole volume has to be read once (to see which blocks have changed, the hashes of all blocks have to be calculated).
The backup tool integrated into Windows 7 is able to create incremental volume images without checking all blocks, but I wasn't able to find an API for any kind of block-level change journal.
Does anybody know how to access this information?
(I'm willing to dive deep into NTFS internals - even reading and parsing special files)
I don't think block-level change information is available anywhere. Most probably the integrated Windows 7 backup installs a file system filter driver, as some backup products and antivirus software do. A filter driver can intercept all file system calls and in this way know which blocks changed. If you do this, you can basically build your own change journal that works at the block level, but only for the files you are interested in.
I would really like to know a better answer myself here.
When you say Windows Change Journal, I take it you are referring to the NTFS USN journal? It looks very much like the Windows 7 backup uses a combination of VSC (Volume Shadow Copy) and the NTFS USN journal to detect changes and create incremental images, much like you are already doing.
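For the USN side of this, here is a minimal C++ sketch that reads recent records from the NTFS change journal with DeviceIoControl (requires administrator rights). Note that it is strictly file-level (it tells you which files changed and why, not which blocks), which is consistent with the answers above that no public block-level journal seems to exist.

    #include <windows.h>
    #include <winioctl.h>
    #include <cstdio>

    // Sketch: enumerate recent records from the NTFS USN change journal on C:.
    // This is file-level change tracking; it does not expose changed blocks.
    int main()
    {
        HANDLE vol = CreateFileW(L"\\\\.\\C:", GENERIC_READ,
                                 FILE_SHARE_READ | FILE_SHARE_WRITE,
                                 nullptr, OPEN_EXISTING, 0, nullptr);
        if (vol == INVALID_HANDLE_VALUE) {
            wprintf(L"open volume failed: %lu\n", GetLastError());
            return 1;
        }

        USN_JOURNAL_DATA_V0 journal;
        DWORD bytes;
        if (!DeviceIoControl(vol, FSCTL_QUERY_USN_JOURNAL, nullptr, 0,
                             &journal, sizeof(journal), &bytes, nullptr)) {
            wprintf(L"FSCTL_QUERY_USN_JOURNAL failed: %lu\n", GetLastError());
            return 1;
        }

        READ_USN_JOURNAL_DATA_V0 read = {};
        read.StartUsn = journal.FirstUsn;
        read.ReasonMask = 0xFFFFFFFF;        // report every kind of change
        read.UsnJournalID = journal.UsnJournalID;

        BYTE buffer[64 * 1024];
        if (DeviceIoControl(vol, FSCTL_READ_USN_JOURNAL, &read, sizeof(read),
                            buffer, sizeof(buffer), &bytes, nullptr)) {
            BYTE* p = buffer + sizeof(USN);  // first 8 bytes = next USN to read from
            while (p < buffer + bytes) {
                USN_RECORD_V2* rec = (USN_RECORD_V2*)p;
                if (rec->RecordLength == 0) break;
                wprintf(L"%.*s  reason=0x%08lx\n",
                        (int)(rec->FileNameLength / sizeof(WCHAR)),
                        (WCHAR*)((BYTE*)rec + rec->FileNameOffset),
                        rec->Reason);
                p += rec->RecordLength;
            }
        }

        CloseHandle(vol);
        return 0;
    }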

Let's say I am writing my code and then my PC dies. How necessary is it to do a complete scan if I don't want my later source code to be contaminated?

Let's say I am writing a Ruby on Rails program and, while editing a file, the machine blue-screens. In this case, how necessary is it to re-scan the whole hard drive if I don't want my future files to be damaged?
Say the OS is deleting a temp file at the moment my computer crashes and still has pointers to some sectors on the hard drive. If my newly created files happen to land in those sectors, then the next time the OS cleans up files it may think those "left-over" sectors weren't cleaned last time, clean them again, and damage our source code (especially with Ruby on Rails, where source code can be generated by Rails rather than written by us, so we may not know why our Rails server doesn't work if a file is affected). We can rely on SVN, but what if the file is affected before we check it in?
I think the official answer will be: "always scan the disk after a crash or power outage, check the data and even the free space, and attempt to fix any bad sectors." But with hard drives as big as they are nowadays, it could take two hours to scan everything, and especially at work we cannot wait two hours in the middle of the day.
Does anyone know whether, with modern OSes like XP, Vista, Mac OS, and Linux (say, when the power cord was loose and the machine didn't shut down properly, just dying at 0% battery), our source code is safe? Do they structure their writes to sectors so that at worst they waste a sector instead of overlapping sectors?
With a modern journaling file system (ext3/4, NTFS), the only problem would be that a file could be left in a "half-written" state. Obviously scanning is not going to help with this (that's what backups are for). The file system itself should not be corrupted. If you are using something like FAT, then yes, you should worry about this.
There's really only one issue here: is any file that was being written at the time left in some kind of "half-written" state?
The primary cause of this would be the application or editor writing the file when the machine dies halfway through. In that case the file being written is, well, half done. If it was overwriting the original file, the original is "gone" and the new one is "half done". If you don't have a backup of the file, then, well, you have a problem.
As for a file having dangling pointers, references to sectors that were never written, or some such thing: that depends on your file system.
The major modern file systems are journaled and "won't allow" this to happen. You may end up with a "half-written" file, but that's because the application only got to write half of it, not because the file system lost track of a sector pointer.
If you're playing file system games for performance or whatever (such as using UFS without logging), then you would want to run fsck to clean up the file system's metadata.
But if you're using a modern operating system and file system (i.e. anything from the past 5 years), you won't have this problem.
Finally, if you do have version control running, just do an "svn status"; it will show you any "corrupted" files, since they will have changed and it will detect that as well.
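As an aside on the "half-written file" case: a standard way applications avoid it (not something this answer prescribes, just a common technique) is to write a temporary file, flush it, and then swap it over the original, so a crash leaves either the old or the new version but never a torn one. A hedged Win32 C++ sketch:

    #include <windows.h>
    #include <cstdio>
    #include <cstring>

    // Sketch: save "contents" to "path" without ever leaving the target file
    // half-written: write a temp file, flush it to disk, then swap it in place.
    bool SaveAtomically(const char* path, const char* contents)
    {
        char tmp[MAX_PATH];
        snprintf(tmp, sizeof(tmp), "%s.tmp", path);   // temp name is illustrative

        HANDLE h = CreateFileA(tmp, GENERIC_WRITE, 0, nullptr,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (h == INVALID_HANDLE_VALUE) return false;

        DWORD written;
        BOOL ok = WriteFile(h, contents, (DWORD)strlen(contents), &written, nullptr);
        ok = ok && FlushFileBuffers(h);               // force the data out of the cache
        CloseHandle(h);
        if (!ok) return false;

        // Replace the original in one step: a crash before this call leaves the
        // old file intact, a crash after it leaves the new file in place.
        return MoveFileExA(tmp, path,
                           MOVEFILE_REPLACE_EXISTING | MOVEFILE_WRITE_THROUGH) != 0;
    }

    int main()
    {
        SaveAtomically("example.txt", "new file contents\n");
        return 0;
    }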
I found some information at:
http://en.wikipedia.org/wiki/Journaling_file_system
Journalized file systems
File systems may provide journaling, which provides safe recovery in the event of a system crash. A journaled file system writes some information twice: first to the journal, which is a log of file system operations, then to its proper place in the ordinary file system. Journaling is handled by the file system driver, and keeps track of each operation taking place that changes the contents of the disk. In the event of a crash, the system can recover to a consistent state by replaying a portion of the journal. Many UNIX file systems provide journaling including ReiserFS, JFS, and Ext3.
In contrast, non-journaled file systems typically need to be examined in their entirety by a utility such as fsck or chkdsk for any inconsistencies after an unclean shutdown. Soft updates is an alternative to journaling that avoids the redundant writes by carefully ordering the update operations. Log-structured file systems and ZFS also differ from traditional journaled file systems in that they avoid inconsistencies by always writing new copies of the data, eschewing in-place updates.
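To make the quoted "write twice" idea concrete, here is a toy C++ sketch of write-ahead journaling for a single append. The file names and record format are invented for the example, and a real file system does this at the block/metadata level with checksums and forced disk flushes.

    #include <cstdio>
    #include <string>

    // Toy illustration of the "write twice" idea: (1) append the intended
    // change to a journal and flush it, (2) apply the change to the data file,
    // (3) mark the journal entry committed. After a crash, replaying journal
    // entries that lack a COMMIT marker restores a consistent state.
    void WriteJournaled(const std::string& data)
    {
        // Step 1: log the operation before touching the real data.
        FILE* journal = fopen("journal.log", "ab");
        if (!journal) return;
        fprintf(journal, "BEGIN %zu %s\n", data.size(), data.c_str());
        fflush(journal);      // a real journal would also fsync to the disk here

        // Step 2: perform the real update.
        FILE* dataFile = fopen("data.txt", "ab");
        if (!dataFile) { fclose(journal); return; }
        fwrite(data.data(), 1, data.size(), dataFile);
        fflush(dataFile);
        fclose(dataFile);

        // Step 3: mark the operation complete so recovery can skip it.
        fprintf(journal, "COMMIT\n");
        fflush(journal);
        fclose(journal);
    }

    int main()
    {
        WriteJournaled("hello, journaled world\n");
        return 0;
    }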
