Is the NTFS metadata file $LogFile deletable? - windows

I want to write a backup application that:
reads an NTFS partition without using the Windows API (done)
writes a bootable NTFS partition back from the data it saved (planned)
I have a problem with $LogFile. I'm worried that if I simply copy it, Windows might decide the partition is in a bad state and try to fix it (which would probably corrupt things, since that scenario isn't supposed to happen). I currently have very little understanding of how $LogFile works, except that it holds some kind of transaction records and uses sequence numbers for its entries.
My questions are the following:
What happens if the MFT entries don't match the $LogFile content (for example, the sequence numbers)?
Can I bypass this by not copying $LogFile, or at least by removing part of its content? (I guess Windows wouldn't try to fix anything, but I can't be sure.)
If that doesn't work, what could I do to make $LogFile safe to copy?
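(Note: one approach seen in practice - it is what ntfs-3g's ntfsfix does when it resets the journal - is not to copy the live $LogFile records at all, but to write an "empty" $LogFile of the same size filled with 0xFF bytes, so the NTFS driver reinitializes the log on the next mount instead of replaying stale records. A minimal sketch, assuming your own reader already extracts the $LogFile data stream; reset_logfile() is a hypothetical helper, not a real API, and clearing the dirty flag in $Volume may also matter.)
#include <string.h>
#include <stddef.h>

/* Overwrite the extracted $LogFile data with 0xFF before writing it into the
 * restored partition, mimicking an erased/empty NTFS journal. */
void reset_logfile(unsigned char *logfile_data, size_t logfile_size)
{
    memset(logfile_data, 0xFF, logfile_size);
}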

Related

Writing to /dev/loop USB image?

I've got an image that I write onto a bootable USB stick and that I need to tweak. I've managed to mount the stick as /dev/loopX, including allowing for the partition start offset, and I can read files from it. However, writing back 'seems to work' (no errors reported), but after writing the resulting tweaked image to a USB drive, I can no longer read the tweaked files correctly.
The file that fails is large and also a compressed tarfile.
Is writing back in this manner simply a 'no-no' or is there some way to make this work?
If possible, I don't want to reformat the partition and rewrite from scratch because that will (I assume) change the UUID and then I need to go worry about the boot partition etc.
I believe I have the answer. When using losetup to create a writeable virtual device from the partition on your USB drive, you must specify the --sizelimit parameter as well as the --offset parameter. If you don't, writes can go past the last defined sector of the partition (presumably this requires your USB drive to have extra space after the partition). Linux reports no errors until later, when you try to read the data back. The key evidence for this is that when reads (or re-reads of the written data) fail, dmesg shows errors about attempts to read past the end of the drive, and fsck tools such as dosfsck also report that the filesystem claims to be larger than the device.
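As a minimal sketch of where those two numbers come from (assuming a classic MBR partition table and 512-byte logical sectors; GPT disks and 4K-native drives need different arithmetic), the offset and size limit can be read straight out of the partition entry:
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Print a losetup command with --offset/--sizelimit derived from the MBR of
 * an image file. Usage: ./offsets disk.img 1   (partitions are numbered 1-4) */
int main(int argc, char **argv)
{
    if (argc != 3) { fprintf(stderr, "usage: %s image partition(1-4)\n", argv[0]); return 1; }
    int idx = atoi(argv[2]) - 1;
    if (idx < 0 || idx > 3) { fprintf(stderr, "partition must be 1-4\n"); return 1; }

    FILE *f = fopen(argv[1], "rb");
    unsigned char mbr[512];
    if (!f || fread(mbr, 1, sizeof mbr, f) != sizeof mbr) { perror(argv[1]); return 1; }
    fclose(f);

    const unsigned char *entry = mbr + 446 + 16 * idx;  /* 16-byte MBR partition entry */
    uint32_t start_lba, num_sectors;
    memcpy(&start_lba, entry + 8, 4);                    /* assumes a little-endian host */
    memcpy(&num_sectors, entry + 12, 4);

    printf("losetup --offset %llu --sizelimit %llu /dev/loop0 %s\n",
           (unsigned long long)start_lba * 512,
           (unsigned long long)num_sectors * 512, argv[1]);
    return 0;
}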

Move or copy and truncate a file that is in use

I want to be able to (programmatically) move (or copy and truncate) a file that is constantly in use and being written to. The idea is that the file being written to would then never grow too big.
Is this possible? Either Windows or Linux is fine.
To be specific what I'm trying to do is log video with FFMPEG and create hour long videos.
It is possible in both Windows and Linux, but it would take cooperation between the applications involved. If the application that is writing the new data to the file is not aware of what the other application is doing, it probably would not work (well ... there is some possibility ... back to that in a moment).
In general, to get this to work, you would have to open the file shared. For example, if using the Windows API CreateFile, both applications would likely need to specify FILE_SHARE_READ and FILE_SHARE_WRITE. This would allow both (multiple) applications to read and write the file "concurrently".
Beyond sharing the file, though, it would also be necessary to coordinate the operations between the applications. You would need to use some kind of locking mechanism (either by locking some part of the file or some shared mutex/semaphore). Note that if you use file locking, you could lock some known offset in the file to act as a "semaphore" (it can even be a byte value beyond the physical end of the file). If one application were appending to the file at the same exact time that the other application were truncating it, then it would lead to unpredictable results.
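As a rough sketch of that idea (the file name and the chosen lock offset are made up for illustration), both the writer and the rotating/truncating process could open the file with full sharing and take a one-byte LockFileEx region far beyond any realistic file size as their mutual-exclusion "semaphore":
#include <windows.h>

int main(void)
{
    /* Open with full sharing so the other application can have the file open too. */
    HANDLE h = CreateFileW(L"capture.log",
                           GENERIC_READ | GENERIC_WRITE,
                           FILE_SHARE_READ | FILE_SHARE_WRITE,
                           NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    /* Lock one byte at a huge offset (never part of the real data) as a "semaphore". */
    OVERLAPPED ov = {0};
    ov.Offset     = 0xFFFFFFFE;   /* low part of the lock offset  */
    ov.OffsetHigh = 0x7FFFFFFF;   /* high part of the lock offset */
    if (!LockFileEx(h, LOCKFILE_EXCLUSIVE_LOCK, 0, 1, 0, &ov)) return 1;

    /* ... append new data, or copy the contents out and truncate, while locked ... */

    UnlockFileEx(h, 0, 1, 0, &ov);
    CloseHandle(h);
    return 0;
}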
Back to the comment about both applications needing to be aware of each other ... It is possible that if both applications opened the file exclusively, kept retrying the open until it succeeded, performed their operation, and then closed the file, they could essentially work without "knowledge" of each other. However, that would probably not work very well and would not be very efficient.
Having said all that, you might want to consider alternatives for efficiency reasons. For example, if it were possible to have the writing application write to new files periodically, it might be more efficient than having to "move" the data constantly out of one file to another. Also, if you needed to maintain some portion of the file (e.g., move out the first 100 MB to another file and then move the second 100 MB to the beginning) that could be a fairly expensive operation as well.
logrotate would be a good option in Linux; it comes stock on just about any distro. I'm sure there's a similar Windows service out there somewhere.

Is it possible to create a file that cannot be copied?

To restrict the scope, let's assume we are in the Windows world only.
Also assume we don't want to play with permission policies.
Is it possible for us to create a file that cannot be copied?
Thank you in advance.
"Trying to make digital files uncopyable is like trying to make water not wet." ~ Bruce Schneier
No. You can't create a file that a SYSADMIN can't copy. You could encrypt it, though.
Well, how about creating a file that uses up more than 50% of the total space on that machine and that is not compressible?
For instance, let us assume that you want to save a boolean (true or false) in such a fashion.
Depending on its value, you could then write a bit stream of ones or zeroes and encrypt said stream using some kind of encryption algorithm, such as AES in CBC mode. This gives you the added advantage of error correction: even in case of massive data corruption, you should be able to recover your boolean by checking whether ones or zeroes are prevalent in the decrypted stream.
In that case you cannot copy it around (completely) on the machine...
Of course, any type of external memory that can be added to the system would pose a problem in this scenario. But the file would be already encrypted, so don't worry about it too much...
Any file that can be read can have its contents written to another location (such as another file, i.e. copied).
The only thing you can do is limit who/what can read the file.
What is the motivation behind this? If it is a read-only file, you can embed it as a resource within your assembly.
Nice try, RIAA.
But seriously, no, you cannot. It is always possible to copy; you can just make it more difficult for people to make sense of the file, or try to hide it using something like encryption. Spotify does that.
If you really try hard, though, you could make a rootkit for Windows and use it to prevent Windows from even knowing about the file and also prevent copies. The file would still be there and copyable by other tools, or by Linux accessing the NTFS volume.
If a running process opens a file and holds an exclusive lock on it, then other processes cannot read the file until it closes the handle or the process terminates. However, an administrator could still forcibly close the handle.
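A minimal sketch of that approach (the file name is just an example): opening with a share mode of zero makes every other CreateFile on the same file fail with ERROR_SHARING_VIOLATION for as long as the handle stays open.
#include <windows.h>

int main(void)
{
    /* dwShareMode = 0: no other process can open the file while this handle exists. */
    HANDLE h = CreateFileW(L"secret.dat",
                           GENERIC_READ,
                           0,
                           NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    Sleep(INFINITE);   /* keep the exclusive handle alive for the life of the process */
    CloseHandle(h);
    return 0;
}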
Short answer: No.
You can, of course, use security settings to limit who can read the file. But if someone can read it, then they can copy it. Even if you found some operating system trick to disable "ordinary" copying, if someone can read the file, they can extract the contents, store it in memory, and then write it somewhere else.
You can encrypt the contents so it's only useful to your own program, that knows how to decrypt it.
That's about it.
When using Windows 7 to copy some files from a hard drive, certain files popped up a message saying they could not be copied in their entirety; certain data would be omitted from the copy. I suspect that had something to do with slack space at the end of the files, though I thought the message was curious. I would have expected the copy operation to just ignore the slack space.
If you are running old (OLD) versions of windows, there are certain characters you can put in the filename that make it invalid, not listed in folders, etc. They were used a lot in the old pub ftp days of filesharing ;)
In the old DOS days, you used to be able to flag disk sectors as bad and still read from them. This meant the OS ignored the sector in question but your application would know where to look and be able to get the data. Not sure this would work these days.
Another old MS-DOS trick was to put a space character in the middle of the filename (yes, spaces were valid characters for filenames). Since there was no method on the command line to escape a space, the file couldn't be copied using the DOS commands.
This answer is outside Windows so yeah
Don't know if it's already been said, but what about a file that is an inseparable part of the firmware, so that it is always on AND running; perhaps the firmware generates a sequence that is required by the other components. An incidental effect of its running would be to prevent 80% or more of its code from being replicated. Let's say it's on an entirely different board, protected by surge protectors, heavy EM-proof shielding and anything else required to make it completely unerasable.
If it's possible to make a program that is ALWAYS on and running as long as the copying software is running, then yes.
I have another way, and this IS with Windows: I will come to your house and give you a disk, and I will then proceed to destroy every single computer you put the disk into. This doesn't work on XP.
Well, technically you could create and write to a write-only network share.

How does file recovery software work?

I wanted to make some simple file recovery software, where I want to try to recover files which happen to have been deleted by pressing Shift + Delete. I'm working in Windows; can anyone show me any links or documents which can help me do this programmatically? I know C, C++ and .NET. Any pointers?
http://www.google.hu/search?q=file+recovery+theory&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a :)
As far as I know, file recovery tools mainly look for file headers and/or file names on the disk, then try to reconstruct the whole file from the header information.
This could be a good start: http://geeksaresexy.blogspot.com/2006/02/theory-behind-deleted-files-recovery.html
The principle of all recovery tools is that deleting a file only removes a pointer in a folder, and (quick) formatting of a partition only rewrites the first sectors of the partition, which contain the headers of the filesystem. An in-depth analysis of the partition data (at sector level) can rebuild a big part of the filesystem data: cluster allocation tables, folders, and file cluster chains.
Of course, if you use a surface-test tool while formatting the partition, which rewrites all sectors to make sure they are correct, nothing will be recoverable - unless you use specialized hardware to look at remanent magnetism on the edges of the actual tracks.
In Windows, when a file is deleted (a permanent delete), it's not actually erased from disk; instead a marker character (0xE5 in the old FAT scheme, I believe) replaces the first character of the file name, and Windows ignores such entries when listing folders in Explorer. Recovery tools search for these kinds of file names on the disk, and how completely your file is recovered depends on how much data has since been overwritten at the deleted file's location. I don't know whether this scheme is still used by Windows, but I read about it a long time ago.
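To make the header-scanning idea above concrete, here is a minimal carving sketch: it reads a raw volume and reports offsets where a JPEG signature appears. The volume path is an example and opening it needs administrator rights; real tools also parse MFT/FAT structures, respect cluster boundaries and carve until an end-of-file marker.
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Open the raw volume for sequential, sector-aligned reads. */
    HANDLE h = CreateFileW(L"\\\\.\\E:", GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE,
                           NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) { printf("open failed: %lu\n", GetLastError()); return 1; }

    static unsigned char buf[1 << 20];   /* scan 1 MiB at a time (a multiple of the sector size) */
    unsigned long long pos = 0;
    DWORD got;
    while (ReadFile(h, buf, sizeof buf, &got, NULL) && got > 0) {
        for (DWORD i = 0; i + 2 < got; i++)     /* signatures spanning buffer boundaries are missed */
            if (buf[i] == 0xFF && buf[i + 1] == 0xD8 && buf[i + 2] == 0xFF)
                printf("possible JPEG header at byte offset %llu\n", pos + i);
        pos += got;
    }
    CloseHandle(h);
    return 0;
}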

Force windows to refresh a disk FAT [closed]

I have a separate partition on my disk formatted with FAT32. When I hibernate windows, I want to be able to load another OS, create/modify files that are on that partition, then bring Windows out of hibernation and be able to see the changes that I've made.
I know what you're going to type, "Well, you're not supposed to do that!" and then link me to some specs about how what I'm trying to do is wrong/impossible/going to break EVERYTHING. However, I'm sure there's some way I can get around that. :)
I don't need the FAT32 partition in Windows, except to read the files that were written there, and then I'm done - so whatever the solution is, it's acceptable for the partition to be completely inaccessible for a period of time. Unfortunately, I can't take the entire physical disk offline, because it's a partition of the same physical device that Windows is installed on - only the partition itself can go offline.
These are the things I've tried so far...
Google it. I got at least one "this is NEVER going to happen" answer. Unacceptable! :)
Unmount the disk before hibernating, and mount it again after coming out of hibernation. This seems to have no effect: Windows still thinks the FAT is the same as it was before, so whatever data I wrote to the disk is lost, and any files I resized are corrupted. If any of the file data was cached, it's even worse.
Use DeviceIoControl to call IOCTL_DISK_UPDATE_PROPERTIES to try and refresh the disk (but the partition table hasn't changed, so this doesn't really do anything).
Is there any way to invalidate the disk/volume read cache to force windows to go back to the disk?
I thought about opening the partition and reading/writing it directly using libfat, bypassing the cache, but that seems like overkill.
So I finally got a solution to my problem. In my mind, I associated Mount Point with Mount. These are NOT the same thing. Removing all of the volume mount points does not make the volume unmounted. It's still mounted but not in the sense that you have a path you can access in explorer.
This is the article that started it all.
It also goes to show that searching for your EXACT problem, as opposed to the perceived problem can help a lot!
So there were a couple of solutions. One was to constantly call NtSetSystemInformation() in a tight loop, setting the "SYSTEMCACHEINFORMATION" property to essentially empty/clear the cache whenever the system is going into hibernation, and to stop the loop when it comes out. This, to me, seemed like it could affect system performance, so I discarded it.
Even better, though, is the solution recommended for a slightly different problem in this MSDN article: Dismounting Volumes in a Hibernate Once/Resume Many Configuration.
Now I have a service which will flush the write caches, then lock and dismount the volume whenever the system goes into hibernation/sleep and release the lock on the volume as soon as it comes out.
Here's a little bit of code.
// OnHibernate: flush, lock and dismount the volume, then keep the handle open.
HANDLE volumeHandle;
DWORD  cbReturned;
LPCTSTR volumePath = TEXT("\\\\.\\G:");   // volume to dismount, no trailing backslash
volumeHandle = CreateFile( volumePath,
                           GENERIC_READ|GENERIC_WRITE,
                           FILE_SHARE_READ|FILE_SHARE_WRITE,
                           NULL,
                           OPEN_EXISTING,
                           FILE_ATTRIBUTE_NORMAL,
                           0 );
FlushFileBuffers( volumeHandle );   // flush the volume's write cache
DeviceIoControl( volumeHandle, FSCTL_LOCK_VOLUME,     NULL, 0, NULL, 0, &cbReturned, NULL );
DeviceIoControl( volumeHandle, FSCTL_DISMOUNT_VOLUME, NULL, 0, NULL, 0, &cbReturned, NULL );
// Keep the handle open here.
// System hibernates.

// OnResume: release the lock so the volume remounts and rereads the disk.
DeviceIoControl( volumeHandle, FSCTL_UNLOCK_VOLUME, NULL, 0, NULL, 0, &cbReturned, NULL );
CloseHandle( volumeHandle );
Hopefully this helps someone else out in the future :)
Well, you're not supposed to do that! ;-)
Since the operating system (Windows in this case, but Linux is the same) writes some of its internal filesystem structures in the hibernation image, it is going to be very confused if the disk contents change while it's "running" (think of hibernation as just a long pause in the operating system's execution).
What I can suggest is that you completely bypass the issue: format the partition as ext2. There are Windows programs to read an ext2 partition, which you can use to get data out of it, and most modern operating systems should be able to read/write it (since it's a quite common Unix-style filesystem). Avoid ext2 IFS drivers; you want to take the filesystem access out of the kernel and into a program which you can open and close at will.
Use Linux to create the partition as a hidden FAT32 partition. Linux will let you mount the partition and write files. Windows will not let you mount the partition and read files, and Windows will not corrupt the partition. But there are third party libraries that will read the partition while Windows is running.
To clarify, hidden means that the partition type is different from an ordinary FAT32 partition type. If your ordinary partition type is 0x0C then the corresponding hidden type is 0x1C.
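For what it's worth, a minimal sketch of flipping that type byte from a program (the device path and partition index are assumptions for illustration; this assumes a classic MBR disk, not GPT, must run elevated, and you should be certain which entry you are editing):
#include <windows.h>
#include <stdio.h>

int main(void)
{
    const int entry = 1;   /* second primary partition (0-based) - an assumption */
    HANDLE h = CreateFileW(L"\\\\.\\PhysicalDrive0",
                           GENERIC_READ | GENERIC_WRITE,
                           FILE_SHARE_READ | FILE_SHARE_WRITE,
                           NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    unsigned char mbr[512];
    DWORD n;
    if (!ReadFile(h, mbr, sizeof mbr, &n, NULL) || n != sizeof mbr) return 1;

    /* The type byte sits at offset 4 of the 16-byte partition entry. */
    unsigned char *type = &mbr[446 + 16 * entry + 4];
    *type = (*type == 0x0C) ? 0x1C : 0x0C;   /* toggle FAT32 LBA <-> hidden FAT32 LBA */

    SetFilePointer(h, 0, NULL, FILE_BEGIN);
    if (!WriteFile(h, mbr, sizeof mbr, &n, NULL)) return 1;   /* sector-sized, aligned write */
    CloseHandle(h);
    printf("partition %d type is now 0x%02X\n", entry + 1, *type);
    return 0;
}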
On a related but different problem, I used the following:
Running cmd as administrator (works from batch file)
DISKPART
SELECT VOLUME G
REMOVE
ASSIGN LETTER=G
EXIT
This unmounts the volume (G:) and then remounts it. Any read of the disk (in my case a device pretending to be a USB Mass Storage Device formatted as FAT16) will actually read the device, so the read cache is effectively flushed.
Downside is that starting DISKPART takes about 4 seconds, but that's probably not a problem in a hibernating situation.
My memory is that the FAT table is read during the OS boot and mounting of the volume. Can't you do a shutdown, then modify the FAT, then reboot Windows?
As far as I can tell, Windows does caching at the disk level. However, if a partition has a type that Windows refuses to read or write (ext2, hidden FAT32, etc.) then that partition's contents should never get into Windows caches in the first place.
With DOS, it was typing Ctrl+C, twice if I recall correctly.
With Linux, sync; echo 3 > /proc/sys/vm/drop_caches or a script thereof, of course ;-)
With Windows, interpolate ;-) Or install VirtualBox and Ubuntu+Wine to develop compatibly.
Well, I vaguely remember that older versions of Windows used a disk cache program that ran as a process, and that you could send that process a signal to flush and purge the whole cache. If things have evolved gently, you might be able to send such a signal to a Windows process. Sorry, I'm no longer using Windows, partly because of this kind of obscurity.
