TrueCrypt: Whole container broken on single bit error?

I wondered what would happen to a TrueCrypt (pre-NSA, i.e. v7.1a) container, if just a single random bit of it gets flipped?
Does this cause the whole container to become corrupted?
Is there a way to fix the container (without knowing which bit was flipped)?
Also: how often does a bit erroneously flip on a common hard drive or flash drive? Does it happen often in practice?


Possible to keep bad VRAM "occupied"?

I've got an iMac whose VRAM appears to have gone on the fritz. On boot, things are mostly fine for a while, but as more and more windows are opened (i.e. textures are created on the GPU), I eventually hit the glitchy VRAM, and I get these bizarre "noisy" grid-like patterns of red and green in the windows.
I had an idea, but I'm mostly a newb when it comes to OpenGL and GPU programming in general, so I figured I'd ask here to see if it was plausible:
What if I wrote a little app that ran on boot and would allocate GPU textures (of some reasonable quantum -- I dunno, maybe 256K?) until it consumed all available VRAM (i.e. can't allocate any more textures). Then have it upload a specific pattern of data into each texture. Next it would read back the texture from the GPU and checksum the data against the original pattern. If it checks out, then release it (for the rest of the system to use). If it doesn't checksum, hang onto it (forever).
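For what it's worth, here is a rough sketch of what that loop might look like in plain OpenGL. It assumes a GL context already exists (created elsewhere, e.g. via GLUT or GLFW), and every name and size in it is illustrative rather than tested on your hardware. One caveat worth flagging up front: drivers are free to back textures with system RAM and only page them into VRAM on use, so an allocation loop like this may not actually pin the physical VRAM you hope it does.

```
// Sketch of the allocate/checksum/hold idea. Assumes a current OpenGL
// context; error handling and the "run at boot" plumbing are omitted.
// Real drivers may virtualize texture storage, so treat this as an
// experiment, not a guaranteed fix.
#include <OpenGL/gl.h>   // macOS; use <GL/gl.h> elsewhere
#include <cstring>
#include <vector>

const int    kDim      = 256;                    // 256x256 RGBA = 256 KB per texture
const size_t kTexBytes = size_t(kDim) * kDim * 4;

void holdBadVram() {
    std::vector<unsigned char> pattern(kTexBytes), readback(kTexBytes);
    for (size_t i = 0; i < kTexBytes; ++i)
        pattern[i] = (unsigned char)(i * 31 + 7);  // arbitrary test pattern

    std::vector<GLuint> badTextures;               // kept forever, on purpose
    for (;;) {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, kDim, kDim, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pattern.data());
        if (glGetError() == GL_OUT_OF_MEMORY) {    // "can't allocate any more"
            glDeleteTextures(1, &tex);
            break;
        }
        glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE,
                      readback.data());            // slow, but correctness first
        if (std::memcmp(pattern.data(), readback.data(), kTexBytes) != 0)
            badTextures.push_back(tex);            // failed checksum: hang onto it
        else
            glDeleteTextures(1, &tex);             // clean: release to the system
    }
    // ... keep the process (and badTextures) alive from here on ...
}
```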
Flaws I can see: a user space app is not going to be able to definitively run through ALL the VRAM, since the system will have grabbed some, but really, I'm just trying to squeeze some extra life out of a dying machine here, so anything that helps in that regard is welcome. I'm also aware that reading back from VRAM is comparatively slow, but I'm not overly concerned with performance -- this is a practical endeavor, to be sure.
Does this sound plausible, or is there some fundamental truth about GPUs that I'm missing here?
Your approach is interesting, although I think there are other ways that might be easier to implement if you're looking for a quick fix or workaround. If your VRAM is on the fritz, it's likely that the corruption is happening at a specific location. If you can consistently determine where it happens (once VRAM consumption reaches x amount of memory, etc.), then you can work around it.
It's quite easy to create a RAM disk, and another possibility would be to allocate regular memory as VRAM. I know both of these are very possible, because I've done them. If someone says something "won't work" (no offense Pavel), it shouldn't discourage you from at least trying. If you're interested in the techniques I mentioned, I'd be happy to provide more info; however, this is about your idea and I'd like to know if you can make it work.
If you are able to write an app that runs at boot, even before an OS loads, it would live in the bootloader - why wouldn't you just do a memory self-test at that point?
Or did you mean a userland app, run after the OS boots to the login? A userland app will not be able to cycle through every address the way you describe, simply because not every page is mapped into userland.
If you are sure the RAM is the problem, did you try replacing it?

Copyfile and clusters reservation for Windows

What is the OS (XP, Vista, Win7) behavior when copying files (with CopyFile)?
When does it reserve the destination clusters? Which of the following?
it reserves all destination clusters before starting to copy, or
it reserves some clusters, copies a file portion to them, then reserves additional clusters, copies the next file portion to those, and so on.
The copy operation used by Explorer and cmd.exe reserves most of the disk space immediately, at least on my Windows 7 32-bit, as you can see by watching the free space on the volume. To the best of my recollection this behaviour has been the same in all versions of Windows since at least NT 4.
However, there are several caveats:
Explorer and cmd.exe don't (necessarily) use CopyFile.
This behaviour might be different in different versions of Windows, or depending on circumstances.
It might be only most of the destination clusters; for example, it might sometimes need to expand the MFT to complete the operation. I don't think this is likely, but I can't rule it out.
My recommendation:
If a slim possibility of occasional failure is acceptable, test CopyFile and, if it behaves as expected, go ahead and use it (a rough test harness is sketched below).
If it isn't, consider doing the copy yourself. Unfortunately that last caveat might apply even then, but as I said I think it's probably not a significant risk.
You need to be prepared to cope with an unexpected failure either way since hardware faults, or perhaps even file system corruption, could cause the copy to fail part way through.
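If you do want to run that test, a minimal harness might look like the following. It uses CopyFileEx rather than plain CopyFile so we get a progress callback to sample free space from; whether CopyFile allocates the same way is exactly what you would be verifying. The file paths and the drive letter are placeholders.

```
// Watch free space while a copy runs: if free space drops by roughly the
// whole file size at the first callback, clusters were reserved up front.
#include <windows.h>
#include <cstdio>

static ULONGLONG FreeBytes(const char* path) {
    ULARGE_INTEGER avail = {};
    GetDiskFreeSpaceExA(path, &avail, nullptr, nullptr);
    return avail.QuadPart;
}

static DWORD CALLBACK Progress(LARGE_INTEGER total, LARGE_INTEGER done,
                               LARGE_INTEGER, LARGE_INTEGER, DWORD,
                               DWORD, HANDLE, HANDLE, LPVOID) {
    printf("copied %lld of %lld bytes, free space now %llu\n",
           done.QuadPart, total.QuadPart, FreeBytes("D:\\"));
    return PROGRESS_CONTINUE;
}

int main() {
    printf("free space before: %llu\n", FreeBytes("D:\\"));
    if (!CopyFileExA("D:\\big_source.bin", "D:\\copy_test.bin",
                     Progress, nullptr, nullptr, 0))
        printf("copy failed: %lu\n", GetLastError());
    printf("free space after: %llu\n", FreeBytes("D:\\"));
}
```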

What reliability guarantees are provided by NTFS?

I wonder what kind of reliability guarantees NTFS provides about the data stored on it? For example, suppose I'm opening a file, appending to the end, then closing it, and the power goes out at a random time during this operation. Could I find the file completely corrupted?
I'm asking because I just had a system lock-up and found two of the files that were being appended to completely zeroed out. That is, of the right size, but made entirely of the zero byte. I thought this wasn't supposed to happen on NTFS, even when things fail.
NTFS is a transactional file system, so it guarantees integrity - but only for the metadata (MFT), not the (file) content.
The short answer is that NTFS does metadata journaling, which assures valid metadata.
Other modifications (to the body of a file) are not journaled, so they're not guaranteed.
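That matches the zeroed-file symptom, by the way: after a crash, the journaled metadata (including the new, larger file size) can survive while the appended data itself never reached the disk, leaving a run of zeros. If you need the data itself to be durable, you have to ask for it explicitly. A sketch along these lines (standard Win32 calls, error handling trimmed) narrows the window considerably, though it still doesn't make the append transactional:

```
#include <windows.h>

// Append with write-through semantics, then flush, so the data is on disk
// before the metadata describing it can outlive it.
bool AppendDurably(const wchar_t* path, const void* data, DWORD len) {
    // FILE_FLAG_WRITE_THROUGH asks for writes to go to disk, not just cache.
    HANDLE h = CreateFileW(path, GENERIC_WRITE, FILE_SHARE_READ, nullptr,
                           OPEN_ALWAYS,
                           FILE_ATTRIBUTE_NORMAL | FILE_FLAG_WRITE_THROUGH,
                           nullptr);
    if (h == INVALID_HANDLE_VALUE) return false;

    LARGE_INTEGER zero = {};
    SetFilePointerEx(h, zero, nullptr, FILE_END);   // seek to end for the append

    DWORD written = 0;
    bool ok = WriteFile(h, data, len, &written, nullptr) && written == len;
    ok = FlushFileBuffers(h) && ok;                 // belt and braces: flush too
    CloseHandle(h);
    return ok;
}
```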
There are file systems that do journaling of all writes (e.g., AIX has one, if memory serves), but with them, you tend to get a tradeoff between disk utilization and write speed. IOW, you need a lot of "free" space to get decent performance -- they basically just do all writes to free space, and link that new data into the right spots in the file. Then they go through and clean out the garbage (i.e., free up parts that have since been overwritten, and usually coalesce the pieces of a file together as well). This can get slow if they have to do it very often though.

Windows Stalls When My Program Uses Swapfile

I am running a user mode program on normal priority. My program is searching the solution space of an NP problem and, as a result, uses up a lot of memory, which eventually ends up in the swap file.
Then my mouse freezes up, and it takes forever for task manager to open up and let me end the process.
What I want to know is how I can stop my Windows operating system from completely locking up from this, even though only 1 of my 2 cores is being used.
Edit:
Thanks for the replies.
I know that making it use less memory will help, but it just doesn't make sense to me that the whole OS should lock up.
The obvious answer is "use less memory". When your app uses up all the available memory, the OS has to page the task manager (etc.) out to make room for your app. When you switch programs, the OS has to page the other programs back in (as they are needed).
Disk reads are slower than memory reads, so everything appears to be going slower.
If you want to avoid this, have your app manage its own memory, or use a better algorithm than brute force. (There are genetic algorithms, simulated annealing, etc.)
The problem is that when another program (e.g. explorer.exe) is going to execute, all of its code and memory has been swapped out. To make room for the other program Windows has to first write data that your program is using to disk, then load up the other program's memory. Every new page of code that is executed in the other program requires disk access, causing it to run slowly.
I don't know the access pattern of your program, but I'm guessing it touches all of its memory pages a lot in a random fashion, which makes the problem worse because as soon as Windows evicts a memory page from your program, suddenly you need it again and Windows has to find some other page to give the same treatment.
To give other processes more RAM to live in, you can use SetProcessWorkingSetSize to reduce the maximum amount of RAM that your program may use. Of course this will make your program run more slowly because it has to do more swapping.
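A minimal call, for reference, might look like this. The limits are arbitrary examples; note also that without SetProcessWorkingSetSizeEx and its QUOTA_LIMITS_HARDWS_MAX_ENABLE flag, the maximum is only a soft target that Windows enforces under memory pressure.

```
#include <windows.h>

// Cap this process's working set so other programs keep some RAM.
// The 16/64 MB limits are placeholder values; tune them to your machine.
void CapOwnWorkingSet() {
    SetProcessWorkingSetSize(GetCurrentProcess(),
                             16 * 1024 * 1024,    // minimum working set
                             64 * 1024 * 1024);   // maximum working set
}
```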
Another alternative you could try is to add more drives to the system, and distribute the swap files over those. You may have a dual-core CPU, but you have only a single drive. Distributing the swap file over multiple drives allows Windows to balance work across them (although I don't have first-hand experience of how well it does this).
I don't think there's a programming answer to this question, aside from "restructure your app to use less memory." The swapfile problem is most likely due to the bottleneck in accessing the disk, especially if you're using an IDE HDD or a highly fragmented swapfile.
It's a bit extreme, but you could always minimise your swap file so you don't have all the disk thrashing, and your program isn't allowed to allocate much virtual memory. Under Control Panel / Advanced / Advanced tab / Performance / Virtual memory, set the page file to custom size and enter a value of 2 MB (smallest allowed on XP). When an allocation fails, you should get an exception and be able to exit gracefully. It doesn't quite fix your problem, just speeds it up ;)
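With the page file that small, allocations will start failing early, so it's worth structuring the program to treat std::bad_alloc as a signal to wind down cleanly rather than crash. A sketch (the 64 MB chunk size is arbitrary):

```
#include <cstdio>
#include <new>
#include <vector>

int main() {
    std::vector<char*> chunks;
    try {
        for (;;)
            chunks.push_back(new char[64 * 1024 * 1024]);  // grab 64 MB at a time
    } catch (const std::bad_alloc&) {
        std::puts("out of memory: saving state and exiting cleanly");
        // ... persist partial search results here ...
    }
    for (char* p : chunks) delete[] p;
}
```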
Another thing worth considering: if you are on a 32-bit platform, port to a 64-bit system and get a box with much more addressable RAM.

Windows disk partition gap

Windows XP Disk Defragmenter reports show a constant gap in disk usage on a number of disk partitions on my system. I'm not referring to the little transitory gaps that occur. In disk D below, the gap in question is the one under the word "defragmentation". In disk P below, it's the one under "usage before defragmentation", but bigger. The C partition doesn't have this anomaly. The size and placement pattern isn't obvious. It is as though there were an area, a no-man's land, that both the file system and the defragmenter avoid. These gaps survive daily use and defragmentation. I don't believe this is residue from a paging file -- that should show up in green, anyway. The Recycle Bin is empty.
Any ideas?
Disk D (20 GB): [defragmenter usage map screenshot]
Disk P (40 GB): [defragmenter usage map screenshot]
That is probably the space reserved for the MFT, which will only be used for files if the disk gets really full. This empty space allows it to grow for a while without getting fragmented.
References:
How NTFS reserves space for its Master File Table (MFT)
No idea what's causing this, but the defragger that comes with Win XP is Diskeeper Lite, which is not very good. A better defragger might get rid of the gap if it isn't being caused by something actually in use. I personally use O&O Defrag; it's not free, but there's a 30-day trial.
Defragging to the point that there are absolutely no gaps is not necessarily a good thing. Some OSs/FileSystems try to pack files in as tightly as possible and fill without gaps where possible.
The problem with this is that if any of the earlier files get changed or appended to, then you are either leaving an early gap (which will tend to cause fragments) or forcing the extra bit to be written at the next gap (creating a fragment again).
Defrag when you start getting weird behaviour (quite often it helps, even though it is not supposed to); however, you don't need to do it every day, nor is a totally defragmented drive a sign of a particularly healthy drive.
Like the poster above said, that's most likely the reserved zone for the MFT. When the drive is formatted, about 12.5% of the partition is reserved for the MFT, and this zone can grow as needed to accommodate new records if the initial allocation is used up. Mind you, the MFT can also fragment if the adjacent contiguous free space is not large enough to accommodate the expansion.
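If you want to confirm it really is the MFT zone, you can ask NTFS directly. The sketch below (Win32; it needs a handle to the volume, which generally means administrator rights, and the drive letter is a placeholder) prints the zone's start and end cluster numbers, which should bracket the gap in the defrag map:

```
#include <windows.h>
#include <winioctl.h>
#include <cstdio>

int main() {
    HANDLE vol = CreateFileW(L"\\\\.\\D:", GENERIC_READ,
                             FILE_SHARE_READ | FILE_SHARE_WRITE, nullptr,
                             OPEN_EXISTING, 0, nullptr);
    if (vol == INVALID_HANDLE_VALUE) {
        printf("open failed: %lu\n", GetLastError());
        return 1;
    }

    // FSCTL_GET_NTFS_VOLUME_DATA reports, among other things, the MFT zone.
    NTFS_VOLUME_DATA_BUFFER info = {};
    DWORD got = 0;
    if (DeviceIoControl(vol, FSCTL_GET_NTFS_VOLUME_DATA, nullptr, 0,
                        &info, sizeof(info), &got, nullptr)) {
        printf("MFT zone: clusters %lld to %lld (%lld bytes per cluster)\n",
               info.MftZoneStart.QuadPart, info.MftZoneEnd.QuadPart,
               (long long)info.BytesPerCluster);
    }
    CloseHandle(vol);
}
```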
Regarding defragging: instead of defragging manually at regular intervals, save yourself the trouble and get Diskeeper. The newest version, i.e. 2008 Professional, is fully automatic and defrags in the background using idle resources. There is also a manual/scheduled defrag mode, but I don't see any reason to waste my time; it does a fine job running on automatic on my systems.
