Recovery from optical media, ignoring read errors

I have backups of files archived on optical media (CDs and DVDs). These all have par2 recovery files, stored on separate media. Even where there are no par2 files, discs that produce minor read errors on one optical drive can often be read fine on another drive.
The thing is, when reading faulty media, the read time is very, very long, because drives tend to retry each bad sector multiple times.
The question is: how can I control the number of retries (i.e., set it to no retries or only one try)? Some system call? A library I can download? Do I have to work at the SCSI layer?
The question is mainly about Linux, but any Win32 pointers will be more than welcome too.

From man readom, a program that comes with cdrecord:
-noerror
Do not abort if the high level error checking in readom found an
uncorrectable error in the data stream.
-nocorr
Switch the drive into a mode where it ignores read errors in
data sectors that are a result of uncorrectable ECC/EDC errors
before reading. If readom completes, the error recovery mode of
the drive is switched back to the remembered old mode.
...
retries=#
Set the retry count for high level retries in readom to #. The
default is to do 128 retries which may be too much if you like
to read a CD with many unreadable sectors.
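For instance, to image a damaged disc while keeping high-level retries to a minimum, an invocation along these lines should work (the flags are from the man page above; the device and file names are examples):
readom dev=/dev/sr0 f=dvd.img -noerror -nocorr retries=1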

The best tool available is dd_rhelp. Just run
dd_rhelp /dev/cdrecorder /home/myself/DVD.img
then take a cup of tea and watch the nice graphics.
The dd_rhelp rpm package info:
dd_rhelp uses dd_rescue on your entire disc and attempts to gather the maximum amount of valid data before spending ages on bad sectors. If you let dd_rhelp run for unlimited time, it has much the same effect as plain dd_rescue, but since you may not have unlimited time, dd_rhelp jumps over bad sectors and rescues the valid data first. In the long run, it works through your entire device with dd_rescue.
You can Ctrl-C it whenever you want and rerun it at will; dd_rhelp resumes the job, as it depends on the log files dd_rescue creates. In addition, progress is shown in an ASCII picture of your device being rescued.
I've used it a lot myself and it's very, very reliable.
You can install it from the DAG repository on Red Hat-like distributions.

Since dd was suggested, I should note that I know of, and have used, sg_dd, but my question was not about commands (man sections 1 or 1m) but about system calls (2) or libraries (3).
EDIT
Another Linux command-line utility that helps is sdparm. The following invocation seems to disable hardware retries:
sudo sdparm --set=RRC=0 /dev/sr0
where /dev/sr0 is the device for the optical drive in my case.
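If I read the sdparm documentation correctly, RRC is the read retry count field of the drive's read/write error recovery mode page. To check whether the drive accepted the change, the field can be read back:
sudo sdparm --get=RRC /dev/sr0
Not every drive honors or persists this field, so verifying is worthwhile.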

While checking whether hdparm could modify the number of retries (doesn't seem so), I thought that, depending on the type of error, lowering the CD-ROM speed could potentially reduce the number of read errors, which could actually increase the average read speed. However, if some sectors are completely unreadable, then even lowering the CD-ROM speed won't help.
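For what it's worth, hdparm can at least cap the drive's read speed, which is one way to test this theory (the speed value here is an example):
sudo hdparm -E 4 /dev/sr0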

Since you are asking about driver-level access, you should look into SCSI commands, or perhaps an ASPI-like API. On Windows, VSO Software (the developers of BlindRead/BlindWrite, mentioned below) developed a much better API, Patin-Couffin, that provides locked low-level access:
http://en.wikipedia.org/wiki/Patin-Couffin
That might get you started. At the end of the day, though, the drive is driven by SCSI commands, even if it's actually USB, SATA, ATA, IDE, or otherwise. You might also look up terms related to ATAPI, which was one of the first specifications for this CD-ROM SCSI-layer interface.
I'd be surprised if you couldn't find a suitable Linux library or example of dealing with the lower-level commands using the above search terms and concepts.
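To make the system-call route concrete, below is a minimal, untested sketch for Linux that issues a single READ(10) through the SG_IO ioctl, so a bad sector fails after a short timeout instead of going through the block layer's long retry cycle. The device path, LBA, and timeout are example values, and the drive's internal retries still apply unless its error recovery mode page is changed (see the sdparm note above).

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <scsi/sg.h>
#include <unistd.h>

int main(void)
{
    /* O_NONBLOCK lets the device node open even if the disc is hard to read */
    int fd = open("/dev/sr0", O_RDONLY | O_NONBLOCK);
    if (fd < 0) { perror("open"); return 1; }

    unsigned char buf[2048];            /* one data sector */
    unsigned char sense[32];
    unsigned int lba = 1000;            /* example sector to read */
    unsigned char cdb[10] = {
        0x28, 0,                        /* READ(10) */
        lba >> 24, lba >> 16, lba >> 8, lba,
        0, 0, 1, 0                      /* transfer length: 1 block */
    };

    struct sg_io_hdr io;
    memset(&io, 0, sizeof(io));
    io.interface_id = 'S';
    io.cmd_len = sizeof(cdb);
    io.cmdp = cdb;
    io.dxfer_direction = SG_DXFER_FROM_DEV;
    io.dxferp = buf;
    io.dxfer_len = sizeof(buf);
    io.sbp = sense;
    io.mx_sb_len = sizeof(sense);
    io.timeout = 2000;                  /* ms: give up quickly on a bad sector */

    if (ioctl(fd, SG_IO, &io) < 0) { perror("SG_IO"); return 1; }
    if (io.status || io.host_status || io.driver_status)
        fprintf(stderr, "sector %u unreadable (sense key 0x%x)\n",
                lba, sense[2] & 0x0f);  /* fixed-format sense data assumed */
    close(fd);
    return 0;
}

sg_dd and the rest of sg3_utils are built on this same interface, so their source is a good reference for fleshing this out.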
Older answer:
BlindRead/BlindWrite was developed in the heyday of CD-ROM protection schemes, which often used intentionally bad sectors or error information to verify the original CD.
It will allow you to set a whole slew of parameters, including retries. Keep in mind that the CD-ROM drive itself determines how many times to retry, and I'm not sure this is settable via software for many (most?) CD-ROM drives.
You can copy the disc to ISO format, ignoring the errors, and then use ISO utilities to read the data.
-Adam

Take a look at the ASPI interface. It is available on both Windows and Linux.

dd(1) is your friend.
dd if=/dev/cdrom of=image bs=2352 conv=noerror,notrunc
The drive may still retry a bit, but I don't think you'll get any better without modifying firmware.
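One hedge on the flags: without sync, dd simply drops unreadable blocks, which shifts every later byte and can defeat par2 repair; adding it pads failed reads with zeros so offsets stay aligned. For a data disc, something like this (bs=2048 matches the ISO9660 sector size) may be preferable:
dd if=/dev/cdrom of=image bs=2048 conv=noerror,sync,notrunc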

Related

Is there a bad way to read and write to a hard drive?

I am writing a disk utility for Windows that just uses CreateFile to open physical drives and read and write to them.
I am currently trying to implement a testing feature that will thoroughly test a drive for consistency. I wrote a function that opens a drive for read and write, writes 10 MB of data (in one call to WriteFile), seeks back to the start of where that 10 MB was written (using SetFilePointer), then reads it back (also in one call to ReadFile), tests that the read matches the written, then proceeds to the next 10 MB, and so on.
The problem is that on the two (old) hard drives I have tried this on, the process eventually fails after about 5 minutes (it's going fine, then the drive will just disconnect from the system, or spaz out in general, requiring a reboot).
Both these drives pass WD Diag tests.
I'm wondering if it is possible to just 'stress' a drive beyond its designed operational limits.
So my question is this: should you be able to perform any combination of read, write, and seek operations on a drive in good health without it failing? (I know you have to read/write in multiples of 512, and obviously all drives have a limited life and would fail eventually, but surely a drive should be able to go at least a few hours.)
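For reference, here is a stripped-down, untested sketch of the cycle described above (the drive number, chunk size, and fill pattern are examples; raw physical-drive access needs administrator rights, and offsets, lengths, and buffers must be sector-aligned, hence VirtualAlloc and a 512-multiple chunk):

#include <windows.h>
#include <stdio.h>
#include <string.h>

#define CHUNK (10 * 1024 * 1024)  /* 10 MB, a multiple of the 512-byte sector */

int main(void)
{
    HANDLE h = CreateFileA("\\\\.\\PhysicalDrive1",
                           GENERIC_READ | GENERIC_WRITE,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        printf("open failed: %lu\n", GetLastError());
        return 1;
    }

    /* VirtualAlloc returns page-aligned memory, satisfying the alignment rule */
    BYTE *wr = VirtualAlloc(NULL, CHUNK, MEM_COMMIT, PAGE_READWRITE);
    BYTE *rd = VirtualAlloc(NULL, CHUNK, MEM_COMMIT, PAGE_READWRITE);
    if (!wr || !rd) { printf("alloc failed\n"); return 1; }
    memset(wr, 0xA5, CHUNK);          /* example fill pattern */

    LARGE_INTEGER pos;
    pos.QuadPart = 0;                 /* test the first chunk only */
    DWORD n;
    if (!SetFilePointerEx(h, pos, NULL, FILE_BEGIN) ||
        !WriteFile(h, wr, CHUNK, &n, NULL) ||
        !SetFilePointerEx(h, pos, NULL, FILE_BEGIN) ||
        !ReadFile(h, rd, CHUNK, &n, NULL)) {
        printf("I/O failed: %lu\n", GetLastError());
        return 1;
    }
    printf(memcmp(wr, rd, CHUNK) ? "MISMATCH\n" : "ok\n");
    CloseHandle(h);
    return 0;
}

One thing worth testing: if the failures persist even with small, throttled requests, the drives themselves become the more likely suspect.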

How to debug potential CPU/RAM errors in Bash script on Linux

I have a relatively simple bash script that reads from a set of static input files, stores the input in bash variables and then does a bunch of processing over said input by calling out to external scripts (e.g. written in Python, Go, other bash scripts etc.) and using the intermediate results.
Lately I have been experiencing an intermittent problem where a single character seems to be getting altered somewhere during the processing which then causes subsequent errors. Specifically, a lot of the processing I'm doing involves slicing up a list of comma-separated records, and one of the values on each line is a unix timestamp, e.g. 1354245000.
What seems to be happening is that occasionally one of these values will get altered slightly, so I end up with a timestamp like 13542458=2 or 13542458>2 or 13542458;2 coming out of one of the intermediate scripts. This then subsequently gets fed into another script, which throws an exception when it tries to parse the value to an integer.
In the title of this question, I've suggested that this might be a potential CPU/RAM error. I know it is generally folly to blame low-level things like hardware or compilers, but the nature of this particular error makes me think it may be possible, for the following reasons:
The input files are the same on each invocation of the script, and the script only fails on some invocations.
I cannot think of any sources of randomness in the source code prior to where the script is breaking. It's basically just slicing and dicing csv input.
I cannot think of any sources of concurrency in the source code -- even the Go scripts aren't actually written to run anything concurrently.
This problem has only arisen in the last week or so. Prior to this time, this error would never occur.
While I haven't documented every erroneous character, they seem to often be quite close in the ASCII table to numeric values (=, >, ; etc). That said, I guess the Hamming distance between two characters quite far apart can be small also with changes to a high order bit.
The script often breaks at a different stage on different runs. i.e. I have a number of separate Python scripts, and sometimes it'll make it past one script and then the error will be induced in another. Other times it'll be induced on an earlier script.
What I'd like to know is, is there any methodical way to either confirm or rule out a hardware error for this problem? Or if it is a hardware problem, is it possibly undetectable by the operating system?
A bit of further info on the machine:
Linux 64-bit, Ubuntu 12.04
Intel i7 processor
16GB DDR3 RAM
I'm hoping someone can either point me to a reliable way to verify whether the hardware is to blame or otherwise a sound reason as to what else might be the cause.
Try booting into Memtest to check your memory.
While it is highly unlikely to be hardware, if you have exhausted your standard software debugging as suggested by @OliCharlesworth, here is an outline of a hardware error investigation:
(1) Check your log area for any MCE (machine check exception) logs. If you find any, either in your log area (syslog) or sometimes in the present working dir or / dir, you have a hardware failure.
(2) Check your log area for disk errors, e.g.:
smartd[3963]: Device: /dev/sda [SAT], 34 Currently unreadable (pending) sectors
(3) Check your drive integrity, e.g. (as root): smartctl -a /dev/sda. If there is any abnormality, run:
smartctl -t short /dev/sda (change the drive as required)
(4) Download, install, and boot memtest86 (http://www.memtest86.com/download.htm) and run the complete test.
If your CPU/motherboard has thrown no MCEs, you have no disk errors, your drive tests OK with smartctl, and you have no memory errors with memtest86, then go back to the software debugging. While additional hardware errors can still be present (bad capacitors, etc.), at this point the likelihood is software. Good luck.
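For step (1), a quick log search (the path varies by distribution; this grep is only an illustration) could be:
grep -i mce /var/log/syslog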

High Write I/O "Pulsing" with dtrace errors

We are experiencing "pulsing" writes to disk (from 1 write out/sec pulsing up to 142+ writes out/sec) around every 10 seconds.
See this example image:
https://discussions.apple.com/servlet/JiveServlet/showImage/2-22394173-269851/Screen+Shot+2013-07-03+at+13.22.28.png
We dug into these "pulsing" writes and found that they happen at exactly the same time as these errors from iotop:
dtrace: error on enabled probe ID 5 (ID 992: io:mach_kernel:buf_strategy:start): illegal operation in action #3 at DIF offset 0
The "pulsing" only happens when the above error presents itself in iotop.
Note: we are running Apple RAID software mirroring for two drives.
Any suggestions, help and tips will be greatly appreciated. Thanks in advance.
The pulsing I/O pattern you're seeing is characteristic of applications where many/most filesystem writes are asynchronous - this is because the filesystem will batch up the writes so it can do many at the same time to avoid doing one disk seek per write. The most common example I can think of is a database writing data - except for the database's write-ahead log, everything is typically written asynchronously; other transactional access patterns tend to be similar because they have a write-ahead log to recover if some async writes are lost in a crash. This is a common access pattern and isn't necessarily a problem, but it can become a problem when your disk is highly fragmented and the filesystem can't write everything out in batches (causing many seeks, just like it was trying to avoid).
The DTrace/iotop error you're seeing means there's either a bug in the DTrace implementation itself or in the iotop DTrace script. Looking at iotop's source code (in /usr/bin/iotop on OS X), there are three io:::start callbacks which could be the culprit. It's possible that there's some sort of null pointer access in the script for some types of I/O, but it doesn't look likely based on the script and the arguments io:::start probes take. Perhaps this is best resolved with a bug report to Apple.
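If you want to separate the two cases, a trivial io:::start enabling (an illustrative one-liner, not taken from iotop) can be run on its own:
sudo dtrace -n 'io:::start { @[execname] = count(); }'
If that runs without errors while iotop still complains, the bug is in iotop's action clauses rather than in the probe itself.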

Is it possible to create a file that cannot be copied?

To restrict the scope, let's assume we are in the Windows world only.
Also assume we don't want to play with permission policy.
Is it possible for us to create a file that cannot be copied?
Thank you in advance.
"Trying to make digital files uncopyable is like trying to make water not wet." ~ Bruce Schneier
No. You can't create a file that a SYSADMIN can't copy. You could encrypt it, though.
Well, how about creating a file that uses up more than 50% of the total space on that machine and that is not compressible?
For instance, let us assume that you want to save a boolean (true or false) in such a fashion.
Depending on its value, you could then write a bit stream of ones or zeroes and encrypt said stream using some kind of encryption algorithm, such as AES in CBC mode. This gives you the added advantage of error correction: even in the case of massive data corruption, you should be able to recover your boolean by checking whether ones or zeroes are prevalent in the decrypted stream.
In that case you cannot copy it around (completely) on the machine...
Of course, any type of external storage that can be added to the system would pose a problem in this scenario. But the file would already be encrypted, so don't worry about it too much...
Any file that can be read can have its contents written to another location (such as another file, i.e. copied).
The only thing you can do is limit who/what can read the file.
What is the motivation behind this? If it is a read-only file, you can have it as an embedded resource within your assembly.
Nice try, RIAA.
But seriously, no, you cannot. It is always possible to copy; you can just make it more difficult for people to make sense of the file, or try to hide it using something like encryption. Spotify does this.
If you really try hard, though, you could make a rootkit for Windows and use it to prevent Windows from even knowing about the file, and also prevent copies. The file would still be there and copyable by other tools, or by Linux accessing the NTFS volume.
If in a running process you open a file and hold an exclusive lock, then other processes cannot read the file until you close the handle or your process terminates. However, as admin you could forcibly remove the lock handle.
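A minimal, untested sketch of that approach (the filename is an example; note the caveat above about administrators, and that backup-mode APIs can still bypass sharing restrictions):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* dwShareMode = 0: no sharing, so other processes' CreateFile calls
       on this file fail with a sharing violation while we hold the handle */
    HANDLE h = CreateFileA("example.dat", GENERIC_READ, 0, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) { printf("open failed\n"); return 1; }
    printf("holding exclusive lock; press Enter to release\n");
    getchar();
    CloseHandle(h);   /* the lock ends here, or when the process dies */
    return 0;
}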
Short answer: No.
You can, of course, use security settings to limit who can read the file. But if someone can read it, then they can copy it. Even if you found some operating system trick to disable "ordinary" copying, if someone can read the file, they can extract the contents, store it in memory, and then write it somewhere else.
You can encrypt the contents so it's only useful to your own program, that knows how to decrypt it.
That's about it.
When using Windows 7 to copy some files from a hard drive, certain files popped up a message saying they could not be copied in their entirety; certain data would be omitted from the copy. I suspect that had something to do with slack space at the end of the files, though I thought the message was curious. I would have expected the copy operation to just ignore the slack space.
If you are running old (OLD) versions of Windows, there are certain characters you can put in the filename that make it invalid, not listed in folders, etc. They were used a lot in the old pub FTP days of file sharing ;)
In the old DOS days, you used to be able to flag disk sectors as bad and still read from them. This meant the OS ignored the sector in question but your application would know where to look and be able to get the data. Not sure this would work these days.
Another old MS-DOS trick was to put a space character in the middle of the filename (yes, spaces were valid characters for filenames). Since there was no method on the command line to escape a space, the file couldn't be copied using the DOS commands.
This answer is outside Windows, so yeah.
Don't know if it's already been said, but what about a file that is an inseparable part of the firmware, so that it is always on AND running? Perhaps it has firmware that generates a sequence that is required for the other. An incidental effect of its running is to prevent 80% or more of its code from being replicated. Let's say it's on an entirely different board, protected by surge protectors, heavy EM-proof shielding, and anything else required to make it completely unerasable.
If it's possible to make a program that is ALWAYS on and running as long as the copying software is running, then yes.
I have another way, and this IS with Windows. I will come to your house and give you a disk; I will then proceed to destroy every single computer you put the disk into. This doesn't work on XP.
Well technically you could create and write to a write-only network share.

How do I stop USB buffering?

I'm reading real-time data over USB, but the data is buffered. How do I stop the buffering?
Linux
Use udev to change the latency_timer.
On Ubuntu, create a rule file in /etc/udev/rules.d for your device, e.g. 99-xsens.rules.
Create a rule in that file to match your device and set the latency_timer. For example, for my device this is:
KERNEL=="ttyUSB[0-9]*", SUBSYSTEMS=="usb-serial", DRIVERS=="ftdi_sio", ATTR{device/latency_timer}="2"
This causes the device to wait a shorter time before deciding there's no more incoming data to buffer. In this case my device went from waiting 16ms to waiting 2ms.
Use udevadm info -a -n /dev/ttyUSB0, for example, to find out what key-value pairs to match against in your rule. There are some tricky things to keep in mind, but finding resources to help with the ins and outs was easy once I knew to use udev rules.
Here's a good reference page on writing udev rules. It is old and the syntax of the udev tools has changed, but the concepts are still valid.
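To verify the value the rule applied, the sysfs attribute can be read back directly (the path may vary with kernel version; ttyUSB0 is an example):
cat /sys/bus/usb-serial/devices/ttyUSB0/latency_timer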
Windows
On Windows you use Device Manager->Ports->COM Port->Port Settings->Advanced->Latency Timer.
