I am writing a disk utility for Windows that just uses CreateFile to open physical drives and read from and write to them.
I am currently trying to implement a testing feature that will thoroughly test a drive for consistency. I wrote a function that opens a drive for read and write, writes 10 MB of data (in one call to WriteFile), seeks back to the start of where that 10 MB was written (using SetFilePointer), reads it back (also in one call to ReadFile), checks that the data read matches the data written, then proceeds to the next 10 MB, and so on.
The problem is that on the two (old) hard drives I have tried this on, the process eventually fails after about five minutes: it is going fine, then the drive just disconnects from the system, or misbehaves badly enough that a reboot is required.
Both these drives pass WD Diag tests.
I'm wondering if it is possible to just 'stress' a drive beyond its designed operational limits.
So my question is this: should you be able to perform any combination of read, write, and seek operations on a drive in good health without it failing? (I know you have to read/write in multiples of 512 bytes, and obviously all drives have a limited life and will fail eventually, but surely it should be able to keep going for at least a few hours.)
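To make the procedure concrete, here is a simplified sketch of what the test loop does. This is not my exact code: the drive number, test pattern, and the FILE_FLAG_NO_BUFFERING / write-through flags are illustrative, and error handling, admin rights, and volume locking are glossed over. Unbuffered raw-device I/O needs sector-aligned buffers, which is why VirtualAlloc is used here:

```c
#include <windows.h>
#include <stdio.h>
#include <string.h>

#define CHUNK (10 * 1024 * 1024)  /* 10 MB per pass, as described above */

int main(void)
{
    /* "PhysicalDrive1" is just an example; this overwrites whatever is on it. */
    HANDLE h = CreateFileA("\\\\.\\PhysicalDrive1",
                           GENERIC_READ | GENERIC_WRITE,
                           FILE_SHARE_READ | FILE_SHARE_WRITE,
                           NULL, OPEN_EXISTING,
                           FILE_FLAG_NO_BUFFERING | FILE_FLAG_WRITE_THROUGH,
                           NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    /* Raw-device I/O must be sector aligned; VirtualAlloc returns page-aligned memory. */
    BYTE *wr = (BYTE *)VirtualAlloc(NULL, CHUNK, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    BYTE *rd = (BYTE *)VirtualAlloc(NULL, CHUNK, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (!wr || !rd) { fprintf(stderr, "VirtualAlloc failed\n"); return 1; }
    memset(wr, 0xA5, CHUNK);  /* simple test pattern */

    LARGE_INTEGER pos;
    pos.QuadPart = 0;
    for (;;) {
        DWORD written = 0, read = 0;

        if (!SetFilePointerEx(h, pos, NULL, FILE_BEGIN) ||
            !WriteFile(h, wr, CHUNK, &written, NULL) || written != CHUNK)
            break;  /* end of drive or write error */

        /* Seek back to where this chunk started and read it again. */
        if (!SetFilePointerEx(h, pos, NULL, FILE_BEGIN) ||
            !ReadFile(h, rd, CHUNK, &read, NULL) || read != CHUNK)
            break;

        if (memcmp(wr, rd, CHUNK) != 0) {
            fprintf(stderr, "Mismatch near offset %lld\n", pos.QuadPart);
            break;
        }
        pos.QuadPart += CHUNK;  /* move on to the next 10 MB */
    }

    CloseHandle(h);
    return 0;
}
```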
I have a relatively simple bash script that reads from a set of static input files, stores the input in bash variables and then does a bunch of processing over said input by calling out to external scripts (e.g. written in Python, Go, other bash scripts etc.) and using the intermediate results.
Lately I have been experiencing an intermittent problem where a single character seems to be getting altered somewhere during the processing which then causes subsequent errors. Specifically, a lot of the processing I'm doing involves slicing up a list of comma-separated records, and one of the values on each line is a unix timestamp, e.g. 1354245000.
What seems to be happening is that occasionally one of these values will get altered slightly, so I end up with a timestamp like 13542458=2 or 13542458>2 or 13542458;2 coming out of one of the intermediate scripts. This then subsequently gets fed into another script, which throws an exception when it tries to parse the value to an integer.
In the title of this question, I've suggested that this might be a potential CPU/RAM error. I know the general folly of blaming errors on low-level things like hardware, compilers, etc., but the nature of this particular error makes me think it may be possible, for the following reasons:
The input files are the same on each invocation of the script, and the script only fails on some invocations.
I cannot think of any sources of randomness in the source code prior to where the script is breaking. It's basically just slicing and dicing csv input.
I cannot think of any sources of concurrency in the source code -- even the Go scripts aren't actually written to run anything concurrently.
This problem has only arisen in the last week or so. Prior to this time, this error would never occur.
While I haven't documented every erroneous character, they often seem to be quite close in the ASCII table to numeric values (=, >, ; etc.). That said, I suppose two characters that are far apart in the ASCII table can still be a small Hamming distance apart if a high-order bit changes (see the small example below).
The script often breaks at a different stage on different runs. That is, I have a number of separate Python scripts, and sometimes it will make it past one script and the error will be induced in another; other times it will be induced in an earlier script.
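Purely as an illustration (I obviously don't know which digit originally sat in the corrupted position), each of the bad characters above is exactly one bit away from some ASCII digit, which is at least consistent with a single-bit flip:

```c
#include <stdio.h>

/* Illustration only: '=' , '>' and ';' each differ from some ASCII digit
   by exactly one bit. The pairings below are examples, not the actual
   digits that were corrupted. */
int main(void)
{
    const char pairs[][2] = { {'9', '='}, {'6', '>'}, {'9', ';'} };
    for (size_t i = 0; i < sizeof pairs / sizeof pairs[0]; i++) {
        unsigned diff = pairs[i][0] ^ pairs[i][1];
        printf("'%c' (0x%02X) vs '%c' (0x%02X): XOR = 0x%02X\n",
               pairs[i][0], (unsigned)pairs[i][0],
               pairs[i][1], (unsigned)pairs[i][1], diff);
    }
    return 0;
}
```

Each XOR result printed is a power of two, i.e. the two characters differ in exactly one bit.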
What I'd like to know is, is there any methodical way to either confirm or rule out a hardware error for this problem? Or if it is a hardware problem, is it possibly undetectable by the operating system?
A bit of further info on the machine:
Linux 64-bit, Ubuntu 12.04
Intel i7 processor
16GB DDR3 RAM
I'm hoping someone can either point me to a reliable way to verify whether the hardware is to blame or otherwise a sound reason as to what else might be the cause.
Try booting into Memtest to check your memory.
While it is highly unlikely that it will be hardware, if you have exhausted your standard software debugging as suggested by @OliCharlesworth, here is an outline of a hardware error investigation:
(1) Check your log area for any `MCE` logs (machine check exceptions). If you find any, either in your log area (syslog) or sometimes in the present working dir or the / dir, you have a hardware failure.
(2) Check your log area for disk errors, e.g.:
`smartd[3963]: Device: /dev/sda [SAT], 34 Currently unreadable (pending) sectors`
(3) Check your drive integrity, e.g. (as root): `smartctl -a /dev/sda`. If there is any abnormality, run:
`smartctl -t short /dev/sda` (change the drive as required)
(4) Download, install, and boot to [memtest86](http://www.memtest86.com/download.htm) (run the complete test).
If your CPU/motherboard has thrown no MCEs, you have no disk errors, your drive tests OK with smartctl, and you have no memory errors with memtest86, then recheck the software debugging. While additional hardware errors can still be present (bad capacitors, etc.), the likelihood at this point is software. Good luck.
We are experiencing "pulsing" writes to disk (from 1 write out/sec pulsing up to 142+ writes out/sec) around every 10 seconds.
See this example image:
https://discussions.apple.com/servlet/JiveServlet/showImage/2-22394173-269851/Screen+Shot+2013-07-03+at+13.22.28.png
We dug into these "pulsing" writes and found that they happen at exactly the same time as these errors from iotop:
`dtrace: error on enabled probe ID 5 (ID 992: io:mach_kernel:buf_strategy:start): illegal operation in action #3 at DIF offset 0`
The "pulsing" only happens when the error above presents itself in IOTOP.
Note: we are running Apple RAID software mirroring for two drives.
Any suggestions, help and tips will be greatly appreciated. Thanks in advance.
The pulsing I/O pattern you're seeing is characteristic of applications where many/most filesystem writes are asynchronous - this is because the filesystem will batch up the writes so it can do many at the same time to avoid doing one disk seek per write. The most common example I can think of is a database writing data - except for the database's write-ahead log, everything is typically written asynchronously; other transactional access patterns tend to be similar because they have a write-ahead log to recover if some async writes are lost in a crash. This is a common access pattern and isn't necessarily a problem, but it can become a problem when your disk is highly fragmented and the filesystem can't write everything out in batches (causing many seeks, just like it was trying to avoid).
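To make that access pattern concrete, here is a rough sketch (hypothetical file names, not any particular database's actual code) of the write-ahead-log pattern described above: the log record is forced to disk immediately, while the data page is left in the page cache for the kernel to batch and flush later, and those periodic flushes are exactly the kind of write bursts that show up in iotop:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical files, just to illustrate the two kinds of writes. */
    int log_fd  = open("wal.log",    O_WRONLY | O_CREAT | O_APPEND, 0644);
    int data_fd = open("data.pages", O_WRONLY | O_CREAT, 0644);
    if (log_fd < 0 || data_fd < 0) { perror("open"); return 1; }

    const char rec[] = "update record 42\n";
    char page[4096];
    memset(page, 'x', sizeof page);

    /* 1. Write-ahead log entry: synchronous, forced to disk before continuing. */
    write(log_fd, rec, sizeof rec - 1);
    fsync(log_fd);

    /* 2. Data page: asynchronous. The kernel keeps it in the page cache and
          flushes it later together with other dirty pages; those periodic
          flushes appear as the write "pulses". */
    pwrite(data_fd, page, sizeof page, 0);

    close(log_fd);
    close(data_fd);
    return 0;
}
```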
The DTrace/iotop error you're seeing means there's either a bug in the DTrace implementation itself or in the iotop DTrace script. Looking at iotop's source code (in /usr/bin/iotop on OS X), there are three io:::start callbacks which could be the culprit. It's possible that there's some sort of null pointer access in the script for some types of I/O, but it doesn't look likely based on the script and the arguments io:::start probes take. Perhaps this is best resolved with a bug report to Apple.
To restrict the scope, let's assume we are in the Windows world only.
Also assume we don't want to play with permission policies.
Is it possible for us to create a file that cannot be copied?
Thank you in advance.
"Trying to make digital files uncopyable is like trying to make water not wet." ~ Bruce Schneier
No. You can't create a file that a SYSADMIN can't copy. You could encrypt it, though.
Well, how about creating a file that uses up more than 50% of the total space on that machine and that is not compressible?
For instance, let us assume that you want to save a boolean (true or false) in such a fashion.
Depending on its value, you could then write a bit stream of ones or zeroes and encrypt said stream using some kind of encryption algorithm, such as AES in CBC mode. This gives you the added advantage of error correction: even in case of massive data corruption, you should be able to recover your boolean by checking whether ones or zeroes are prevalent in the decrypted stream.
In that case you cannot copy it around (completely) on the machine...
Of course, any type of external memory that can be added to the system would pose a problem in this scenario. But the file would be already encrypted, so don't worry about it too much...
Any file that can be read can have its contents written to another location (such as another file, i.e. copied).
The only thing you can do is limit who/what can read the file.
What is the motivation behind this? If it is a read-only file, you can have it as an embedded resource within your assembly.
Nice try, RIAA.
But seriously, no, you can not. It is always possible to copy; you can just make it more difficult for people to make sense of the file, or try to hide it using something like encryption. Spotify does this.
If you really try hard though, you could make a rootkit for Windows and use it to prevent Windows from even knowing about the file and also prevent copies. The file would still be there and copyable by other tools, or by Linux accessing the NTFS volume.
If a running process opens a file and holds an exclusive lock on it, then other processes cannot read the file until the handle is closed or the process terminates. However, as admin you could forcibly close the lock handle.
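As a sketch of that (the path here is purely illustrative), opening the file with a share mode of zero prevents other processes from opening it at all while the handle is held:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Illustrative path; the important part is dwShareMode = 0 (no sharing). */
    HANDLE h = CreateFileW(L"C:\\secret\\payload.dat",
                           GENERIC_READ | GENERIC_WRITE,
                           0,                 /* no FILE_SHARE_* flags */
                           NULL, OPEN_EXISTING,
                           FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    /* While this handle stays open, other processes' attempts to open the
       file (including for copying) fail with ERROR_SHARING_VIOLATION. */
    Sleep(INFINITE);

    CloseHandle(h);  /* never reached in this sketch */
    return 0;
}
```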
Short answer: No.
You can, of course, use security settings to limit who can read the file. But if someone can read it, then they can copy it. Even if you found some operating system trick to disable "ordinary" copying, if someone can read the file, they can extract the contents, store it in memory, and then write it somewhere else.
You can encrypt the contents so it's only useful to your own program, that knows how to decrypt it.
That's about it.
When using Windows 7 to copy some files from a hard drive, certain files popped up a message saying they could not be copied in their entirety; certain data would be omitted from the copy. I suspect that had something to do with slack space at the end of the files, though I thought the message was curious. I would have expected the copy operation to just ignore the slack space.
If you are running old (OLD) versions of Windows, there are certain characters you can put in the filename that make it invalid, not listed in folders, etc. They were used a lot in the old pub FTP days of filesharing ;)
In the old DOS days, you used to be able to flag disk sectors as bad and still read from them. This meant the OS ignored the sector in question but your application would know where to look and be able to get the data. Not sure this would work these days.
Another old MS-DOS trick was to put a space character in the middle of the filename (yes, spaces were valid characters for filenames). Since there was no method on the command line to escape a space, the file couldn't be copied using the DOS commands.
This answer is outside Windows, so yeah.
Don't know if it's already been said, but what about a file that is an inseparable part of the firmware, so that it is always on AND running; perhaps it has firmware that generates a sequence that is required for the other. An incidental effect of its running is to prevent 80% or more of its code from being replicated. Let's say it's on an entirely different board, protected by surge protectors, heavy EM-proof shielding, and anything else required to make it completely unerasable.
If it's possible to make a program that is ALWAYS on and running as long as the copying software is running, then yes.
I have another way, and this IS with Windows: I will come to your house and give you a disk, and I will then proceed to destroy every single computer you put the disk into. This doesn't work on XP.
Well technically you could create and write to a write-only network share.
I'm reading real-time data over USB, but the data is buffered. How do I stop the buffering?
Linux
Use udev to change the latency_timer.
On Ubuntu, create a rule in /etc/udev/rules.d for your device, e.g. `99-xsens.rules`.
Create a rule in that file to match your device and set the latency_timer. For example, for my device this is:
KERNEL=="ttyUSB[0-9]*", SUBSYSTEMS=="usb-serial", DRIVERS=="ftdi_sio", ATTR{device/latency_timer}="2"
This causes the device to wait a shorter time before deciding there's no more incoming data to buffer. In this case my device went from waiting 16ms to waiting 2ms.
Use `udevadm info -a -n /dev/ttyUSB0`, for example, to find out what key-value pairs to match against in your rule. There are some tricky things to keep in mind, but finding resources to help with the ins and outs was easy once I knew to use udev rules.
Here's a good reference page on writing udev rules. It is old and the syntax of the udev tools has changed, but the concepts are still valid.
Windows
On Windows, use Device Manager -> Ports -> (your COM port) -> Port Settings -> Advanced -> Latency Timer.