What file access woke the sleeping disk in Windows?

Every now and then, my sleeping disk wakes up, does what sounds like a single read, and then sits idle until it falls asleep again. Sometimes a program that I am using completely freezes for about 10 seconds while the disk spins up, even though that program doesn't seem to need to read from that drive.
Is there an API for listening to file accesses as they happen, or something similar, so I can figure out what is being read from that drive and move it? If not on Windows, can I do this on Linux?
This is also applicable for figuring out what files/folders a program is accessing in general, so I wouldn't say it only applies to my very narrow problem.

There's a simple tool called What's My Computer Doing? that you can use to get a quick idea of what's causing activity on your computer.
Install and run it, and leave it running in the background. Once you use this tool to narrow down which process is causing the disk activity, you'll want a more comprehensive tool. I use Process Monitor from Sysinternals/Microsoft.
It can be a bit daunting at first, but that's mainly because it is so powerful. It can also affect the behavior of the computer: while it's running, it writes the huge quantity of data it collects to disk, which itself generates disk activity. That's why I suggest using the 'What's My Computer Doing?' tool first. Once you know which process is generating the disk access, you can add a new filter rule (keep all the defaults, as they mask out a bunch of normal system processes) and select "Process Name" "is" "process_name", or select "PID" "is" "actual_PID".
There are plenty of tutorials like this one that can help you get started with Process Monitor.

Related

Packet Corruption: Why does ffmpeg .bat batch video editing sometimes make my computer unstable and unable to restart?

I'm doing very time-consuming ffmpeg video editing. That's why I put my commands into a .bat batch file and run them overnight. Usually that works fine, but from time to time, when I look the next morning, I see an error message of this kind:
From that state on, I didn't find any good way to close the console. When I press the [x] button in the top right corner, it freezes. When I try to kill the application using the Task Manager, nothing happens. Even explorer.exe cannot be closed using the Task Manager. A shutdown won't do anything. During the last month I had this problem about three times, and the only way I could close it was to long-press the power button of the computer until it was turned off "the bad way".
Any ideas what to do in such situations?
Or even better: How to prevent those situations?
What can the reason(s) be for the error?
Do you understand the message?
When the computer is started again the next morning and I run the same .bat file again, everything works fine. So the same error does not repeat and the video is edited nicely!
Edit: Now, about one week after posting this question, the problem has occurred many more times! It is very annoying. I guess it has to do with the external hard drive connected by USB. Sometimes it randomly drops the connection! That might be the reason for the behavior. Whatever is causing the error, I want to learn how to deal with this in the future. I don't want to always push the reset button of my computer; I want a proper way to shut it down.
To narrow down what is causing this error and what is not, here is a list of seven seemingly isolated checks that, each alone or all together, should fix your problem:
The .bat Batch File
Apparently there is nothing wrong with your .bat batch files.
If there were, none of your past videos would have rendered.
But just to be sure, run your .bat on a different laptop or computer against the heaviest and most demanding video editing projects, to confirm that the .bat files are in fact flawless.
The Computer CPU
Make sure that your CPU runs flawlessly not just for 30 minutes but for the hours-long burn test that your overnight video projects effectively are. Poor contact between a concave or convex heatsink and the CPU, or too little or too much thermal paste, can make the CPU too hot and unstable during prolonged CPU-intensive workloads. A tool like OCCT or IntelBurnTest should be able to run for hours in your case without a single fault.
The Computer RAM
To test your memory you can use MemTest86 or, my favourite, the open-source MemTest86+; either should run for hours without a single memory error.
The OS Integrity
Run CMD as admin and type chkdsk c: /f or chkdsk c: /f /r /x, then press Y to check and repair (after a reboot) the local drive C: or any other partition that is the source or destination of your rendering projects. A sudden shutdown or a corrupted file system can leave corrupted files behind, so this check matters after every crash. Also run sfc /scannow, which scans the most important system files and repairs them where it can.
The Hard Drive
Connect your external drive locally and run both a short and a long (extended) test to make sure the hard drive has zero bad clusters. A SMART check with CrystalDiskInfo, from the developers famous for CrystalDiskMark, is a good way to see the errors a drive has recorded in the past. Also, try running the nightly batch files with the HDD connected internally. That way you can rule out the next item:
The Cable Quality
Cat-rated UTP network cables and USB cables are notorious for poor manufacturing quality and low reliability. Not just over time: even new out of the box they can cause disconnects, bad connections and low throughput. It is not a case of working 100% or 0%; sometimes they sit right in between and "work, but only to a degree", built to the absolute bare minimum (and sometimes below it) with strands that are anything but copper. So check your cables and swap them for other cables you have lying around. CCA (Copper-Clad Aluminium) is the garbage to stay away from; get proper copper-only cables.
USB to SATA (HDD) or M.2 NVMe (SSD) Adapter Chip
Some USB-to-SATA adapters are notorious for their low stamina: the adapter chips give up under prolonged continuous workloads, resulting in disconnects even when connected to the computer with a good copper USB 3.2 cable! The internet is full of forum threads about older, cheaper JMicron chips causing interruptions and failures when copying files to or from the PC. Realtek chips are somewhat better, but often the last page of such threads shows that all problems went away once the poster bought a more expensive adapter with an ASMedia chip.

Continue batch script only after program has booted

I have a batch script (yes I know batch is awful, no I don't care) that compares the VMs on the local machine to the ones stored on my USB drive; if they're out of date, it updates them, then boots them. I use multiple machines at uni, so this makes it easier to ensure the VMs I'm working on are always the latest.
When I open them like this:
PATH "%PROGRAMFILES%\VMware\VMware Workstation\"
START vmware.exe -x "%USERPROFILE%\Desktop\VMs\VM1\VM1.vmx"
START vmware.exe -x "%USERPROFILE%\Desktop\VMs\VM2\VM2.vmx"
START vmware.exe -x "%USERPROFILE%\Desktop\VMs\VM3\VM3.vmx"
It causes the VMs to open in separate windows, rather than as tabs of the same window for easy switching.
The workaround I came up with is to boot the VMware program first, then when I open the .VMXs, they all open as tabs in the same window.
The problem is that the VMware program sometimes takes a long time to open. Similar to Photoshop's loading splash screen, but with no visual indicator, VMware opens up to 20 seconds after the icon has been clicked or it has been launched from a script.
So finally, here is my question.
Is there a way to make a batch that waits for the program to open before continuing? I know by omitting START I can stop the batch until the program closes, but obviously this is useless for my purposes.
If all else fails, I may just have to include a 30 second timeout and hope it's enough.
I don't think there's any reliable way to do this.
If you have a program that you need to open and then wait for a certain state to change in the program before doing something else, that state could be set by any arbitrary operation running on any thread spawned by the application. There would be no way to know when the application had set that state unless you were able to somehow communicate with the program to query or be notified of its state.
Theoretically, if you knew enough about what the program was doing internally, you could monitor the thread count or file system accesses or something else observable to determine roughly when it had reached the desired state (a rough sketch of that idea follows below), but just using a timer would be much simpler.
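For what it's worth, here is a rough C sketch of that "monitor something observable" idea, assuming the thing worth watching is the appearance of VMware Workstation's main window: a tiny helper that polls the top-level windows until one whose title contains a given substring exists, or gives up after about a minute. The title substring "VMware Workstation" is only a guess, not something VMware documents, so treat this as a sketch rather than a drop-in tool.

    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical helper: exits with code 0 as soon as a top-level window whose
       title contains TARGET exists, or with code 1 after ~60 seconds. A batch
       file could run it between "START vmware.exe" and opening the .vmx files,
       then test ERRORLEVEL before continuing. */
    static const char *TARGET = "VMware Workstation";  /* guessed title substring */
    static BOOL found = FALSE;

    static BOOL CALLBACK check_window(HWND hwnd, LPARAM lparam)
    {
        char title[256];
        (void)lparam;
        if (GetWindowTextA(hwnd, title, (int)sizeof(title)) > 0 &&
            strstr(title, TARGET) != NULL) {
            found = TRUE;
            return FALSE;            /* stop enumerating */
        }
        return TRUE;                 /* keep looking */
    }

    int main(void)
    {
        for (int i = 0; i < 60 && !found; ++i) {   /* poll once per second */
            EnumWindows(check_window, 0);
            if (!found)
                Sleep(1000);
        }
        return found ? 0 : 1;
    }

Even then, this only detects that a window with a plausible title has appeared, not that the program has finished whatever internal initialisation you actually care about.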

Windows 7: poor GUI response in my program while downloading data; is there some way to improve this?

I've written a program that (among other things) downloads multiple large files from a server on the LAN, using TCP. This program runs fine under Linux, MacOS/X, and generally under Windows as well (it uses Qt for the GUI and straight sockets calls for networking), but on certain Windows machines the download appears to be too much for the machine to handle, and I'm wondering if anyone has any ideas as to why that is and what can be done about it.
When downloading files, my program spawns a separate I/O thread that basically just sits in a loop, downloading data over TCP and writing it to a file, writing 128KB per call to QFile::write(). Each file is typically several hundred megabytes long, and a typical download session writes out several dozen of these files. Note that the I/O thread runs independently of the GUI thread, so I wouldn't expect it to affect the GUI's performance much if at all -- especially not when running on a multicore PC.
The PC in question is a Core 2 Quad Q6600 running at 2.40GHz, with 4GB of RAM. It's running Windows 7 Ultimate SP1, 32-bit. It is receiving data over a Gigabit Ethernet connection and writing it to files on the NTFS-formatted boot partition of the 232GB internal Hitachi ATA drive.
The symptom is that sometimes during a download (seemingly at random) the program's GUI will become non-responsive for 10 to 30 seconds at a time, and often the title bar of the window will have "(not responding)" appended to it. The symptom will then clear up again and the download will proceed normally again. Another symptom is that the desktop is extremely sluggish during the download... for example, if I click on the "Start" button, the Start menu will take ~30 seconds to populate, instead of being populated near-instantaneously as I would expect.
Note that Task Manager shows plenty of free memory, but it does show short spikes of CPU usage to 100% on one of the 4 cores, at the same time the problems are seen.
The data is arriving over Gigabit Ethernet, and if I have my program just receive the data and throw it away (without writing it to the hard drive), the machine can maintain a constant download rate of about 96MB/sec without breaking a sweat. If I write the received data to a file, however, the download rate decreases to about 37MB/sec, and the symptoms described above start to appear.
The interesting thing is that just for curiosity's sake I added this call to my I/O thread's entry function, just before the beginning of its event loop:
SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_BELOW_NORMAL);
When I did that, the "(not responding)" symptoms cleared, but then download speed was reduced to only ~25MB/sec.
So my questions are:
Does anyone know what might be causing the sporadic hangups of the GUI when the hard drive is under a heavy write-load?
Why does lowering the I/O thread's priority cause the download rate to drop so much, given that there are three idle cores on the machine? I would think that even a lower-priority thread would have plenty of CPU available in this situation.
Is there any way to get a maximum download rate without causing Windows' desktop responsiveness and/or my app's GUI responsiveness to suffer problems?
Without seeing any code it is hard to answer, but this seems to be related to the processors and the fact that your download thread is not leaving any room for other threads to perform other operations.
It seems it never waits, and possibly the driver of the network card is not well written.
Are you sure your thread enters an idle state when there is no data incoming?
On an OS with a single processor, a for (;;) {} loop will consume 100% CPU, and if it talks continuously to the kernel it may stall other processes or threads while doing so, especially if there is a bug or very bad behaviour in some network card driver, as there may be in your case.
By putting the thread priority below normal you are probably asking the OS to schedule your thread less often, which, by some combination of effects, allows the rest of the system not to hang as much.
Check the code, maybe you are forgetting something?
Check whether adding a sleep(0) to force the OS to yield to another thread now and then makes things better, but this is a temporary fix; you should find out why your thread is consuming 100% CPU, if it is.
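To make that suggestion concrete, here is a minimal sketch of what such an I/O loop might look like with an occasional yield. The socket and file handle names are hypothetical stand-ins for whatever the asker's Qt code actually uses, and the function assumes they are already connected/opened, so this only illustrates the shape of the idea, not a drop-in fix.

    #include <winsock2.h>
    #include <windows.h>

    /* Hypothetical sketch: receive data from an already-connected socket and
       append it to an already-opened file, yielding the CPU after each write
       so other ready threads get a chance to run. */
    static void download_loop(SOCKET sock, HANDLE file)
    {
        static char buf[128 * 1024];           /* 128KB per write, as in the question */
        for (;;) {
            int n = recv(sock, buf, (int)sizeof(buf), 0);
            if (n <= 0)
                break;                         /* connection closed or error */

            DWORD written = 0;
            if (!WriteFile(file, buf, (DWORD)n, &written, NULL))
                break;                         /* write error */

            Sleep(0);                          /* relinquish the remainder of this
                                                  time slice to other ready threads */
        }
    }

Note that Sleep(0) is a blunt instrument: it just gives up the rest of the current time slice, so it mainly serves to test whether this loop really is starving the rest of the system.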

How to Safely Force Shutdown of Mac

What I want
I'm developing a little app to force me to only work at certain times of day - I need something to force me to stop working in the evenings so I can be more effective in the day.
The option within OS X to shut down my machine at a certain time is too easy to cancel. And you can always log back in afterwards.
I want my app to quit all applications whether they have unsaved work or not.
What I've tried
I thought of killing the loginwindow process, but I've read that this can cause data corruption.
I've come across the shutdown command - I'm using sudo shutdown -h +0 to shutdown immediately. This appears to be just the ticket, but I'm worried that it might cause data corruption if, say, Disk Utility is doing some kind of scan.
Is the shutdown command safe?
Can the shutdown command cause corruption? Or is it safe to use? Is there a better way of forcing shutdown safely?
Use AppleScript to tell application "System Events" to shut down.
The shutdown command sends running processes a signal to terminate, giving them a chance to do cleanup work if needed. So generally, when an application receives this signal (SIGTERM(inate)) it should wrap up and exit.
IIRC, in Snow Leopard (10.6) Apple added something called fast shutdown (or similar), which sends processes that have been flagged as being OK with it a SIGKILL signal, shutting them down without a chance for cleanup work. This is supposed to make shutdown faster. The default is that applications still get SIGTERM and have to opt in to SIGKILL; they can also mark themselves as "dirty", i.e. having unsaved work, so they are not killed forcibly.
So while shutting down in the middle of a Disk Utility run will abort whatever Disk Utility is doing, IMHO it would not cause data corruption in general. However, depending on the operation currently running, you could end up with an incomplete disk image or a half-formatted partition. Maybe you want to refrain from using it when you know the end of your configured work time is coming close.
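As a rough illustration of what "a chance to do cleanup work" means on the receiving side, here is a minimal C sketch of a process that catches SIGTERM, finishes its work, and exits cleanly; the work itself is just a placeholder. A SIGKILL, by contrast, would terminate the process before any of this could run.

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_sigterm = 0;

    static void on_sigterm(int sig)
    {
        (void)sig;
        got_sigterm = 1;   /* only set a flag; do the real work outside the handler */
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = on_sigterm;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGTERM, &sa, NULL);

        while (!got_sigterm) {
            /* ... normal work: placeholder for saving documents, flushing buffers ... */
            sleep(1);
        }

        /* SIGTERM received: flush and close files, then exit cleanly. */
        printf("cleaning up and exiting\n");
        return EXIT_SUCCESS;
    }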
Using cron to schedule the shutdown is a viable option if you want it to happen at a specified time. If you want it to happen a certain amount of time after you log in, you could use the time argument to shutdown to specify, say, 8 hours from now (sudo shutdown -h +480, since the offset is given in minutes).
If you are prepared to lose unsaved work, then shutdown -h is your answer.
However, anyone who has debugged a full-screen app on OS X knows that it is very easy (some say too easy) for an app to capture the screen and render the computer essentially useless (short of SSHing in from another computer to kill the process). That's another alternative.
The recommended way to schedule a shutdown of your computer on a regular basis is in System Preferences -> Energy Saver. Click the "Schedule" button in the lower right-hand corner; the rest is self-explanatory.
Forcing your computer to shut down (and discard any unsaved work) doesn't sound like a good idea to me. Wouldn't it be easier and safer to just set an alarm clock to remind yourself when you should stop working, and walk away from your computer when it rings? (That's what I do.)
Edit: That might have come across as a bit rude, which was not my intention at all. (I had no intention of making fun of your question or anything like that.) I just think that this would be a better solution to this problem :)
Maybe cron is installed on your computer? It's wonderful =)

How do I make Windows file-locking more like UNIX file-locking?

UNIX file-locking is dead-easy: The operating system assumes that you know what you are doing and lets you do what you want:
For example, if you try to delete a file which another process has opened, the operating system will usually let you do it. The original process still keeps its file handles until it terminates, at which point the file system will quietly recycle the disk resources. No fuss, that's the way I like it.
How different things are on Windows: If I try to delete a file which another process is using, I get an operating-system error. The file is untouchable until the original process releases its lock on the file. That was great back in the single-user days of MS-DOS, when any locking process was likely to be on the same computer that contained the files; on a network, however, it's a nightmare:
Consider what happens when a process hangs while writing to a shared file on a Windows file-server. Before the file can be deleted we have to locate the computer and ID the process on that computer which originally opened the file. Only then can we kill the process and delete our unwanted file.
What a nuisance!
Is there a way to make this better? What I want is for file-locking on Windows to behave like file-locking in UNIX. I want the operating system to just let me do what I want because I'm in charge and I know what I'm doing...
...so can it be done?
No. Windows is designed for the "average user", that is, people who don't understand anything about a computer. Therefore, the OS tries to be smart to avoid PEBKACs. To quote Bill Gates: "There are no significant bugs in our released software that any significant number of users want fixed." Of course, he knows that 99.9999% of all Windows users can't tell whether the program just did something odd because of them or because of the guy who wrote it.
Unix was designed when the world was more simple and anyone close enough to a computer to touch it, probably knew how to assemble it from dirty sand. Therefore, the OS usually lets you do what you want because it assumes that you know better (and if you didn't, you will next time).
Technical answer: Unix allocates an i-node when you create a file. I-nodes can be shared between processes. If two processes create the same file (that is, two processes call creat() with the same path), you can end up with two i-nodes. This is by design. It allows for a fancy security feature: you can create files which no one can open but yourself:
Open a file
Delete it (but keep the file handle)
Use the file any way you like
Close the file
After step #2, the only process in the universe that can access the file is the one that opened it (unless you want to read the hard disk block by block). The OS will keep the data alive until you either close the file or your process dies (at which point Unix will clean up after you).
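A minimal C sketch of those four steps on a POSIX system (the path is just an example):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* 1. Create and open a file. */
        int fd = open("/tmp/private-scratch", O_RDWR | O_CREAT | O_EXCL, 0600);
        if (fd < 0) { perror("open"); return 1; }

        /* 2. Delete it immediately; the open descriptor keeps the data alive,
              but no other process can open it by name any more. */
        unlink("/tmp/private-scratch");

        /* 3. Use the file any way you like. */
        const char *msg = "only this process can see this\n";
        if (write(fd, msg, strlen(msg)) < 0) perror("write");

        /* 4. Close it; the kernel now reclaims the space. */
        close(fd);
        return 0;
    }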
This design is the foundation of all Unix filesystems. The Windows file system NTFS works much the same way, but the high-level API is different. Many applications open files in exclusive mode, which prevents anyone, even backup programs, from reading the file. This is even true for applications which just display information, like PDF viewers.
That means you'd have to fix all the Windows applications to achieve the desired effect. If you have access to the source, you can create the file in a shared mode. That would allow other processes to access it at the same time, but then you have to check before every read/write whether the file still exists, whether someone has made changes, and so on.
According to MSDN, you can pass the share-mode flag FILE_SHARE_DELETE in CreateFile()'s third parameter (dwShareMode), which:
Enables subsequent open operations on a file or device to request delete access.
Otherwise, other processes cannot open the file or device if they request delete access.
If this flag is not specified, but the file or device has been opened for delete access, the function fails.
Note Delete access allows both delete and rename operations.
http://msdn.microsoft.com/en-us/library/aa363858(VS.85).aspx
So if you can control your applications, you can use this flag.
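For illustration, here is a minimal sketch of opening a file with FILE_SHARE_DELETE; the path is just an example. While this handle is open, another process is allowed to delete or rename the file, and the actual removal is deferred until the last handle is closed.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Open an existing file for reading while still allowing other
           processes to read, write, delete or rename it. */
        HANDLE h = CreateFileW(
            L"C:\\temp\\example.log",        /* example path */
            GENERIC_READ,
            FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
            NULL,                            /* default security */
            OPEN_EXISTING,
            FILE_ATTRIBUTE_NORMAL,
            NULL);

        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
            return 1;
        }

        /* ... read from the file; a DeleteFile() call from another process
           will now succeed instead of failing with a sharing violation ... */

        CloseHandle(h);
        return 0;
    }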
Note that Process Explorer allows force-closing of file handles (for processes local to the box on which you are running it) via Handle -> Close Handle.
Unlocker purports to do a lot more, and provides a helpful list of other tools.
Also, deleting on reboot is an option (though that doesn't sound like what you want).
That doesn't really help if the hung process still has the handle open. It won't release the resources until that hung process releases the handle. But anyway, in Windows it is possible to force close a file out from under a process that's using it. Process Explorer from sysinternals.com will let you look at and close handles that a process has open.
