Stop a currently running writeToFile: - cocoa

I have some code which writes some PDFDocument objects to a user-chosen destination. This works fine.
Some of these files (chosen by the user) may be pretty large (perhaps hundreds of megabytes), and I wonder whether there is a way to cancel a writeToFile:withOptions: call that is already in progress (e.g. the user changed his mind and wants to stop it).

I doubt you can do it with that method, since it provides no canceling functionality.
I suggest you use PDFDocument's dataRepresentation method to get the PDF data first. You can then split the data up using NSData's subdataWithRange:, and successively write it out to a file using NSFileHandle's fileHandleForWritingToURL:error:, writeData:, and closeFile methods.
If you write the data out in chunks like this from a non-main thread, in a for loop say, you can cancel it at any point.
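By way of illustration, here is a minimal sketch of that chunked, cancellable write loop, written in plain C; a Cocoa version would push the bytes of dataRepresentation through NSFileHandle in the same fashion. The function name and the cancelled flag are made up for the example, with the flag assumed to be set from the main thread:

#include <stdio.h>
#include <stdbool.h>
#include <stdatomic.h>

/* Writes `length` bytes to `path` in 1 MiB chunks, checking a shared
   cancellation flag between chunks. On cancel or error, the partial
   file is removed. */
bool write_cancellable(const char *path, const unsigned char *bytes,
                       size_t length, atomic_bool *cancelled)
{
    FILE *f = fopen(path, "wb");
    if (f == NULL)
        return false;

    const size_t chunk = 1024 * 1024;
    for (size_t offset = 0; offset < length; offset += chunk) {
        if (atomic_load(cancelled)) {          /* user changed their mind */
            fclose(f);
            remove(path);                      /* discard the partial file */
            return false;
        }
        size_t n = (length - offset < chunk) ? length - offset : chunk;
        if (fwrite(bytes + offset, 1, n, f) != n) {
            fclose(f);
            remove(path);
            return false;
        }
    }
    fclose(f);
    return true;
}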

Related

TwinCAT fails to save data to CSV

I am part of a tractor pulling team and we have a Beckhoff CX8190-based PLC for data logging. The system works most of the time, but every now and then saving the sensor values (collected every 10 ms) to CSV fails, mostly in the middle of a CSV row. The guy who built the code is new to TwinCAT and does not know how to find what causes this. Any ideas where to look for the cause?
Writing to a file is always an asynchronous action in TwinCAT. That is to say, it is not a real-time action, and there is no guarantee that the write completes within the 10 ms task cycle time. Therefore these function blocks always have a BUSY output, which has to be evaluated, and the function block has to be called cyclically until the BUSY output returns to FALSE. Only then can a new write command be executed.
I normally tackle this task with a two-sided buffer algorithm. Let's say the buffer array has 2x100 entries. Fill up the first 100 entries with sample values, then write them all together to the file with one command. When the write is done, clear that half. In the meantime, the other half of the buffer can be filled with sample values; when that second half is full, write it all together to the file, and so on. This gives the file system access much more time (in the example above, 100 x 10 ms = 1 s) than the 10 ms task cycle time. The pattern is sketched below.
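A language-neutral sketch of that two-sided buffer, written here in C for illustration (the names and sizes are made up; the real thing would be a TwinCAT function block in Structured Text):

#include <stddef.h>

#define HALF_SIZE 100              /* 100 samples x 10 ms = 1 s per flush */

typedef struct {
    double halves[2][HALF_SIZE];   /* two independent buffer halves */
    size_t count;                  /* samples in the half being filled */
    int    filling;                /* index of the half being filled (0 or 1) */
} TwoSideBuffer;

/* Called once per 10 ms sample tick. Returns the index of a full half
   that is ready to be written to file in one command, or -1. */
int push_sample(TwoSideBuffer *b, double value)
{
    b->halves[b->filling][b->count++] = value;
    if (b->count < HALF_SIZE)
        return -1;
    int ready = b->filling;        /* hand the full half to the file writer */
    b->filling = 1 - b->filling;   /* keep sampling into the other half */
    b->count = 0;
    return ready;
}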
But this is just a suggestion from my experience. I agree with the others: some code would really help.

Is it possible to use Windows Overlapped IO to wait for another process to write to a file?

Say I want to write a tail-like application for Windows that monitors a bunch of files. Such an application should report when any of the monitored files is updated by another application.
It can be assumed that the files being monitored are constantly appended to by other processes, but not modified in any other way. Before implementing a polling solution (that is, iterating through the monitored files, seeking to the end of each one, recording that position, comparing it to the previous end, etc.), I would appreciate it if someone more experienced with overlapped I/O could tell me whether I can make use of it.
For instance, is it possible to write the monitoring application in such a way that it opens all the files that need to be monitored, seeks to the end of each, and tries to read one byte with ReadFileEx(), registering a callback?
Is there a way to make this work so that when another process writes to one of the files, the corresponding callback is invoked? Or will the monitoring application necessarily always get an EOF for such a call?
Is this approach a sensible one? Or is it a bad idea?
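For concreteness, the approach being asked about might look roughly like this in C. This is only a sketch of the question's idea, with error handling trimmed; whether the read actually waits for new data rather than completing immediately with EOF is exactly the open question:

#include <windows.h>
#include <stdio.h>

static OVERLAPPED g_ov;            /* must outlive the asynchronous read */
static char g_byte;

static VOID CALLBACK OnReadComplete(DWORD error, DWORD bytesRead, LPOVERLAPPED ov)
{
    printf("read completed: error=%lu, bytes=%lu\n", error, bytesRead);
}

int watch_tail(const wchar_t *path)
{
    HANDLE h = CreateFileW(path, GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    LARGE_INTEGER end;
    GetFileSizeEx(h, &end);        /* "seek to the end": overlapped reads take
                                      their position from the OVERLAPPED struct */
    g_ov.Offset     = end.LowPart;
    g_ov.OffsetHigh = (DWORD)end.HighPart;

    if (!ReadFileEx(h, &g_byte, 1, &g_ov, OnReadComplete))
        printf("ReadFileEx failed immediately: %lu\n", GetLastError());

    SleepEx(INFINITE, TRUE);       /* alertable wait so the callback can run */
    CloseHandle(h);
    return 0;
}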

Appropriate way to cancel saving file via file stream?

A tool I'm writing is responsible for downloading thousands of image files over many hours. Originally, using TIdHTTP, I would Get each file into a TMemoryStream and then save that to a file, as long as there were no exceptions. To improve speed, I changed the TMemoryStream to a TFileStream.
However, now if the resource is not found, or any other exception occurs that results in no actual file, an empty file is still saved.
That is completely understandable, since I create the file stream just prior to the download...
FileStream := TFileStream.Create(FileName, fmCreate);
try
  Web.Get(AURL, FileStream);
finally
  FileStream.Free;
end;
I know I could simply delete the file if there was an exception, but that seems far too sloppy. I'm sure there's a more appropriate way to abort in such a situation.
How should I make this not save a file if there was an exception, while not altering performance (if at all possible)?
"How should I make this not save a file if there was an exception, while not altering performance (if at all possible)?"
This isn't possible in general. Errors and failures can happen at any step of the way, including partway through the download. Once this point is understood, you must accept that the file can be partially downloaded and then abandoned. At that point, where do you store the partial data?
The obvious choices are memory and file. You don't want to store it in memory, which leaves the file.
This takes you back to your current solution.
"I know I could simply delete the file if there was an exception."
This is the correct approach. There are a few variants on it. For instance, you might download to a temporary file that is created with flags arranging for its deletion when it is closed; only if the download completes do you copy it to the true destination. This is the approach a browser takes. But the basic idea is to download to a file and deal with any failure by tidying up. A sketch of the temporary-file variant follows.
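Here is what that temporary-file variant could look like in C with the Win32 API. DownloadToHandle is a hypothetical stand-in for the actual download code:

#include <windows.h>

/* Hypothetical: writes the payload to the handle, returns FALSE on failure. */
BOOL DownloadToHandle(HANDLE hFile);

BOOL SaveDownload(const wchar_t *tempPath, const wchar_t *finalPath)
{
    /* FILE_FLAG_DELETE_ON_CLOSE: the file vanishes when the handle closes,
       so a failed or abandoned download tidies itself up. */
    HANDLE hTemp = CreateFileW(tempPath, GENERIC_WRITE,
                               FILE_SHARE_READ,   /* lets CopyFileW read it below */
                               NULL, CREATE_ALWAYS,
                               FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE,
                               NULL);
    if (hTemp == INVALID_HANDLE_VALUE)
        return FALSE;

    BOOL ok = DownloadToHandle(hTemp);
    if (ok)   /* publish to the true destination only on success */
        ok = FlushFileBuffers(hTemp) && CopyFileW(tempPath, finalPath, FALSE);

    CloseHandle(hTemp);                /* the temporary file is deleted here */
    return ok;
}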
Instead of downloading the entire image in one go, you could consider using HTTP range requests, if the server supports them. Then you could chunk the file into smaller parts, requesting the next part after the first finishes (or even requesting multiple parts at the same time to increase performance). If there is an exception, you can abort the future requests, so they never start in the first place.
YouTube and a number of streaming media sites started doing this a while ago. It used to be that if you started playing a video and then paused it, it would eventually cache the entire video. Now it only caches a little ahead of the current position. This saves a ton of bandwidth because of the abandonment rate for videos.
You could write the partial file to disk or keep it in memory.

two programs accessing one file

New to this forum - looks great!
I have some Processing code that periodically reads data wirelessly from remote devices and writes that data as bytes to a file, e.g. data.dat. I want to write an Objective-C program on my Mac Mini using Xcode that reads this file, parses the data, and acts on it if the values indicate a problem. My question is: can my two different programs access the same file asynchronously without a problem? If this is a problem, can you suggest a technique that will allow these operations?
Thanks,
Kevin H.
Multiple processes can read from the same file at the same time without any problem. A process can also read from a file while another writes to it without problem, although you'll have to take care to ensure that you read in any new data that was written. Multiple processes should not write to the same file at the same time, though. The OS will let you do it, but the ordering of the data will be undefined and you'll likely overwrite data; in general, you're gonna have a bad time if you do that. So you should take care to ensure that only one process writes to the file at a time.
The simplest way to protect a file so that only one process can write to it at a time is with the C function flock(), although that function is admittedly a bit rudimentary and may or may not suit your use case.
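For example, the writer could wrap each append in an exclusive lock, with the reading process taking a shared lock (LOCK_SH) the same way. Note that flock() is advisory, so both programs must use it; the function and file names below are just illustrative:

#include <sys/file.h>
#include <fcntl.h>
#include <unistd.h>

/* Append a record to data.dat under an exclusive advisory lock. */
int append_record(const void *bytes, size_t len)
{
    int fd = open("data.dat", O_WRONLY | O_APPEND | O_CREAT, 0644);
    if (fd < 0)
        return -1;

    flock(fd, LOCK_EX);            /* blocks until no one else holds the lock */
    ssize_t written = write(fd, bytes, len);
    flock(fd, LOCK_UN);

    close(fd);
    return (written == (ssize_t)len) ? 0 : -1;
}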

How do I safely and correctly create a backup of the Windows clipboard?

I'm trying to create a backup of the Windows clipboard. Basically what I'm doing is using EnumClipboardFormats() to get all of the formats that exist on the clipboard currently, and then for each format, I'm calling GetClipboardData(format).
Part of backing up the data obviously involves duplicating it. I do that by calling GlobalLock() (which "locks a global memory object and returns a pointer to the first byte of the object's memory block") on the data returned by GetClipboardData(), then fetching the size of the data with GlobalSize(), and finally doing a memcpy() to duplicate the data. I of course call GlobalUnlock() when I'm done.
Well, this works... most of the time. My program crashes in GlobalLock() if the clipboard contains data in the format CF_BITMAP or CF_METAFILEPICT. After reading this Old New Thing blog post (https://devblogs.microsoft.com/oldnewthing/20071026-00/?p=24683) I found out why the crash occurs: not all data on the clipboard is allocated using GlobalAlloc() (CF_BITMAP data, for example), so calling GlobalLock() on such data causes a crash.
I came across an MSDN article that lists the clipboard formats and how each is freed by the system. So what I did was hard-code into my program all of the clipboard formats (CF_*) that the system does not free with GlobalFree(), and I simply don't back those formats up; I skip them.
This workaround actually seems to work well. Even if a bitmap or "special" data (such as rows copied from Excel) is on the clipboard, my clipboard backup function works and I haven't experienced any crashes. And even though I skip some formats (like CF_BITMAP) during the backup, I can still Ctrl+V paste the originally-copied bitmap after restoring my clipboard backup, since the bitmap is also represented on the clipboard by other formats that don't cause my program to crash (CF_DIB). A sketch of this workaround follows.
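In code, the workaround looks roughly like this. It is only a sketch: the skip list is the hard-coded set of formats the system does not free with GlobalFree(), and what you do with each copied block (storing it for a later SetClipboardData) is left out:

#include <windows.h>
#include <stdlib.h>
#include <string.h>

/* Formats not backed by GlobalAlloc memory; GlobalLock() on these is
   what crashes, so the backup skips them. */
static BOOL IsSkippedFormat(UINT fmt)
{
    switch (fmt) {
    case CF_BITMAP:
    case CF_PALETTE:
    case CF_METAFILEPICT:
    case CF_ENHMETAFILE:
    case CF_DSPBITMAP:
    case CF_DSPMETAFILEPICT:
    case CF_DSPENHMETAFILE:
        return TRUE;
    default:
        return FALSE;
    }
}

void BackupClipboard(HWND owner)
{
    if (!OpenClipboard(owner))
        return;

    UINT fmt = 0;
    while ((fmt = EnumClipboardFormats(fmt)) != 0) {
        if (IsSkippedFormat(fmt))
            continue;                        /* not safe to GlobalLock: skip */

        HANDLE hData = GetClipboardData(fmt);
        if (hData == NULL)
            continue;

        SIZE_T size = GlobalSize(hData);
        void  *src  = GlobalLock(hData);
        if (src != NULL && size > 0) {
            void *copy = malloc(size);       /* or GlobalAlloc(GMEM_MOVEABLE, size) */
            if (copy != NULL)
                memcpy(copy, src, size);
            /* ... store (fmt, copy, size) in the backup list here ... */
        }
        GlobalUnlock(hData);
    }
    CloseClipboard();
}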
However, it's a workaround at best. My fear is that one of these times some weird format (perhaps a private one, i.e. one between CF_PRIVATEFIRST and CF_PRIVATELAST, or maybe some other type) will be on the clipboard and my program will crash again after calling GlobalLock(). But since there doesn't seem to be much documentation explaining the best way to back up the clipboard, and it's clear that GlobalLock() unfortunately does not work for all data types, I'm not sure how to handle these situations. Is it safe to assume that all other formats, besides those in the list that aren't freed by GlobalFree(), can be "grabbed" using GlobalLock()?
Any ideas?
This is folly, as you cannot 100% back up and restore the clipboard. Lots of apps use delayed rendering, where the data is not actually on the clipboard; when you request a paste, they get notified and produce the data then. For large amounts of data from apps like Excel, rendering everything would take several minutes and hundreds of MB. Take a look at the number of formats listed on the clipboard when you copy from Excel: there will be more than two dozen, including Bitmap, Metafile, and HTML. What do you think will happen if you select 255x25000 cells in Excel and copy that? How large would that bitmap be? Tip: save any open documents before attempting this, as you're likely going to have to reboot.
