I'd like to be able to open a file on Windows Phone 7, in an XNA game, without reading the entire file into memory. I'm trying to stream audio from WAV files, to be passed to DynamicSoundEffectInstance for playback.
The method I have now uses TitleContainer.OpenStream() to open the WAV file, and then reads it on a background thread using ThreadPool.QueueUserWorkItem(). However, this causes a hitch at the beginning, and today I verified that TitleContainer.OpenStream() returns a MS.Internal.InternalMemoryStream object, which would suggest that it's reading the entire file into memory in OpenStream().
This is corroborated by the fact that it seems to take effectively no time (or, only a memcpy()'s worth of time) to do the Read(), and that Stream.BeginRead() (which is included on WP7 as part of the Async CTP) calls its callback before returning.
Is there any way to open a file on WP7 XNA without reading the entire thing into memory? If not, this is completely ridiculous.
There does not appear to be a way to do this. However, since I know the size of the chunks I want to read (32kB), I can split the files into chunks offline and read them one chunk at a time.
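A rough sketch of the offline splitting step, in C++ purely for illustration (the chunk naming scheme and the function name are placeholders of mine, not part of any official API):
// Offline splitter sketch: writes "song.wav.000", "song.wav.001", ... (assumed naming).
#include <cstddef>
#include <fstream>
#include <iomanip>
#include <sstream>
#include <string>
#include <vector>

void SplitIntoChunks(const std::string& inputPath, std::size_t chunkSize = 32 * 1024)
{
    std::ifstream in(inputPath, std::ios::binary);
    std::vector<char> buffer(chunkSize);
    int index = 0;

    while (in) {
        in.read(buffer.data(), static_cast<std::streamsize>(buffer.size()));
        std::streamsize got = in.gcount();
        if (got <= 0)
            break;

        // Each chunk becomes its own small content file.
        std::ostringstream name;
        name << inputPath << "." << std::setw(3) << std::setfill('0') << index++;
        std::ofstream out(name.str(), std::ios::binary);
        out.write(buffer.data(), got);
    }
}
At runtime the game then opens chunk N with TitleContainer.OpenStream(); since each chunk is only 32 kB, it no longer matters that OpenStream() reads the whole thing into memory.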
A tool I'm writing is responsible for downloading thousands of image files over a matter of many hours. Originally, using TIdHTTP, I would Get the file(s) into a TMemoryStream, and then save that to a file, so long as there were no exceptions. In order to improve speed, I changed the TMemoryStream to a TFileStream.
However, now if the resource is not found, or any other exception occurs that results in no actual file, it still saves an empty file.
Completely understandable, since I simply create a file stream just prior to the download...
FileStream := TFileStream.Create(FileName, fmCreate);
try
  Web.Get(AURL, FileStream);
finally
  FileStream.Free;
end;
I know I could simply delete the file if there was an exception. But it seems far too sloppy. I'm sure there's a more appropriate method of aborting such a situation.
How should I make this not save a file if there was an exception, while not altering performance (if at all possible)?
How should I make this not save a file if there was an exception, while not altering performance (if at all possible)?
This isn't possible in general. Errors and failures can happen at any step of the way, including part way through the download. Once this point is understood, you must accept that the file can be partially downloaded and then abandoned. At that point, where do you store it?
The obvious choices are memory and file. You don't want to store it in memory, which leaves a file.
This takes you back to your current solution.
I know I could simply delete the file if there was an exception.
This is the correct approach. There are a few variants on this. For instance you might download to a temporary file that is created with flags to arrange its deletion when closed. Only if the download completes do you then copy to the true destination. This is the approach that a browser takes. But the basic idea is to download to file and deal with any failure by tidying up.
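One way the tidy-up can look with the raw Win32 calls, shown in C++ purely for illustration (the ".part" naming and the download callback are assumptions, not a prescribed API): write to a temporary name, publish it only on success, and delete it on failure.
#include <windows.h>
#include <functional>
#include <string>

bool DownloadFile(const std::wstring& finalPath,
                  const std::function<bool(HANDLE)>& download)
{
    std::wstring tempPath = finalPath + L".part";   // assumed temp naming

    HANDLE h = CreateFileW(tempPath.c_str(), GENERIC_WRITE, 0, nullptr,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h == INVALID_HANDLE_VALUE)
        return false;

    bool ok = download(h);      // hypothetical callback: stream the HTTP body into the handle
    CloseHandle(h);

    if (ok)
        // Publish the result only once it is complete.
        return MoveFileExW(tempPath.c_str(), finalPath.c_str(),
                           MOVEFILE_REPLACE_EXISTING) != 0;

    // Tidy up: a failed or partial download never reaches finalPath.
    DeleteFileW(tempPath.c_str());
    return false;
}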
Instead of downloading the entire image in one go, you could consider using HTTP range requests if the server supports them. Then you could chunk the file into smaller parts, requesting the next part after the first finishes (or even requesting multiple parts at the same time to increase performance). If there is an exception then you can abort the future requests, so they never start in the first place.
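Purely as an illustration of the idea, here is a minimal range request using libcurl (the URL, byte range, and output name are made up); any HTTP client that lets you set a Range header can do the same:
#include <curl/curl.h>
#include <cstdio>

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    FILE* out = std::fopen("image.part", "wb");               // hypothetical output name
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/image.jpg");
    curl_easy_setopt(curl, CURLOPT_RANGE, "0-32767");         // ask for the first 32 KB only
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);           // default write callback fwrites here
    CURLcode rc = curl_easy_perform(curl);

    std::fclose(out);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}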
YouTube and a number of streaming media sites started doing this a while ago. It used to be if you started playing a video, then paused it, then it would eventually cache the entire video. Now it only caches a little ahead of the current position. This saves a ton of bandwidth because of the abandon rate for videos.
You could write the partial file to disk or keep it in memory.
I have a Win32 program that keeps a file open and writes data to it over a period of several hours. I'd like for the file size, as shown in an Explorer window, to be updated every so often.
As an example, when a browser is downloading a large file, you can see the file size change over time, even though the file is still downloading.
With my current naive implementation, the file size remains zero until I close the file.
How do I do this in Win32? Currently the file is open using std::ofstream. Is this a proper application of std::ostream::flush()? Or do I need to close and reopen the file with some regularity?
std::ostream::flush() pushes the stream's buffered data out to the operating system, though on its own it does not guarantee the bytes are physically on disk. Flushing is a valid approach in situations where the automatic flushes aren't good enough for you (e.g. too little data is written over too long a period, the data is written constantly but needs to be readable constantly too, you need to be sure the data gets logged in case of a crash or power-down, etc.); yet, on some OS/filesystem combinations (see Why is the file size reported incorrectly for files that are still being written to?), it still won't update the reported file size accordingly. On Win32, you usually won't see size updates before actually closing/reopening the handle; sometimes re-reading the directory will help, and sometimes it simply won't.
As such, you can use e.g. ReOpenFile to force that update, or simply use close/open instead of flushing. The exact solution depends on whether you need the updated file size so direly that the reduced output rate is not a real problem (in which case reopening is the best option), or whether you can live with a wrong size being reported (in which case flushes are your best option IMO).
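A minimal sketch of the close/reopen route (the class, file name handling, and cadence are assumptions for illustration):
#include <fstream>
#include <string>

class SizeVisibleLog {
public:
    explicit SizeVisibleLog(std::string path)
        : path_(std::move(path)), out_(path_, std::ios::binary) {}

    void Write(const char* data, std::streamsize len)
    {
        out_.write(data, len);
        if (++writesSinceReopen_ >= kWritesPerReopen) {
            // Closing releases the handle, which is what finally makes Explorer
            // show the current size; reopen in append mode so nothing is lost.
            out_.close();
            out_.open(path_, std::ios::binary | std::ios::app);
            writesSinceReopen_ = 0;
        }
    }

private:
    static constexpr int kWritesPerReopen = 100;  // arbitrary cadence
    std::string path_;
    std::ofstream out_;
    int writesSinceReopen_ = 0;
};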
I want to be able to (programmatically) move (or copy and truncate) a file that is constantly in use and being written to, so that the file being written to never grows too big.
Is this possible? Either Windows or Linux is fine.
To be specific what I'm trying to do is log video with FFMPEG and create hour long videos.
It is possible in both Windows and Linux, but it would take cooperation between the applications involved. If the application that is writing the new data to the file is not aware of what the other application is doing, it probably would not work (well ... there is some possibility ... back to that in a moment).
In general, to get this to work, you would have to open the file shared. For example, if using the Windows API CreateFile, both applications would likely need to specify FILE_SHARE_READ and FILE_SHARE_WRITE. This would allow both (multiple) applications to read and write the file "concurrently".
Beyond sharing the file, though, it would also be necessary to coordinate the operations between the applications. You would need to use some kind of locking mechanism (either by locking some part of the file or some shared mutex/semaphore). Note that if you use file locking, you could lock some known offset in the file to act as a "semaphore" (it can even be a byte value beyond the physical end of the file). If one application were appending to the file at the same exact time that the other application were truncating it, then it would lead to unpredictable results.
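A minimal sketch of that arrangement using the raw Win32 calls (the lock offset and the function names are assumptions for illustration):
#include <windows.h>

// Both the writer and the truncator would wrap their file operations in this,
// so an append can never interleave with a truncate.
bool WithFileLock(HANDLE file, bool (*op)(HANDLE))
{
    OVERLAPPED ov = {};
    ov.Offset     = 0xFFFFFFF0;   // an offset well beyond any real data, used as a "semaphore"
    ov.OffsetHigh = 0;

    // Exclusive lock on one byte at that offset; blocks until it is available.
    if (!LockFileEx(file, LOCKFILE_EXCLUSIVE_LOCK, 0, 1, 0, &ov))
        return false;

    bool ok = op(file);

    UnlockFileEx(file, 0, 1, 0, &ov);
    return ok;
}

HANDLE OpenShared(const wchar_t* path)
{
    // Both applications must allow each other read *and* write access.
    return CreateFileW(path, GENERIC_READ | GENERIC_WRITE,
                       FILE_SHARE_READ | FILE_SHARE_WRITE, nullptr,
                       OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
}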
Back to the comment about both applications needing to be aware of each other ... It is possible that if both applications opened the file exclusively, kept retrying until the open succeeded, performed their operation, and then closed the file, they could essentially work without "knowledge" of each other. However, that would probably not work very well and would not be very efficient.
Having said all that, you might want to consider alternatives for efficiency reasons. For example, if it were possible to have the writing application write to new files periodically, it might be more efficient than having to "move" the data constantly out of one file to another. Also, if you needed to maintain some portion of the file (e.g., move out the first 100 MB to another file and then move the second 100 MB to the beginning) that could be a fairly expensive operation as well.
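As a sketch of the "write to new files periodically" idea (the hourly period and the file naming are assumptions; the capture tool would need to do something equivalent itself):
#include <chrono>
#include <fstream>
#include <string>

class RotatingWriter {
public:
    void Write(const char* data, std::streamsize len)
    {
        using namespace std::chrono;
        auto now = steady_clock::now();
        // Start a fresh output file on the first write and then once per hour.
        if (!out_.is_open() || now - openedAt_ >= hours(1)) {
            out_.close();
            out_.open("capture_" + std::to_string(fileIndex_++) + ".bin",
                      std::ios::binary);
            openedAt_ = now;
        }
        out_.write(data, len);
    }

private:
    std::ofstream out_;
    std::chrono::steady_clock::time_point openedAt_{};
    int fileIndex_ = 0;
};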
logrotate would be a good option on Linux; it comes stock on just about any distro. I'm sure there's a similar Windows service out there somewhere.
I'm trying to create a backup of the Windows clipboard. Basically what I'm doing is using EnumClipboardFormats() to get all of the formats that exist on the clipboard currently, and then for each format, I'm calling GetClipboardData(format).
Part of backing up the data obviously involves duplicating it. I do that by calling GlobalLock() (which "Locks a global memory object and returns a pointer to the first byte of the object's memory block.") on the data returned by GetClipboardData(), then I fetch the size of the data by calling GlobalSize(), and then finally I do a memcpy() to duplicate the data. I then of course call GlobalUnlock() when I'm done.
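Roughly, the duplication step looks like this (a simplified sketch of what I just described; the copy goes into a GlobalAlloc block of my own):
#include <windows.h>
#include <cstring>

HGLOBAL DuplicateClipboardData(UINT format)
{
    HANDLE src = GetClipboardData(format);     // clipboard must already be open
    if (!src)
        return nullptr;

    SIZE_T size = GlobalSize(src);
    void*  from = GlobalLock(src);             // this is the call that crashes for some formats
    if (!from)
        return nullptr;

    HGLOBAL copy = GlobalAlloc(GMEM_MOVEABLE, size);
    if (copy) {
        void* to = GlobalLock(copy);
        std::memcpy(to, from, size);
        GlobalUnlock(copy);
    }
    GlobalUnlock(src);
    return copy;                               // kept as the backup for this format
}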
Well, this works... most of the time. My program crashes at the GlobalLock() if the clipboard contains data with the format CF_BITMAP or CF_METAFILEPICT. After reading this Old New Thing blog post (https://devblogs.microsoft.com/oldnewthing/20071026-00/?p=24683) I found out why the crash occurs: apparently not all data on the clipboard is allocated using GlobalAlloc() (such as CF_BITMAP data), and so calling GlobalLock() on that data causes a crash.
I came across this MSDN article and it gives a list of clipboard formats and how they are freed by the system. So what I did was hard-code into my program all of the clipboard formats (CF_*) that are not freed by the GlobalFree() function by the system, and I simply don't back up those formats; I skip them.
This workaround seems to work well, actually. Even if a bitmap is on the clipboard, or "special" data (such as rows copied from Excel to the clipboard), my clipboard backup function works well and I haven't experienced any crashes. Also, even if there's a bitmap on the clipboard and I skip some formats during the backup (like CF_BITMAP), I can still Ctrl+V paste the originally-copied bitmap from the clipboard after restoring my clipboard backup, as the bitmap is represented by other formats on the clipboard as well that don't cause my program to crash (CF_DIB).
However, it's a workaround at best. My fear is that one of these times some weird format (perhaps a private one, i.e. one between CF_PRIVATEFIRST and CF_PRIVATELAST, or maybe some other type) will be on the clipboard and my program, after calling GlobalLock(), will crash again. But since there doesn't seem to be much documentation explaining the best way to back up the clipboard, and it's clear that GlobalLock() does not work properly for all data types (unfortunately), I'm not sure how to handle these situations. Is it safe to assume that all other formats -- besides the formats listed in the previous URL that aren't freed by GlobalFree() -- can be "grabbed" using GlobalLock()?
Any ideas?
This is folly, as you cannot 100% back up/restore the clipboard. Lots of apps use delayed rendering, and the data is not actually on the clipboard. When you request to paste, they get notified and produce the data. This would take several minutes and hundreds of MB for large amounts of data from apps like Excel. Take a look at the number of formats listed on the clipboard when you copy from Excel. There will be more than two dozen, including Bitmap, Metafile, HTML. What do you think will happen if you select 255x25000 cells in Excel and copy that? How large would that bitmap be? Tip: save any open documents before attempting this, as you're likely going to have to reboot.
I have a custom file type that is implemented in sections, with a header at the start that shows the offset and length of each section within the file.
Currently, whenever I want to interact with the file, I must either load and parse the entire thing up front, or else pick only the sections that I need and load just them.
What I would like to do is to achieve a hybrid approach where each of the sections is loaded on-demand.
It seems however that doing this has a lot of potential downsides in terms of leaving filesystem handles open for longer than I would like and the additional code complexity that I would incur.
Are there any standard patterns for this sort of thing? It seems that my options are to:
Just load the entire file and stop grousing about the cycles/memory wasted
Load the entire file into memory as raw bytes and then satisfy any requests for unloaded sections from the memory buffer rather than disk. This saves me the cost of parsing the unneeded sections and requires less memory (since the disk representation is much more compact than the object model around it), but still means that I waste memory for sections that I never end up loading.
Load whatever sections I need right away and close the file but hold onto the source location of the file. Then if another section is requested, re-open the file and load the data. In this case I could get strange results if the underlying file is changed.
Same as the above but leave a file handle open (perhaps allowing read sharing).
Load the file using Memory-Mapped IO and leave a view on the file open.
Any thoughts?
If possible, MMAP-ing the whole file is usually the easiest thing to do if you have a random-access pattern. This way you just delegate the loading/unloading issue to the OS and you have 1 & 2 for free.
If you have very special access patterns, you can even use something like fadvise() (I don't know the exact Win32 equivalent) to tell the OS your access intent.
If your file is more than 2 GB, you can either go the 64-bit way or mmap() parts of the file on demand.
If the file is relatively small, mmap-ing the entire file is good enough. If the file is large, you could leave a mmap view open, and just move it around the file and resize it to view each section when needed.
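For what it's worth, a sketch of mapping a single section on demand with the Win32 memory-mapping APIs (the section struct, the names, and read-only access are assumptions for illustration):
#include <windows.h>
#include <cstdint>

struct MappedSection {
    void* data = nullptr;   // first byte of the requested section
    void* base = nullptr;   // actual start of the mapped view (needed for unmapping)
};

MappedSection MapSection(HANDLE mapping, std::uint64_t offset, std::uint32_t length)
{
    // MapViewOfFile offsets must be multiples of the allocation granularity
    // (usually 64 KB), so round down and remember the slack.
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    std::uint64_t aligned = offset - (offset % si.dwAllocationGranularity);
    std::uint64_t slack   = offset - aligned;

    MappedSection s;
    s.base = MapViewOfFile(mapping, FILE_MAP_READ,
                           static_cast<DWORD>(aligned >> 32),
                           static_cast<DWORD>(aligned & 0xFFFFFFFF),
                           static_cast<SIZE_T>(length + slack));
    if (s.base)
        s.data = static_cast<char*>(s.base) + slack;
    return s;
}

// "mapping" would come from CreateFileMapping(file, nullptr, PAGE_READONLY, 0, 0, nullptr);
// UnmapViewOfFile(s.base) releases the view once the section is no longer needed.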