I'm working on an application that generates a set of bitmaps and then loads them into a form for a user to pick from.
The bitmaps are generated from a small vector library which the user can add to. The code currently creates the files and then deletes them immediately after use, only to have to regenerate them (making the UI take seconds to load) the next time the user opens the UI.
So what I'm wondering is, is it okay to leave my bitmaps in the user temp folder "forever", and regenerate them if they are not in the folder? I can't expect to be able to store the images in the application directory, due to possible permission issues, and like I said, I can't prepopulate the files since the user can add more.
Ideally you should generate any temporary data in RAM rather than on the file system.
It is acceptable to depend on temporary files if you can make sure that your application stores only a limited number of such files per user. Temporary files can be left behind after unexpected crashes or power losses no matter what your code does. You therefore need to implement a mechanism that deletes any stale files created by the same application in a previous session - presumably during its next start-up.
Assuming such a safety mechanism, intentionally leaving behind temporary files when the application exits sounds like a non-standard but reasonable "cache".
Caveat: the next version of your application may need a slightly different file format, and should detect, delete and regenerate any files in a mismatched format based on some simple versioning scheme, to avoid cross-version dependencies.
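To make the versioned-cache idea concrete, here is a minimal sketch in Go (the idea itself is language-agnostic; the directory and marker-file names are made up for illustration):

package main

import (
    "os"
    "path/filepath"
)

const cacheVersion = "2" // bump whenever the bitmap format changes

// cacheDir returns a per-user cache directory, wiping it first if it
// was written by a different version of the application.
func cacheDir() (string, error) {
    dir := filepath.Join(os.TempDir(), "myapp-bitmap-cache") // hypothetical name
    marker := filepath.Join(dir, "cache-version.txt")        // hypothetical name

    data, err := os.ReadFile(marker)
    if err != nil || string(data) != cacheVersion {
        // Missing or mismatched marker: treat the whole cache as stale.
        if err := os.RemoveAll(dir); err != nil {
            return "", err
        }
        if err := os.MkdirAll(dir, 0o700); err != nil {
            return "", err
        }
        if err := os.WriteFile(marker, []byte(cacheVersion), 0o600); err != nil {
            return "", err
        }
    }
    return dir, nil
}

The application would then regenerate only the bitmaps whose files are missing from the returned directory.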
I'm designing a custom file format. It will be either a monolithic file or a folder of smaller files. The data is rather large in total, and there is no need to load everything into memory at once; doing so would also make things slower than necessary. One of the files may or may not be a database file. Running SQL queries would be useful.
The user can have many such files. The user might want to share files with others, even if it takes some time to upload/download them.
Conceptually I run into issues with shared network folders, Dropbox, iCloud, etc. Such services can lead to sync issues if the file is not loaded entirely into memory, and the database file can get corrupted.
One solution is to prohibit storing the file on such services, either by using a user Library folder or by forcing the user to pick a local folder.
Using a folder in the Library means recreating a file-navigation system like Finder, and it also limits the user's choice of where the files end up. Limiting the location to a local folder seems the better choice.
Is there a way to programmatically detect if a folder is local?
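For illustration only, a hedged sketch of what such a check might look like on macOS, using golang.org/x/sys/unix to test the volume's MNT_LOCAL flag. This distinguishes network mounts from local disks, but note that it would still report Dropbox- or iCloud-synced folders as local, since those live on the local disk and are synced in the background:

//go:build darwin

package main

import "golang.org/x/sys/unix"

// isLocalVolume reports whether the volume containing path is a local
// (non-network) file system. Synced folders such as Dropbox still
// count as local, so this is only a partial answer to the question.
func isLocalVolume(path string) (bool, error) {
    var st unix.Statfs_t
    if err := unix.Statfs(path, &st); err != nil {
        return false, err
    }
    return st.Flags&unix.MNT_LOCAL != 0, nil
}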
For the last two days I've been trying to turn my single-file document into a package, but I can't get it to work. The documentation states that the preferred way is to use NSFileWrapper. I've tried it, but it's such an unintuitive way of handling files.
I guess that to update a file I need to delete the file wrapper from its directory, create a completely new one, and add it to the directory again. I haven't found anything that explicitly states it, but I guess I should update the file only when fileWrapperOfType:error: is called.
As NSFileWrapper keeps everything (at least once loaded) in memory, this means that I'll have the old version and the new version at the same time until the user (or autosave) saves the file.
It seems like NSFileWrapper shouldn't be used for big files, but I think it's better if all the files needed by the document are inside the package (so it can be copied to another Mac/iPhone/iPad without errors), and I don't want to limit how many files the user can add or how big they can be.
When using a manual URL-based saving mechanism, I end up with corrupt packages, as the destination directory is always a temporary one, and I couldn't find any information on how to merge them. Every time I manually save the document without any changes, an error occurs, because I don't write anything to the temporary directory. But I don't see the point of writing/linking everything to the temporary directory only for it to be copied/un-linked back to its destination.
As I can't seem to find the right answer, what is the best-practice for saving and restoring big packages with many/big files in them?
Is it possible to append a resource file to a binary, or remove one from it, at execution time?
I have an application written in Go which saves/searches data in a database file, and I would like this database file to be embedded in the binary and updated by the application itself.
This way the application would be self contained with its database.
Modifying the executable is generally a very bad idea.
Several issues pop right into my head, such as:
Does the current user have sufficient permissions?
Is the file locked during execution?
What about multiple running instances of the application?
Even if you manage to do just that, think of what anti-virus and firewall applications will say about it: when they detect the change, most will flag the executable and/or quarantine it, deny running it, or may even delete it. Rightfully so, as this is exactly what many viruses do: modify existing executables.
Also, virus-scanner databases identify files based on the hash of their content. Modifying the executable will naturally change that hash, rendering the file unknown/suspicious to these databases.
As mentioned, just write/cache the data in separate file(s), preferably in the user's home folder or in the application folder (next to the executable, optionally in sub-folders). Or make the cache file/folder a configurable option (command-line flags), as sketched below.
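A minimal sketch of that last suggestion in Go (the flag name and default location are assumptions, not a fixed convention):

package main

import (
    "flag"
    "os"
    "path/filepath"
)

func main() {
    // Default to a data file under the user's home folder, but let the
    // user override the location with a command-line flag.
    home, err := os.UserHomeDir()
    if err != nil {
        home = "." // fall back to the working directory
    }
    dbPath := flag.String("db", filepath.Join(home, ".myapp", "data.db"),
        "path to the application's database file") // hypothetical flag
    flag.Parse()

    // Make sure the parent directory exists, then open/update the
    // database file here instead of touching the binary itself.
    if err := os.MkdirAll(filepath.Dir(*dbPath), 0o700); err != nil {
        panic(err)
    }
}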
Technically this is possible, but it is a bad idea: your application could be run by users who don't have write permission to your binary.
If you're talking about a portable app, your best option might be to use a file in the same directory where the binary is located; otherwise, use the user's home directory, following the conventions of the OS you're running on. You can use the os/user package to find the home directory.
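Both lookups are in the standard library; a small sketch (the portable/installed decision and the directory name are assumptions for illustration):

package main

import (
    "fmt"
    "os"
    "os/user"
    "path/filepath"
)

// dataDir picks where the database file should live: next to the binary
// for a portable app, otherwise under the user's home directory.
func dataDir(portable bool) (string, error) {
    if portable {
        exe, err := os.Executable()
        if err != nil {
            return "", err
        }
        return filepath.Dir(exe), nil
    }
    u, err := user.Current() // the os/user package mentioned above
    if err != nil {
        return "", err
    }
    return filepath.Join(u.HomeDir, ".myapp"), nil // hypothetical name
}

func main() {
    dir, err := dataDir(false)
    if err != nil {
        panic(err)
    }
    fmt.Println("data directory:", dir)
}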
I'm making a simple VB.NET application, which basically asks the user for multiple files; later it will need to access the selected files and modify them.
Right now, I'm saving the full paths of the selected files, and in the future, the application will iterate through each path, open the file from such path, and modify it.
The problem with that is that the user could select a file (so the full path is saved) and then delete or move it before my application modifies it.
Normally I'd throw an error saying "File not found", but I'm under the impression that Windows has a feature that disallows deleting/moving/renaming a file because "a program is using it" - a feature that would fit my application much better.
I'm not very advanced with VB.NET, but I suppose that if I "open" a file using my application (with some IO thing), the feature I mentioned earlier would indeed trigger and the user would be unable to modify the file because it is "opened" by my application.
However, since my only desire is to "reserve" files, it seems to be quite wasteful to actually open them when I don't really need to (yet). Is there a way to tell Windows I need a certain file to be intact?
Opening files (specifying the desired sharing mode) is the way to do that.
I don't believe there is anything really wrong with opening multiple files (though you still won't be able to do anything about cases like the removal of a removable drive). In old times there were restrictions on the number of open files per process, but that is no longer a practical limitation - see Pushing the Limits of Windows: Handles.
There is an easy solution: open each file in exclusive mode.
It should look like this:
Sub test()
    ' FileShare.None denies all other processes access to the file while the handle is open.
    Dim FS = System.IO.File.Open("path", IO.FileMode.Open, IO.FileAccess.ReadWrite, IO.FileShare.None)
    ' Keep FS referenced for as long as the file must stay reserved; close it when done.
End Sub
But beware: you have opened a file handle, and if the code responsible for closing the files fails without terminating the application, the files will stay locked for a very long time (until the app shuts down).
You can use a Using block or a Try/Catch/Finally clause - I don't know enough about your program to recommend one.
We are migrating our app to Windows 7. The program generates log files to help us with support, and also saves a number of dictionary and settings files that are useful for the user, though the user will rarely, if ever, want to interact with them outside of our application. They can, though, because they are CSV files. I built the first version using the APPDATA\LOCAL\OURAPPLICATION folder as the destination. Now I am wondering if it should be PROGRAMDATA\OURAPPLICATION.
I actually think the first choice is better, because everything I have scanned suggests that the PROGRAMDATA folder should be considered untouchable by the user, but as I am not a programmer I am not sure.
I hope this is the right place to ask this question
The key point to consider is what the scope of the data is. If you are storing data that is associated with a specific user then you should use APPDATA and if you are storing data that is global to your program then you should use PROGRAMDATA.
Both APPDATA and PROGRAMDATA are hidden folders so the intent is for users not to be poking around in there (not that they couldn't if they wanted to).
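For completeness, both locations can be resolved from environment variables on Windows. A small sketch in Go (the sub-folder name is an assumption):

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

func main() {
    // Per-user data: %APPDATA% (roams with the profile) or
    // %LOCALAPPDATA% (stays on this machine).
    userData := filepath.Join(os.Getenv("LOCALAPPDATA"), "OurApplication")

    // Machine-wide data shared by all users: %ProgramData%.
    globalData := filepath.Join(os.Getenv("ProgramData"), "OurApplication")

    fmt.Println("per-user:", userData)
    fmt.Println("all users:", globalData)
}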