Why does my app sometimes create a file "A.myappextension-shm" in addition to the file "A.myappextension"? - macos

I have a Document based Core Data app that saves with SQLite. While testing I save to a test file A.myappextension. Sometimes another file---"A.myappextension-shm"---is also created. Why is that?

Assuming that A.myappextension is your Core Data persistent store file, it happens because of SQLite journaling. You might also see A.myappextension-wal. Both of these extra files are SQLite journal files, and a lot of your data may actually be stored in them instead of in the main file. If you ever copy these files, or remove them, or do anything else that treats them as files instead of SQLite data, you'll need to copy/remove/whatever all of them.
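The same side files can be reproduced with plain SQLite, independent of Core Data. A minimal Python sketch (file names chosen to match the question) that enables WAL journaling and shows the extra files appearing next to the store:

```python
import os
import sqlite3
import tempfile

store_dir = tempfile.mkdtemp()
store = os.path.join(store_dir, "A.myappextension")

conn = sqlite3.connect(store)
# WAL journaling is what produces the -shm and -wal side files.
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()

# While a connection is open, the journal files sit next to the store:
# the -shm file is shared memory used to coordinate WAL access, and the
# -wal file holds committed data not yet merged into the main file.
files = sorted(os.listdir(store_dir))
print(files)
conn.close()
```

Note that on a clean close SQLite usually checkpoints the log and removes the side files, which is why they only "sometimes" appear.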

Related

How to temporarily store uploaded files using Flask

I'm creating a web application using Flask that takes 3 inputs from the user: name, picture, grades.
I want to store this information temporarily, depending on the user's session.
As a beginner I read that sessions are not for storing files, so what other secure way would you recommend?
I would recommend writing the files to disk.
If this is really temporary, e.g. part of a two-step sign-up form, you could write the files to temporary files or into a temporary directory.
Please see the excellent documentation at https://docs.python.org/3/library/tempfile.html
But maybe this shouldn't be so temporary? A user picture sounds like something more permanent.
In that case I would recommend creating a directory for each user and storing the files there.
This is done with standard Python I/O, e.g. with the open function.
More info about reading and writing files also can be found in the official Python documentation:
https://docs.python.org/3/tutorial/inputoutput.html#reading-and-writing-files
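As a minimal sketch of the temporary-directory approach (all names here are hypothetical, and a real Flask app should also sanitize filenames, e.g. with werkzeug's secure_filename):

```python
import os
import tempfile

# One temporary directory per user session, keyed by session ID.
session_dirs = {}

def save_upload(session_id, filename, data):
    """Write uploaded bytes into that session's temp directory."""
    if session_id not in session_dirs:
        session_dirs[session_id] = tempfile.mkdtemp(prefix="uploads-")
    path = os.path.join(session_dirs[session_id], filename)
    with open(path, "wb") as f:
        f.write(data)
    return path

saved = save_upload("session-123", "picture.png", b"\x89PNG...")
```

When the flow completes, move the files to their permanent per-user location and delete the temp directory (e.g. with shutil.rmtree).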

Windows Projected File System read only?

I tried to play around with Projected File System to implement a user mode ram drive (previously I had used Dokan). I have two questions:
Is this a read-only projection? I could not find any notification being sent to me when opening the file from, say, Notepad and writing to it.
Is the file actually created on the disk once I use PrjWriteFileData()? From what I have understood, yes.
In that case, what useful things could one do with this library if there is no writing to the projected files? It seems to me that the only useful thing is to initially create a directory tree from somewhere else (say, a remote repo), but nothing beyond that. Dokan still seems the way to go.
The short answer:
It's not read-only but you can't write your files directly to a "source" filesystem via a projected one.
The PrjWriteFileData method is used for populating placeholder files on the "scratch" (projected) file system, so it doesn't affect the "source" file system.
The long answer:
As stated in the comment by @zett42, ProjFS was mainly designed as a remote git file system. The main goal of any file-versioning system is to handle multiple versions of files. From this a question arises: do we need to overwrite the file inside a remote repository on a ProjFS file write? That would be disastrous. When working with git you always write files locally, and they are not synced until you push the changes to a remote repository.
When you enumerate files, nothing is written to the local file system. From the ProjFS documentation:
When a provider first creates a virtualization root it is empty on the
local system. That is, none of the items in the backing data store
have yet been cached to disk.
Only after the file is opened does ProjFS create a "placeholder" for it in the local file system - I assume it's a file with a special structure (not a real one).
As files and directories under the virtualization root are opened, the
provider creates placeholders on disk, and as files are read the
placeholders are hydrated with contents.
What does "hydrated" mean? Most likely it represents a special data structure partially filled with real data. I would imagine a placeholder as a sponge partially filled with data.
As items are opened, ProjFS requests information from the provider to allow placeholders for those items to be created in the local file system. As item contents are accessed, ProjFS requests those contents from the provider. The result is that from the user's perspective, virtualized files and directories appear similar to normal files and directories that already reside on the local file system.
Only after a file is updated (modified) is it no longer a placeholder - it becomes a "full file/directory":
For files: The file's content (primary data stream) has been modified.
The file is no longer a cache of its state in the provider's store.
Files that have been created on the local file system (i.e. that do
not exist in the provider's store at all) are also considered to be
full files.
For directories: Directories that have been created on the local file
system (i.e. that do not exist in the provider's store at all) are
considered to be full directories. A directory that was created on
disk as a placeholder never becomes a full directory.
It means that on the first write the placeholder is replaced by a real file in the local FS. But how do we keep the "remote" file in sync with the modified one? (1)
When the provider calls PrjWritePlaceholderInfo to write the
placeholder information, it supplies the ContentID in the VersionInfo
member of the placeholderInfo argument. The provider should then
record that a placeholder for that file or directory was created in
this view.
Notice "The provider should then record that a placeholder for that file or directory was created in this view". It means that in order to sync the file later with the correct view representation, we have to remember which version a modified file is associated with. Imagine we are in a git repository and we change the branch. In this case we may update one file multiple times in different branches. Now, why and when does the provider call PrjWritePlaceholderInfo?
... These placeholders represent the state of the backing store at the
time they were created. These cached items, combined with the items
projected by the provider in enumerations, constitute the client's
"view" of the backing store. From time to time the provider may wish
to update the client's view, whether because of changes in the backing
store, or because of explicit action taken by the user to change their
view.
Once again, imagine switching branches in a git repository; you have to update a file if it's different in another branch. Continuing with question (1): imagine you want to make a "push" from a particular branch. First of all, you have to know which files were modified. If you did not record the placeholder info while modifying your file, you won't be able to do this correctly (at least for the git repository example).
Remember that a placeholder is replaced by a real file on modification? ProjFS has an OnNotifyFileHandleClosedFileModifiedOrDeleted event. Here is the signature of the callback:
public void NotifyFileHandleClosedFileModifiedOrDeletedCallback(
string relativePath,
bool isDirectory,
bool isFileModified,
bool isFileDeleted,
uint triggeringProcessId,
string triggeringProcessImageFileName)
For our understanding, the most important parameter here is relativePath. It will contain the name of a modified file inside the "scratch" (projected) file system. Here you also know that the file is a real file (not a placeholder) and that it has already been written to disk (that is, you won't be able to intercept the call before the file is written). Now you may copy it to the desired location (or do it later) - it depends on your goals.
Answering question #2: it seems like PrjWriteFileData is used only for populating the "scratch" file system, and you cannot use it for updating the "source" file system.
Applications:
As for applications, you can still implement a remote file system (instead of using Dokan), but all writes will be cached locally instead of being written directly to a remote location. A couple of use-case ideas:
Distributed File Systems
Online Drive Client
A File System "Dispatcher" (for example, you may write your files in different folders depending on particular conditions)
A File Versioning System (for example, you may preserve different versions of the same file after a modification)
Mirroring data from your app to a file system (for example, you can "project" a text file with indentations to folders, sub-folders and files)
P.S.: I'm not aware of any undocumented APIs, but from my point of view (according to the documentation) we cannot use ProjFS for purposes like a RAM disk, or write files directly to the "source" file system without writing them to the local file system first.

iMessage app storage location - why is chat.db-wal updated instantly but chat.db takes a while?

So playing around with iMessages and thinking of ways to back them up and various things.
I found their location at ~/Library/Messages.
There are three files
1. chat.db
2. chat.db-wal
3. chat.db-shm
If I run a node script that watches for file changes while sending an iMessage to someone, I see chat.db-wal is changed instantly but chat.db takes a while to update.
I would like to get the messages as soon as possible, but I am not sure I can read the .db-wal file. Does anyone know if I can read that file? Or why the .db file seems to take longer to update?
Thanks.
Everything is fine. Your data is there. This is just how SQLite works.
In order to support ACID transactions, where your data is guaranteed to be stored properly in the case of crashes or power-offs, SQLite first writes your data into a "write-ahead log" (the *-wal file). When the database is properly closed, or the write-ahead log gets too full, SQLite will update the database file with the contents of the log.
SQLite, when reading, will consult the write-ahead log first, even if multiple connections are using the same database. Data in the log is still "in the database".
SQLite should apply the log to the database as part of closing the database. If it does not, you can run PRAGMA wal_checkpoint; to manually checkpoint the log file.
Corollary to this: do not delete the -wal file, especially if you have not cleanly closed the database last time you used it.
More information about write-ahead logging in SQLite can be found in the SQLite documentation.
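A small Python sqlite3 sketch of both points: a second connection sees rows that still live only in the -wal file, and a checkpoint merges them into the main file (chat.db used only as an illustrative name):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "chat.db")

writer = sqlite3.connect(path)
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE message (text TEXT)")
writer.execute("INSERT INTO message VALUES ('hello')")
writer.commit()  # the row now lives in chat.db-wal, not yet in chat.db

# A separate reader connection still sees the row, because SQLite
# consults the write-ahead log on every read.
reader = sqlite3.connect(path)
rows = reader.execute("SELECT text FROM message").fetchall()

# Manually merge the log into the main database file.
writer.execute("PRAGMA wal_checkpoint;")
```

So rather than parsing the -wal file yourself, just open chat.db with any SQLite client; the log is consulted automatically.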

How to release an app with preloaded coreData ? [duplicate]

I'm trying to find the best way to release an app with some preloaded data.
I have an app that has 2 tables. I want to fill these tables with some data. The problem is that the data is not only text info: one entity contains about 40 attributes (numbers, strings, transformable data), so embedding it in code is not a solution.
Thanks for help.
Write a very small CLI OS X app that stands up your existing Core Data stack.
This CLI creates a pre-populated SQLite file in a known location.
Run this CLI as part of your build procedure
Include the created SQLite file as part of your app bundle
On launch, if the destination SQLite file does not exist (NSFileManager will tell you this); copy the SQLite file from your app bundle.
Launch as normal.
This makes the procedure scriptable and consistent. It reuses your existing code structure to build the pre-populated database and lets you keep it up to date.
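In the app itself the copy step would use NSFileManager; as a language-agnostic sketch of the same first-launch logic (paths here are hypothetical stand-ins for the bundle and documents directories):

```python
import os
import shutil
import tempfile

def install_seed_database(bundle_db, documents_db):
    """Copy the pre-populated store into place only if it isn't there yet."""
    if not os.path.exists(documents_db):
        # If the seed store was saved in WAL mode, copy its -wal and -shm
        # side files too, or build it with journal_mode=DELETE so there
        # is only one file to ship.
        shutil.copy2(bundle_db, documents_db)
    return documents_db

# Demo with stand-in paths.
bundle_dir, documents_dir = tempfile.mkdtemp(), tempfile.mkdtemp()
bundle_db = os.path.join(bundle_dir, "Seed.sqlite")
documents_db = os.path.join(documents_dir, "Store.sqlite")
with open(bundle_db, "wb") as f:
    f.write(b"seed data")
install_seed_database(bundle_db, documents_db)
```

The existence check is what keeps later launches from clobbering the user's working store.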
Here's how I handle it:
I use the default setup, where the backing store for Core Data is an SQLite file.
I set up my app to create the persistent store coordinator with the SQLite file in the app's documents directory.
I build my pre-populated Core Data database on the simulator.
I then go to the app's documents directory on the sim and copy the SQLite file into the app's bundle.
At the beginning of my app's didFinishLaunching method in the app delegate, I check to see if the Core Data database's SQLite file exists in the documents directory. If not, I copy it from the bundle into the documents directory.
Then I invoke the code that creates the persistent store coordinator, which expects the SQLite file in the documents directory. On first launch, this is the initial file copied from the bundle. On subsequent launches, it's the working file in the documents directory that has the current data in it.
When the user first attempts to access the data, run a check to see if there are any objects in the persistent store by either executing a fetch request or getting the count of the objects in the persistent store.
If the result of the fetch request is nil, or the count of objects is 0, load data from some file (JSON, plist, XML) into Core Data by hand.
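In Core Data the emptiness check would be a fetch request or countForFetchRequest:; the equivalent logic sketched against plain SQLite (the table and JSON shape are made up for illustration):

```python
import json
import sqlite3

def seed_if_empty(conn, json_text):
    """Load seed data only when the store contains no objects yet."""
    conn.execute("CREATE TABLE IF NOT EXISTS item (name TEXT)")
    (count,) = conn.execute("SELECT COUNT(*) FROM item").fetchone()
    if count == 0:
        rows = [(entry["name"],) for entry in json.loads(json_text)]
        conn.executemany("INSERT INTO item VALUES (?)", rows)
        conn.commit()

conn = sqlite3.connect(":memory:")
seed_if_empty(conn, '[{"name": "a"}, {"name": "b"}]')
seed_if_empty(conn, '[{"name": "c"}]')  # no-op: store is already populated
```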

Copy / Backup Persistent Store

Normally when I backed up the Core Data file for my app, I would just copy the .sqlite file to another location while the app was running. But now that journaling (WAL) is enabled, this no longer works. I cannot see a way for NSPersistentStoreCoordinator or NSManagedObjectContext to write a new file. I'm guessing maybe I have 2 methods:
Close the persistent store, open it again with @{@"journal_mode": @"DELETE"}, and then copy the .sqlite file.
Add another persistent store and maybe copy from the original ps to the new one ?
Any better ideas ?
Thank you.
Changing the journal mode will eliminate the journal files, so it's simple. I don't know that I'd trust it for your use, though-- because there's no guarantee that Core Data has actually flushed all new changes to the SQLite file. It might be OK, but there might be some in-memory changes that Core Data hasn't written out yet. This is almost certainly safe, but there's a small chance that it won't work right once in a while.
Option 2 would be safer, though more work. I'd create the second persistent store using NSPersistentStoreCoordinator's migratePersistentStore:toURL:options:withType:error: method (which the docs specifically mention as being useful for "save as" operations). Telling Core Data to create the copy for you should ensure that everything necessary is actually copied. Just don't do this on your main persistent store coordinator, because after migration, the PSC drops the reference to the original store object (the file's still there, but it's no longer used by that PSC). The steps would be
Create a new migrate-only NSPersistentStoreCoordinator and add your original persistent store file.
Use this new PSC to migrate to a new file URL.
Drop all reference to this new PSC, don't use it for anything else.
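At the SQLite level, the analogue of letting Core Data write the copy for you is SQLite's online backup API, which produces a consistent copy including data still sitting in the -wal file. A Python sqlite3 sketch (file names are placeholders; Connection.backup requires Python 3.7+):

```python
import os
import sqlite3
import tempfile

def backup_store(src_path, dst_path):
    """Safely copy a live SQLite store, including un-checkpointed WAL data."""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    src.backup(dst)  # SQLite's online backup API
    src.close()
    dst.close()

# Demo: write a row that stays in the -wal file, then back up the store.
d = tempfile.mkdtemp()
src_path = os.path.join(d, "Store.sqlite")
dst_path = os.path.join(d, "Backup.sqlite")
conn = sqlite3.connect(src_path)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()
backup_store(src_path, dst_path)
row = sqlite3.connect(dst_path).execute("SELECT x FROM t").fetchone()
```

This is not a substitute for the Core Data migrate method above (it cannot flush changes still held in an NSManagedObjectContext), but it avoids the torn-copy problem of copying the .sqlite file by hand.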
