How does a redirect-on-write snapshot handle modifications/deletions?

A redirect-on-write snapshot causes any changes or updates to be redirected to new blocks. It's easy to see how this works when data is appended, but what if data in a block is modified or deleted? Since the snapshotted block can't be modified, how is the information about what was modified or deleted applied? It can't just be metadata from here on out, right? That would really slow things down if the data is to be used for analysis.

Usually, you use a layered filesystem. Each snapshot creates a new layer, and when you ask for a file's metadata or data, you query the top layer, which delegates to the lower layers if the current layer has no data for the query.
When you delete a file, you simply record in the top layer that file xxx is deleted (a "whiteout" entry).
When you modify a block, you create the new block in the top layer, with metadata referencing that one block in the new layer and delegating the other blocks to the lower layers.
This is how Docker works; to go a little deeper you can check these links, for instance:
https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/#how-container-reads-and-writes-work-with-overlay-or-overlay2
https://docs.docker.com/engine/userguide/storagedriver/btrfs-driver/#how-the-btrfs-storage-driver-works
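If it helps to see that lookup rule in code, here is a minimal sketch using in-memory dictionaries; the Layer and LayeredStore types are invented for illustration and are not how any real filesystem lays out its layers on disk:

// Minimal sketch: each snapshot adds a layer; reads walk from the newest
// layer down, honoring delete markers and per-block overrides.
using System.Collections.Generic;

class Layer
{
    // "file xxx is deleted" markers recorded in this layer
    public HashSet<string> Deleted = new HashSet<string>();
    // blocks rewritten in this layer: (file, block index) -> data
    public Dictionary<(string File, int Block), byte[]> Blocks =
        new Dictionary<(string File, int Block), byte[]>();
}

class LayeredStore
{
    private readonly List<Layer> layers = new List<Layer> { new Layer() }; // index 0 = oldest

    public void TakeSnapshot() => layers.Add(new Layer()); // later writes land in the new top layer

    public void WriteBlock(string file, int block, byte[] data) =>
        layers[layers.Count - 1].Blocks[(file, block)] = data;

    public void DeleteFile(string file) =>
        layers[layers.Count - 1].Deleted.Add(file);

    // Reads query the top layer and delegate downwards until some layer answers.
    public byte[] ReadBlock(string file, int block)
    {
        for (int i = layers.Count - 1; i >= 0; i--)
        {
            if (layers[i].Deleted.Contains(file))
                throw new KeyNotFoundException($"{file} was deleted in layer {i}");
            if (layers[i].Blocks.TryGetValue((file, block), out var data))
                return data;
        }
        throw new KeyNotFoundException($"{file}, block {block} not found in any layer");
    }
}

The snapshotted blocks in the lower layers are never touched; modifications and deletions only ever add entries to the top layer, and reads resolve them by walking the stack.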

What does data look like when using Event Sourcing?

I'm trying to understand how Event Sourcing changes the data architecture of a service. I've been doing a lot of research, but I can't seem to understand how data is supposed to be properly stored with event sourcing.
Let's say I have a service that keeps track of vehicles transporting packages. The current non-relational structure for the data model is that each document represents a vehicle and has many fields representing origin location, destination location, types of packages, amount of packages, status of the vehicle, etc. Normally this gets queried for information to be read by the front end. When the user makes changes, the appropriate changes are made to this document in order to update it.
With event sourcing, it seems that a snapshot of every event is stored, but there seem to be a few ways to interpret that:
The first is that multiple versions of the document I described exist, with a new snapshot created every time a change is made. Each event would create a new version of this document and alter it. This is the easiest way for me to wrap my head around it, but I believe this to be incorrect.
Another interpretation I have is that each event stores SPECIFIC information about what's been altered in the document. When the vehicle status changes from On Road to Available, for example, an event specifically for vehicle status changes is triggered. Let's say it's called VehicleStatusUpdatedEvent, and contains the Vehicle ID number, the new status, and the timestamp for this event. So this event is stored and is published to a messaging queue. When picked up from the queue, the appropriate changes are made to the current version of the document. I can understand this, but I think I still have some misconceptions here. My understanding is that event sourcing allows us to have a snapshot of data upon each change, so we can know what it looks like at any point. What I just described would keep a log of changes, but still only have one version of the file, as the events only contain specific pieces of the whole file.
Can someone describe how the data flow and architecture works with event sourcing? Using the vehicle data example I provided might help me frame it better. I feel that I am close to understanding this, but I am missing some fundamental pieces that I can't seem to understand by searching online.
The current non-relational structure for the data model is that each document represents a vehicle
OK, let's start from there.
In the data model you've described, storage of a document destroys the earlier copy.
Now imagine that instead we were storing the document in a git repository. Then saving the document would also save metadata, and that metadata would include a pointer to the previous document.
Of course, we've probably got a lot of duplication in that case. So instead of storing the complete document every time, we'll store a patch document (think JSON Patch), and metadata pointing to the original patch.
Take that same idea again, but instead of storing generic patch documents, we use domain specific messages that describe what is going on in terms of the model.
That's what the data model of an event sourced entity looks like: a list of domain specific descriptions of document transformations.
When you need to reconstitute the current state, you start with a state you know (which could be the "null" state of the document before anything happened to it), and replay onto that document all of the patches (events) that have occurred since.
If you want to do a temporal query, the game is the same, you replay the events up to the point in time that you are interested in.
So essentially when referring to an older build, you reconstruct the document using the events, correct?
Yes, that's exactly right.
So is there still a "current status" document or is that considered bad practice?
"It depends". In the general case, there is no current status document; only the write-ordered list of events is "real", and everything else is derived from that.
Conversations about event sourcing often lead to consideration of dedicated message stores for managing persistence of those ordered lists, and it is common that the message stores do not also support document storage. So trying to keep a "current version" around would require commits to two different stores.
At this point, designers typically either decide that "recent version" is good enough, in which case they build eventually consistent representations of documents outside of the transaction boundary... OR they decide current version is important, and look into storage solutions that support storing the current version in the same transaction as the events (ex: using an RDBMS).
What is the procedure used to generate the snapshot you want using the events?
If you want to generate a snapshot, then you'll normally end up using a pattern called a projection to iterate over the events and either fold or reduce them to create the document.
Roughly, you have a function somewhere that looks like
document-with-meta-data = projection(event-history-with-metadata)
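For the vehicle example above, a minimal sketch of such a projection might look like the following. VehicleStatusUpdatedEvent is the event described in the question; VehicleRegisteredEvent and VehicleDocument are invented here for illustration and are not tied to any particular event store or framework.

using System;
using System.Collections.Generic;
using System.Linq;

// Domain events: each one is a domain-specific description of a document transformation.
abstract record VehicleEvent(Guid VehicleId, DateTime OccurredAt);
record VehicleRegisteredEvent(Guid VehicleId, DateTime OccurredAt, string Origin, string Destination)
    : VehicleEvent(VehicleId, OccurredAt);
record VehicleStatusUpdatedEvent(Guid VehicleId, DateTime OccurredAt, string NewStatus)
    : VehicleEvent(VehicleId, OccurredAt);

// The "current status" document is derived from the events rather than stored as the source of truth.
record VehicleDocument(Guid VehicleId, string Origin, string Destination, string Status);

static class VehicleProjection
{
    // document-with-meta-data = projection(event-history-with-metadata)
    // Passing asOf turns the same fold into a temporal query: replay only up to that instant.
    public static VehicleDocument Project(IEnumerable<VehicleEvent> history, DateTime? asOf = null)
    {
        var events = asOf is DateTime cutoff ? history.Where(e => e.OccurredAt <= cutoff) : history;

        // Assumes the registration event is the first event in the stream.
        return events.Aggregate<VehicleEvent, VehicleDocument>(null, (doc, e) => e switch
        {
            VehicleRegisteredEvent r => new VehicleDocument(r.VehicleId, r.Origin, r.Destination, "Available"),
            VehicleStatusUpdatedEvent s => doc with { Status = s.NewStatus },
            _ => doc
        });
    }
}

Replaying the full history yields the current document; replaying with asOf set yields the document as it looked at that moment, which is the temporal query described above.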

encodeRestorableState for unsaved documents

The documentation for NSDocument states:
Subclasses can override this method and use it to restore any
information that would be needed to restore the document’s window to
its current state. For example, you could use this method to record
references to the data currently managed by the document and displayed
by the window. (Do not store the actual data itself. Store only
references to the data so that you can load it later from disk.) You
must store enough data to reconfigure the document and its window to
their current state during a subsequent launch of the app.
What does "Do not store the actual data itself." actually mean? Is this a hard and fast rule? Or is it more of a guideline?
In particular, I'm wondering about the case of documents with unsaved changes in them. Is it "permissible" to store the unsaved changes (which may be everything if this is a new document)? Or, do I need to save the data off in a file somewhere... and if so, where is the preferred location?
I don't want to restore a bunch of identical (blank) documents if I had multiple unsaved new documents when the application was shut down.
Thanks for any hints on the proper way to handle this.
Never mind. It hit me in the shower this morning (where I make most of my tech breakthroughs).
I am pretty sure now that the key is to get autosaving working with my application.

Key based caching

I'm reading this article:
http://37signals.com/svn/posts/3113-how-key-based-cache-expiration-works
I'm not using Rails, so I don't really understand their example.
It says in #3:
When the key changes, you simply write the new content to this new
key. So if you update the todo, the key changes from
todos/5-20110218104500 to todos/5-20110218105545, and thus the new
content is written based on the updated object.
How does the view know to read from the new todos/5-20110218105545 instead of the old one?
I was confused about that too at first -- how does this save a trip to the database if you have to read from the database anyway to see if the cache is valid? However, see Jesse's comments (1, 2) from Feb 12th:
How do you know what the cache key is? You would have to fetch it from the database to know the mtime right? If you’re pulling the record from the database already, I would expect that to be the greatest hit, no?
Am I missing something?
and then
Please remove my brain-dead comment. I just realized why this doesn’t matter: the caching is cascaded, so yes a full depth regeneration incurs a DB hit. The next cache hit will incur one DB query for the top-level object—all the descendant objects are not queried because the cache for the parent object includes cached versions for the children (thus, no query necessary).
And Paul Leader's comment 2 below that:
Bingo. That’s why is works soooo well. If you do it right it doesn’t just eliminate the need to generate the HTML but any need to hit the db. With this caching system in place, our data-vis app is almost instantaneous, it’s actually useable and the code is much nicer.
So given the models that DHH lists in step 5 of the article and the views he lists in step 6, and given that you've properly set up your relationships to touch the parent objects on update, and that your partials access your child data as parent.children (or even child.children in nested partials), then this caching system should be a net gain: as long as the parent's cache key is still valid, the parent.children lookup never happens, because the children's content is pulled from the parent's cached fragment as well.
However, this method may be pointless if your partials reference lots of instance variables from the controller, since those queries will already have been performed by the time Rails sees the calls to cache in the view templates. In that case you would probably be better off using other caching patterns.
Or at least this is my understanding of how it works. HTH
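If it helps to see the mechanism outside Rails, here is a rough, framework-agnostic sketch. The Todo type and ICache interface are invented for illustration; Rails' cache helper does the equivalent for rendered HTML fragments, and touch: true on the associations keeps the parents' timestamps (and therefore their keys) current.

using System;

// Invented stand-ins for a model and a cache store.
record Todo(int Id, string Title, DateTime UpdatedAt);

interface ICache
{
    string Get(string key);
    void Set(string key, string value);
}

static class TodoRenderer
{
    // The key embeds the id and the last-updated timestamp, so nothing is ever expired
    // explicitly: updating the todo changes UpdatedAt, which changes the key
    // (todos/5-20110218104500 becomes todos/5-20110218105545), and the stale entry
    // simply stops being read.
    static string CacheKey(Todo todo) =>
        $"views/todos/{todo.Id}-{todo.UpdatedAt:yyyyMMddHHmmss}";

    public static string Render(Todo todo, ICache cache)
    {
        var key = CacheKey(todo);
        var html = cache.Get(key);
        if (html == null)
        {
            html = $"<li>{todo.Title}</li>"; // the expensive rendering only happens on a miss
            cache.Set(key, html);
        }
        return html;
    }
}

This also answers the original question: the view never looks for the old key at all; it recomputes the key from the object it already has in memory, which is why the one query for the top-level object is still needed.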

Is there a Notification when CoreData finished reading data from disk?

I have a Mac (not document) app, that uses CoreData.
When launching the app, it reads the data stored on the filesystem.
I have to set up some things in -(void)applicationDidFinishLaunching based on the information stored using CoreData.
So it would be nice to know when my app read everything from disk.
If I do my setup in -(void)applicationDidFinishLaunching it doesn't work. If I do it a few seconds later it works!
Thx!
If you are using object controllers that automatically prepare their own content, you can observe arrangedObjects to find out when they have fetched their content. This does not guarantee that the actual objects are not faults. In fact, that's one of the main strengths of Core Data: objects are lazily loaded from disk.
If you for some reason want to make sure that most disk activity has taken place in applicationDidFinishLaunching, you can perform a custom fetch that specifically does not return objects as faults. Look up "prefetching" in the Core Data documentation. However, there is no guarantee that Core Data won't fault these objects at a later time due to memory constraints, thereby incurring another disk read when those objects are loaded again.
You can of course also use the NSBinaryStoreType, in which case the entire store is loaded into memory synchronously when it is added to the persistent store coordinator.

LINQ to XML updates - how does it handle multiple concurrent readers/writers?

I have an old system that uses XML for its data storage. I'm going to be using the data for another mini-project and wanted to use LINQ to XML for querying/updating the data, but there are two scenarios that I'm not sure whether I need to handle myself or not:
1- If I have something similar to the following code, what happens if two people hit Save() at the same time? Does LINQ to XML wait until the file is available again before saving, or will it just throw? I don't want to put locks in unless I need to :)
// I assume the next line doesn't lock the file
XElement doc = XElement.Load("Books.xml");
XElement newBook = new XElement("Book",
    new XAttribute("publisher", "My Publisher"),
    new XElement("author", "Me"));
doc.Add(newBook);
// What happens if two people try this at the same time?
doc.Save("Books.xml");
2- If I Load() a document, add an entry under a particular node, and then hit Save(), what happens if another user has already added a value under that node (since I hit my Load()) or, even worse, deleted the node?
Obviously I can work around these issues, but I couldn't find any documentation that could tell me whether I have to or not, and the first one at least would be a bit of a pig to test reliably.
It's not really a LINQ to XML issue, but a basic concurrency issue.
Assuming the two people are hitting Save at the same time, and the backing store is a file, then depending on how you opened the file for saving, you might get an error. If you leave it to the XDocument class (by just passing in a file name), then chances are it is opening it exclusively, and someone else trying to do the same (assuming the same code hitting it) will get an exception. You basically have to synchronize access to any shared resource that you are reading from/writing to.
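If you do need to serialize access yourself, one rough sketch (the BookStore name and retry policy are just illustrative; XElement.Save can also write to any stream you open however you like) is to open the file exclusively and retry briefly while another writer holds it:

using System.IO;
using System.Threading;
using System.Xml.Linq;

static class BookStore
{
    public static void SaveWithRetry(XElement doc, string path, int attempts = 5)
    {
        for (int i = 0; i < attempts; i++)
        {
            try
            {
                // FileShare.None refuses other readers/writers while we hold the handle,
                // so a concurrent save from another process fails fast with an IOException
                // instead of interleaving with ours.
                using (var stream = new FileStream(path, FileMode.Create,
                                                   FileAccess.Write, FileShare.None))
                {
                    doc.Save(stream);
                }
                return;
            }
            catch (IOException) when (i < attempts - 1)
            {
                Thread.Sleep(100); // someone else has the file; back off and retry
            }
        }
    }
}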
If another user has already added a value, then assuming you don't have a problem obtaining the resource to write to, your changes will simply overwrite theirs. This is the classic lost-update problem from the database world; the usual mitigation is optimistic concurrency, where you keep some value that indicates whether a change has occurred between the time you loaded the data and the time you save it (most databases will generate timestamp values for you).
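LINQ to XML gives you nothing out of the box for that second scenario either; a rough sketch of applying the optimistic-concurrency idea yourself is to stamp the root element with a version number (an invented convention here, not a LINQ to XML feature) and refuse to save if the file has moved on since you loaded it:

using System;
using System.Xml.Linq;

static class OptimisticXml
{
    public static void Save(XElement edited, string path, int versionWhenLoaded)
    {
        // Re-read the file and compare versions. This check-then-save is only safe
        // if it is combined with exclusive file access like the locking shown above.
        var onDisk = XElement.Load(path);
        var currentVersion = (int?)onDisk.Attribute("version") ?? 0;

        if (currentVersion != versionWhenLoaded)
            throw new InvalidOperationException(
                "The document changed since it was loaded; reload and reapply your changes.");

        edited.SetAttributeValue("version", versionWhenLoaded + 1);
        edited.Save(path);
    }
}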
