Event sourcing - delete event related files - event-sourcing

I'd like to store some of my data in relatively big files (a few GBs per file). I'd like to use event sourcing and save events related to these files, e.g. FileCreated: title, description, timestamp, author, personal, encryptionkey, etc. After a while some of the files won't be needed any longer, and they take up a lot of space. So in order to free up space, I need to delete them. Doing so is problematic, because I will have the history in the event storage, but not the file in the filesystem. Is there any way to keep integrity and somehow delete both? Or is there a best practice for this problem?

Since I did not get an answer, I'll try to answer this myself.
It is possible to remove an event from the history: you create a new event storage and copy the events over, filtering out the ones for the aggregate id you want to get rid of. After you are done, you can switch to the new event storage and remove the old one. You probably need to replay your projections as well, so it is very similar to a full migration and takes a lot of time. In my case that is not a problem, since I would need to do this only once a year or so. Another problem with storing this data in the event storage is that I either have to stream it from there or duplicate it in order to serve it. The latter is not always a good solution, because copying sometimes takes too much time, and in order to save the data you need to stream it anyway, otherwise you run out of memory very fast. So the event storage should support streaming attachments.
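A minimal sketch of that rewrite, assuming a simple list-backed store and events that carry an aggregate_id field (both names are illustrative, not from any particular framework):

```python
def rewrite_store_without(old_events, aggregate_id_to_remove):
    """Copy every event except those belonging to the removed aggregate."""
    return [e for e in old_events if e["aggregate_id"] != aggregate_id_to_remove]

# Illustrative usage: build the new store, switch over to it,
# replay projections against it, then drop the old store.
old_store = [
    {"aggregate_id": "file-1", "type": "FileCreated", "title": "report.bin"},
    {"aggregate_id": "file-2", "type": "FileCreated", "title": "video.raw"},
    {"aggregate_id": "file-1", "type": "FileRenamed", "title": "report-v2.bin"},
]
new_store = rewrite_store_without(old_store, "file-1")
assert all(e["aggregate_id"] != "file-1" for e in new_store)
```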
Another solution is to keep the relatively big data in files and display something like "404 not found" or "this file was removed because of this and that"; I see this frequently. In this case it is ok to keep the events in the storage, and you can for example add a ContentRemoved event where you record the cause. Another option is to hide the removed file so it won't be listed by the app, which I guess is common too. This solution has drawbacks as well. Migration is more complex with this approach, because you need to move both the event storage and the files. And if you remove a file by accident, you cannot undo it later unless you have the file in a backup. That can be mitigated by delaying the actual file removal by a few days, so you can undo it if you change your mind, or by implementing a trash so that files are deleted only when the trash is emptied.
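A sketch of this second approach, with a hypothetical ContentRemoved event carrying the cause and a grace period before the physical file is actually purged (all names are illustrative):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ContentRemoved:
    file_id: str
    cause: str                  # e.g. "user request", "retention policy"
    requested_at: datetime
    grace_period: timedelta = timedelta(days=7)   # delay deletion so it can be undone

    def purge_due(self, now: datetime) -> bool:
        """The physical file may be deleted once the grace period has passed."""
        return now >= self.requested_at + self.grace_period

# The event stays in the store forever; the read model answers with
# "removed because <cause>" while the file itself is only purged later.
event = ContentRemoved("file-1", "user request", datetime(2024, 1, 1))
print(event.purge_due(datetime(2024, 1, 5)))   # False - still undoable
print(event.purge_due(datetime(2024, 1, 9)))   # True - safe to delete the file
```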
I think both solutions are worth considering, and which one is better suited probably depends on the actual project.

Related

What does data look like when using Event Sourcing?

I'm trying to understand how Event Sourcing changes the data architecture of a service. I've been doing a lot of research, but I can't seem to understand how data is supposed to be properly stored with event sourcing.
Let's say I have a service that keeps track of vehicles transporting packages. The current non relational structure for the data model is that each document represents a vehicle, and has many fields representing origin location, destination location, types of packages, amount of packages, status of the vehicle, etc. Normally this gets queried for information to be read to the front end. When changes are made by the user, the appropriate changes are made to this document in order to update this.
With event sourcing, it seems that a snapshot of every event is stored, but there seem to be a few ways to interpret that:
The first is that the multiple versions of the document I described exist, each a new snapshot every time a change is made. Each event would create a new version of this document and alter it. This is the easiest way for me to wrap my head around it, but I believe this to be incorrect.
Another interpretation I have is that each event stores SPECIFIC information about what's been altered in the document. When the vehicle status changes from On Road to Available, for example, an event specifically for vehicle status changes is triggered. Let's say it's called VehicleStatusUpdatedEvent, and contains the Vehicle ID number, the new status, and the timestamp for this event. So this event is stored and is published to a messaging queue. When picked up from the queue, the appropriate changes are made to the current version of the document. I can understand this, but I think I still have some misconceptions here. My understanding is that event sourcing allows us to have a snapshot of data upon each change, so we can know what it looks like at any point. What I just described would keep a log of changes, but still only have one version of the file, as the events only contain specific pieces of the whole file.
Can someone describe how the data flow and architecture works with event sourcing? Using the vehicle data example I provided might help me frame it better. I feel that I am close to understanding this, but I am missing some fundamental pieces that I can't seem to understand by searching online.
The current non relational structure for the data model is that each document represents a vehicle
OK, let's start from there.
In the data model you've described, storage of a document destroys the earlier copy.
Now imagine that instead we were storing the document in a git repository. Then saving the document would also save metadata, and that metadata would include a pointer to the previous document.
Of course, we've probably got a lot of duplication in that case. So instead of storing the complete document every time, we'll store a patch document (think JSON Patch), and metadata pointing to the original patch.
Take that same idea again, but instead of storing generic patch documents, we use domain specific messages that describe what is going on in terms of the model.
That's what the data model of an event sourced entity looks like: a list of domain specific descriptions of document transformations.
When you need to reconstitute the current state, you start with a state you know (which could be the "null" state of the document before anything happened to it), and replay onto that document all of the patches (events) that have occurred since.
If you want to do a temporal query, the game is the same, you replay the events up to the point in time that you are interested in.
So essentially when referring to an older build, you reconstruct the document using the events, correct?
Yes, that's exactly right.
So is there still a "current status" document or is that considered bad practice?
"It depends". In the general case, there is no current status document; only the write-ordered list of events is "real", and everything else is derived from that.
Conversations about event sourcing often lead to consideration of dedicated message stores for managing persistence of those ordered lists, and it is common that the message stores do not also support document storage. So trying to keep a "current version" around would require commits to two different stores.
At this point, designers typically either decide that "recent version" is good enough, in which case they build eventually consistent representations of documents outside of the transaction boundary... OR they decide current version is important, and look into storage solutions that support storing the current version in the same transaction as the events (ex: using an RDBMS).
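As a sketch of that second route, here is one way events and the current document can be committed together, assuming a plain relational layout (sqlite3 is used purely for illustration, not any particular event-store product):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (stream_id TEXT, seq INTEGER, payload TEXT)")
conn.execute("CREATE TABLE current_docs (stream_id TEXT PRIMARY KEY, body TEXT)")

def append_and_update(stream_id, seq, event, new_document):
    # Both writes succeed or fail together, so the "current version"
    # can never drift away from the list of events.
    with conn:
        conn.execute("INSERT INTO events VALUES (?, ?, ?)",
                     (stream_id, seq, json.dumps(event)))
        conn.execute("INSERT OR REPLACE INTO current_docs VALUES (?, ?)",
                     (stream_id, json.dumps(new_document)))

append_and_update("vehicle-42", 1,
                  {"type": "VehicleStatusUpdated", "status": "Available"},
                  {"vehicle_id": 42, "status": "Available"})
```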
what is the procedure used to generate the snapshot you want using the events?
If you want to generate a snapshot, then you'll normally end up using a pattern called a projection, to iterate over the events and either fold or reduce them to create the document.
Roughly, you have a function somewhere that looks like
document-with-meta-data = projection(event-history-with-metadata)
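A minimal, hand-rolled version of that projection for the vehicle example (the event shapes are invented for illustration; they are not from any particular framework):

```python
from functools import reduce

def apply(document, event):
    """Fold one event into the document; unknown events leave it untouched."""
    if event["type"] == "VehicleRegistered":
        return {"vehicle_id": event["vehicle_id"], "status": "Available"}
    if event["type"] == "VehicleStatusUpdated":
        return {**document, "status": event["status"]}
    return document

def projection(event_history, as_of=None):
    """Rebuild the document, optionally only up to a point in time (temporal query)."""
    events = [e for e in event_history if as_of is None or e["timestamp"] <= as_of]
    return reduce(apply, events, {})

history = [
    {"type": "VehicleRegistered", "vehicle_id": 42, "timestamp": 1},
    {"type": "VehicleStatusUpdated", "status": "On Road", "timestamp": 2},
    {"type": "VehicleStatusUpdated", "status": "Available", "timestamp": 3},
]
print(projection(history))            # current state
print(projection(history, as_of=2))   # state as it was at time 2
```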

How to avoid lambda trigger recursive call

I've written a lambda function that is triggered via an s3 bucket's putObject event. I am modifying the headers of an object after upload: downloading the object and re-uploading it with the appropriate headers. But because the function itself uses putObject to re-upload the object, the lambda triggers itself.
Three options:
Use a different API to upload your changes than the one that you have an event on. I.e., if your lambda is triggered by PUT, then use a POST to modify the content afterwards (tough to do since POST isn't supported well by SDKs AFAIK, so this may not be an option).
Track usage and add a small guard at the beginning of your handler to short-circuit if the only changes made to a file are ones you made (a sketch of such a guard follows these options). If you can't programmatically detect the headers you've set, you'll probably need a small dynamo table or similar for keeping track of which files you've already touched. This lets you abort immediately and only be charged the minimum 100ms fee.
Reorganize your project to have an 'ingest' bucket and an output bucket. Unprocessed files are put into the former, modified, and then placed into the latter. This has a number of advantages. The first is that you don't end up with the current situation, so that's a plus. The second is that you don't have whatever process consumes these modified files potentially pulling an unmodified version. The third is that you get better insight into the process - if something goes wrong, it's easy to see which batches of files have undergone which process.
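One way the guard from option 2 can look when the headers you set are detectable - here via a user-metadata marker, which is an assumption, not part of the original question (the marker name and headers are illustrative):

```python
from urllib.parse import unquote_plus
import boto3

s3 = boto3.client("s3")
MARKER = {"processed-by": "header-fixer"}   # illustrative metadata key/value

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])

        # Guard: if we already touched this object, bail out immediately
        # instead of re-processing it and re-triggering ourselves.
        head = s3.head_object(Bucket=bucket, Key=key)
        if head.get("Metadata", {}).get("processed-by") == "header-fixer":
            continue

        # Rewrite the object in place with the desired headers plus the marker.
        s3.copy_object(
            Bucket=bucket, Key=key,
            CopySource={"Bucket": bucket, "Key": key},
            ContentType="application/octet-stream",
            Metadata=MARKER,
            MetadataDirective="REPLACE",
        )
```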
Overall, I'd recommend option 3 for you, though I know that in my lazier moments I might try to opt for 1 or 2.
Either way, good luck.

Design a share, re-share functionality for a website, avoiding duplication

This is an interesting interview question that I found somewhere. To elaborate more:
You are expected to design classes and data structures for some website such as facebook or linkedin where your activity can be shared and re-shared. Design should be such that it avoids redundancy and duplication.
While thinking of this problem I was stuck on "link vs copy" problem as discussed here
But since the problem states that duplication should be avoided, I decided to go the "link" way. This makes sharing/re-sharing easier but deleting very difficult, i.e. if the original user deletes their post, all the shares should be deleted too. (Programmatically speaking, all the objects pointing to the particular activity should be set to null, and that is the difficult part here: finding all the pointing objects.)
Wouldn't it be better to keep the shares? The original user deletes their post, fine, it's gone. But everyone who has linked to it should not suddenly have it disappear on them.
This could be done the way Unix handles hard links. "Deleting" just means removing one link to an object -- an inode, in Unix terms. You don't remove the object itself until the link count is zero.
It's not obvious from the original specification that deletion should work as you describe. It might be desired that when the original user deletes the item, it is not deleted elsewhere; in that case you don't necessarily need to track all references, just keep a reference count on each post, and remove it from the database only when the count hits zero.
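A hand-rolled sketch of that reference-count idea (all class and function names are purely illustrative):

```python
class Post:
    def __init__(self, content):
        self.content = content
        self.ref_count = 1          # the original poster's own reference

class Share:
    """A re-share is just another link to the same Post object."""
    def __init__(self, post):
        self.post = post
        post.ref_count += 1

def unlink(post, store):
    """Unix-hard-link style deletion: remove the object only at zero links."""
    post.ref_count -= 1
    if post.ref_count == 0:
        store.discard(post)

posts = set()
original = Post("hello")
posts.add(original)
share = Share(original)
unlink(original, posts)     # original author "deletes" - the share still works
unlink(share.post, posts)   # last link gone - now the content disappears
```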
If you do want the behavior you describe, it may be achievable by simply removing broken links as and when you encounter them, again relieving you of the need to track each reference. The cost of tracking and updating every reference to every post is replaced with the comparable cost of one failed lookup for each referring page. The latter case is simpler to implement, though, and the cost doesn't hit your server all at once.
In real life, I would implement all references as bidirectional anyway, because it's likely to be needed sooner or later as you add features. For example, a "like" counter seems pretty simple, but to prevent duplicate votes you need to keep track of who has liked each item, and then if you want to remove their "like" when they delete their profile, you need to keep a list of each user's outbound "likes" too.
It takes a lot of database activity to implement something like Facebook...

Implementation of Unsaved Changes Detection

It seems like three ways to approach detecting unsaved changes in a text/image/data file might be to:
Update a boolean flag every time the user makes a change or saves, which would result in a lot of unnecessary updates.
Keep a cached copy of the original file and diff the two every time a save operation needs to be checked.
Keep a stack of all past operations and push/pop operations as needed, resulting in a lot of extra memory usage.
In general, how do commercial applications detect whether unsaved changes exist and what are the advantages/disadvantages of each approach? I ran into this issue while writing a custom application that has special saving behavior and would like to know if there is a known best practice.
As long as you need an undo/redo system, you need that stack of past operations. To detect which state the document is in, one item of the stack is marked as the 'saved state'. If the current stack node is not that item, the document has been changed (a bare-bones sketch follows this answer).
You can see an example of this in Qt's QUndoStack ( http://doc.qt.nokia.com/stable/qundostack.html ) and its isClean() and setClean() methods.
As for proposition 1, updating a boolean is not problematic and takes very little time.
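A bare-bones version of the saved-state marker described above, using a plain Python stack rather than Qt (illustrative only, and without redo support):

```python
class UndoStack:
    def __init__(self):
        self.operations = []
        self.clean_index = 0        # stack position that matches what is on disk

    def push(self, operation):
        self.operations.append(operation)

    def undo(self):
        if self.operations:
            self.operations.pop()

    def set_clean(self):
        """Call right after a successful save."""
        self.clean_index = len(self.operations)

    def is_clean(self):
        return self.clean_index == len(self.operations)

stack = UndoStack()
stack.push("type 'a'")
print(stack.is_clean())   # False - unsaved changes exist
stack.set_clean()         # document saved
stack.undo()
print(stack.is_clean())   # False again - undoing past the save point is a change
```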
This depends on the features you want and the size/format of the files, I guess.
The first option is the simplest, and it gives you just what you want with minimal overhead.
The second option has the advantage that you can detect when changes have been manually reverted, so that there is no real change after all (although that probably doesn't happen all too often). On the other hand, it is much more costly to compute a diff just to check whether anything was modified. You probably don't want to do that every time the user presses a key.
The third option gives the ability to provide an undo-history. You could limit the number of items in that history by grouping changes together that were made consecutively (without moving the cursor in between), or something like that.

In starteam, how can I find out when a file was deleted, and by whom?

We have a small team running StarTeam. A constant source of frustration and problems is the handling of deleted files in StarTeam. It is obvious that StarTeam keeps track of deleted files internally, but it does not seem to be possible to get any information about a file deletion.
So far, my only solution for finding the timing of a delete is to perform a manual binary search using the 'compare' views. Is there any better way (the query for 'delete time' never seems to pick up any files)?
The Audit tab (just to the right of File, ChangeRequest, etc.) is probably your best bet if you're just looking for who deleted what and when. The Audit tab also provides information about when items and folders were created, shared, or moved, as well as when View labels are attached/detached. Whenever someone has files unexpectedly appear or disappear, I direct them to the Audit tab first.
There is a server-side configuration setting for the length of time the audit data is retained (30 days by default, I believe). Since it is not retained forever, it isn't a good option for historical data. The number of audits can be quite large in active views.
If you're looking for something more than that or older than your audit retention time, go with Bubbafat's suggestion of the SDK and getDeletedTime/getDeletedUserID.
Comparing views (or rolling back a view to see the item again) is the only way I know how to do this in StarTeam without writing code.
If you are willing to write a little code, the StarTeam API provides the Item.getDeletedTime and Item.getDeletedUserId methods (I believe these showed up in 2006).
