Handling Duplicates in Data Warehouse - etl

I was going through the below link for handling Data Quality issues in a data warehouse.
http://www.kimballgroup.com/2007/10/an-architecture-for-data-quality/
"
Responding to Quality Events
I have already remarked that each quality screen has to decide what happens when an error is thrown. The choices are: 1) halting the process, 2) sending the offending record(s) to a suspense file for later processing, and 3) merely tagging the data and passing it through to the next step in the pipeline. The third choice is by far the best choice.
"
In some dimensional feeds (like the Client list), we sometimes get the same Client twice, with the two records differing in certain attributes. What is the best solution in this scenario?
I don't want to reject both records (as that would mean incomplete client data).
The source systems are very slow to fix the issue, so we get the same problems every day. That means a manual fix is also tough, because it would have to be repeated every day (we receive the client list daily).
Selecting a single record is not possible as we don't know what the correct value is.
Having both records in our warehouse disrupts our joins: with two rows for the same ID, the fact table rows are doubled in a join.
Any thoughts?

What is the best solution in this scenario?
There are a lot of permutations and combinations in your scenario. The big question is "Are the differing details valid or invalid?", as this will change how you deal with them.
Valid Data Example: Record 1 has John Smith living at 12 Main St, Record 2 has John Smith living at 45 Main St. This is valid data because John Smith moved between the first and second record. If the data is valid, you have options such as creating a slowly changing dimension and tracking the changes (end-date the old record, start-date the new record).
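For the valid-data case, here's a small Python sketch of Type 2 slowly-changing-dimension handling, using an in-memory SQLite table so it runs standalone; the dim_client table and its columns are illustrative assumptions, not taken from the question.

import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dim_client (
        client_id  TEXT,
        address    TEXT,
        valid_from TEXT,
        valid_to   TEXT,      -- NULL while the row is current
        is_current INTEGER
    )
""")

def apply_client_change(client_id, new_address, effective=None):
    """End-date the current row for this client and insert the new version."""
    effective = (effective or date.today()).isoformat()
    cur = conn.execute(
        "SELECT address FROM dim_client WHERE client_id = ? AND is_current = 1",
        (client_id,),
    )
    row = cur.fetchone()
    if row and row[0] == new_address:
        return  # nothing changed, keep the current version
    if row:
        conn.execute(
            "UPDATE dim_client SET valid_to = ?, is_current = 0 "
            "WHERE client_id = ? AND is_current = 1",
            (effective, client_id),
        )
    conn.execute(
        "INSERT INTO dim_client (client_id, address, valid_from, valid_to, is_current) "
        "VALUES (?, ?, ?, NULL, 1)",
        (client_id, new_address, effective),
    )
    conn.commit()

apply_client_change("C001", "12 Main St", date(2024, 1, 1))
apply_client_change("C001", "45 Main St", date(2024, 6, 1))   # John Smith moved
print(conn.execute("SELECT * FROM dim_client ORDER BY valid_from").fetchall())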
Invalid Data Example: However, if the data is INVALID (e.g. your system somehow creates duplicate keys incorrectly) then your options are different. I doubt you want to surface this data, as it's currently invalid and, as you pointed out, you have no way to identify which duplicate record is "correct". But you don't want your whole load to fail or halt.
In this instance you would usually:
Push these duplicate rows to a "Quarantine" area
Push an alert to the people who have the power to fix this operationally
Optionally select one of the records randomly as the "golden detail" record (so your system will still tally with totals) and mark an attribute on the record saying that it's "Invalid" and under investigation.
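A minimal Python sketch of that flow: duplicates go to a quarantine list, one row is arbitrarily promoted as the "golden" record and flagged as under investigation, and an alert is raised. The field names, the dq_status flag, and the alert hook are illustrative assumptions, not from the answer.

from collections import defaultdict

def load_clients(incoming_rows, send_alert=print):
    """incoming_rows: list of dicts, each with a 'client_id' key."""
    by_id = defaultdict(list)
    for row in incoming_rows:
        by_id[row["client_id"]].append(row)

    clean, quarantine = [], []
    for client_id, rows in by_id.items():
        if len(rows) == 1:
            clean.append({**rows[0], "dq_status": "ok"})
        else:
            quarantine.extend(rows)        # keep all versions for later fixing
            golden = rows[0]               # arbitrary pick so totals still tally
            clean.append({**golden, "dq_status": "invalid_duplicate"})
            send_alert(f"Duplicate client {client_id}: {len(rows)} conflicting rows quarantined")
    return clean, quarantine

rows = [
    {"client_id": "C001", "name": "John Smith", "address": "12 Main St"},
    {"client_id": "C001", "name": "John Smith", "address": "45 Main St"},
    {"client_id": "C002", "name": "Jane Doe",   "address": "7 High St"},
]
clean, quarantine = load_clients(rows)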
The point Kimball is making is that Option 1 is undesirable because it halts your entire system for errors that will inevitably happen; Option 2 isn't ideal because your aggregations will appear out of sync with your source systems; so Option 3 is the most desirable, as it still leads to a data fix but doesn't halt the process or the use of the data (while alerting users that the data is suspect).

Related

Simulating server-side group and sort in Azure table storage

I have a table to which I add records whenever the user views a particular resource. The key fields are
Username
Resource
Date Viewed
On a history page of my app, I want to present a set number (e.g., top 5) of the user's most recently viewed Resources, but I want to group by Resource, so that if some were viewed several times, only the most recent of each one is shown.
To be clear, if the raw data looked like this:
UserA | ResourceA | Jan 1
UserA | ResourceA | Jan 2
UserA | ResourceB | Jan 3
UserA | ResourceA | Jan 4
...
...only the bottom two records would appear in the history page.
I know you can get server-side chronological sorting by using a string derived from the date in the PartitionKey or RowKey fields.
I also see that you could enable a crude grouping mechanism by using Username and Resource as your PartitionKey and RowKey fields, and then using Insert-or-Update to maintain a table in which you keep a pointer to the most recent value for each combination. However, those records wouldn't be sorted chronologically.
Is there any way to design a set of tables so that I can get the data I need without retrieving tons of extra entities and sorting on the client? I'm willing to get elaborate with the design if that's what it takes. Thanks in advance!
First, I would strongly recommend that you read the excellent Azure Storage Table Design Guide: Designing Scalable and Performant Tables document from the Storage team.
Yes, I would agree that it is somewhat tricky with Azure Table Storage but it is doable :).
What you have to do is keep multiple copies of the same data. Each copy will serve a different purpose.
Considering the scenario where you want to fetch the most recent entries for Resources A and B, here's what your entity structure would look like:
PartitionKey: Date/Time (in ticks) reversed, i.e. DateTime.MaxValue.Ticks - LastAccessedDateTime.Ticks. Reversed ticks are required so that the most recent entries show up at the top of the table.
RowKey: Resource name.
AccessDate: Indicates the last access date/time.
User: Name of the user who accessed that resource.
So when you are interested in just finding out the most recently used resources, you can start fetching records from the top.
In short, your data storage approach should be governed primarily by how you want to fetch the data, even if that means saving the same data multiple times.
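As an illustration of the reverse-tick key, here's a small Python sketch; the tick arithmetic mirrors .NET's DateTime ticks (100-nanosecond intervals since 0001-01-01), and the entity is shown as a plain dict rather than a real Table Storage call.

from datetime import datetime, timedelta, timezone

DOTNET_EPOCH = datetime(1, 1, 1, tzinfo=timezone.utc)
MAX_TICKS = 3155378975999999999  # DateTime.MaxValue.Ticks

def to_ticks(dt):
    return (dt - DOTNET_EPOCH) // timedelta(microseconds=1) * 10

def reversed_ticks_key(dt):
    # zero-pad so lexicographic order of the key matches numeric order
    return str(MAX_TICKS - to_ticks(dt)).zfill(19)

now = datetime.now(timezone.utc)
entity = {
    "PartitionKey": reversed_ticks_key(now),   # newer accesses sort first
    "RowKey": "ResourceA",
    "AccessDate": now.isoformat(),
    "User": "UserA",
}
print(entity)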
UPDATE
As discussed in the comments below, Table Service doesn't directly support Server Side Grouping. This is something that you would need to do on your own. What you could do is create a separate table to store the access counts. As and when the resources are accessed, you basically either insert a new record in that table or update the count for that resource in that table.
Assuming you're always interested in finding out resource access count within a date/time range, here's what your entity structure would look like:
PartitionKey: Date/Time (in Ticks). The precision would depend on your reporting requirement. For example, if you want to maintain access counts by day then your precision would be a day.
RowKey: Resource name.
AccessCount: This field will constantly update as and when a resource is accessed.
LastAccessDateTime: This field will denote when a resource was last accessed.
For updating access counts, I would recommend that you make use of a background process. Basically in this approach, as a resource is accessed you add a message in a queue. This message will have resource name and date/time resource was last accessed. Then have a background process poll this queue and fetch messages. As the messages are received, you first get the current count and last access date/time for that resource. If no records are found, you simply insert a record in this table with count as 1. If a record is found then you compare the date/time from the table with the date/time sent in the message. If the date/time from the table is smaller than the date/time sent in the message, you update both count (increase that by 1) and last access date/time. If the date/time from the table is more than the date/time sent in the message, you only update the count.
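Here's a rough Python sketch of that background worker, with an in-memory dict and queue standing in for the access-count table and the Azure queue; names and the daily-precision key are assumptions, so swap in real storage calls as needed.

from datetime import datetime
from queue import Queue

counts = {}   # (day, resource) -> {"count": int, "last_access": datetime}
queue = Queue()

def record_access(resource, accessed_at):
    queue.put((resource, accessed_at))          # done in the request path

def process_queue():                            # done by the background worker
    while not queue.empty():
        resource, accessed_at = queue.get()
        key = (accessed_at.date().isoformat(), resource)   # daily precision
        row = counts.get(key)
        if row is None:
            counts[key] = {"count": 1, "last_access": accessed_at}
        else:
            row["count"] += 1
            if accessed_at > row["last_access"]:
                row["last_access"] = accessed_at

record_access("ResourceA", datetime(2024, 1, 1, 9, 0))
record_access("ResourceA", datetime(2024, 1, 1, 8, 0))   # out-of-order message: count only
process_queue()
print(counts)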
Now, to find the most accessed resources in a time span, you simply query this table. Assuming there is a limited number of resources (say in the hundreds), you can get this information in as little as one request. Since you're dealing with a small amount of data, you can simply download it to the client side and order it any way you see fit. However, to see the access details for a particular resource, you would have to fetch the detailed data (1,000 entities at a time).
Part of your brain might still be unconsciously trapped in relational-table design paradigms; I'm still getting to grips with that myself.
Rather than think of table storage as a database table (with the "query-ability" that goes with it), try visualizing it in simpler (dumber) terms.
A design problem I'm working on now is storing financial transaction data, and I want to know what the total $ amount of these transactions is. Because Azure table storage doesn't (yet?) offer aggregate functions, I can't simply call .Sum(). To get around that I'm going to:
Sum the values of the transactions in my app before I pass them to Azure.
I'll then pass the result of that sum to Azure as a separate piece of information, called RunningTotal.
Later on I can just return RunningTotal rather than pulling down all the transactions, and I can repeat the process by incrementing the value of RunningTotal each time I get new transactions.
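A toy Python sketch of that idea, summing locally and persisting the running total alongside the batch; save_to_storage is a placeholder for the actual Azure calls, not a real API.

def save_to_storage(entity):
    print("would persist:", entity)   # placeholder for a real table insert/upsert

running_total = 0.0

def add_transactions(transactions):
    """Sum in the app, then persist both the rows and the updated total."""
    global running_total
    running_total += sum(t["amount"] for t in transactions)
    save_to_storage({"RowKey": "RunningTotal", "Value": running_total})
    for t in transactions:
        save_to_storage(t)

add_transactions([{"amount": 19.99}, {"amount": 5.00}])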
Of course there are risks to this, but the app is a personal one, so the risk level is low and manageable, at least as a proof of concept.
Perhaps you can use a similar approach in the design of your system: compute useful values in advance. In effect I'll be using table storage as a long-term cache rather than a database.

How do I ensure consistency of aggregates with high availability?

My team needs to find a solution to the following problem:
Our application allows users to view total sales for the enterprise, totals by product, totals by region, totals by region x product, totals by regions x division, etc. You get the idea. There are so many values that need to be aggregated to get many of those totals that they cannot be computed on the fly - we have to pre-aggregate them to provide decent response times, a process that takes about 5 minutes.
The problem, which we thought was a common one but can find no references to, is how to allow updates to various sales without shutting off the users. Also, the users cannot accept eventual consistency - if they drill down on a total of 12 they better see numbers that add up to 12. So we need Consistency + Availability.
The best solution we've come up with so far is to direct all queries to a redundant database, "B" (optimized for queries) while updates are directed to the primary database, "A". When we decide to spend the 5 minutes to update all the aggregates, we update database "C", which is yet another redundant database just like "B". Then, new user sessions get directed to "C", while existing user sessions continue to use "B". Eventually, warning anyone left using "B", we kill the sessions on "B" and re-aggregate there, swapping the roles of "B" and "C". Typical drain-stop scenario.
We are surprised that we cannot find any discussion of this and are concerned that we are over-engineering the problem, or maybe it's not the problem we think it is. Any advice is greatly appreciated.
This was an interesting problem so I thought about it on the train, and I came up with the idea of storing a timestamp for each row in the database that you aggregate over. (I think this technique has a name, but it escapes me and googling isn't finding it...)
The timestamp would indicate when this row was inserted. In addition:
- If rows can be updated, then you will have two 'versions' of the row at once, one more recent than the other.
- If rows can be deleted, then there will need to be a 'deleted version' row that specifies when it was deleted.
Now you can do things such as:
1) Say you update the aggregates at Jan 1 2000 midnight. You can have views of the table return the table's data as though it was Jan 1 2000 midnight, ignoring all inserts/updates/deletes more recent than that. Now the aggregates are as up to date as the data in the view AND you can keep adding data to the underlying table.
2) I don't know how feasible/easy to guarantee it's reliable this would be, but you could have 'differentially computed aggregates' where on Jan 2 2000 midnight, you take the aggregates of Jan 1 2000 midnight and update them only with the data that has been changed since that time - saving you from recomputing so much historical data. (Of course, it gets hairier once you consider rows being updated or deleted that are older than 24 hours)
3) Whenever you bring your aggregates up to date, you can merge updated and deleted rows with their older version and get rid of the older version, so you only have to keep duplicates of rows around when you need them to separate rows that have been aggregated and rows that aren't (this also means that, for instance, if all your aggregates run at once, and you update a row three times in quick succession, you only need to keep the most recent update-indicating row)
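A small Python sketch of the "as of" filtering in point 1), assuming each row carries an inserted_at timestamp and an optional deleted_at timestamp; the row shape is an assumption for illustration.

from datetime import datetime

def as_of(rows, snapshot):
    """Return the rows as they looked at `snapshot`, ignoring anything newer."""
    return [
        r for r in rows
        if r["inserted_at"] <= snapshot
        and (r.get("deleted_at") is None or r["deleted_at"] > snapshot)
    ]

rows = [
    {"id": 1, "amount": 10, "inserted_at": datetime(2000, 1, 1, 0, 0)},
    {"id": 2, "amount": 5,  "inserted_at": datetime(2000, 1, 1, 6, 0)},  # after the aggregation run
]
snapshot = datetime(2000, 1, 1, 0, 0)
assert sum(r["amount"] for r in as_of(rows, snapshot)) == 10   # matches the aggregates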
If updates cannot be computed on the fly, then caching result sets in another database, as you are doing, helps solve the availability issue with faster response times.
For consistency, you may be able to make use of some form of transaction isolation. For example, MySQL supports a number of different transaction levels, of which REPEATABLE READ may come close to providing you with some consistency within a single transaction. If a transaction can be left open across multiple requests as the users drill down to see the data, they effectively see a snapshot of the database state as of the first request.
In a more generic sense, you're just after a handle to the data, provided by the client, that indicates a consistent set. As in Patashu's answer, the handle for a client requesting a set of aggregates could be time based. The first stage of client interaction would be to get a handle to the latest aggregate data, e.g. the current time. It would then pass that handle with each request. As requests are made of the server, it uses the handle to determine which set of aggregate data to return. Rather than having both server "B" and "C", all aggregate data could be stored in server "B", with each piece of aggregate data carrying the handle information. This then allows requests to a single server for aggregate data both new and old. At some point, old aggregate data could be purged from "B".
Perhaps a search on transaction isolation will turn up more results for discussion on consistency.
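As a rough illustration of the handle idea, here's a Python sketch in which every aggregate row is stamped with the batch time it was computed under, the client pins the latest handle on its first request, and later drill-downs pass it back so they read the same consistent set. Names and shapes are assumptions.

from datetime import datetime

aggregates = []   # stands in for the aggregate table in database "B"

def publish_aggregates(rows, batch_time):
    aggregates.extend({**r, "handle": batch_time} for r in rows)

def latest_handle():
    return max(r["handle"] for r in aggregates)

def query(handle, **filters):
    return [
        r for r in aggregates
        if r["handle"] == handle and all(r.get(k) == v for k, v in filters.items())
    ]

publish_aggregates([{"region": "East", "total": 12}], datetime(2024, 1, 1))
publish_aggregates([{"region": "East", "total": 15}], datetime(2024, 1, 2))

handle = latest_handle()          # the client's first request pins this
assert query(handle, region="East")[0]["total"] == 15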
I think you're looking for Data Warehousing concepts
In computing, a data warehouse or enterprise data warehouse (DW, DWH, or EDW) is a database used for reporting and data analysis. It is a central repository of data which is created by integrating data from one or more disparate sources. Data warehouses store current as well as historical data and are used for creating trending reports for senior management reporting such as annual and quarterly comparisons.
...
Unlike the ETL-based data warehouse, the integrated source data systems and the data warehouse are all integrated since there is no transformation of dimensional or reference data. This integrated data warehouse architecture supports the drill down from the aggregate data of the data warehouse to the transactional data of the integrated source data systems.

How to model data planning

I want to build a data model which supports:
Data history - store every change of data. This is not a problem: http://en.wikibooks.org/wiki/Java_Persistence/Advanced_Topics#History
Data planning - the user should be able to prepare a record that becomes valid at some point in the future (for example, I know that a customer's name changes from May, so I prepare a record valid from 1 May).
How can I do point 2?
How can I do both things together (points 1 & 2)?
If you really need point 2 (and I would think very hard about this, because in my experience users will never use it, and you will spend a lot of effort supporting something no one ever uses), then:
Make no changes at all directly in the table. All changes go through history.
Behind the scenes, periodically run a batch updater. It goes through the history, finds all unapplied changes (set a status flag in the history so they can be found quickly), checks the date to make sure it is time to apply each change, and applies them.
You are going to have to deal with merges. What if the user says "In one month my name changes", then goes in and changes their name effective today? You have a conflict. How do you resolve it? You can either prevent any immediate changes until past ones are done (or at least require all new changes to have a date after the last unapplied one), or you can change it now and change it again in a month.
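A rough Python sketch of the batch updater and the effective-date ordering, with dicts standing in for the customer and history tables; the shapes and field names are assumptions.

from datetime import date

customers = {1: {"name": "Old Name"}}
history = [
    {"customer_id": 1, "field": "name", "value": "New Name",
     "effective": date(2024, 5, 1), "applied": False},
]

def apply_due_changes(today):
    """Run periodically; applies every unapplied change whose date has arrived."""
    due = [h for h in history if not h["applied"] and h["effective"] <= today]
    for change in sorted(due, key=lambda h: h["effective"]):   # oldest first
        customers[change["customer_id"]][change["field"]] = change["value"]
        change["applied"] = True

apply_due_changes(date(2024, 4, 30))   # nothing happens yet
apply_due_changes(date(2024, 5, 1))    # the name change becomes effective
print(customers)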
Storing the change history can be handled in the background. Look into data warehousing and slowly changing dimensions (http://en.wikipedia.org/wiki/Slowly_changing_dimension), for example in a stored procedure that handles new records and marks their predecessors as "expired records". Once you have allowed for SCD, it's quite easy to find the historic expired records you're after.

Client-server synchronization pattern / algorithm?

I have a feeling that there must be client-server synchronization patterns out there, but I have totally failed to google one up.
The situation is quite simple: the server is the central node that multiple clients connect to and on which they manipulate the same data. Data can be split into atoms; in case of conflict, whatever is on the server has priority (to avoid dragging the user into conflict resolution). Partial synchronization is preferred because of the potentially large amounts of data.
Are there any patterns / good practices for such situation, or if you don't know of any - what would be your approach?
Below is how I now plan to solve it:
In parallel with the data, a modification journal will be kept, with all transactions timestamped.
When a client connects, it receives all changes since its last check, in consolidated form (the server goes through the list and removes additions that are followed by deletions, merges updates for each atom, etc.).
Et voila, we are up to date.
An alternative would be keeping a modification date for each record and, instead of actually deleting data, just marking records as deleted.
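A Python sketch of what that consolidation pass might look like, under an assumed journal-entry shape of {ts, atom_id, op, data}; it collapses multiple updates per atom and drops atoms that were added and then deleted within the window.

def consolidate(journal, since):
    """journal: list of {"ts", "atom_id", "op", "data"}; op is add/update/delete."""
    state = {}
    for entry in sorted(journal, key=lambda e: e["ts"]):
        if entry["ts"] <= since:
            continue                             # the client already has this
        atom = entry["atom_id"]
        if entry["op"] == "delete":
            if atom in state and state[atom]["op"] == "add":
                del state[atom]                  # added then deleted: nothing to send
            else:
                state[atom] = {"op": "delete"}
        else:
            prev_op = state.get(atom, {}).get("op")
            op = "add" if prev_op == "add" or (prev_op is None and entry["op"] == "add") else "update"
            state[atom] = {"op": op, "data": entry["data"]}
    return state

journal = [
    {"ts": 1, "atom_id": "a", "op": "add",    "data": {"x": 1}},
    {"ts": 2, "atom_id": "a", "op": "update", "data": {"x": 2}},
    {"ts": 3, "atom_id": "b", "op": "add",    "data": {"y": 9}},
    {"ts": 4, "atom_id": "b", "op": "delete", "data": None},
]
print(consolidate(journal, since=0))   # {'a': {'op': 'add', 'data': {'x': 2}}}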
Any thoughts?
You should look at how distributed change management works. Look at how SVN, CVS and other repositories that manage deltas work.
You have several use cases.
Synchronize changes. Your change-log (or delta history) approach looks good for this. Clients send their deltas to the server; server consolidates and distributes the deltas to the clients. This is the typical case. Databases call this "transaction replication".
Client has lost synchronization. Either through a backup/restore or because of a bug. In this case, the client needs to get the current state from the server without going through the deltas. This is a copy from master to detail, deltas and performance be damned. It's a one-time thing; the client is broken; don't try to optimize this, just implement a reliable copy.
Client is suspicious. In this case, you need to compare client against server to determine if the client is up-to-date and needs any deltas.
You should follow the database (and SVN) design pattern of sequentially numbering every change. That way a client can make a trivial request ("What revision should I have?") before attempting to synchronize. And even then, the query ("All deltas since 2149") is delightfully simple for the client and server to process.
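A minimal Python sketch of that sequential-revision scheme, with an in-memory list standing in for the server's change log; the delta shape is an assumption.

changes = []          # server-side change log

def record_change(delta):
    changes.append({"rev": len(changes) + 1, "delta": delta})

def latest_revision():
    return len(changes)

def deltas_since(client_rev):
    return [c for c in changes if c["rev"] > client_rev]

record_change({"atom": "a", "op": "update"})
record_change({"atom": "b", "op": "add"})

client_rev = 1
if latest_revision() > client_rev:                 # the trivial "am I current?" check
    for c in deltas_since(client_rev):             # e.g. "all deltas since 1"
        client_rev = c["rev"]                      # apply c["delta"] locally here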
As part of a team, I have done quite a lot of projects involving data syncing, so I should be competent to answer this question.
Data syncing is quite a broad concept and there is far too much to discuss. It covers a range of different approaches with their upsides and downsides. Here is one possible classification based on two perspectives: Synchronous / Asynchronous, Client-Server / Peer-to-Peer. The syncing implementation depends heavily on these factors, on data model complexity, on the amount of data transferred and stored, and on other requirements. So in each particular case the choice should be in favor of the simplest implementation meeting the app requirements.
Based on a review of existing off-the-shelf solutions, we can delineate several major classes of syncing, which differ in the granularity of the objects being synchronized:
Syncing of a whole document or database is used in cloud-based applications, such as Dropbox, Google Drive or Yandex.Disk. When the user edits and saves a file, the new file version is uploaded to the cloud completely, overwriting the earlier copy. In case of a conflict, both file versions are saved so that the user can choose which version is more relevant.
Syncing of key-value pairs can be used in apps with a simple data structure, where the variables are considered atomic, i.e. not divided into logical components. This option is similar to syncing whole documents, as both the value and the document can be overwritten completely. However, from a user perspective a document is a complex object composed of many parts, whereas a key-value pair is just a short string or a number. Therefore, in this case we can use a simpler conflict-resolution strategy, considering the last value written to be the more relevant one.
Syncing of data structured as a tree or a graph is used in more sophisticated applications where the amount of data is too large to send the database in its entirety at every update. In this case, conflicts have to be resolved at the level of individual objects, fields or relationships. We are primarily focused on this option.
So, we gathered our knowledge into an article which I think might be very useful to everyone interested in the topic => Data Syncing in Core Data Based iOS Apps (http://blog.denivip.ru/index.php/2014/04/data-syncing-in-core-data-based-ios-apps/?lang=en)
What you really need is Operational Transform (OT). This can even cater for the conflicts in many cases.
This is still an active area of research, but there are implementations of various OT algorithms around. I've been involved in such research for a number of years now, so let me know if this route interests you and I'll be happy to put you on to relevant resources.
The question is not crystal clear, but I'd look into optimistic locking if I were you.
It can be implemented with a sequence number that the server returns for each record. When a client tries to save the record back, it will include the sequence number it received from the server. If the sequence number matches what's in the database at the time when the update is received, the update is allowed and the sequence number is incremented. If the sequence numbers don't match, the update is disallowed.
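A compact Python sketch of that sequence-number check, with a dict standing in for the database row: the update is accepted only if the client's number still matches the stored one, and the number is bumped on success.

records = {"42": {"data": "old", "seq": 7}}

def save(record_id, new_data, client_seq):
    row = records[record_id]
    if row["seq"] != client_seq:
        return False                     # someone else saved first: reject
    row["data"] = new_data
    row["seq"] += 1                      # the next writer must present the new number
    return True

assert save("42", "edit from client A", client_seq=7) is True
assert save("42", "edit from client B", client_seq=7) is False   # stale sequence number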
I built a system like this for an app about 8 years ago, and I can share a couple of ways it has evolved as app usage has grown.
I started by logging every change (insert, update or delete) from any device into a "history" table. So if, for example, someone changes their phone number in the "contact" table, the system will edit the contact.phone field, and also add a history record with action=update, table=contact, field=phone, record=[contact ID], value=[new phone number]. Then whenever a device syncs, it downloads the history items since the last sync and applies them to its local database. This sounds like the "transaction replication" pattern described above.
One issue is keeping IDs unique when items could be created on different devices. I didn't know about UUIDs when I started this, so I used auto-incrementing IDs and wrote some convoluted code that runs on the central server to check new IDs uploaded from devices, change them to a unique ID if there's a conflict, and tell the source device to change the ID in its local database. Just changing the IDs of new records wasn't that bad, but if I create, for example, a new item in the contact table, then create a new related item in the event table, now I have foreign keys that I also need to check and update.
Eventually I learned that UUIDs could avoid this, but by then my database was getting pretty large and I was afraid a full UUID implementation would create a performance issue. So instead of using full UUIDs, I started using randomly generated, 8 character alphanumeric keys as IDs, and I left my existing code in place to handle conflicts. Somewhere between my current 8-character keys and the 36 characters of a UUID there must be a sweet spot that would eliminate conflicts without unnecessary bloat, but since I already have the conflict resolution code, it hasn't been a priority to experiment with that.
The next problem was that the history table was about 10 times larger than the entire rest of the database. This makes storage expensive, and any maintenance on the history table can be painful. Keeping that entire table allows users to roll back any previous change, but that started to feel like overkill. So I added a routine to the sync process where if the history item that a device last downloaded no longer exists in the history table, the server doesn't give it the recent history items, but instead gives it a file containing all the data for that account. Then I added a cronjob to delete history items older than 90 days. This means users can still roll back changes less than 90 days old, and if they sync at least once every 90 days, the updates will be incremental as before. But if they wait longer than 90 days, the app will replace the entire database.
That change reduced the size of the history table by almost 90%, so now maintaining the history table only makes the database twice as large instead of ten times as large. Another benefit of this system is that syncing could still work without the history table if needed -- like if I needed to do some maintenance that took it offline temporarily. Or I could offer different rollback time periods for accounts at different price points. And if there are more than 90 days of changes to download, the complete file is usually more efficient than the incremental format.
If I were starting over today, I'd skip the ID conflict checking and just aim for a key length that's sufficient to eliminate conflicts, with some kind of error checking just in case. (It looks like YouTube uses 11-character random IDs.) The history table and the combination of incremental downloads for recent updates or a full download when needed has been working well.
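A small Python sketch of the incremental-versus-full decision described in this answer, assuming history items carry increasing ids and the table has been pruned to the retention window; the shapes are illustrative.

history = [{"id": 101, "change": "..."}, {"id": 102, "change": "..."}]  # pruned to the last 90 days

def sync(last_seen_id):
    known_ids = {h["id"] for h in history}
    if last_seen_id is None or last_seen_id not in known_ids:
        # first sync, or the device's marker has been pruned: send everything
        return {"mode": "full", "snapshot": "entire account data goes here"}
    newer = [h for h in history if h["id"] > last_seen_id]
    return {"mode": "incremental", "changes": newer}

print(sync(last_seen_id=101))   # incremental: just item 102
print(sync(last_seen_id=57))    # 57 was pruned long ago: full snapshot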
For delta (change) sync, you can use the pub/sub pattern to publish changes back to all subscribed clients; services like Pusher can do this.
For database mirroring, some web frameworks use a local mini-database to sync the server-side database to a local in-browser database; partial synchronization is supported. Check out Meteor.
This page clearly describes most scenarios of data synchronization with patterns and example code: Data Synchronization: Patterns, Tools, & Techniques
It is the most comprehensive source I have found, covering the whole range of delta syncs, strategies for handling deletions, and both server-to-client and client-to-server sync. It is a very good starting point, worth a look.

Designing a Calendar system like Google Calendar [closed]

I have to create something similar to Google Calendar, so I created an events table that contains all the events for a user.
The hard part is handling recurring events. The row in the events table has an event_type field that tells you what kind of event it is, since an event can be for a single date only OR a recurring event every x days.
The main design challenge is handling re-occurring events.
When a user views the calendar in month view, how can I display all the events for the given month? The query is going to be tricky, so I thought it would be easier to create another table and create a row for each and every event, including the recurring ones.
What do you guys think?
I'm tackling exactly this problem, and I had completely forgotten about iCalendar (RFC 2445) up until reading this thread, so I have no idea how well this will or won't integrate with it. Anyway, the design I've come up with so far looks something like this:
You can't possibly store all the instances of a recurring event, at least not before they occur, so I simply have one table that stores the first instance of the event as an actual date, an optional expiration, and nullable repeat_unit and repeat_increment fields to describe the repetition. For single instances the repetition fields are null; otherwise the unit will be 'day', 'week', 'month' or 'year', and the increment is simply the multiple of units to add to the start date for the next occurrence.
Storing past events only seems advantageous if you need to establish relationships with other entities in your model, and even then it's not necessary to have an explicit "event instance" table in every case. If the other entities already have date/time "instance" data then a foreign key to the event (or join table for a many-to-many) would most likely be sufficient.
To do "change this instance"/"change all future instances", I was planning on just duplicating the events and expiring the stale ones. So to change a single instances, you'd expire the old one at it's last occurrence, make a copy for the new, unique occurrence with the changes and without any repetition, and another copy of the original at the following occurrence that repeats into the future. Changing all future instances is similar, you would just expire the original and make a new copy with the changes and repition details.
The two problems I see with this design so far are:
It makes MWF-type events hard to represent. It's possible, but forces the user to create three separate events that repeat weekly on M,W,F individually, and any changes they want to make will have to be done on each one separately as well. These kind of events aren't particularly useful in my app, but it does leave a wart on the model that makes it less universal than I'd like.
By copying the events to make changes, you break the association between them, which could be useful in some scenarios (or, maybe it would just be occasionally problematic.) The event table could theoretically contain a "copied_from" id field to track where an event originated, but I haven't fully thought through how useful something like that would be. For one thing, parent/child hierarchical relationships are a pain to query from SQL, so the benefits would need to be pretty heavy to outweigh the cost for querying that data. You could use a nested-set instead, I suppose.
Lastly, I think it's possible to compute events for a given timespan using straight SQL, but I haven't worked out the exact details and I think the queries usually end up being too cumbersome to be worthwhile. However, for the sake of argument, you can use the following expression to compute the difference in months between a given month/year and an event:
(:month + (:year * 12)) - (MONTH(occursOn) + (YEAR(occursOn) * 12))
Building on the last example, you could use MOD to determine whether the difference in months is the correct multiple:
MOD((:month + (:year * 12)) - (MONTH(occursOn) + (YEAR(occursOn) * 12)), repeatIncrement) = 0
Anyway this isn't perfect (it doesn't ignore expired events, doesn't factor in start / end times for the event, etc), so it's only meant as a motivating example. Generally speaking though I think most queries will end up being too complicated. You're probably better off querying for events that occur during a given range, or don't expire before the range, and computing the instances themselves in code rather than SQL. If you really want the database to do the processing then a stored procedure would probably make your life a lot easier.
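Along the lines of "computing the instances in code", here is a Python sketch for the occurs_on / repeat_unit / repeat_increment model described above; the field names mirror the answer, month arithmetic is simplified to calendar months, and expiration handling is minimal.

from datetime import date, timedelta

def add_interval(d, unit, n):
    if unit == "day":
        return d + timedelta(days=n)
    if unit == "week":
        return d + timedelta(weeks=n)
    if unit == "month":
        total = d.month - 1 + n                 # naive month math; day-of-month kept as-is
        return d.replace(year=d.year + total // 12, month=total % 12 + 1)
    if unit == "year":
        return d.replace(year=d.year + n)
    raise ValueError(unit)

def occurrences(event, range_start, range_end):
    """Yield every occurrence of `event` that falls inside [range_start, range_end]."""
    current = event["occurs_on"]
    while current <= range_end:
        if event.get("expires_on") and current > event["expires_on"]:
            break
        if current >= range_start:
            yield current
        if event["repeat_unit"] is None:        # single-instance event
            break
        current = add_interval(current, event["repeat_unit"], event["repeat_increment"])

weekly = {"occurs_on": date(2008, 8, 4), "repeat_unit": "week",
          "repeat_increment": 1, "expires_on": None}
print(list(occurrences(weekly, date(2008, 8, 1), date(2008, 8, 31))))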
As previously stated, don't reinvent the wheel, just enhance it.
Check out VCalendar; it is open source and comes in PHP, ASP, and ASP.NET (C#) flavors!
Also, you could check out DayPilot, which offers a calendar written in ASP.NET 2.0. They offer a lite version that you could check out, and if it works for you, you could purchase a license.
Update (9/30/09):
Unless of course the wheel is broken! Also, you can put a shiny new coat of paint on it if you like (i.e. make a better UI). But at least try to find some foundation to build off of, since a calendar system can be tricky (with repeating events), and it's been done thousands of times.
Attempting to store each instance of every event seems like it would be really problematic and, well, impossible. If someone creates an event that occurs "every thursday, forever", you clearly cannot store all the future events.
You could try to generate the future events on demand, and populate the future events only when necessary to display them or to send notification about them. However, if you are going to build the "on-demand" generation code anyway, why not use it all the time? Instead of pulling from the event table, and then having to use on-demand event generation to fill in any new events that haven't been added to the table yet, just use the on-demand event generation exclusively. The end result will be the same. With this scheme, you only need to store the start and end dates and the event frequency.
I don't see any way that you can avoid having on-demand event generation, so I can't see the utility in the event table. If you want it for the sake of caching, then I think you're taking the wrong approach. First, it's a poor cache because you can't avoid on-demand event generation anyway. Second, you should probably be caching at a higher level anyway. If you want to cache, then cache generated pages, not events.
As far as making your polling more efficient, if you are only polling every 15 minutes, and your database and/or server can't handle the load, then you are already doomed. There's no way your database will be able to handle users if it can't handle much, much more frequent polling without breaking a sweat.
I would say start with the iCal standard. If you use it as your model, you'll be able to do everything that Google Calendar, Outlook, and Mac iCal (the program) do, and you'll get virtually instant integration with them.
From there, it's time to bone up on your AJAX and JavaScript, because you can't have a flashy web UI with drag-and-drop and multiple calendars without a ton of AJAX and JavaScript.
You should have a start date, end date, and expiration date. Single-day events would have the same start date and end date, and this allows you to do partial-day events as well. For recurring events, the start and end date would be for the same day but have different times, and then you have an enumeration or table that specifies the repeat frequency (daily, weekly, monthly, etc.).
This allows you to say "this event appears every day" for daily, "this event appears on the 2nd day of every week" for weekly, "this event appears on the 5th day of every month" for monthly, "this event appears on the 215th day of every year" for yearly as long as the date is less than the expiration date.
Darren,
That is how I have designed my events table actually, but when thinking about it, say I have 100K users who have created events: this table will be hit pretty hard, especially if I am going to be sending out emails to remind people of their events (events can be for specific times of the day also!), so I could be polling this table every 15 minutes potentially!
This is why I wanted to create another table that would expand out all the events/recurring events. This way I can simply hit that table and get the user's month view of events without doing any complicated querying and business logic, AND it will make polling much more efficient as well.
The question is, should this secondary table be for the next day or the next month? What makes more sense? Since the maximum a user can view is a month view, I am leaning towards a table that writes out all the events for a given month.
(Of course I will have to maintain this secondary table for any edits the user might make to the original events table.)
ChanChan,
I have designed it with the same sort of functionality actually, but I am just referring to how I will go about storing events, specifically how to handle re-occurring events.
The brute-force-ish but still reasonable way would be to create a new row in your single events table for every instance of the recurring event, all pointing not to the event preceding it in the series but to the first event in the series. This simplifies selecting and/or deleting all elements in a particular series, since you can select based on parent id. It also allows users to delete individual items from a series without affecting the rest of them.
This query gets you the series that starts on element 3:
SELECT * FROM events WHERE id = 3 OR parentid = 3
To get all items for this month, assuming you'd have a start date and an end date in your events table, all you'd have to do is:
SELECT * FROM events WHERE startdate >= '2008-08-01' AND enddate <= '2008-08-31'
Handling the creation/modification of series programmatically wouldn't be very difficult, but it really would depend on the feature set you want to provide and how you think it'll be used. If you want to differentiate between series and events, you could have a separate series table and a nullable series_id on your events, allowing you the freedom to muck about with individual events while still retaining control over your series.
From past experience I would create a new record for each occurring event and then have a column which references the previous event so you can track all events in a series.
This has two advantages:
No complicated routines to work out the next event date
Individual occurrences can be edited without affecting the rest
Hope this gives you some food for thought :)
I have to agree with #ChanChan on reading the ical spec for how to store these things. There is no easy way to handle recurrences, especially ones that have exceptions. I've built and rebuilt and rebuilt a database to handle this, and I keep coming back to ical.
It's not a bad idea to create a subordinate table, depending on use cases. The algorithm for calculating exactly when occurrences . . . um, occur . . . can indeed be quite complex. There's no getting away from running it, but it's worth considering caching the results.
#GateKiller
I hadn't thought of the case where you edit individual occurrences. It makes sense you would store the occurrences separately in that case.
When you do that, though, how far in the future do you store events? Do you pick an arbitrary date? Do you auto-generate the new occurrences the first time a user browses out into future months/years?
Also, how do you handle the case where the user wants to edit the whole series. "We've had a meeting every Tuesday morning at 10:30 but we're going to start meeting on Wednesday at 8"
I think I understand your second paragraph to mean you are considering a second events table that has a row for each occurrence of an event. I would avoid that.
Recurring events should have a start date and a stop date (which could be null for events that continue every X days "forever"). You'll have to decide what kinds of frequency you want to allow: every X days, the Nth day of each month, every given weekday, etc.
I'd probably tend toward two tables - one for one time events and a second for recurring events. You'll have to query and display the two separately.
If I were going to tackle this (and I'd try as hard as I can to avoid reinventing this wheel) I'd look for open-source libraries or, at the very least, open source projects with Calendars that you can look at. Any recommendations guys?
undefined wrote:
…this table will be hit pretty hard, especially if I am going to be sending out emails to remind people of their events (events can be for specific times of the day also!), so I could be polling this table every 15 minutes potentially!
Create a table for the notifications. Poll only it.
Update the notification table when events (recurring or otherwise) are updated.
EDIT: A database View might not violate normal forms, if that's a concern. But, you'll probably want to track which notifications were sent and which have not yet been sent somewhere anyway.
Derek Park,
I would be creating each and every instance of an event in a table, and this table would be regenerated every month (so any event that was set to recur 'forever' would be regenerated one month in advance using a Windows service, or maybe at the SQL Server level).
The polling won't be done only every 15 minutes; that might apply only to polls related to email notifications. When someone wants to view their events for a month, I will have to fetch all their events, including recurring events, and figure out which ones to display (since a recurring event might have been created 6 months ago but relate to the month the user is viewing).
Zack, i'm not too concerned with having a perfectly normalized database, the fact that I'm thinking of creating a secondary table is already breaking one of the rules hehe. My core database tables are following 'the rules', but I don't mind creating secondary tables/columns at times when it benefits things performance wise.
That is how I have designed my events table actually, but when thinking about it, say I have 100K users who have created events: this table will be hit pretty hard, especially if I am going to be sending out emails to remind people of their events (events can be for specific times of the day also!), so I could be polling this table every 15 minutes potentially!
Databases do an exceptional job of handling sets of data, so I wouldn't be too worried about that. What you should do is use this as your primary table, and then, as events expire, move them into another table (like an archive).
The next thing you want to do is query the DB as little as possible, so move the information into a caching tier (like Velocity) and just persist data to the database.
Then you can partition the information across multiple databases for scaling purposes, e.g. users 1-10000's calendars live on server 1, users 10001-20000 on server 2, etc.
That's how I would scale a solution like this, but I still think the original solution I proposed is the way to go; it's just how you scale it that becomes the question.
The Ra-Ajax Calendar starter-kit features a sample of handling the RenderDate event, which lets you modify specific dates. The "recurring events" part, though, is more of an algorithmic thing, and I doubt many calendars will help you much there...
If anyone is doing Ruby there's a great library Runt that does this kind of thing. Worth checking out. http://runt.rubyforge.org/
