Time table modelling in relational db - algorithm

I know that almost everything related to timetable modelling in an RDBMS has already been discussed, but I cannot find any well-written documentation about the available techniques for storing timetables in a database.
My case:
I have a table which holds available places, and a table with the actual classes.
Each place has its own unique schedule.
Each class can be scheduled in any place and at any time, with a few exceptions:
One class can occupy only one time-slot at a time (Example: If class A is scheduled in place P1 at 12:00 for a 1-hour duration, the next occurrence of class A can only be placed before 12:00 or after 13:00, in any place which has a free time-slot; it is forbidden to schedule class A at the same time in two places).
In one place, only one class can occupy a given time-slot.
The model should support versioning/history of scheduled classes.
Now, how can I represent this data model in an SQL database?
I'm not looking for a ready-to-use exact schema; rather, I would be glad if someone could describe the available modelling techniques and compare them, so I can use them to solve this task.
For example: for tree-structured/hierarchical data there is the well-documented "modified preorder tree traversal" algorithm; is there a similar algorithm/technique for dealing with time-slots?

A timetable is a matrix. Down the left hand side we have LOCATIONS. Across the top we have TIMESLOTS. The intersection of any given permutation of LOCATION and TIMESLOT is a cell with either a CLASS or null.
To model this we need a table (entity) of LOCATIONS, which is pretty fixed data. We need a table of TIMESLOTS (date/times) which is ever growing. We need a table CLASSES, which is also pretty fixed. Finally we need an intersection table CLASS_TIMESLOT_LOCATIONS. This is where the magic happens. This table has three foreign keys, one to CLASSES, one to LOCATIONS, one to TIMESLOTS. Its primary key is (LOCATION_ID, TIMESLOT_ID) but it also needs a unique constraint on (CLASS_ID, TIMESLOT_ID).
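For concreteness, here is a minimal DDL sketch of that shape in generic SQL; the names and types are illustrative only, and class_id is left nullable in case you decide to store empty cells (see the implementation notes below):

CREATE TABLE locations (
    location_id INTEGER PRIMARY KEY,
    name        VARCHAR(100) NOT NULL
);

CREATE TABLE timeslots (
    timeslot_id INTEGER PRIMARY KEY,
    starts_at   TIMESTAMP NOT NULL,
    ends_at     TIMESTAMP NOT NULL
);

CREATE TABLE classes (
    class_id INTEGER PRIMARY KEY,
    name     VARCHAR(100) NOT NULL
);

CREATE TABLE class_timeslot_locations (
    location_id INTEGER NOT NULL REFERENCES locations (location_id),
    timeslot_id INTEGER NOT NULL REFERENCES timeslots (timeslot_id),
    class_id    INTEGER REFERENCES classes (class_id),   -- NULL only if you store empty cells
    PRIMARY KEY (location_id, timeslot_id),              -- one class per place and slot
    UNIQUE      (class_id, timeslot_id)                  -- a class cannot be in two places at once
);

If you do store empty cells, check how your product treats NULLs in unique constraints before relying on the second constraint.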
You are asking a modelling question, but there are a couple of implementation details which you will need to think about. They won't change the logical model, but they will affect how you work with the physical tables. The first consideration is whether to spawn all the potential TIMESLOTS and, if so, how big a window you store. The second is whether to store null entries in the intersection table, CLASS_TIMESLOT_LOCATIONS.
There are no straightforward answers here: some database products will find it easier to "fill in the gaps" than others. Also, generating the absent records on the fly may be too much of a performance hit, in which case disk space is a good trade-off.
As for storing history, this is presumably for storing changes to the schedule. Use separate tables for this, populated by triggers (you could use stored procedures, but triggers are the industry standard). Don't be tempted to store history in the main tables. It breaks the normalised model and causes all sorts of grief.
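As an illustration only, here is one PostgreSQL-flavoured way the trigger-populated history table could look; trigger syntax differs considerably between products, and the table names follow the sketch above:

CREATE TABLE class_timeslot_locations_history (
    location_id INTEGER   NOT NULL,
    timeslot_id INTEGER   NOT NULL,
    class_id    INTEGER,
    changed_at  TIMESTAMP NOT NULL DEFAULT now(),
    change_type CHAR(1)   NOT NULL               -- 'I', 'U' or 'D'
);

CREATE OR REPLACE FUNCTION log_ctl_change() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO class_timeslot_locations_history
            (location_id, timeslot_id, class_id, changed_at, change_type)
        VALUES (OLD.location_id, OLD.timeslot_id, OLD.class_id, now(), 'D');
        RETURN OLD;
    ELSE
        INSERT INTO class_timeslot_locations_history
            (location_id, timeslot_id, class_id, changed_at, change_type)
        VALUES (NEW.location_id, NEW.timeslot_id, NEW.class_id, now(), left(TG_OP, 1));
        RETURN NEW;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_ctl_history
AFTER INSERT OR UPDATE OR DELETE ON class_timeslot_locations
FOR EACH ROW EXECUTE FUNCTION log_ctl_change();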

From what I see in your question it looks like you have several constraints you would like to have handled on the database side.
• I have a table which holds available places, and a table with the actual classes.
The table design can be elaborated further, but this just needs a table schema to hold the information you need.
• Each place has its own unique schedule
How about creating a trigger on inserts to make sure that the class that is being inserted into the schedule does not conflict with any other schedule?
• Each class can be scheduled in any place, and any time, with few exceptions:
• One class can take only one time-slot at a time (Example: If class A is scheduled in place P1 at 12:00 for a 1-hour duration, the next occurrence of class A can only be placed before 12:00 or after 13:00, in any place which has a free time-slot; it's forbidden to schedule class A at one time in two places)
I would handle this constraint in a trigger also
• In one place, only one class can occupy a given time-slot
Have this constraint handled in a trigger as well (a sketch covering both conflict checks follows this list).
• Model should support versioning/history of scheduled classes
Have a separate table which mirrors the actual schedule table. As new records get inserted into the main table, a trigger can record the inserts/updates/deletes, along with their timestamps, into the history table.
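To make the trigger idea concrete, here is a hedged PostgreSQL-flavoured sketch covering the two conflict checks. It assumes a hypothetical schedule table with explicit start/end times rather than pre-generated time-slots; adapt the names and syntax to your actual schema and product:

CREATE TABLE schedule (
    schedule_id SERIAL PRIMARY KEY,
    class_id    INTEGER   NOT NULL,
    place_id    INTEGER   NOT NULL,
    starts_at   TIMESTAMP NOT NULL,
    ends_at     TIMESTAMP NOT NULL
);

CREATE OR REPLACE FUNCTION check_schedule_conflicts() RETURNS trigger AS $$
BEGIN
    -- the same class cannot be scheduled anywhere else during an overlapping interval
    IF EXISTS (SELECT 1 FROM schedule s
               WHERE s.class_id = NEW.class_id
                 AND s.schedule_id <> NEW.schedule_id
                 AND s.starts_at < NEW.ends_at AND NEW.starts_at < s.ends_at) THEN
        RAISE EXCEPTION 'class % is already scheduled in an overlapping interval', NEW.class_id;
    END IF;
    -- the place cannot host two classes during an overlapping interval
    IF EXISTS (SELECT 1 FROM schedule s
               WHERE s.place_id = NEW.place_id
                 AND s.schedule_id <> NEW.schedule_id
                 AND s.starts_at < NEW.ends_at AND NEW.starts_at < s.ends_at) THEN
        RAISE EXCEPTION 'place % is already booked in an overlapping interval', NEW.place_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_schedule_conflicts
BEFORE INSERT OR UPDATE ON schedule
FOR EACH ROW EXECUTE FUNCTION check_schedule_conflicts();

The same AFTER INSERT/UPDATE/DELETE trigger style can populate the mirrored history table mentioned in the last point.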
Hope this helps you with some ideas.
-Vijay

Related

Database schema for rewarding users for their activities

I would like to provide users with points when they do a certain thing. For example:
adding article
adding question
answering question
liking article
etc.
Some of them can have conditions, for example points are only awarded for the first 3 articles a day, but I think I will handle that directly in my code base.
The problem is: what would be a good database design to handle this? I'm thinking of 3 tables.
user_activities - in this table I will store the event types (I use Laravel, so it would probably be the event class name) and the points for each specific event.
activity_user - a pivot table between user_activities and users.
and of course the users table
It is very simple, so I am worried that there are some conditions I haven't thought of which will come back and bite me in the future.
I think you'll need a fourth table, "activities", that is simply a list of the kinds of activities to track. This will have an ID column, and then in your user_activities table you include an activity_id to link to it. You'll no doubt have unique information for each kind; for example, an activities table may have columns like
ID : unique ID per laravel
ACTIVITY_CODE : short code to use as part of application/business logic
ACTIVITY_NAME : longer name that is for display name like "answered a question"
EVENT : what does the user have to do to trigger the activity award
POINT_VALUE: how many points for this event
etc
If you think that points may change in the future (eg. to encourage certain user activities) then you'll want to track the actual point awarded at the time in the user activities table, or some way to track what the points were at any one time.
While I'm suggesting a fourth table, what you really need is a more carefully worded list of features to be implemented before doing any design work. My example of allowing the points awarded to change over time is such a feature: you don't mention it, but you'll need to design for it if it is needed.
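To illustrate (not prescribe) the above, a rough SQL sketch; the points_awarded column is my addition to capture the point about point values changing over time:

CREATE TABLE activities (
    id            INTEGER PRIMARY KEY,
    activity_code VARCHAR(30)  NOT NULL,   -- short code for application/business logic
    activity_name VARCHAR(100) NOT NULL,   -- display name, e.g. "answered a question"
    event         VARCHAR(150) NOT NULL,   -- e.g. the Laravel event class name
    point_value   INTEGER      NOT NULL    -- current points for this event
);

CREATE TABLE user_activities (
    id             INTEGER PRIMARY KEY,
    user_id        INTEGER   NOT NULL REFERENCES users (id),
    activity_id    INTEGER   NOT NULL REFERENCES activities (id),
    points_awarded INTEGER   NOT NULL,     -- snapshot of point_value at award time
    created_at     TIMESTAMP NOT NULL
);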
Well, I have found this https://laracasts.com/lessons/build-an-activity-feed-in-laravel to be a very good solution. Hope it helps someone :)

SQLite database design for music chart tracker

I've been putting together a little SQLite database to track the top 100 songs from the iTunes RSS feed. I've built the script in Bash to do all the hard work and it's finally working, but I'm not sure my database structure is correct, so I'm looking for some feedback on the best way to go. I am only learning SQL as I go at the moment, so I don't want to dig myself into a hole when it comes to building the queries to retrieve data in time!
I have 3 tables like so;
artists_table
artist_id - PK
artist_name
songs_table
song_id - PK
artist_id - FK (from the artists table)
charts_table
chart_id - PK
song_id - FK (from the songs table)
position - (chart position 1-100)
date - (date of chart position xxxx-xx-xx)
The artists and songs tables seem good to me; I've got the foreign key constraints working, etc. But I'm not sure about the charts table - is there anything obviously wrong with this structure?
I want to track songs/artists/positions over time so I can generate some stats...etc
Thanks,
Initial Response
I ask you about the data, in order to answer your Question, but you keep telling me about the process. No doubt, that is very important to you. And now you wish to ensure that the Record Filing System is correct.
Personally, I never write a line of code until I have the database designed. Partly because I hate to rewrite code (and I love to code). You have the sequence reversed, an unfortunate trend these days. Which means, whatever I give you, you will have to rewrite large chunks of your code.
(b.1) How exactly does it check if the artist[song] already exists ?
(b.2) How do you know that there is NOT more than one occ of a specific artist/song on file ?
Right now, given the details in your Question, let's say that you have incoming, that Pussycat Dolls place 66 on the MTV chart today:
INSERT artist VALUES ( "Pussycat Dolls" ) -- succeeds, intended
INSERT artist VALUES ( "Pussycat Dolls" ) -- succeeds, unintended
INSERT artist VALUES ( "Pussycat Dolls" ) -- succeeds, unintended
Exactly which Pussycat Dolls record placed 66th today ?
When your RFS grows, and you have more fields in artist, eg. birth_date, which of the three records would you like to update ?
Ditto for Song.
How is Chart identified, is it something like US Top 40 ?
(b.1) How exactly does it check if the artist[song] already exists ?
When you execute code, it runs inside the sqLite program. What is the exact SQL string that you pass it ? Let's say you do this:
SELECT $artist_id = artist_id
FROM artist
WHERE artist_name = $artist_name
IF $artist_id = NULL
INSERT artist VALUES ( $artist_name )
Then you are going to have a few surprises when the system goes "live". Hopefully this interaction will eliminate them. Right now you have a few hundred artists.
when you have a few thousand artists, the system will slow down to a snail's pace.
when things go awry, you will have duplicate artists, songs, charts.
Record Filing System
Right now, you have a pre-1970's ISAM Record Filing System, with no Relational integrity, power, or speed.
If you would like to understand more about the dangers of an RFS, in today's Relational context, please read this Answer.
Relational Database
As I understand it, you want the integrity, power, and speed of a Relational Database. Here is what you are heading towards. Obviously, it is incomplete and unconfirmed; there are many details missing, and many questions remain open. But we have to model the data, only as data (as opposed to what you are going to do with it, the process), and nothing but the data.
This approach will ensure many things:
as the data grows and is added to (in terms of structure, not population), the existing data and code will not change
you will have data and referential integrity
you can obtain each of your stats via a single SELECT command.
you can execute any SELECT against the data, even SELECTs that you are not capable of dreaming about, meaning unlimited stats. As long as the data is stored in Relational form.
A database is a collection of facts about the real world, limited to the subject area of concern. Thus far we don't have facts, we have a recording of an incoming RSS stream. And the recording has no integrity, there is nothing that your code can rely on. This is heading in the direction of facts:
First Draft Music Chart TRD (Obsolete due to progression, see below.)
Response to Comments 1
Currently, I am only tracking one chart, but I see in your model that it also has the ability to track several, that is nice!
Not really. It is a side-effect of Doing Things Properly. The issue here is one of Identification. A Chart Position is not identified by RSS Feed ID, or chart_table.id, plus a PositionNo plus a DateTime. No. A Chart Position is identified as US Top 100/27 Apr 15/1… The side effect is that ChartName is part of the Identifier, and that allows multiple Charts, with no additional coding.
In these dark days of IT, people often write systems for one Country, and implement a StateCode all over the place. And then experience massive problems when they open up to an international customer base. The point is, there is no such thing as a State that does not have a Country, a State exists only in the context of a Country. So the Identifier for State must include a Country Identifier, it is (CountryCode, StateCode). Both Australia and Canada have NT for a StateCode.
If I can explain how I store the data from the rss feed, it might clear things up somewhat.
No, please. This is about the data, and only the data. Please review my previous comments on that issue, and the benefits.
I am away from my main computer at the moment, but I will respond within the next couple of hours if thats ok.
No worries. I will get to it tomorrow.
Your model does make sense to me though,
That is because you know the data values intimately, but you do not understand the data, and when someone lays it out for you, correctly, you experience pleasurable little twitches of recognition.
I don't mind having to recode everything, it's a learning curve!
That's because you put the cart before the horse, and coded against data laid out in a spreadsheet, instead of designing the database first and coding against that second.
If you are not used to the Notation, please be advised that every little tick, notch, and mark, the solid vs dashed lines, the square vs round corners, means something very specific. Refer to the IDEF1X Notation.
Response to Comments 2
Just one more quick question.
Fire away, until you are completely satisfied.
In the diagram, would there be any disadvantage to putting the artist table above the song table and making the song table a child of the parent artist instead? As artists can have many songs, but each song can only have 1 artist. Is there any need for the additional table to contain just the artistPK and songPK. Could I not store the artistPK into the songs table as a FK, as a song can only exist if there is an associated artist?
Notice your attachment to the way you had it organised. I repeat:
A database is a collection of facts about the real world, limited to the subject area of concern.
Facts are logical, not physical. When those facts are organised correctly (Normalised, designed):
You can execute any SELECT against the data, even SELECTs that you are not capable of dreaming about, meaning unlimited stats. As long as the data is stored in Relational form.
When they aren't, you can't. All SQL (not only the reports that are envisioned) against the data is limited to the limitations in the model, which boils down to one thing: discrete facts being recorded in logical form, or not.
With the TRD we have progressed to recording facts about the real world, limited only by the scope of the app, and not by the non-discretion of facts.
Could I not store the artistPK into the songs table as a FK, as a song can only exist if there is an associated artist?
In your working context, at this moment, that is true. But that is not true in the real world that you are recording. If the app or your scope changes, you will have to change great slabs of the db and the app. If you record the facts correctly, as they exist, not as limited to your current app scope, no such change will be necessary when the app or your scope changes (sure, you will have to add objects and code, but not modify existing objects and code).
In the real world, Song and Artist are discrete facts, each can exist independent of the other. Your proposition is false.
Ave Maria existed for 16 centuries before Karen Carpenter recorded it.
And you already understand and accept that an Artist exists without a Song.
Is there any need for the additional table to contain just the artistPK and songPK.
It isn't an "additional table to contain just the artistPK and songPK", it is recording a discrete fact (separate to the independent existence of Artist and Song), that a specific Artist recorded a specific Song. That is the fact that you will count on in theChartDatePosition`
Your proposition places Song as dependent on, subordinate to, Artist, and that is simply not true. Any and all stats (dreamed of or not) that are based on Song will have to navigate Artist::ArtistSong, then sort or ORDER BY, etc.
artists can have many songs, but each song can only have 1 artist.
That is half-true (true in your current working context, but not true in the real world). The truth is:
Each Artist is independent
Each Song is independent
Each Artist recorded 1-to-n Songs (via ArtistSong)
Each Song was recorded by 1-to-n Artists (via ArtistSong)
For understanding, changing your words above to form correct propositions (as opposed to stating technically correct Predicates):
Artists can have many RecordedSongs
Each RecordedSong can only have 1 Artist
Each RecordedSong can only have 1 Song
So yes, there are disadvantages, significant ones.
Which is why I state, you must divorce yourself from the app, the usage, and model the data, as data, and nothing but data.
Solution 2
I have updated the TRD.
Second Draft Music Chart TRD
Courier means example data; blue indicates a Key (Primary is always first); pipe indicates column separation; slash indicates Alternate Key (only the columns that are not in the PK are shown); green indicates non-key.
I am now giving you the Predicates. These are very important, for many reasons. The main reason here is that they disambiguate the issues we are discussing.
If you would like more information on Predicates, visit this Answer, scroll down (way down!) to Predicate, and read that section. Also evaluate that TRD and those Predicates against it.
The index on ChartDateSong needs explanation. At first I assumed:
PK ( Chart, Date, Rank )
But then for Integrity purposes, as well as search, we need:
AK ( Chart, Date, ArtistId, SongId )
Which is a much better PK. So I switched them. We do need both. (I don't know about NONsqLite; if it has clustered indices, the AK, not the PK, should be clustered.)
PK ( Chart, Date, ArtistId, SongId )
AK ( Chart, Date, Rank )
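A rough SQLite-flavoured sketch of that key arrangement, using the column names from the discussion (adjust it to the actual TRD):

CREATE TABLE ChartDateSong (
    Chart    TEXT    NOT NULL,     -- e.g. "US Top 100"
    Date     TEXT    NOT NULL,
    ArtistId INTEGER NOT NULL,
    SongId   INTEGER NOT NULL,
    Rank     INTEGER NOT NULL,
    PRIMARY KEY (Chart, Date, ArtistId, SongId),    -- the PK chosen above
    UNIQUE      (Chart, Date, Rank),                -- the AK
    FOREIGN KEY (ArtistId, SongId) REFERENCES ArtistSong (ArtistId, SongId)
);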
Response to Comments 3
What about the scenario when a song enters the charts with the same song_name as a record in the song_table but is completely unrelated (not a cover, completely original, but just happens to share the same name)
In civilised countries that is called fraud, obtaining benefit by deception, but I will try to think in devilish terms for a moment and answer the question.
Well, if it happens, then you have to cater for it. How does the feed inform you of such an event ? I trust it doesn't. So then your Song Identifier is still the Name.
and instead of a unique song record being created, the existing song_id is added to the artistssongs_table with the artist id, wouldn't this be a problem?
We don't know any better, so it is not a problem. No one watching that feed knows any better either. If and when you receive data informing you of that issue, through whatever channel, and you can specify it, you can change it.
Normally we have an app that allows us to navigate the hierarchies, and to change them, eg. a ReferenceMaintenance app, with an Explorer-type window on the left, and combo dialogues (list of occs on top, plus detail of one occ on the bottom) on the right.
Until then, it is not a form of corruption, because the constraint that prevents such corruption is undefined. You can't be held guilty of breaking a law that hasn't been written yet. Except in rogue states.
Although a song can have the same name, it doesn't necessarily mean it's the same record.
Yes.
Wouldn't it be better to differentiate a song by the artist?
They are differentiated by Artist.
You do appreciate that the fact of a Song, and the fact of an Artist playing a Song, are two discrete facts, yes ? Please question any Predicates that do not make perfect sense; those are the propositions that the database supports.
Ave Maria exists as an independent fact, in Song
Karen Carpenter, Celine Dion, and Yours Truly exist as three independent facts, in Artist
Karen Carpenter-Ave Maria, Celine Dion-Ave Maria, and Yours Truly-Ave Maria exist as three discrete facts in ArtistSong.
That is seven separate facts, about one Song, about three Artists.
Response to Comments 4
I do understand it now. The artistsong_table is where the 2 items "meet" and a relationship actually exists and is unique.
Yes. I just wouldn't state it in that way. The term Fact has a technically precise meaning, over and above the English meaning.
A database is a collection of facts about the real world, limited to the subject area of concern.
Perhaps read my Response 3 again, with that understanding of Fact in mind.
Each ArtistSong row is a Fact. That depends on the Fact of an Artist, and the Fact of a Song. It establishes the Fact that that Artist recorded that Song. And that ArtistSong Fact is one that other Facts, lower in the hierarchy, will depend upon.
"Relationship ... actually". I think you mean "instance". The relationship exists between the tables, because I drew a line, and you will implement a Foreign Key Constraint. Perhaps think of Fact as an "instance".
Just to make sure I understand the idea correctly, if I were to add "Genre" into the mix, would I be correct in thinking that a new 'independent' table genre_table would be created and the artistsong_table would inherit its PK as an FK?
Yes. It is a classic Reference or Lookup table, the Relationship will be Non-identifying. I don't know enough about the music brothelry to make any declarations, but as I understand it, Genre applies to a Song; an Artist; and an ArtistSong (they can play a Song in a Genre that is different to the Song.Genre). You have given me one, so I will model that.
The consequence of that is, when you are inserting rows in ArtistSong, you will have to have the Genre. If that is in the feed, well and good, if not, you have a processing issue to deal with. The simple method to overcome that is, implement a Genre "", which indicates to you that you need to determine it from other channels.
It is easy enough to add a classifier (eg. Genre) later, because it is a Non-identifying Relationship. But Identifying items are difficult to add later, because they force the Keys to change. Refer para 3 under my Response 1.
You are probably ready for a Data Model:
Third Draft Music Chart Data Model
It all depends on the relationships (one-to-one, one-to-many, many-to-many) your data is going to have.
The way you implemented your charts table indicates that:
Each chart has only/belongs to one song
A song can have many charts
It is a one-to-many relationship. And if that was what you intended then everything seems fine.
However:
If your charts can have many songs and a song will have only one chart (also a one-to-many relationship, but reversed), the song_id column needs to be taken out of the charts table and the songs table needs a chart_id column instead.
If your charts can have many songs and your songs can have many charts as well (many-to-many relationship), then you need a "joint table" which could be something like this:
TABLE: charts_songs, COLUMNS: id, chart_id, song_id, position
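For example, a minimal sketch of that joint table, using the table names from the question:

CREATE TABLE charts_songs (
    id       INTEGER PRIMARY KEY,
    chart_id INTEGER NOT NULL REFERENCES charts_table (chart_id),
    song_id  INTEGER NOT NULL REFERENCES songs_table (song_id),
    position INTEGER NOT NULL    -- chart position 1-100
);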

Advantage of splitting a table

My question may seem rather general, but the only answers I have found so far are on SO itself. My question is: I have a customer information table with 47 fields in it. Some of the fields are optional. I would like to split that table into two: customer_info and customer_additional_info. One of its columns stores a file in byte format. Is there any advantage to splitting the table? I have seen that the JOIN will slow down query execution. Can I have more PROs and CONs of splitting a table into two?
I don't see much advantage in splitting the table unless some of the columns are very infrequently accessed and fairly large. There's a theoretical advantage to keeping rows small as you're going to get more of them in a cached block, and you improve the efficiency of a full table scan and of the buffer cache. Based on that I'd be wary of storing this file column in the customer table if it was more than a very small size.
Other than that, I'd keep it in a single table.
I can think of only 2 arguments in favor of splitting the table:
If all the columns in customer_additional_info are related, you could potentially get the benefit of additional declarative data integrity that you couldn't get with a single table. For instance, let's say your additional table was CustomerAddress. Your business logic may dictate that a customer address is optional, but once you have a customer Zip code, the AddressL1, City and State become required fields. You could set these columns to NOT NULL if they exist in a CustomerAddress table. You couldn't do that if they existed directly in the customer table.
If you were doing some object-relational mapping and you had a customer class with many subclasses, and you didn't want to use Single Table Inheritance. Sometimes STI creates problems when you have similar properties of various subclasses that require different storage layouts. Because all subclasses have to use the same table, you might have name clashes. The alternative is Class Table Inheritance, where you have a table for the superclass and an additional table for each subclass. This is a similar scenario to the one you described in your question.
As for CONs, the join makes things harder and slower. You also run the risk of accidentally creating a 1-to-many relationship, i.e. you create 2 addresses in the CustomerAddress table and now you don't know which one is valid.
EDIT:
Let me explain the declarative ref integrity point further.
If your business rules are such that a customer address is optional, and you embed addressL1, addressL2, City, State, and Zip in your customer table, you would need to make each of these fields Nullable. That would allow someone to insert a customer with a City but no state. You could write a table level check constraint to cover this situation. But that isn't as easy as simply setting the AddressL1, City, State and Zip columns in the CustomerAddress table not nullable. To be clear, I am NOT advocating using the multi-table approach. However you asked for Pros and Cons, and I'm just pointing out this aspect falls on the pro side of the ledger.
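A sketch of that point, using the names from this answer (column types are placeholders):

CREATE TABLE Customer (
    customer_id INTEGER PRIMARY KEY,
    name        VARCHAR(100) NOT NULL
    -- no address columns here, so no "City without State" rows are possible
);

CREATE TABLE CustomerAddress (
    customer_id INTEGER PRIMARY KEY REFERENCES Customer (customer_id),  -- optional 1:1
    AddressL1   VARCHAR(100) NOT NULL,
    AddressL2   VARCHAR(100),
    City        VARCHAR(60)  NOT NULL,
    State       VARCHAR(30)  NOT NULL,
    Zip         VARCHAR(10)  NOT NULL
);

Making customer_id the primary key of CustomerAddress also rules out the accidental one-to-many problem mentioned under the CONs above.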
I second what David Aldridge said, I'd just like to add a point about the file column (presumably BLOB)...
BLOBs are stored up to approx. 4000 bytes in-line [1]. If a BLOB is used rarely, you can specify DISABLE STORAGE IN ROW to store it out-of-line, removing the "cache pollution" without the need to split the table.
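For reference, a hedged Oracle sketch of that storage clause; the table and column names are made up for the example:

CREATE TABLE customer_info (
    customer_id NUMBER PRIMARY KEY,
    name        VARCHAR2(100 CHAR),
    file_data   BLOB
)
LOB (file_data) STORE AS (DISABLE STORAGE IN ROW);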
But whatever you do, measure the effects on realistic amounts of data before you make the final decision.
[1] That is, in the row itself.

How to Program a Spring with Hibernate web app?

I am working on a web application where I have 90 fields for a Person class, which are divided into family details, education details, personal details, etc.
I want a separate form for each; for example, the family details form has father name, mother name, siblings, etc. as fields, and so on for the others.
I want a separate table for each set of details, with a common reference ID for all tables.
My question is: how many bean classes should I write? Can I map from multiple forms to multiple tables with one bean class?
class PersonRegister {
    private Long iD;
    private String emailID;
    private String password;
    .
    .
} // for register.......
Once logged in, I need to maintain his/her details.
Either
class person{
}
or
class PersonFamilyDetails{}
class PersonEducationDetails{}
etc
Which way do software development standards specify to create these?
Don't go overboard. I believe in your case a single but very wide (i.e. with a lot of columns) table would be most efficient and simplest from a maintenance perspective. The only thing to keep in mind is to query only the necessary subset of columns/fields when loading lots of rows. Otherwise you'll be fetching kilobytes of unnecessary data not needed for the particular use case.
Unfortunately Hibernate doesn't have direct support for that; when designing a mapping for Person, you'll end up with a huge class and, even worse, Hibernate will always fetch all simple columns (and many-to-one relationships). You can however overcome this problem either by creating several views in the database containing only a subset of columns, or by having several Java classes mapping to the same table but only to a subset of columns.
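A hypothetical illustration of the "views over one wide table" idea; every name here is invented for the example:

CREATE TABLE person (
    person_id   BIGINT PRIMARY KEY,
    email_id    VARCHAR(100),
    father_name VARCHAR(100),
    mother_name VARCHAR(100),
    school_name VARCHAR(100),
    degree      VARCHAR(100)
    -- ... the remaining columns
);

CREATE VIEW person_family_details AS
    SELECT person_id, father_name, mother_name FROM person;

CREATE VIEW person_education_details AS
    SELECT person_id, school_name, degree FROM person;

Each Hibernate class can then be mapped to the table or to one of the views, so a form only touches its own subset of columns.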
Splitting your database model into several tables is beneficial only if your schema is not normalized. E.g. when storing a sibling's first name and last name you may wish to have a separate Sibling table, and the next time some other family member is entered, you can reuse the same row. This makes the database smaller and might be faster when searching by sibling.
Your question comes down to database normalization, as described in-depth by Boyce and Codd, see
http://en.wikipedia.org/wiki/Database_normalization.
The main advantage of database normalization is avoiding modification anomalies. In your case, if you had one table with, for each person, e.g. father-firstname and father-lastname, and you have multiple people with the same father, this data will be duplicated, and when you discover a typo in father-lastname, you could modify it for one sibling and not for the next.
In this simplified case, database design best practices would call for a first normalization into a separate table with father-id, father-firstname and father-lastname, and your person table having a one-to-many relation to it.
For one-to-one relations, e.g. person->personeducationdetails, there's some debate. In the original definition of 1st Normal Form, every optional field would be normalized by putting it in its own table. This was later weakened by introducing 'null' in relational databases, see http://en.wikipedia.org/wiki/First_normal_form#cite_note-CoddRule-12. But still, if a whole set of columns could be null at the same time, you put them in a separate table with a one-to-one relation.
E.g. if you don't know a person's educationdetails, all of its related fields are null, so you better split them off in a separate table, and simply not have a personeducationdetails record for that person.
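As a sketch of that normalization (names follow the answer's wording; types are placeholders):

CREATE TABLE father (
    father_id        BIGINT PRIMARY KEY,
    father_firstname VARCHAR(100) NOT NULL,
    father_lastname  VARCHAR(100) NOT NULL
);

CREATE TABLE person (
    person_id BIGINT PRIMARY KEY,
    father_id BIGINT REFERENCES father (father_id)   -- many people may share one father
    -- ... other person columns
);

-- optional one-to-one details: a row exists only when the details are known
CREATE TABLE person_education_details (
    person_id   BIGINT PRIMARY KEY REFERENCES person (person_id),
    school_name VARCHAR(100),
    degree      VARCHAR(100)
);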

Implementing User Defined Fields

I am creating a laboratory database which analyzes a variety of samples from a variety of locations. Some locations want their own reference number (or other attributes) kept with the sample.
How should I represent the columns which only apply to a subset of my samples?
Option 1:
Create a separate table for each unique set of attributes?
SAMPLE_BOILER: sample_id (FK), tank_number, boiler_temp, lot_number
SAMPLE_ACID: sample_id (FK), vial_number
This option seems too tedious, especially as the system grows.
Option 1a: Class table inheritance (link): Tree with common fields in internal node/table
Option 1b: Concrete table inheritance (link): Tree with common fields in leaf node/table
Option 2: Put every attribute which applies to any sample into the SAMPLE table.
Most columns of each entry would most likely be NULL, however all of the fields are stored together.
Option 3: Create _VALUE_ tables for each Oracle data type used.
This option is far more complex. Getting all of the attributes for a sample requires accessing all of the tables below. However, the system can expand dynamically without separate tables for each new sample type.
SAMPLE:
sample_id*
sample_template_id (FK)
SAMPLE_TEMPLATE:
sample_template_id*
version *
status
date_created
name
SAMPLE_ATTR_OF
sample_template_id* (FK)
sample_attribute_id* (FK)
SAMPLE_ATTRIBUTE:
sample_attribute_id*
name
description
SAMPLE_NUMBER:
sample_id* (FK)
sample_attribute_id (FK)
value
SAMPLE_DATE:
sample_id* (FK)
sample_attribute_id (FK)
value
Option 4: (Add your own option)
To help with Googling, your third option looks a little like the Entity-Attribute-Value pattern, which has been discussed on StackOverflow before although often critically.
As others have suggested, if at all possible (eg: once the system is up and running, few new attributes will appear), you should use your relational database in a conventional manner with tables as types and columns as attributes - your option 1. The initial setup pain will be worth it later as your database gets to work the way it was designed to.
Another thing to consider: are you tied to Oracle? If not, there are non-relational databases out there like CouchDB that aren't constrained by up-front schemas in the same way as relational databases are.
Edit: you've asked about handling new attributes under option 1 (now 1a and 1b in the question)...
If option 1 is a suitable solution, there are sufficiently few new attributes that the overhead of altering the database schema to accommodate them is acceptable, so...
you'll be writing database scripts to alter tables and add columns, so the provision of a default value can be handled easily in these scripts.
Of the two option 1 variants (1a, 1b), my personal preference would be concrete table inheritance (1b):
It's the simplest thing that works;
It requires fewer joins for any given query;
Updates are simpler as you only write to one table (no FK relationship to maintain).
Either of these first options is a better solution than the others, though, and there's nothing wrong with the class table inheritance method if that's what you'd prefer.
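To make 1a versus 1b concrete, a hedged Oracle-flavoured sketch using the boiler sample from the question; the common columns (name, date_collected) are assumptions for illustration:

-- 1a: class table inheritance - common fields live in the parent table
CREATE TABLE sample (
    sample_id      INTEGER PRIMARY KEY,
    name           VARCHAR2(120 CHAR),
    date_collected DATE
);

CREATE TABLE sample_boiler (
    sample_id   INTEGER PRIMARY KEY REFERENCES sample (sample_id),
    tank_number INTEGER,
    boiler_temp NUMBER,
    lot_number  VARCHAR2(30 CHAR)
);

-- 1b: concrete table inheritance - each leaf table repeats the common fields
CREATE TABLE sample_boiler_1b (
    sample_id      INTEGER PRIMARY KEY,
    name           VARCHAR2(120 CHAR),
    date_collected DATE,
    tank_number    INTEGER,
    boiler_temp    NUMBER,
    lot_number     VARCHAR2(30 CHAR)
);

With 1b, queries about boiler samples touch a single table, which is what the "fewer joins" and "only write to one table" points above refer to.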
It all comes down to how often genuinely new attributes will appear.
If the answer is "rarely" then the occasional schema update can cope.
If the answer is "a lot" then the relational DB model (which has fixed schemas baked-in) isn't the best tool for the job, so solutions that incorporate it (entity-attribute-value, XML columns and so on) will always seem a little laboured.
Good luck, and let us know how you solve this problem - it's a common issue that people run into.
Option 1, except that it's not a separate table for each set of attributes: create a separate table for each sample source.
i.e. from your examples: samples from a boiler will have tank number, boiler temp, lot number; acid samples have vial number.
You say this is tedious; but I suggest that the more work you put into gathering and encoding the meaning of the data now will pay off huge dividends later - you'll save in the long term because your reports will be easier to write, understand and maintain. Those guys from the boiler room will ask "we need to know the total of X for tank grouped by this set of boiler temperature ranges" and you'll say "no prob, give me half an hour" because you've done the hard yards already.
Option 2 would be my fall-back option if Option 1 turns out to be overkill. You'll still want to analyse what fields are needed, what their datatypes and constraints are.
Option 4 is to use a combination of options 1 and 2. You may find some attributes are shared among a lot of sample types, and it might make sense for these attributes to live in the main sample table; whereas other attributes will be very specific to certain sample types.
You should really go with Option 1. Although it is more tedious to create, Options 2 and 3 will bite you back when trying to query your data. The queries will become more complex.
In fact, the most important part of storing the data, is querying it. You haven't mentioned how you are planning to use the data, and this is a big factor in the database design.
As far as I can see, the first option will be most easy to query. If you plan on using reporting tools or an ORM, they will prefer it as well, so you are keeping your options open.
In fact, if you find building the tables tedious, try using an ORM from the start. Good ORMs will help you with creating the tables from the get-go.
I would base your decision on how you usually see the data. For instance, if you get 5-6 new attributes per day, you're never going to be able to keep up adding new columns. In this case you should create columns for 'standard' attributes and add a key/value layout similar to your 'Option 3'.
If you don't expect to see this, I'd go with Option 1 for now, and modify your design to 'Option 3' only if you get to the point that it is turning into too much work. It could end up that you have 25 attributes added in the first few weeks and then nothing for several months. In which case you'll be glad you didn't do the extra work.
As for Option 2, I generally advise against this as Null in a relational database means the value is 'Unknown', not that it 'doesn't apply' to a specific record. Though I have disagreed on this in the past with people I generally respect, so I wouldn't start any wars over it.
Whatever you do, option 3 is horrible: every query will have to join the data just to reconstruct a SAMPLE.
It sounds like you have some generic SAMPLE fields which need to be joined with more specific data for the type of sample. Have you considered some user-defined fields?
Example:
SAMPLE_BASE: sample_id(PK), version, status, date_create, name, userdata1, userdata2, userdata3
SAMPLE_BOILER: sample_id (FK), tank_number, boiler_temp, lot_number
This might be a dumb question but what do you need to do with the attribute values? If you only need to display the data then just store them in one field, perhaps in XML or some serialised format.
You could always use a template table to define a sample 'type' and the available fields you display for the purposes of a data entry form.
If you need to filter on them, the only efficient model is option 2. As everyone else is saying the entity-attribute-value style of option 3 is somewhat mental and no real fun to work with. I've tried it myself in the past and once implemented I wished I hadn't bothered.
Try to design your database around how your users need to interact with it (and thus how you need to query it), rather than just modelling the data.
If the set of sample attributes was relatively static then the pragmatic solution that would make your life easier in the long run would be option #2 - these are all attributes of a SAMPLE so they should all be in the same table.
Ok - you could put together a nice object hierarchy of base attributes with various extensions but it would be more trouble than it's worth. Keep it simple. You could always put together a few views of subsets of sample attributes.
I would only go for a variant of your option #3 if the list of sample attributes was very dynamic and you needed your users to be able to create their own fields.
In terms of implementing dynamic user-defined fields then you might first like to read through Tom Kyte's comments to this question. Now, Tom can be pretty insistent in his views but I take from his comments that you have to be very sure that you really need the flexibility for your users to add fields on the fly before you go about doing it. If you really need to do it, then don't create a table for each data type - that's going too far - just store everything in a varchar2 in a standard way and flag each attribute with an appropriate data type.
create table sample (
    sample_id  integer,
    name       varchar2(120 char),
    constraint pk_sample primary key (sample_id)
);
create table attribute (
    attribute_id  integer,
    name          varchar2(120 char) not null,
    data_type     varchar2(30 char) not null,
    constraint pk_attribute primary key (attribute_id)
);
create table sample_attribute (
    sample_id     integer,
    attribute_id  integer,
    value         varchar2(4000 char),
    constraint pk_sample_attribute primary key (sample_id, attribute_id)
);
Now... that just looks evil doesn't it? Do you really want to go there?
I work on both a commercial and a home-made system where users have the ability to create their own fields/controls dynamically. This is a simplified version of how it works.
Tables:
Pages
Controls
Values
A page is just a container for one or more controls. It can be given a name.
Controls are linked to pages and represents user input controls.
A control contains what datatype it is (int, string etc) and how it should be represented to the user (textbox, dropdown, checkboxes etc).
Values are the actual data that the users have typed into the controls, a value contains one column for every datatype that it can represent (int, string, etc) and depending on the control type, the relevant column is set with the user input.
There is an additional column in Values which specifies which group the value belong to.
Each time a user fills in a form of controls and clicks save, the values typed into the controls are saved into the same group so that we know that they belong together (incremental counter).
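A simplified sketch of that layout; the column names are my guesses at what is described above:

CREATE TABLE pages (
    page_id INTEGER PRIMARY KEY,
    name    VARCHAR(100)
);

CREATE TABLE controls (
    control_id   INTEGER PRIMARY KEY,
    page_id      INTEGER NOT NULL REFERENCES pages (page_id),
    data_type    VARCHAR(20) NOT NULL,    -- 'int', 'string', ...
    control_type VARCHAR(20) NOT NULL,    -- 'textbox', 'dropdown', 'checkbox', ...
    label        VARCHAR(100)
);

CREATE TABLE control_values (
    value_id     INTEGER PRIMARY KEY,
    control_id   INTEGER NOT NULL REFERENCES controls (control_id),
    group_id     INTEGER NOT NULL,        -- ties together one saved form (incremental counter)
    int_value    INTEGER,                 -- one column per supported datatype;
    string_value VARCHAR(4000),           -- only the one matching data_type is filled in
    date_value   DATE
);

Saving one filled-in form writes one row per control, all sharing the same group_id.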
CodeSpeaker,
I like your answer; it's pointing me in the right direction for a similar problem.
But how would you handle drop-down list values?
I am thinking of a lookup table of values, so that many lookups link to one UserDefinedField.
But I also have another problem to add to the mix. Each field must have multiple linked languages, so each value must link to the equivalent value for multiple languages.
Maybe I'm thinking too hard about this as I've got about 6 tables so far.
