My predecessor built our database with some "overloaded" child tables that are shared by multiple parents, using a "tabletype" column that specifies which parent table is the parent of a particular child record. Also, the parents and children are often joined on multiple columns which are not keys, not a compound key, and not unique in any way. Multiple parent records can be related to multiple child records this way. Usually, SELECT DISTINCT or GROUP BY is used to eliminate duplicates in the results, in reports or forms. Apparently this is the way our data really works, and users are fine with it. I am not mandated to change this structure.
In one example, the child table has a "tabletype" column with one of three possible values (and currently no constraint to enforce them). It has a foreign key column to relate to the ID of one parent table (call it ParentA). This column is blank for records related to the other two parents. It has an identifying number (not unique) column (we will call it "IdentiNum") and a "BatchID" column, and joins with either of the other two parents using those two columns.
As you'd expect, Referential Integrity is not enforced, and probably can't be enforced with simple RI triggers and constraints. I'm an Access programmer, new to Oracle and PL/SQL. I can write code to enforce RI in the Access interface using VBA. That'll do no good if we replace this interface with one using APEX or another tool, as we plan to do. I want RI in the database where it belongs.
Here's what I think I need for this case (a rough Oracle sketch follows the list):
a constraint on tabletype allowing one of three values that specify which table contains a record's parent.
a constraint on the child's ForeignKey column requiring its value to exist in the ID column of ParentA, unless it is null, which it might be.
a delete trigger on ParentA which cascade-deletes related records in the child table, but still allows the child's ForeignKey to be nullable.
a constraint on the child's IdentiNum and BatchID columns, requiring the values to exist (together) in either ParentB or ParentC, depending on the value of TableType.
delete triggers on ParentB and ParentC which cascade-delete related records in the child table, the relationship determined by IdentiNum, BatchID, and TableType. However, when a ParentB or ParentC record is deleted, the procedure would have to check that there are no other parent records with the same IdentiNum and BatchID values before deleting all related child records.
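Here is a rough sketch of what those five pieces might look like in Oracle. All object names, and the three TableType values 'A', 'B' and 'C', are placeholders for whatever the real schema uses. Note that a plain declarative foreign key with ON DELETE CASCADE already covers points 2 and 3, while points 4 and 5 need triggers because Oracle constraints cannot reference other tables.

-- 1. Restrict TableType to the three known values ('A', 'B', 'C' are assumed placeholders).
ALTER TABLE ChildTable
  ADD CONSTRAINT chk_child_tabletype
  CHECK (TableType IN ('A', 'B', 'C'));

-- 2 & 3. A declarative foreign key already allows NULLs and can cascade deletes,
--        so no trigger is needed for the ParentA relationship.
ALTER TABLE ChildTable
  ADD CONSTRAINT fk_child_parenta
  FOREIGN KEY (ForeignKey) REFERENCES ParentA (ID)
  ON DELETE CASCADE;

-- 4. Oracle constraints cannot look at other tables, so a trigger checks that
--    (IdentiNum, BatchID) exists in the parent named by TableType.
CREATE OR REPLACE TRIGGER trg_child_parent_bc
BEFORE INSERT OR UPDATE OF IdentiNum, BatchID, TableType ON ChildTable
FOR EACH ROW
WHEN (NEW.TableType IN ('B', 'C'))
DECLARE
  l_count PLS_INTEGER;
BEGIN
  IF :NEW.TableType = 'B' THEN
    SELECT COUNT(*) INTO l_count FROM ParentB
     WHERE IdentiNum = :NEW.IdentiNum AND BatchID = :NEW.BatchID;
  ELSE
    SELECT COUNT(*) INTO l_count FROM ParentC
     WHERE IdentiNum = :NEW.IdentiNum AND BatchID = :NEW.BatchID;
  END IF;
  IF l_count = 0 THEN
    RAISE_APPLICATION_ERROR(-20001, 'No matching parent for this IdentiNum/BatchID');
  END IF;
END;
/

-- 5. Cascade delete from ParentB (ParentC is analogous). A compound trigger avoids
--    the mutating-table error a row-level trigger would hit when re-querying ParentB,
--    and only deletes children once no other ParentB row shares the same key pair.
CREATE OR REPLACE TRIGGER trg_parentb_cascade
FOR DELETE ON ParentB
COMPOUND TRIGGER
  TYPE t_key  IS RECORD (identinum ParentB.IdentiNum%TYPE,
                         batchid   ParentB.BatchID%TYPE);
  TYPE t_keys IS TABLE OF t_key INDEX BY PLS_INTEGER;
  g_keys t_keys;

  AFTER EACH ROW IS
    r t_key;
  BEGIN
    r.identinum := :OLD.IdentiNum;
    r.batchid   := :OLD.BatchID;
    g_keys(g_keys.COUNT + 1) := r;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
    l_remaining PLS_INTEGER;
  BEGIN
    FOR i IN 1 .. g_keys.COUNT LOOP
      SELECT COUNT(*) INTO l_remaining FROM ParentB
       WHERE IdentiNum = g_keys(i).identinum AND BatchID = g_keys(i).batchid;
      IF l_remaining = 0 THEN
        DELETE FROM ChildTable
         WHERE TableType = 'B'
           AND IdentiNum = g_keys(i).identinum
           AND BatchID   = g_keys(i).batchid;
      END IF;
    END LOOP;
  END AFTER STATEMENT;
END trg_parentb_cascade;
/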
I have two fact tables: FactSales & FactInvoices. Both have a foreign key relationship with DimDate.Datekey. In VS, the SSAS DSV displays these relationships (the lines are drawn between the tables).
In the DSV I decided to create a named query that limits DimDate to 2021. After doing this, I still see the relationships between the two fact tables and DimDate (which is now a named query).
At the DB level, I created a third fact table called FactExpenses. FactExpenses also has an FK relationship with DimDate.Datekey. The problem is that my DSV (in SSAS) does not recognize this relationship (i.e. it doesn't draw the line between the two tables).
Two questions: why doesn't VS display the relationship between my third fact table and the named query, while it does for the other two fact tables? I understand that the relationship isn't really with the named query, but then the relationship should disappear for all the fact tables.
When I want to limit the amount of data displayed in dimdate, should I use a named query?
The relationships in the DSV are separate from the foreign keys on the base tables, but they get added automatically based on the database schema when you add tables to the DSV. My guess is that when you added the initial dim and fact tables to the DSV in Visual Studio, it automatically added the relationships based on the foreign keys that exist on the base tables, but this may not happen automatically for named queries. You can manually add the relationship yourself for the third table to get the same result.
I think a named query is a reasonable approach for the filtering you want to do. An alternative would be to create a view in the source database if you need to do more intense or complex filtering.
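For example, a view like the following in the source database would achieve the same filtering as the named query; the DateKey format (an integer yyyymmdd surrogate key) is an assumption and may differ in your schema:

CREATE VIEW DimDate2021 AS
SELECT *
FROM   DimDate
WHERE  DateKey BETWEEN 20210101 AND 20211231;  -- assumes an integer yyyymmdd key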
Using Java and Oracle.
We need to send changes to an employee's Email and UserID to a third party.
The actual table is Employee; we also keep an intermediate table which we use to compare changes before sending them to the third party.
The following database designs come to mind for the intermediate table:
Single table only:
EmployeeID|Value|Type|UpdateDate
Value holds the userid or email, and Type is 'email' or 'userid'. UpdateDate is kept so we can work out which of email or userid changed and needs to be sent to the third party.
Multiple tables:
Employee_EmailID
EmpId|EmailID|Updatedate
Employee_UserID
EmpId|UserID|Updatedate
Java flow will be:
Pick employee from actual table.
Pick employee from above intermediate table.
Compare differences. Update difference to third party.
Update above table with updated value and last update date.
Which is considered the best approach, the single table or multiple tables, or is there a standard way to implement this? There are 10,000 employees in the system.
The intermediate table just stores delta records, i.e. the records transferred to the third party, so they can be compared against the next day.
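As a rough sketch of step 3, assuming the intermediate table mirrors the relevant Employee columns (all names here are placeholders), the delta could be found with a single set-based query rather than row-by-row comparison in Java:

SELECT e.EmpId, e.UserID, e.EmailID
FROM   Employee e
JOIN   Employee_intermediate i ON i.EmpId = e.EmpId
WHERE  e.UserID  <> i.UserID
   OR  e.EmailID <> i.EmailID;
-- NULL handling and brand-new employees (with no intermediate row yet) are omitted for brevity.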
Good database design has separate tables for different concepts. Using the same database column to hold different types of data leads to code which is harder to understand, prone to data corruption, and less performant.
You may think it's only two tables and a few tens of thousands of rows, so does it matter? But that is only your current requirement. What you choose now will set the template for what happens when (say) you need to add telephone numbers to the process.
Now in future if we get 5 more entities to update
Do you mean "entities", like say Customers rather than Employees? Or do you really mean "attributes" as in my example of Employee Telephone Number?
Generally speaking we have a separate table for distinct entities, and all the attributes of that entity are grouped at the same cardinality. To take your example, I would expect an Employee to have one UserID and one Email Address so I would design the table like this:
Employee_audit
EmpId|UserID|EmailID|Updatedate
That is, I have one record which stores the complete state of the Employee record at the Updatedate.
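A minimal DDL sketch of that table (datatypes, sizes and the choice of primary key are assumptions; use just EmpId if you only keep the latest snapshot rather than a history):

CREATE TABLE Employee_audit (
  EmpId      NUMBER        NOT NULL,
  UserID     VARCHAR2(30),
  EmailID    VARCHAR2(320),
  Updatedate DATE          NOT NULL,
  CONSTRAINT pk_employee_audit PRIMARY KEY (EmpId, Updatedate)
);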
If we add a new entity, Customers, then we have a new table. Simple. But a new attribute like Employee Phone Number offers a choice, because an employee can have more than one: work landline, mobile, fax, home, etc. So we could represent this in three ways: a child table with a type column, multiple child tables (one per type), or distinct columns on the Employee record.
For the main Employee table I would choose the separate table (or tables, depending on whether I'm shooting for 6NF). But for an audit table I would choose one record per Employee and pivot the phone numbers like this:
Employee_audit
EmpId|UserID|EmailID|Landline|Mobile|Fax|Home|Updatedate
The one thing I would never do is have a single table with type and value columns. It seems attractive because it means we could track additional entities without any further DDL. But in fact it becomes harder to re-assemble the complete state of an Employee at any given time with each attribute we add. Also it means the auditing process itself is more complicated (because it needs to determine which attributes have changed and whether it needs to audit the change) and more expensive (because changing three attributes on the same record entails inserting three audit records).
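To illustrate that last point: with a single type/value table (a hypothetical Employee_changes(EmpId, Type, Value, UpdateDate)), just getting back to one row per employee already needs a pivot, and the query grows with every attribute you add:

SELECT EmpId,
       MAX(CASE WHEN Type = 'userid' THEN Value END) AS UserID,
       MAX(CASE WHEN Type = 'email'  THEN Value END) AS EmailID,
       MAX(UpdateDate)                               AS LastUpdate
FROM   Employee_changes
GROUP  BY EmpId;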
I have three database views that are mapped in Hibernate as entities.
The entities are in a parent-child relationship (1 parent (A), 2 children(B & C)).
One of the children views (B) uses Oracle's dbms_utility.get_hash_value() to calculate its ID.
This is because it does a UNION over several tables that use different ID sequences and thus the IDs from there may not be unique.
I now have the very puzzling effect that a simple entityManager.find(B.class, id) cannot find the appropriate row.
When I look at the children through a loaded parent (A) entity, I can see that the ID shown in B is completely different from the one in the database. If I use this ID with entityManager.find(B.class, hibernateId), Hibernate finds the appropriate entity.
The database, on the other hand, only returns a value when using the ID shown in the ID column there (and not with the ID Hibernate shows).
Child entity C does not use the hash function and does not show this peculiar behaviour - which means the hash must be responsible.
Does anyone have an idea why?
We found the reason:
Child view B used all of its content-containing columns, as a string concatenation, for the hash function.
This included date fields, which were not explicitly formatted when creating the string.
So when Hibernate selected from the view, it evidently used a different date format than SQL Developer did, and thus produced completely different (but consistent) IDs.
Explicitly formatting the used date fields removed the problem.
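For anyone hitting the same thing, a hedged sketch of what the fix looks like in the view definition (table and column names are placeholders, not the real view):

SELECT dbms_utility.get_hash_value(
         t.source_name || '|' ||
         t.code        || '|' ||
         TO_CHAR(t.valid_from, 'YYYY-MM-DD HH24:MI:SS'),  -- explicit mask, independent of NLS_DATE_FORMAT
         1, POWER(2, 30)) AS id,
       t.*
FROM   base_table t;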
I have a 1:1 relationship between table 'A' and 'B' in my .DBML. The FK in the database is in place and the .DBML diagram shows an association line between 'A' and 'B'. However, I cannot get the code generator to create a child property in the 'A' entity. All I have is the FK column. In the Association properties, I have ChildProperty set to true. However, the code generator will not create the child property. I have dropped and added the two tables several times.
Anyone have any ideas?
The O/R designer will refuse to create an association property if a primary key is missing on one of the associated tables. Make sure all of your associated tables have a primary key.
Not sure, but I think what you call 1:1 is actually seen by the DBML as 1:* because the parent can "have" many rows of your FK table, e.g. one employee can have one city, but each city can "have" many employees.
AFAIK a primary key in each table is a prerequisite without which the DBML will not "work". An error is issued when saving it. Your project will compile, but you'll see the errors later. HTH
I have to add some security for a C#/.NET WinForms/Desktop application. I am using Oracle DB back-end.
The tables are simple: User (ID,Name), Role(ID,Role), UserRole(UserID,RoleID).
I am using the windows account name to populate User table. Role table will for now just be simply 'Admin','SuperUser','BasicUser'...
No two people could ever possibly have the same Windows account name, even though I do not control name management (netops does, which is why I want to use Windows accounts, so I don't have to manage it ;)). For the Role table, I should again never have duplicate values: I control the input, and there will only be 3 (it's a tactical app going away within a year). UserRole is a join table to represent the many-to-many relationship between users and roles, so no surrogate key is justified.
Simple question - why bother with 'ID' (int) in the User and Role tables? Any point or advantage here? Is this one of those 'I've always done it this way' things? Or have I just not done this in a while and forgotten the reason?
Names change - primary key values must not. Abigail Smith becomes Abigail Jones and the username changes but a surrogate key protects against having to cascade those changes everywhere.
If you are using a surrogate key but there is a column or combination of columns which should be unique, then enforce that using a unique index. There's a good chance you'll want indexes on your user.name and role.role columns anyway, and a unique index is more space efficient and supplies useful metadata to the optimizer. If you have a surrogate key but don't have another combination of columns that uniquely identify a row then think again whether you have your entity definition right.
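A minimal sketch of that pattern, assuming Oracle 12c or later for the identity column ("User" is a reserved word in Oracle, so the table is named app_user here):

CREATE TABLE app_user (
  id   NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  name VARCHAR2(128) NOT NULL,
  CONSTRAINT uq_app_user_name UNIQUE (name)   -- natural key enforced alongside the surrogate
);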
One caution. Especially for very narrow tables with few access paths, you may use an index-organized table. Oracle will only allow an index organized table on the primary key, but does allow foreign keys against a unique set of columns (if it is enforced by a unique constraint, not simply a unique index).
It is possible that you'll end up with a table where a unique ID is enforced through a unique index and treated as PK by an ORM and used as the parent for foreign key relationships, but the primary key (as defined in the DB) is the rolename/username/whatever because you want that as the driver for an index-organised table.
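A sketch of that arrangement for the Role table (names and types are assumptions): the natural key is the primary key so the table can be index-organized, while the surrogate ID stays available for the ORM and for foreign keys via a unique constraint.

CREATE TABLE app_role (
  role_name VARCHAR2(30) NOT NULL,
  id        NUMBER       NOT NULL,
  CONSTRAINT pk_app_role    PRIMARY KEY (role_name),
  CONSTRAINT uq_app_role_id UNIQUE (id)
) ORGANIZATION INDEX;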
A surrogate key is not required on intersection tables, but here are a few reasons to do so:
Consistency: If every table has a single artificial key, you always know the key name when you know the table name.
Ease Of Use: Less typing; one key means ON and WHERE clauses are shorter and thus less error-prone (see the sketch below).
Interoperability: Some ORMs only work well with tables with a single primary key column.
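To make the "ease of use" point concrete, here is a join through the intersection table using surrogate keys; table and column names are made up for the illustration:

SELECT u.name, r.role
FROM   user_role ur
JOIN   users u ON u.id = ur.user_id
JOIN   roles r ON r.id = ur.role_id;
-- With composite natural keys, each ON clause would have to repeat every key column.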