update/delete transformations on same target field - informatica-powercenter

I have a target table within an ETL mapping.
Pipe A performs an update to fieldA.
In Pipe B, I need to delete rows based on the value in fieldA (as well as a few other fields).
To perform the update, fieldA cannot be a PK. To perform the delete, fieldA is required to be a PK.
I'm trying to stay away from post-mapping SQL if I can, so I am seeking other options that may be available to me.

To perform an update or delete, the actual physical table does not need to have a primary key; only the target definition in Informatica needs the keys defined. You can create two separate target definitions of the same table and define different keys on each, then use those separate target definitions in the two pipelines.

Related

Complex Ref. Integrity in Oracle: child with multiple parents, non-key joins

My predecessor built our database with some "overloaded" child tables that are shared by multiple parents, using a "tabletype" column that specifies which parent table is the parent of a particular child record. Also, the parents and child are often joined using multiple columns which are not keys, or a compound key, or unique in any way. Multiple parent records can be related to multiple child records this way. Usually, SELECT DISTINCT or GROUP BY are used to eliminate duplicates in the results, in reports or forms. Apparently this is the way our data really works, and users are fine with it. I am not mandated to change this structure.
In one example, the child table has a "tabletype" column with one of three possible values (and currently no constraint to enforce them). It has a foreign key column to relate to the ID of one parent table (call it ParentA). This column is blank for records related to the other two parents. It has an identifying number (not unique) column (we will call it "IdentiNum") and a "BatchID" column, and joins with either of the other two parents using those two columns.
As you'd expect, Referential Integrity is not enforced, and probably can't be enforced with simple RI triggers and constraints. I'm an Access programmer, new to Oracle and PL/SQL. I can write code to enforce RI in the Access interface using VBA. That'll do no good if we replace this interface with one using APEX or another tool, as we plan to do. I want RI in the database where it belongs.
Here's what I think I need for this case:
a constraint on tabletype allowing one of three values that specify which table contains a record's parent.
a constraint on the child's ForeignKey column requiring its value to exist in the ID column of ParentA, unless it is null, which it might be.
a delete trigger on ParentA which cascade-deletes related records in the child table, but still allows the child's ForeignKey to be nullable.
a constraint on the child's IdentiNum and BatchID columns, requiring the values to exist (together) in either ParentB or ParentC, depending on the value of TableType.
delete triggers on ParentB and ParentC which cascade-delete related records in the child table, the relation determined by IdentiNum, BatchID, and TableType. However, when a ParentB or ParentC record is deleted, the procedure would have to check that there are no other parent records with the same IdentiNum and BatchID values before deleting all related child records.

Database: Storing multiple Types in single table or multiple intermediate tables for Delta Tables

Using Java and Oracle.
We need to send changes to an employee's Email and UserID to a third party.
The actual table is Employee; we keep an intermediate table that we use to compare changes before sending them to the third party.
The following database designs come to mind for the intermediate table:
Single table:
EmployeeID|Value|Type|UpdateDate
Value holds the userid or email, Type is 'email' or 'userid', and UpdateDate is kept so we can figure out which of email or userid differed and update the third party.
Multiple tables:
Employee_EmailID
EmpId|EmailID|Updatedate
Employee_UserID
EmpId|UserID|Updatedate
The Java flow will be:
Pick the employee from the actual table.
Pick the employee from the intermediate table above.
Compare the differences. Send the differences to the third party.
Update the intermediate table with the new value and the last update date.
Which is considered the better approach, the single table or multiple tables, or is there a standard way to implement this? There are 10,000 employees in the system.
The intermediate table only stores delta records, i.e. the records transferred to the third party, so that they can be compared the next day.
Good database design has separate tables for different concepts. Using the same database column to hold different types of data will lead to code which is harder to understand, prone to data corruption and less performant.
You may think it's only two tables and a few tens of thousands of rows, so does it matter? But that is only your current requirement. What you choose now will set the template for what happens when (say) you need to add telephone numbers to the process.
Now in future if we get 5 more entities to update
Do you mean "entities", like say Customers rather than Employees? Or do you really mean "attributes" as in my example of Employee Telephone Number?
Generally speaking we have a separate table for distinct entities, and all the attributes of that entity are grouped at the same cardinality. To take your example, I would expect an Employee to have one UserID and one Email Address so I would design the table like this:
Employee_audit
EmpId|UserID|EmailID|Updatedate
That is, I have one record which stores the complete state of the Employee record at the Updatedate.
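To make the comparison step concrete, here is a rough Java/JDBC sketch of checking one employee against that single audit record. The table and column names follow the example above; the class name, connection setup and the third-party call are only illustrative assumptions, not anything prescribed by the question.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Objects;

public class EmployeeDeltaCheck {
    // Compares the live Employee row with the last audited state and reports
    // whether anything needs to be sent to the third party.
    // Hypothetical tables: EMPLOYEE(EmpId, UserID, EmailID) and
    // EMPLOYEE_AUDIT(EmpId, UserID, EmailID, Updatedate).
    static boolean hasChanged(Connection con, long empId) throws SQLException {
        String sql = "SELECT e.UserID, e.EmailID, a.UserID AS oldUserID, a.EmailID AS oldEmailID "
                   + "FROM EMPLOYEE e LEFT JOIN EMPLOYEE_AUDIT a ON a.EmpId = e.EmpId "
                   + "WHERE e.EmpId = ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setLong(1, empId);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    return false; // employee no longer exists, nothing to send
                }
                // A missing audit row (all old columns null) also counts as a change.
                return !Objects.equals(rs.getString("UserID"), rs.getString("oldUserID"))
                    || !Objects.equals(rs.getString("EmailID"), rs.getString("oldEmailID"));
            }
        }
    }
}

After the change is pushed, the same job would write the values it actually sent, plus the current date, back into Employee_audit, so the next run compares against the state the third party already has.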
If we add a new entity, Customers, then we have a new table. Simple. But a new attribute like Employee Phone Number offers a choice, because an employee can have more than one: work landline, mobile, fax, home, etc. So we could represent it in three ways: a child table with a type column, multiple child tables for each type, or distinct columns on the Employee record.
For the main Employee table I would choose the separate table (or tables, depending on whether I'm shooting for 6NF). But for an audit table I would choose one record per Employee and pivot the phone numbers like this:
Employee_audit
EmpId|UserID|EmailID|Landline|Mobile|Fax|Home|Updatedate
The one thing I would never do is have a single table with type and value columns. It seems attractive because it means we could track additional entities without any further DDL. But in fact it becomes harder to re-assemble the complete state of an Employee at any given time with each attribute we add. Also it means the auditing process itself is more complicated (because it needs to determine which attributes have changed and whether it needs to audit the change) and more expensive (because changing three attributes on the same record entails inserting three audit records).

DynamoDB Throughput vs Search time

I've just realized a big mistake I made when creating the DynamoDB structure.
I've created 11 tables, where one of them is the table most referred to and the others are complementary tables.
For example, I have a table where I hold names (together with other info) called "Names" and another table called "NamesMappings" holding all the names added to the "Names" table, so that each time a user wants to add a name to the "Names" table he first tries to put the name into "NamesMappings", and only if that succeeds (meaning the name doesn't exist yet) does he add the name to the "Names" table. This procedure helps because the name is not unique and is not the primary key in the "Names" table; with this technique I don't have to search inside the "Names" table to check whether the name exists, but can instead try to add it to the "NamesMappings" table, and only if that succeeds do I know it is a unique name.
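In code, that "insert only if it doesn't exist" step has to be a conditional write, because a plain PutItem would silently overwrite an existing item. A rough sketch with the AWS SDK for Java v2, assuming name is the hash key of NamesMappings (the class name and client setup are just placeholders):

import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.ConditionalCheckFailedException;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;

public class NameClaim {
    // Returns true if the name was claimed (did not exist before), false if it was taken.
    static boolean tryClaimName(DynamoDbClient ddb, String candidateName) {
        PutItemRequest put = PutItemRequest.builder()
                .tableName("NamesMappings")
                .item(Map.of("name", AttributeValue.builder().s(candidateName).build()))
                .conditionExpression("attribute_not_exists(#n)") // fail if the key already exists
                .expressionAttributeNames(Map.of("#n", "name"))
                .build();
        try {
            ddb.putItem(put);
            return true;   // safe to insert into the Names table now
        } catch (ConditionalCheckFailedException e) {
            return false;  // the name is already present
        }
    }
}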
First of all, I would like to ask whether this is a common approach or there is a better one.
Next, I realized that with this design I soon reached 11 tables, each with a provisioned capacity of 5 reads and 5 writes, which adds up to 55 provisioned read and write units and goes beyond the free tier. Then I understood why I get all these charges each month: as the number of tables grows and I leave the provisioned capacity at the default (read and write capacity of 5 each), I end up with more and more provisioned capacity.
So, what conclusion should I draw from this? Should I try to reduce the number of tables, even if it takes more effort to perform scanning and querying inside a table? Or should I keep splitting the tables as I do now but reduce the capacity of the mapping tables that are used only to indicate whether an item exists in another table?
If I understand your problem correctly, you're missing the whole concept of NoSQL databases.
Your Names table should have a Hash Key (which is similar to a Primary Key) holding a uniformly distributed identifier (a UUID is a great candidate). This automatically makes the table queryable by that unique identifier. You said, however, that you don't know the ID but only the Name. This leads me to think you could create a Global Secondary Index (GSI) on the Name attribute of the Names table so you can also query by Name. Up to this point, your table structure should look like this:
id | name
Both of them are independently queryable, which gives you a lot of flexibility already.
Now, let's say you want to add the NameMapping attribute (whose exact shape I don't know): you can simply add it to the Names table, getting rid of the NamesMappings table and greatly reducing the number of WCUs and RCUs across your account. Your table structure should now look like this:
id | name | mappings
where mappings is, let's say, a JSON object.
Since you can only query on top level attributes in DynamoDB, you can now perform a query against the name attribute which has a GSI configured. If the query returns nothing, then name is unique. But let's say you still need some data inside the mappings object, then you could query by name and, in your code, you could apply a map/filter/reduce operation on the mappings attribute and decide what to do next.
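A rough sketch of that uniqueness check with the AWS SDK for Java v2, assuming the GSI on the name attribute is called name-index (the index, table and class names here are only placeholders):

import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.QueryRequest;

public class NameLookup {
    // Queries the Names table through the GSI on the name attribute.
    // An empty result means the name is not used yet.
    static boolean nameIsUnique(DynamoDbClient ddb, String name) {
        QueryRequest query = QueryRequest.builder()
                .tableName("Names")
                .indexName("name-index")                     // assumed GSI name
                .keyConditionExpression("#n = :name")
                .expressionAttributeNames(Map.of("#n", "name"))
                .expressionAttributeValues(Map.of(":name",
                        AttributeValue.builder().s(name).build()))
                .build();
        return ddb.query(query).count() == 0;
    }
}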
Remember that duplication is just OK in a NoSQL world. This may look scary if you come from a purely SQL background, but data in NoSQL databases should be stored in such a way that you can fetch all the needed information in one go, thereby avoiding "joins" (joins are still possible in a NoSQL database, but since there are no strong relationships between entities, you need to perform them manually at the code level). To give you some real context, imagine you have an Orders table where you keep track of the ordered Products and the Store that the Order belongs to: you'd save both the Products and the Store objects (and not their IDs, as you would the SQL way) inside the Order object, so if you want to query for a given OrderId in the future, you wouldn't need to make extra calls (aka "joins") to the Product/Store tables to fetch the information, since everything would already be stored inside the Order object.
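To illustrate that denormalization, here is a rough sketch (AWS SDK for Java v2 again; every table, attribute and sample value is made up) of writing an Order item that embeds its Store and Products:

import java.util.List;
import java.util.Map;
import java.util.UUID;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;

public class OrderWriter {
    static void saveOrder(DynamoDbClient ddb) {
        // The Store and Product data is copied into the Order item instead of referenced by ID.
        AttributeValue store = AttributeValue.builder().m(Map.of(
                "storeId", AttributeValue.builder().s("store-42").build(),
                "name", AttributeValue.builder().s("Downtown").build())).build();
        AttributeValue products = AttributeValue.builder().l(List.of(
                AttributeValue.builder().m(Map.of(
                        "productId", AttributeValue.builder().s("p-1").build(),
                        "title", AttributeValue.builder().s("Widget").build())).build())).build();

        Map<String, AttributeValue> order = Map.of(
                "orderId", AttributeValue.builder().s(UUID.randomUUID().toString()).build(),
                "store", store,
                "products", products);

        // One read on orderId later returns the order, its store and its products in one go.
        ddb.putItem(PutItemRequest.builder().tableName("Orders").item(order).build());
    }
}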

Supporting ABP's IEntityCache for entities with multi-column primary keys

The current implementation of IEntityCache supports a single-column primary key (int by default; it can be specified as a different type, e.g. long or string).
Currently there are two approaches (that I can see) to enable caching for tables/entities whose primary key consists of two (or more) columns, such as two string columns:
Modify the schema of the table and simply add an auto-increment integer Id column whose sole purpose is to enable IEntityCache to do its magic. The application logic querying the entity would remain untouched, as it would still use the 2-column unique index. Modifying the schema, however, may not be an option, and another problem is that the upper application layers (Domain, AppServices) are not aware of the Id they need to query; instead, they know the entity's two keys.
Write a new implementation of IEntityCache that supports multiple key columns (e.g. IEntityCache<string, string>?). I'm not sure whether this is even feasible given the cache manager's limitations.
Here's the question: Would the inner workings of the ICacheManager - itself being an abstraction over different cache providers (e.g. IMemoryCache, Redis etc...) - prevent a multi-column implementation?

typed data set; parent/child select and update with ONE trip to the database (for each op)?

Is it possible, using an ADO.NET typed DataSet containing two tables in a parent/child relationship, to populate the DataSet with ONE trip to the d/b (query could return one or two tables; if one, then result set has columns from both tables, right?), and to update the d/b with ONE trip to the d/b (call to generated stored proc, I guess).
By "is it possible", I mean is it possible to have Visual Studio (2012) automagically generate the classes and SQL code to make this happen?
Or am I kind of on my own? It's looking an awful lot like VS really wants to generate one d/b server round trip for each table involved.
I guess the update stored proc would have to take table-typed parameters from both parent and child, and perform inserts/updates/deletes appropriately.
Yes, one round trip per table is the way to go.
(It's certainly possible to use a join query to populate a DataTable, but VS will then be reluctant to generate UPDATE etc. SQL. This may or may not be a problem, depending on what you intend to do with the dataset.)
But if you have two tables in a dataset, let's say Customers and Orders, then you would typically use two queries, and two trips to the db:
SELECT * FROM customers WHERE customers.customerid=@customerid
and
SELECT * FROM orders WHERE orders.customerid=@customerid
Somewhat more counter-intuitive is the situation where you want all customers and orders for one country:
SELECT * FROM customers WHERE customers.countryid=@countryid
and
SELECT orders.* FROM orders INNER JOIN customers ON customers.customerid=orders.customerid WHERE customers.countryid=@countryid
Note how the join query returns data from only one table, but uses the join to identify which rows to return.
Then, once you have the data in your dataset, you can navigate it using the GetParentRow and GetChildRows methods. This is how ADO.NET manages hierarchical data.
You do need this one-table-at-a-time approach because, assuming you have foreign key constraints in your db, you need to insert and update parent rows before child rows, and delete in the reverse order.
EDIT Yes, this does mean that in some circumstances, depending on the data you want and the structure of your primary keys, you could end up with a humongous set of JOINs that still only pull data from the table at the end of the hierarchy. This might seem wrong in terms of traditional SQL, but actually it's fine. The time you lose in the multiple, more complex queries is saved by the reduced amount of data you have to pull back across the wire, compared with one big join query that would return multiple copies of the parent data.
