Microsoft Access Update Queries - ms-access-2013

Let me start by saying that I am indeed self-taught and I have no business trying to do what I am trying to do...However, I am combining two Access databases that track different information about the same projects (one tracks the proposal information for projects considered for selection and the other tracks budget information for those that are selected). The same projects have different IDs in each database. Each database has lookup fields (yeah, yeah, I know lookup fields are evil, but that is what I have). I want to update the values in the lookup fields for the related tables in the Budget DB to the ID fields for the same projects in the Project DB. Is there a way to write an update query to change the value in the project ID field of one table to the ID fields from another table?
Details: ProjectListTbl contains the ProjectID field and a BudgetID field, which refers to the project ID from the Budget database (BDBID). I want the Reporting table values to change from the Budget database ID (BDBProjectID) to the ProjectID from ProjectListTbl. I have tried:
UPDATE Reporting INNER JOIN ProjectListTbl ON ReportingTbl.BDBProjectID = ProjectListTbl.ProjectID
SET ReportingTbl.BDBProjectID = [ProjectListTbl].[ProjectID],
WHERE (((ReportingTbl.BDBProjectID)=DLookUp([ProjectListTbl].[ProjectID],[ProjectListTbl],[BudgetID]=[ReportingTbl]![BDBProjectID])));
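That attempt has a stray comma before the WHERE clause, mixes the names Reporting and ReportingTbl, and joins on ProjectID even though BDBProjectID holds the Budget-database ID. A corrected form might look like the sketch below, assuming ProjectListTbl.BudgetID stores the Budget-database ID that Reporting.BDBProjectID currently holds; the DLookUp is unnecessary because the join already matches the rows:

UPDATE Reporting INNER JOIN ProjectListTbl
ON Reporting.BDBProjectID = ProjectListTbl.BudgetID
SET Reporting.BDBProjectID = ProjectListTbl.ProjectID;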

Loading records into Dynamics 365 through ADF

I'm using the Dynamics connector in Azure Data Factory.
TLDR
Does this connector support loading child records which need a parent record key passed in? For example, if I want to create a contact and attach it to a parent account, I upsert a record with a null contactid, a valid parentcustomerid GUID, and parentcustomeridtype set to 1 (or 2), but I get an error.
Long Story
I'm successfully connecting to Dynamics 365 and extracting data (for example, the lead table) into a SQL Server table.
To test that I can transfer data the other way, I am simply loading the data back from the lead table into the lead entity in Dynamics.
I'm getting this error:
Failure happened on 'Sink' side. ErrorCode=DynamicsMissingTargetForMultiTargetLookupField,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=,Source=,''Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Cannot find the target column for multi-target lookup field: 'ownerid'.
As a test, I removed ownerid from the list of source columns and it loads OK.
This is obviously a foreign key value.
It raises two questions for me:
Specifically with regard to the error message: if I knew which lookup it needed to use, how could I specify which lookup table it should validate against? There are no settings in the ADF connector to allow me to do this.
If I only had the name (or business key) for this row, how could I easily look up the foreign key value?
How is this normally done through other APIs, e.g. the Web API?
Is there an XRMToolbox add-in that would help clarify?
I've also read some posts that imply that you can send pre-connected data in an XML document, so perhaps that would help also.
EDIT 1
I realised that the lead.ownertypeid field in my source dataset is NULL (that's what was exported). It's also NULL if I browse it in various XrmToolbox tools. I tried hard-coding it to systemuser (which is what it actually is in the owner table against the actual owner record), but I still get the same error.
I also notice there's a record with the same PK value in the systemuser table.
So the same record is in two tables, but how do I tell the Dynamics connector which one to use? And why does it even care?
EDIT 2
I was getting a similar message about customerid for msauto_testdrive.
I excluded all records with customerid=null, and got the same error.
EDIT 3
This link appears to indicate that I need to set customeridtype to 1 (Account) or 2 (Contact). I did so, but still got the same error.
Also I believe I have the same issue as this guy.
Maybe the ADF connector suffers from the same problem.
At the time of writing, @Arun Vinoth's answer was 100% correct. However, shortly afterwards there was a documentation update (in response to a GitHub issue I raised) that explained how to do it.
I'll document how I did it here.
To populate a contact against a parent account, you need the parent account's GUID. Then you prepare a dataset like this:
SELECT
    -- a NULL contactid means this is a new record
    CAST(NULL as uniqueidentifier) as contactid,
    -- the GUID of the parent account
    CAST('A7070AE2-D7A6-EA11-A812-000D3A79983B' as uniqueidentifier) as parentcustomerid,
    -- the parent customer is an account
    'account' as [parentcustomerid#EntityReference],
    'Joe' as firstname,
    'Bloggs' as lastname;
Now you can select from this dataset and load into contact, applying the usual automapping approach in ADF: create datasets without schemas, and perform a copy activity without mapping columns.
This is an ADF limitation with respect to CDS polymorphic lookups like Customer and Owner. Upvote this ADF idea.
The workaround is to use two temporary source lookup fields (team and user in the case of Owner; account and contact in the case of Customer), then resolve them with parallel branches in a Microsoft Flow. Read more; you can also download the Flow sample to use.
First, create two temporary lookup fields on the entity that you wish to import Customer lookup data into, pointing to the Account and Contact entities respectively.
Within your ADF pipeline flow, you will then need to map the GUID values for your Account and Contact fields to the respective lookup fields created above. The simplest way of doing this is to have two separate columns within your source dataset: one containing the Account GUIDs to map and the other the Contact GUIDs.
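For illustration, a minimal sketch of such a source dataset, assuming the two temporary lookup fields are named new_accountlookup and new_contactlookup (hypothetical names; substitute your own fields):

SELECT
    -- a NULL contactid means this is a new record
    CAST(NULL as uniqueidentifier) as contactid,
    -- populate exactly one of the two temporary lookups; leave the other NULL
    CAST('A7070AE2-D7A6-EA11-A812-000D3A79983B' as uniqueidentifier) as new_accountlookup,
    CAST(NULL as uniqueidentifier) as new_contactlookup,
    'Joe' as firstname,
    'Bloggs' as lastname;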
Then, finally, you can put together a Microsoft Flow that performs the appropriate mapping from the temporary fields to the Customer lookup field. First, define the trigger point for when your affected entity record is created (in this case, Contact) and add parallel branches to check for values in either of these two temporary lookup fields.
Then, if either of these conditions is met, set up an Update record task to perform a single field update, copying the temporary lookup into the Customer field (for example, when the ADF Account lookup field has data within it).

Database: Storing multiple Types in single table or multiple intermediate tables for Delta Tables

Using Java and Oracle.
We need to send changes to an employee's Email and UserID to a third party.
The actual table is Employee; we keep an intermediate table which we use to compare changes before sending them to the third party.
The following database designs come to mind for the intermediate table:
Single table only:
EmployeeID|Value|Type|UpdateDate
Value is the userid or email; Type will be 'email' or 'userid'. UpdateDate is kept to figure out which of email or userid differed and was updated to the third party.
Multiple tables:
Employee_EmailID
EmpId|EmailID|Updatedate
Employee_UserID
EmpId|UserID|Updatedate
The Java flow will be:
Pick the employee from the actual table.
Pick the employee from the intermediate table above.
Compare the differences and send them to the third party.
Update the intermediate table with the updated value and last update date.
Which is considered the better approach, a single table or multiple tables, or is there a standard way to implement this? There are 10,000 employees in the system.
The intermediate table just stores delta records, i.e. the records transferred to the third party, so that they can be compared the next day.
Good database design has separate tables for different concepts. Using the same database column to hold different types of data will lead to code which is harder to understand, prone to data corruption, and less performant.
You may think it's only two tables and a few tens of thousands of rows, so does it matter? But that is only your current requirement. What you choose now will set the template for what happens when (say) you need to add telephone numbers to the process.
"Now in future if we get 5 more entities to update"
Do you mean "entities", like say Customers rather than Employees? Or do you really mean "attributes" as in my example of Employee Telephone Number?
Generally speaking, we have a separate table for each distinct entity, and all the attributes of that entity are grouped at the same cardinality. To take your example, I would expect an Employee to have one UserID and one Email Address, so I would design the table like this:
Employee_audit
EmpId|UserID|EmailID|Updatedate
That is, I have one record which stores the complete state of the Employee record at the Updatedate.
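In Oracle DDL, a minimal sketch of that audit table might look like this (the column types and lengths are assumptions):

CREATE TABLE Employee_audit (
    EmpId      NUMBER        NOT NULL,  -- key of the employee being audited
    UserID     VARCHAR2(30),            -- assumed length
    EmailID    VARCHAR2(254),           -- assumed length
    UpdateDate DATE          NOT NULL   -- when this state was captured
);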
If we add a new entity, Customers, then we have a new table. Simple. But a new attribute like Employee Phone Number offers a choice, because an employee can have more than one: work landline, mobile, fax, home, etc. So we could represent this in three ways: a child table with a type column, multiple child tables for each type, or distinct columns on the Employee record.
For the main Employee table I would choose the separate table (or tables, depending on whether I'm shooting for 6NF). But for an audit table I would choose one record per Employee and pivot the phone numbers like this:
Employee_audit
EmpId|UserID|EmailID|Landline|Mobile|Fax|Home|Updatedate
The one thing I would never do is have a single table with type and value columns. It seems attractive because it means we could track additional entities without any further DDL. But in fact it becomes harder to re-assemble the complete state of an Employee at any given time with each attribute we add. It also means the auditing process itself is more complicated (because it needs to determine which attributes have changed and whether it needs to audit the change) and more expensive (because changing three attributes on the same record entails inserting three audit records).
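For illustration, with one audit row per employee holding the last-sent state, the comparison step in the question's Java flow could even be pushed down into one query; a sketch against the table above, assuming non-null columns:

SELECT e.EmpId, e.UserID, e.EmailID
FROM Employee e
JOIN Employee_audit a ON a.EmpId = e.EmpId
-- rows whose userid or email changed since the last send
WHERE e.UserID <> a.UserID
   OR e.EmailID <> a.EmailID;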

Visual Studio 2013 Dataset Designer refresh relations

I have an application with a dataset linked to a SQL Server database. I have updated some of the names of foreign keys and primary keys in SQL Server. How do I make those changes translate to the dataset? For example, I had a primary key called fk_temsempl_xxxxx. I changed it to fk_temsempl in the SQL database. How do I get that change to show in the dataset designer in Visual Studio?
I have tried running the custom tool by right-clicking on the dataset and clicking Run Custom Tool. That didn't work. I tried configuring the table adapter of one of the tables where a change occurred, but the name of the relation didn't change.
You actually just right-click the relation and choose Edit Relation..., or double-click on the line (when the mouse cursor changes from an arrow to a drag symbol), but I honestly wouldn't bother; you'll then have further refactoring to do in the code anywhere the relation is used, and it can be heavily used by visual designers.
You also get the problem that VS may not help you with the refactoring: in data binding scenarios most things that can be a source of data can also be a collection of multiple things that can be a valid DataSource. They then rely on a string DataMember to determine which of the collections of data in the data source should be used for the data.
For example, when a BindingSource is bound to a DataTable, the bindingsource.DataSource property might be the DataSet object that contains the DataTable, and the bindingsource.DataMember is a string of "YOUR_TABLE_NAME"; the BindingSource might not be bound as myBindingSource.DataSource = myDataSet.MyDataTable. Refactoring inside strings involves a find and replace.
DataRelations in a DataSet are created from foreign keys as they were discovered when the relevant table(s) were added to the dataset, but it is important to note that, like DataTables and everything else, they have NOTHING to do with the database schema objects at all - they aren't permanently associated with them; the dataset entities are just set up to look something like the database objects when they (the dataset entities) are first created. DataTables are created from only those columns selected, with whatever .NET datatypes most closely resemble the types output by the query. For a table of:
Person
------
Name VARCHAR(50)
SSN INTEGER
Birthdate DATE
If you created the table with SELECT * FROM Person you'd get a datatable with Name (string), SSN (int), Birthdate (datetime), but if you made a new datatable in the dataset based on SELECT LEFT(Name, 1) as Initial, PADLEFT(SSN, 20) as PadSSN, DATEDIFF(day, Birthdate, NOW()) as AgeDays FROM Person then you'd get a datatable of Initial (string), PadSSN (string), AgeDays (int) - i.e. the datatable looks nothing like the db table. This concept of disconnection between dataset and db is pervasive, and really the only things that relate in any way to the database are the properties that specify which DB table/column a particular DataTable/DataColumn relates to for purposes of loading/saving data. Your Person.Name datacolumn can be renamed to Blahblah, but it will still have a .SourceColumn property that is set to "Name" - that's how the mapping between dataset and db works; the dataset is almost completely independent of the db. Renaming a DB column would require a change to the SourceColumn property only.
DataRelations don't even have this notion of linking to the parent relation in the database; there's no SourceRelation or SourceFK property because there is no need for one. They're set up with the same rules and a generated name all based on the rules of the FK, but then they function independently and only within the dataset. If you rename or even remove an FK from the db, the dataset will carry on working in the same restricted way it always did; adding a datarow to a child table when no parent row exists for it will throw an exception - none of this has anything to do with the FK in the db, and the DataRelation can have different rules to the FK (e.g. it can cascade deletes when the FK is NO ACTION) or even different columns. You can have more or fewer DataRelations than the DB has FKs.
Run Custom Tool is not a "contact the DB and see what changes have occurred there and replicate them into the dataset" operation; it is a "turn the XSD that describes the dataset into a bunch of C# classes that implement strongly typed dataset/table/relation/column etc. objects" operation. Any time you change the XSD by making an edit in the visual designer and hit save, the custom tool is run. If you edit the XSD directly in a text editor, you may need to run it manually to have your changes reflected in the C# classes.
Reconfiguring a tableadapter probably won't do anything to the relations either; it's solely concerned with changing the datatable and tableadapter. If you really want to refresh the relations, delete the datatable from the set and recreate it. Be prepared for a potentially significant mop-up/refactoring of code.

breeze.js insert parent/child with identity

Simple parent/child scenario like Order and OrderLineItems. I am inserting a new Order, the OrderID is an identity column (sql server). I'm also inserting OrderLineItems in the same SaveChanges transaction. I need to get the new OrderID into the OrderLineItems, but not sure how to do it. I have the appropriate FK relationships setup properly. When I save, I get an error that OrderID is a required field in OrderLineItems.
Will I have to split this out into 2 server calls? First to insert the Order, which will return the OrderID. And then another to insert the OrderLineItems?
The Breeze documentation discusses this topic (key generation) at several points including but not limited to: http://www.breezejs.com/documentation/save-changes, http://www.breezejs.com/documentation/extending-entities and http://www.breezejs.com/documentation/add-new-entity.
The basic idea is that, provided your model and metadata are set up properly, Breeze can assign a temporary id in place of the identity column for use in linking your order and orderlineitem entities prior to being saved. As part of the save process, Breeze updates these temporary keys to their "real" key values and updates the local cache as well upon successful completion of the save.

Extract data from two tables of DB2 database and load into a temporary table

I am creating an Informatica workflow which can extract data from two tables of a DB2 database and load them into a temporary table. Suppose the two source tables are named Account (parent) and Activities (child). They have a 1:M relationship, meaning an Account can have many Activities (Account.PK = Activities.FK). The Activities table has two columns: first 'Type', whose value could be 'Paid', 'Will-Pay' or 'Not-Paid'; and second 'Created_Date', a datetime which is stamped with the date and time whenever you create a new activity record. Now, the condition to load data into the temporary table is: "For an Account record, first check the Activities table for today's Paid activities (Type = Paid). If there is more than one paid activity, pick the latest created one (by Created_Date). If there is no Paid activity record for the Account, pick the latest created 'Will-Pay' activity." That is, it should pick the latest Paid activity for today (sysdate) for an Account, and only if none is present should it pick the latest Will-Pay activity for today. Please help me understand how I can implement this logic in an Informatica workflow, and which transformations I should use and how. Thanks a lot.
The best way is to do it in SQL, because implementing business logic in the ETL layer is not good practice. But if you insist, it can be done in many ways. As an example:
With SQL override
You can create three Lookup transformations on the Activities table with overridden SQL (and columns too) and one Expression transformation for the condition (for a combined single-query alternative, see the sketch after this list):
Lookup to find accounts with more than one 'Paid' activity
Lookup to find the last 'Paid' activity per account
Lookup to find the last 'Will-Pay' activity per account
Expression to return the correct Activities key based on the results of lookups 1-3
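For illustration, the logic of those lookups can also be collapsed into a single override query that ranks today's activities per account, preferring 'Paid' over 'Will-Pay' and the latest Created_Date first; a sketch in DB2 SQL, where the foreign key column name Account_FK is an assumption:

SELECT t.Account_FK, t.Type, t.Created_Date
FROM (
    SELECT a.Account_FK, a.Type, a.Created_Date,
           ROW_NUMBER() OVER (
               PARTITION BY a.Account_FK
               -- 'Paid' outranks 'Will-Pay'; later activities outrank earlier ones
               ORDER BY CASE WHEN a.Type = 'Paid' THEN 0 ELSE 1 END,
                        a.Created_Date DESC
           ) AS rn
    FROM Activities a
    WHERE DATE(a.Created_Date) = CURRENT DATE   -- today's activities only
      AND a.Type IN ('Paid', 'Will-Pay')
) t
WHERE t.rn = 1;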
Without SQL override, you need to recreate similar logic with Filter, Aggregator, and Joiner transformations.
