How can I find the query for InfoProviders in SAP Business Warehouse (BW)?

How can I see the explicit query/extraction/transformation/load that was executed to create an InfoProvider in SAP Business Warehouse (BW)? To clarify, I have numerous InfoProviders in SAP Business Warehouse and I need to identify:
The source for each respective field, e.g., InfoProvider field [ABC_123] came from PRISM table [XXX], PRISM field [XXX]
Any computed fields added during the ETL from the SAP module into the InfoProvider
Any filtering that was applied to the InfoProvider's columns, where applicable

Related

Can I create a (new) lookup table in Power Pivot by querying other tables in my data model?

Context:
I am creating a dashboard in Excel based on the data model I am building in Power Pivot. The source data in the data model comes from various other Excel tables I regularly receive and copy-paste into my workbook (their incoming structure is out of my control). My goal is to perform all data processing within Power Pivot/DAX rather than manipulating the data in the worksheets before loading it into the model.
Problem:
In my model, I have a table (tabCases) which includes status updates on all cases from a management system. This table has a column named caseID (not unique). I need to create a lookup table with unique case IDs, where I can create new columns with various KPIs for each case.
How can I do this in Power Pivot?
I found two suggestions in this article, but neither of them works for me (option 1 because it requires manual creation of the unique ID list, and option 2 because I don't have database access).
In my mind there should be something really simple I could do, something like:
Add new table to data model
Set first column to be equal to DISTINCT(tabCases[caseID])
Is there such a way?
A Linkback Table might help you. Please see the link below:
https://www.sqlbi.com/articles/linkback-tables-in-powerpivot-for-excel-2013/
Thanks

Loading records into Dynamics 365 through ADF

I'm using the Dynamics connector in Azure Data Factory.
TLDR
Does this connector support loading child records which need a parent record key passed in? For example, if I want to create a contact and attach it to a parent account, I upsert a record with a null contactid, a valid parentcustomerid GUID, and parentcustomeridtype set to 1 (or 2), but I get an error.
Long Story
I'm successfully connecting to Dynamics 365 and extracting data (for example, the lead table) into a SQL Server table.
To test that I can transfer data the other way, I am simply loading the data back from the lead table into the lead entity in Dynamics.
I'm getting this error:
Failure happened on 'Sink' side. ErrorCode=DynamicsMissingTargetForMultiTargetLookupField,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=,Source=,''Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Cannot find the target column for multi-target lookup field: 'ownerid'.
As a test, I removed ownerid from the list of source columns, and it loads OK.
This is obviously a foreign key value.
It raises two questions for me:
Specifically with regards to the error message: if I knew which lookup it needed to use, how can I specify which lookup table it should validate against? There are no settings in the ADF connector to allow me to do this.
This is obviously a foreign key value. If I only had the name (or business key) for this row, how can I easily look up the foreign key value?
How is this normally done through other APIs, e.g. the Web API?
Is there an XRMToolbox add-in that would help clarify this?
I've also read some posts that imply that you can send pre-connected data in an XML document so perhaps that would help also.
EDIT 1
I realised that the lead.ownertypeid field in my source dataset is NULL (that's what was exported). It's also NULL if I browse it in various Xrmtoolbox tools. I tried hard-coding it to systemuser (which is what it actually is in the owner table against the actual owner record), but I still get the same error.
I also notice there's a record with the same PK value in the systemuser table.
So the same record is in two tables, but how do I tell the Dynamics connector which one to use? And why does it even care?
EDIT 2
I was getting a similar message for msauto_testdrive for customerid.
I excluded all records with customerid=null, and got the same error.
EDIT 3
This link appears to indicate that I need to set customeridtype to 1 (Account) or 2 (Contact). I did so, but still got the same error.
Also I believe I have the same issue as this guy.
Maybe the ADF connector suffers from the same problem.
At the time of writing, @Arun Vinoth was 100% correct. However, shortly afterwards there was a documentation update (in response to a GitHub issue I raised) that explained how to do it.
I'll document how I did it here.
To populate a contact against a parent account, you need the parent account's GUID. Then you prepare a dataset like this:
SELECT
-- a NULL contactid means this is a new record
CAST(NULL as uniqueidentifier) as contactid,
-- the GUID of the parent account
CAST('A7070AE2-D7A6-EA11-A812-000D3A79983B' as uniqueidentifier) as parentcustomerid,
-- tells the connector that parentcustomerid points at an account
'account' as [parentcustomerid#EntityReference],
'Joe' as firstname,
'Bloggs' as lastname
Now you can select from this dataset and load it into the contact entity using the usual automapping approach in ADF: create the source and sink datasets without schemas, and perform the copy activity without mapping columns.
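The original error in this question was for the ownerid lookup, and the same pattern should apply to it. The following is only a sketch under that assumption, with a hypothetical placeholder GUID for the owning systemuser (an owner can also be a team, in which case the entity reference would be 'team'):
SELECT
-- a NULL leadid means this is a new lead record
CAST(NULL as uniqueidentifier) as leadid,
-- hypothetical GUID of the owning systemuser (replace with a real one)
CAST('00000000-0000-0000-0000-000000000000' as uniqueidentifier) as ownerid,
-- tells the connector that ownerid targets systemuser rather than team
'systemuser' as [ownerid#EntityReference],
'Example lead' as subject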
This is a known ADF limitation with respect to CDS polymorphic lookups like Customer and Owner. Upvote this ADF idea.
The workaround is to use two temporary source lookup fields (owner team and user in the case of Owner; account and contact in the case of Customer) together with a parallel branch in a Microsoft Flow. Read more, and you can also download the Flow sample to use.
First, create two temporary lookup fields on the entity that you wish to import Customer lookup data into, one targeting the Account entity and the other targeting Contact.
Within your ADF pipeline flow, you will then need to map the GUID values for your Account and Contact fields to the respective lookup fields created above. The simplest way of doing this is to have two separate columns within your source dataset: one containing the Account GUIDs to map and the other the Contact GUIDs.
Then, finally, you can put together a Microsoft Flow that performs the appropriate mapping from the temporary fields to the Customer lookup field. First, define the trigger point for when your affected entity record is created (in this case, Contact) and add parallel branches to check for values in either of these two temporary lookup fields.
Then, if either of these conditions is hit, set up an Update record task to perform a single field update, for example copying the value into the Customer field when the temporary Account lookup field has data in it.

How to join more than two tables using Lightswitch LINQ query

I am new to LightSwitch as well as LINQ; I am using LightSwitch 2015. I want to know how to display the data of one table on another table's screen.
For example, I have job vacancies, where I want to show the professional information of the candidates who have applied for the job. These two tables are related through the candidate's personal ID, meaning a foreign key between the candidate personal table and the professional table.
The relation is as follows:
JobVacancies->CandidatePersonalInfor->candidateProfessionalInfo. They are related many-to-one.
I have written a LINQ query to fetch the data, in which I have included some other tables as well; my code is as follows.
Any help is appreciated.
Have you defined the relationships you describe between the entities? If so, you'll have the ability to drag and drop the details you require onto the form in the designer. The entities and their properties will be in the left pane of the screen designer.
If the join operation is more complex, you can use a WCF RIA Service to create a composite entity, as described in http://lightswitchhelpwebsite.com/Blog/tabid/61/EntryId/2226/Creating-a-WCF-RIA-Service-for-Visual-Studio-2013.aspx

TCR 2.1.1 importing ITM reports

I am trying to run some reports in TCR that I imported from the 6.2.3-TIV-ITM_TMV-Agent-Reports-FP0001 package.
When I run them, I get this error: UDA-SQL-0196 The table or view "KSY_SUMMARIZATION_CONFIG_DV" was not found in the dictionary.
I checked and the table is not in the database.
Regarding that table, the documentation says this:
The Summarization and Pruning configuration is shown in a specific query subject (Summarization and Pruning Configuration). The result is one row that represents the most recent entry in the KSY_SUMMARIZATION_CONFIG_DV view.
Maybe the WAREHOUS database is lacking something? If the agents are running, shouldn't there be a view named KSY_SUMMARIZATION_CONFIG_DV?
I also can't seem to find other tables like KLZ_CPU_HV, KLZ_CPU_DV, KLZ_CPU_WV, KLZ_CPU_MV, KLZ_CPU_QV, or KLZ_CPU_YV.
Thanks for your help
You have to configure historical collection for the appropriate agent attribute groups for those tables to show up in your TDW. For instance, create a historical collection for the "Linux CPU" attribute group to get the KLZ_CPU table. For the tables ending in _D, _H, etc., configure daily, hourly, etc. summarization for those attribute groups.
Depending on the collection intervals, the warehouse proxy agent will eventually create the necessary tables.
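Once the collections and summarization have run, one way to check whether the expected tables and views exist is to query the warehouse database catalog directly. A minimal sketch, assuming the TDW (WAREHOUS) is on DB2 and uses the ITMUSER schema (both are assumptions; adjust for your environment):
-- list the Linux CPU tables/views and the S&P configuration view, if they exist
SELECT TABSCHEMA, TABNAME, TYPE
FROM SYSCAT.TABLES
WHERE TABSCHEMA = 'ITMUSER'  -- assumed warehouse schema
  AND (TABNAME LIKE 'KLZ_CPU%' OR TABNAME = 'KSY_SUMMARIZATION_CONFIG_DV')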

Extract data from two tables of DB2 database and load into a temporary table

I am creating an Informatica workflow that should extract data from two tables of a DB2 database and load the result into a temporary table. Suppose the two source tables are Account (parent) and Activities (child), with a 1:M relationship: an Account can have many Activities (Account.PK = Activities.FK). The Activities table has two relevant columns: 'Type', whose value can be 'Paid', 'Will-Pay' or 'Not-Paid', and 'Created_Date', a datetime that is stamped with the date and time whenever a new activity record is created.
The condition to load data into the temporary table is: for an Account record, first check the Activities table for today's Paid activities (Type = 'Paid'). If there is more than one Paid activity, pick the latest created one (by the Created_Date column). If there is no Paid activity record for the Account, pick the latest created 'Will-Pay' activity for today instead. In other words, it should pick the latest Paid activity for today (sysdate) for an Account, and only if that is not present should it pick the latest Will-Pay activity for today.
Please help me understand how I can implement this logic in an Informatica workflow, and which transformations I should use and how. Thanks a lot.
The best way is to do it in SQL, because putting business logic in the ETL tool itself is not good practice (a sketch of such a query is shown at the end of this answer). But if you insist, it can be built in many ways. For example:
With SQL override
You can create three Lookup transformations on the Activities table with overridden SQL (and columns), and one Expression transformation for the condition:
A Lookup to find accounts with more than one 'Paid' activity
A Lookup to find the latest 'Paid' activity per account
A Lookup to find the latest 'Will-Pay' activity per account
An Expression to return the correct Activities key based on the results of lookups 1-3
Without a SQL override, you need to recreate similar logic with Filter, Aggregator, and Joiner transformations.
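For reference, here is a minimal sketch of the SQL that the override (or a Source Qualifier query) could be based on. It assumes DB2 syntax and hypothetical column names ACCOUNT_ID, ACTIVITY_ID, TYPE and CREATED_DATE; adjust them to the real schema:
-- per account: today's latest 'Paid' activity, else today's latest 'Will-Pay' activity
SELECT account_id, activity_id, type, created_date
FROM (
    SELECT a.account_id,
           a.activity_id,
           a.type,
           a.created_date,
           ROW_NUMBER() OVER (
               PARTITION BY a.account_id
               ORDER BY CASE WHEN a.type = 'Paid' THEN 0 ELSE 1 END, -- prefer Paid over Will-Pay
                        a.created_date DESC                          -- then take the latest
           ) AS rn
    FROM activities a
    WHERE a.type IN ('Paid', 'Will-Pay')
      AND DATE(a.created_date) = CURRENT DATE                        -- today's activities only
) ranked
WHERE rn = 1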
