Want to convert a given source table into the expected target table - informatica-powercenter

I want to convert the source below into the expected target table.
Source:
========================
Course | Year | Earning
========================
.NET   | 2012 | 10000
Java   | 2012 | 20000
.NET   | 2012 | 5000
.NET   | 2013 | 48000
Java   | 2013 | 30000

Expected Output:
=====================
Year | .NET  | Java
=====================
2012 | 15000 | 20000
2013 | 48000 | 30000

You can treat this as a rows-to-columns (pivot) problem. Here I am assuming you only have the .NET and Java courses available; if you have more, you need to add more columns to the transformations below.
First, use an Expression transformation with the output ports below. They calculate the earning for each course.
java   = IIF(Course = 'Java', Earning, 0)
dotnet = IIF(Course = '.NET', Earning, 0)
Next, use an Aggregator transformation to sum those columns:
Year -- input/output port, with group by enabled
out_java = SUM(java)
out_dotnet = SUM(dotnet)
Link Year, out_java, and out_dotnet to the corresponding target columns.
So the whole mapping should look like:
SQ --> EXP --> AGG --> Target
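
If it helps to sanity-check the expected output, the same pivot logic can be written as a small Python sketch (this is just the computation the Expression and Aggregator perform, not Informatica code):

rows = [
    (".NET", 2012, 10000),
    ("Java", 2012, 20000),
    (".NET", 2012, 5000),
    (".NET", 2013, 48000),
    ("Java", 2013, 30000),
]

totals = {}  # year -> {"dotnet": total, "java": total}
for course, year, earning in rows:
    t = totals.setdefault(year, {"dotnet": 0, "java": 0})
    # Expression step: route the earning into the matching course column.
    t["dotnet"] += earning if course == ".NET" else 0
    t["java"] += earning if course == "Java" else 0

# Aggregator step: one row per year with the summed columns.
for year in sorted(totals):
    print(year, totals[year]["dotnet"], totals[year]["java"])
# 2012 15000 20000
# 2013 48000 30000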

Related

OBIEE EVALUATE or EVALUATE_AGGR, MAX/MIN Group By

I am trying to summarize the table in OBIEE Analysis Tool (11g) using the EVALUATE or EVALUATE_AGGR Function. I have tried using the traditional MAX and MIN without EVALUATE but due to a bug with the union functionality I am not getting the desired result.
+------------------+------+-----------+----------+
| Loan ID | Year | Month | Balance |
+------------------+------+-----------+----------+
| L201618100000009 | 2021 | March | 232,000 |
| L201618100000009 | 2021 | June | 232,000 |
| L201618100000009 | 2021 | September | 232,000 |
| L201618100000009 | 2021 | December | 232,000 |
+------------------+------+-----------+----------+
EVALUATE_AGGR('MAX(%1 by %2, %3 )', "Loan and Debt Interest"."Loan BOP Amount", "Time"."Year","Loans"."Loan ID" )
I am getting this error: [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: Please have your System Administrator look at the log for more details on this error. (HY000)
Below is a table of what I am expecting; instead, because of the UNION, the traditional MAX and MIN functions are not working (MAX = 928K, MIN = 928K).
+------------------+------+------------------+-------------------+
| Loan ID | Year | (MAX)BOP Balance | (MIN)EOP Balance |
+------------------+------+------------------+-------------------+
| L201618100000009 | 2021 | 232,000 | 232,000 |
+------------------+------+------------------+-------------------+
I'm a bit confused by the recent (re-)increase of questions like "I want to do this SQL in OBI". That's not how the tool works. That's not how it is designed.
a) If you are forced to do UNION requests, then your data model is poor to begin with.
b) You can easily create a level-based measure in the RPD which is tied to the year level of your time hierarchy and then set the aggregation rule to MAX. Same for MIN. That requires a proper data model though.
c) In the analysis, you can also create a new calculated column using MAX("Balance" by "Loan ID", "Year"), which will give you the same result.
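
For what option c) computes, here is the equivalent grouping logic as a quick Python sketch (illustrating the analysis expression, not actual OBIEE code; the sample values are taken from the question):

from collections import defaultdict

rows = [
    ("L201618100000009", 2021, "March",     232000),
    ("L201618100000009", 2021, "June",      232000),
    ("L201618100000009", 2021, "September", 232000),
    ("L201618100000009", 2021, "December",  232000),
]

# MAX("Balance" by "Loan ID", "Year"): group on (loan_id, year), keep the max.
max_bal = defaultdict(lambda: float("-inf"))
min_bal = defaultdict(lambda: float("inf"))
for loan_id, year, month, balance in rows:
    key = (loan_id, year)
    max_bal[key] = max(max_bal[key], balance)
    min_bal[key] = min(min_bal[key], balance)

for key in max_bal:
    print(key, max_bal[key], min_bal[key])
# ('L201618100000009', 2021) 232000 232000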

Get one value for each date in SSRS

Using SSRS, I have data with duplicate values in Field1. I need to get only one value for each month.
Field1 | Date |
----------------------------------
30 | 01.01.1990 |
30 | 01.01.1990 |
30 | 01.01.1990 |
50 | 02.01.1990 |
50 | 02.01.1990 |
50 | 02.01.1990 |
50 | 02.01.1990 |
40 | 03.01.1990 |
40 | 03.01.1990 |
40 | 03.01.1990 |
It should be an SSRS expression with the average value for each month, or maybe there are other solutions that get the requested data via an SSRS expression. The requested data, as a table:
30 | 01.01.1990 |
50 | 02.01.1990 |
40 | 03.01.1990 |
Hoping for help.
There is no SumDistinct function in SSRS, which is a real gap (although CountDistinct does exist), so you obviously can't achieve what you want the easy way. You have two options:
Implement a new stored procedure with SELECT DISTINCT, returning a reduced set of fields so that the data you need is not repeated. You then use this stored procedure to build a new dataset and use that in your table. This approach obviously may not be applicable in your case.
The other option is to implement your own function, which saves the state of the aggregation and performs a distinct sum. Take a look at this page; it contains examples of the code you need.
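
To illustrate the idea behind such a stateful aggregate, here is a Python sketch (a real SSRS implementation would live in the report's custom code block, typically in VB.NET; the function name distinct_sum is just for illustration):

seen = set()
running_sum = 0

def distinct_sum(value, key):
    """Add value to the running sum only the first time key is seen."""
    global running_sum
    if key not in seen:
        seen.add(key)
        running_sum += value
    return running_sum

# One call per detail row; duplicate rows for the same date contribute once.
for value, date in [(30, "01.01.1990"), (30, "01.01.1990"), (50, "02.01.1990")]:
    distinct_sum(value, date)
print(running_sum)  # 80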

Cucumber - run same feature a number of times depending on records in a database

I have a Cucumber feature that checks that a website has processed payment files (BACS, SEPA, FPS, etc.) correctly. The first stage of the process is to create the payment files, which in turn create the expected result data in a database. This data is then used to validate against the payment-processing website.
If I process one file, my feature works perfectly, validating the expected results. Where I'm stuck is how to get the feature to run n times, depending on the number of records/files that were originally processed.
I've tried an 'Around' hook using a record-count iteration with no joy, I can't see how to fit it into a scenario outline, and I now think that perhaps a rake task to call the feature might work.
Any ideas would be greatly appreciated.
Here's a sample of the feature:
Feature: Processing SEPA Credit Transfer Files. Same Day Value Payments.

  Background:
    Given we want to test the "SEPA_Regression" scenario suite
    And that we have processed a "SEPA" file from the "LDN" branch
    And we plan to use the "ITA1" environment
    Then we log in to "OPF" as a "SEPA Department" user

  @feature @find_and_check_sepa_interchange @all_rows
  Scenario: Receive SEPA Credit Transfer Files for branch
    Given that we are on the "Payment Management > Interchanges" page
    When I search for our Interchange with the following search parameters:
      | Field Name            |
      | Transport Date From   |
      | Bank                  |
      | Interchange Reference |
    Then I can check the following fields for the given file in the "Interchanges" table:
      | Field Name            |
      | Interchange Reference |
      | Transport Date        |
      | File Name             |
      | File Format           |
      | Clearing Participant  |
      | Status                |
      | Direction             |
      | Bank                  |
    When I select the associated "Interchange Id" link
    Then the "Interchange Details" page is displayed
Update: I've implemented nested steps for the feature, so that I can fetch the database records first and feed each set of records (or at least the row id) into the main feature, like so:
Feature:
  @trial_feature
  Scenario: Validate multiple Files
    Given we have one or more records in the database to process for the "SEPA_Regression" scenario
    Then we can validate each file against the system
Feature steps:
Then(/^we can validate each file against the system$/) do
  x = 0
  while x <= $interchangeHash.count - 1
    $db_row = x
    # Get the other sets of data using the file name in the query
    id = $interchangeHash[x]['id']
    file_name = $interchangeHash[x]['CMS_Unique_Reference_Id']
    Background.get_data_for_scenario(scenario, file_name)
    steps %{
      Given that we are on the "Payment Management > Interchanges" page
      When I search for our Interchange with the following search parameters:
        | Field Name            |
        | Transport Date From   |
        | Bank                  |
        | Interchange Reference |
      Then I can check the following fields for the given file in the "Interchanges" table:
        | Field Name            |
        | Interchange Reference |
        | Transport Date        |
        | File Name             |
        | File Format           |
        | Clearing Participant  |
        | Status                |
        | Direction             |
        | Bank                  |
      When I select the associated "Interchange Id" link
      Then the "Interchange Details" page is displayed
    }
    # Advance to the next database record before repeating the steps
    x += 1
  end
end
It seems a bit of a 'hack', but it works.
If you have batch processing software, then you should have several Given (setup) steps, one When (trigger) step, and several Then (criteria) steps:
Given I have these SEPA bills
  | sepa bill 1 |
  | sepa bill 2 |
And I have these BAC bills
  | bac bill 1 |
  | bac bill 2 |
When the payments are processed
Then these sepa bills are completed
  | sepa bill 1 |
  | sepa bill 2 |
And these bac bills are completed
  | bac bill 1 |
  | bac bill 2 |
It's simpler, it is easier to read what is supposed to be done, and it can be expanded to cover more cases. The work of setting up and verifying should be done in the step definitions.

SSRS Report matrix

I want to display a graph with a variable number of curves (that I get from a dataset).
So I can have this:
Trend 1 | Tag 1.1 | Unit 1 |
Trend 1 | Tag 1.2 | Unit 2 |
Trend 2 | Tag 2.1 | Unit 1 |
Trend 2 | Tag 2.2 | Unit 2 |
Trend 2 | Tag 2.3 | Unit 3 |
I want to display this in a matrix with a group on the trend name, to have something like this:
Trend 1 | Graph SubReport |
Trend 2 | Graph Subreport |
In the subreport I have Tag1Name, Tag2Name, ... Tag10Name (it was designed like this and I cannot change it :/).
So I have to pass the tag names as a parameter to the subreport, and I can't find a way to do this, because when I do it "normally" I only get the first tag (Tag 1.1 for Trend 1 and Tag 2.1 for Trend 2).
Do you have any ideas?
I use SSRS 2008 (NOT R2 - so no lookup feature).
Thanks in advance.

Calculate Average Count Using MapReduce in HBase

I have a table called Log, in which every row represents a single activity, with a structure like this:
info:date, info:ip_address, info:action, info:info
An example of the data looks like this:
Column Family : info
date | ip_address | action | info
3 March 2014 | 191.2.2.2 | delete | blabla
4 March 2014 | 191.2.2.3 | view | blabla
5 March 2014 | 191.2.2.4 | create | blabla
3 March 2014 | 191.2.2.5 | delete | blabla
4 March 2014 | 191.2.2.5 | create | blabla
4 March 2014 | 191.2.2.6 | delete | blabla
What I want to do is calculate the average of the total activity over time. The first thing to do is compute the total activity per date:
time | total_activity
3 March 2014 | 2
4 March 2014 | 3
5 March 2014 | 1
Then I want to calculate the average of that total_activity, where the output would be represented like this:
(2 + 3 + 1) / 3 = 2
How can I do this in HBase using MapReduce? My thinking so far is that a single reducer is only capable of computing the total activity per date.
Thanks
I suggest you look into Scalding; it's the easiest and fastest way to write production Hadoop jobs, and it can tie in easily to HBase. Here is a project to help with HBase & Scalding: https://github.com/ParallelAI/SpyGlass/blob/master/src/main/scala/parallelai/spyglass/hbase/example/SimpleHBaseSourceExample.scala
Then have a look at the Scalding API to work out how to do what you want:
https://github.com/twitter/scalding/wiki/Fields-based-API-Reference
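
To make the two-stage computation concrete (count per date, then average the counts), here is the logic in plain Python; this is a sketch of what the MapReduce or Scalding job would compute, not HBase or Scalding API code:

from collections import Counter

# Sample rows as (date, ip_address, action, info), standing in for the HBase scan.
rows = [
    ("3 March 2014", "191.2.2.2", "delete", "blabla"),
    ("4 March 2014", "191.2.2.3", "view",   "blabla"),
    ("5 March 2014", "191.2.2.4", "create", "blabla"),
    ("3 March 2014", "191.2.2.5", "delete", "blabla"),
    ("4 March 2014", "191.2.2.5", "create", "blabla"),
    ("4 March 2014", "191.2.2.6", "delete", "blabla"),
]

# Stage 1 (map + reduce): total activity per date.
totals = Counter(date for date, _, _, _ in rows)
# -> {"3 March 2014": 2, "4 March 2014": 3, "5 March 2014": 1}

# Stage 2 (a second reduce over a single key): average of the per-date totals.
average = sum(totals.values()) / len(totals)
print(average)  # (2 + 3 + 1) / 3 = 2.0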
