Oracle BI EE filter on same dimension

I am new to OBIEE and would like to create an analysis where I can place, side by side, two columns with figures from the same dimension but with different data.
To better explain: let's say that in Dim1 we have Invoices and Payments as members. We also have other dimensions such as Date, Invoice Number and so on. This would be the current output:
Date     | Dim1    | Invoice Number | Amount
10/01/17 | Invoice | 1234           | -450
10/02/17 | Payment | 1234           | 450
So, instead of creating two reports, one for the invoices and another for the payments, what I want is a single report with the following output:
Invoice Date | Invoice | Payment Date | Payment | Invoice Number | Amount Inv | Amount Paid
10/01/17     | Invoice | 10/02/17     | Payment | 1234           | -450       | 450
Is this kind of output achievable inside OBIEE?
Thanks!

You are not trying to "filter on the same dimension"; you are trying to convert rows into columns.
While it is possible to cheat your way around this, it is definitely not something I would suggest. You are facing an analytical system, not Excel.
If this is an actual requirement, and not simply an "I wish to see it this way", then the best approach is to store the data properly.
The second-best approach is to model it in the RPD with different logical table sources.
The last option, and the one NOT to go for right away, is what you are asking for: doing it in the front end.
Apart from that: it's "analyses" that you are working with in OBI. If you have a "report", then you are in BI Publisher, which is a completely different tool.

Related

Perform an OR operation on the same field from multiple rows in SSRS

I am a beginner trying to achieve a simple operation in SSRS using Visual Studio 2019. I have a query which returns a table as follows:
ID | Name | Married
1 | Jack | Y
2 | Jack | N
The number of records might vary. On the report, I want to display the field 'Married' only once. Its value will be determined using an OR operation, i.e. if 'Married' is 'Y' for any one record, I want to display a 'Y' on the report.
Assuming the values are either Y or N, you should be able to use something like:
=MAX(Fields!Married.Value)
If your report is grouped by, for example, Name, then this will give you the MAX value within each group, which is probably what you want.
If this does not help, edit your question and show:
Your report design
Row Group panel plus details of grouping
A larger sample of data
Expected results from that sample data

Column slicer in Power BI report?

I have a requirement to build my report in local languages. I have three description columns in my table and need to show one at a time, based on user input.
Example:
CustName | Product | English_Description | Swedish_Description
My table has 5 million records, so I can't unpivot the description columns. If I did, my table would become 10 million records, which is not feasible.
Some sample data would be useful. However, you could do a disconnected (or parameter) table for the language selection:
Language
--------
English
Swedish
This table wouldn't be related to anything else, but you could then use measures for your product descriptions such as:
Multi-lingual Description =
IF (
    -- SELECTEDVALUE returns the chosen language, or BLANK() if none is selected
    SELECTEDVALUE ( 'Disconnected Table'[Language] ) = "Swedish",
    MAX ( [Swedish_Description] ),
    MAX ( [English_Description] )
)
With this logic, if no language is picked, the English description will be used. You could choose different behavior too (for example, use HASONEVALUE to ensure a single value is selected, and display an error message if not).
The MAX in the measure is there because a measure has to aggregate; however, as long as your table has one product per row, the MAX of the description will be the description you expect. Having more than one product per row doesn't make sense, so this should be an acceptable limitation. Again, to make your measure more robust, you could build in logic using HASONEVALUE to show BLANK() or an error message if there is more than one product (e.g. for subtotals).
More information:
HASONEVALUE: https://msdn.microsoft.com/en-us/library/gg492190.aspx
Disconnected Tables: http://www.daxpatterns.com/parameter-table/

Filter after grouping columns in Power BI

I want to accomplish something easy to understand (and maybe easy to do but I can't find a way...).
I have a table which represents the date when a client has bought something.
Let's have this example:
=============================================
Purchase_id | Purchase_date | Client_id
=============================================
1           | 2016/03/02    | 1
2           | 2016/03/02    | 2
3           | 2016/03/11    | 3
=============================================
I want to create a single-number card showing the average number of purchases per day.
So for this example, the result would be:
Result = 3 purchases / 2 different days = 1.5
I managed to do it by grouping my query by Purchase_date, with a new column that is the number of rows.
It gives me the following query result:
==================================
Purchase_date | Number of rows
==================================
2016/03/02    | 2
2016/03/11    | 1
==================================
Then I put the field Number of rows in a single-number card, selecting "Average".
I should mention that I am using DirectQuery with SQL Server.
But the problem is that I want to have a filter on Client_id, and once I do the grouping, I lose this column.
Is there a way to have this Client_id as a parameter?
Maybe grouping is not the right approach here at all.
Thank you in advance.
You can create a measure to calculate this average.
From Power BI's docs:
The calculated results of measures are always changing in response to your interaction with your reports, allowing for fast and dynamic ad-hoc data exploration
This means that filtering on Client_id will change the measure accordingly.
Here is an easy way of defining this measure (the distinct count of purchases divided by the distinct count of dates, matching the 3 / 2 = 1.5 example above):
Result = DISTINCTCOUNT(tableName[Purchase_id]) / DISTINCTCOUNT(tableName[Purchase_date])

Increase scan performance in Apache HBase

I am working on a use case and need help improving the scan performance.
Visits to our website are captured as logs, which we process with Apache Pig; the Pig output is inserted directly into an HBase table (test) using HBaseStorage. This is done every morning. The data consists of the following columns:
Customerid | Name | visitedurl | timestamp | location | companyname
I have only one column family (test_family).
As of now I generate a random number for each row and insert it as the row key for that table. For example, say I have the following data to be inserted into the table:
1725 | xxx | www.something.com | 127987834 | india | zzzz
1726 | yyy | www.some.com      | 128389478 | UK    | yyyy
If so, I will add 1 as the row key for the first row, 2 for the second one, and so on.
Note: the same ID will be repeated on different days, so I chose a random number as the row key.
When querying data from the table with scan 'test', {FILTER=>"SingleColumnValueFilter('test_family','Customerid',=,'binary:1002')"}, it takes more than 2 minutes to return the results.
Please suggest a way to bring this down to 1-2 seconds, since I am using it in real-time analytics.
Thanks
As per the query you have mentioned, I am assuming you need records based on Customer ID. If that is correct, then to improve performance you should use the Customer ID as the row key.
However, there could be multiple entries for a single Customer ID, so it is better to design the row key as CustomerID|unique number. This unique number could be the timestamp; it depends on your requirements.
To scan the data in this case, use a PrefixFilter on the row key. This will give you much better performance, because the scan can be restricted to the rows that start with the given prefix instead of filtering the whole table.
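For illustration, here is a minimal Java sketch of such a prefix scan, assuming row keys of the form CustomerID|timestamp and the table name and customer ID from your question. Scan.setRowPrefixFilter converts the prefix into start/stop row bounds, so only the matching slice of the table is read:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class CustomerPrefixScan {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("test"))) {
            // Row keys are assumed to be "<customerId>|<timestamp>",
            // so every row for customer 1002 starts with "1002|".
            Scan scan = new Scan();
            scan.setRowPrefixFilter(Bytes.toBytes("1002|"));
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result result : scanner) {
                    System.out.println(Bytes.toString(result.getRow()));
                }
            }
        }
    }
}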
Hope this helps.

Simplifying a Cascading pipeline used for aggregating sales data

I'm very new to Cascading and Hadoop both, so be gentle... :-D
I think I'm finding myself way over-engineering something. Basically my situation is that I have a pipe delimited file with 9 fields. I want to compute some aggregated statistics over those 9 fields using different groupings. The result should be 10 fields of which only 6 are either counts or sums. So far I'm up to 4 Unique pipes, 4 CountBy pipes, 1 SumBy, 1 GroupBy, 1 Every, 2 Each, 5 CoGroups and a couple others. I'm needing to add another small piece of functionality and the only way I can see to do it is to add in 2 Filters, 2 more CoGroups and 2 more Each pipes. This all seems like way overkill just to compute a few aggregated statistics. So I'm thinking I'm really misunderstanding something.
My input file looks like this:
storeID | invoiceID | groupID | customerID | transaction date | quantity | price | item type | customer type
Item type is either "I", "S" or "G" for inventory, service or group items; customers belong to groups. The rest should be self-explanatory.
The result I want is:
project ID | storeID | year | month | unique invoices | unique groups | unique customers | customer visits | inventory type sales | service type sales |
Project ID is a constant; customer visits is the number of days during the month on which the customer came in and bought something.
The setup that I'm using right now uses a TextDelimited Tap as my source to read the file and passes the records to an Each pipe which uses a DateParser to parse the transaction date and adds in year, month and day fields. So far so good. This is where it gets out of control.
I'm splitting the stream from there up into 5 separate streams to process each of the aggregated fields that I want. Then I'm joining all the results together in 5 CoGroup pipes, sending the result through Insert (to insert the project ID) and writing through a TextDelimited sink Tap.
Is there an easier way than splitting into 5 streams like that? The first four streams do almost the exact same thing just on different fields. For example, the first stream uses a Unique pipe to just get unique invoiceID's then uses a CountBy to count the number of records with the same storeID, year and month. That gives me the number of unique invoices created for each store by year and month. Then there is a stream that does the same thing with groupID and another that does it with customerID.
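For concreteness, here is a minimal Java sketch of that first branch (the field names are assumptions, not taken from my actual code):

import cascading.pipe.Pipe;
import cascading.pipe.assembly.CountBy;
import cascading.pipe.assembly.Unique;
import cascading.tuple.Fields;

public class InvoiceBranch {
    // Dedupe on (storeID, year, month, invoiceID), then count the
    // remaining rows per (storeID, year, month): the result is the
    // number of unique invoices per store, year and month.
    public static Pipe uniqueInvoiceCounts(Pipe parsed) {
        Pipe invoices = new Unique(new Pipe("invoices", parsed),
                new Fields("storeID", "year", "month", "invoiceID"));
        return new CountBy(invoices,
                new Fields("storeID", "year", "month"),
                new Fields("uniqueInvoices"));
    }
}

The groupID and customerID branches look the same with the middle field swapped out, which is exactly the duplication I would like to get rid of.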
Any ideas for simplifying this? There must be an easier way.
