Multiple group by for dataset query - BIRT

I am currently working on report generation using the BIRT tool. Consider the table below:
TaskId   Status        SLAMiss
--------------------------------
1        Completed     Yes
2        In Progress   No
3        Completed     No
I need to create a table that shows the count of Completed and In Progress tasks, along with the count of SLA-missed tasks, like below:
Tasks Completed   Tasks InProgress   SLA Adherence   SLA Miss
---------------------------------------------------------------
2                 1                  2               1
Now I need to create the dataset using a SQL query. For the first two columns I have to group by 'Status', and for the last two columns I have to group by 'SLAMiss'. So,
1. Is it possible to achieve this using a single dataset?
2. If yes, what would the SQL query for the dataset be?
3. If not, I can create four datasets, one for each column, and apply them to the table. Would that be a good idea?
Thanks in advance.

The easiest way to do this is to use computed columns. You would use some JavaScript like the following in a new column named "CompletedCount", typed as Integer. Then, when you build your report, you sum the values with an "Aggregation" item from the palette.
if (row["Status"] == "Completed" )
{
"1"
}else{
"0"}

Related

Compare Dynamic Lists Power BI

I have a table ("Issues") which I am creating in PowerBI from a JIRA data connector, so this changes each time I refresh it. I have three columns I am using
Form Name
Effort
Status
I created a second table and have summarized the Form Names and obtained the Total Effort:
SUMMARIZE(Issues,Issues[Form Name],"Total Effort",SUM(Issues[Effort (Days)]))
But I also want to add in a column for
Total Effort for each form name where the Status field is "Done"
My issue is that I don't know how to compare both tables / form names since these might change each time I refresh the table.
I need to write a conditional, something like
For each form name, print the total effort, and also the total effort where the status is done.
I have tried SUMX, CALCULATE, SUM, FILTER but cannot get these to work - can someone help, please?
If all you need is to add a column to your summarized table that sums "Effort" only when the Status is set to 'Done' -- then this is the right place to use CALCULATE.
Table =
SUMMARIZE(
    Issues,
    Issues[Form Name],
    "Total Effort", SUM(Issues[Effort]),
    "Total Effort (Done)", CALCULATE(SUM(Issues[Effort]), Issues[Status] = "Done")
)
Here is a quick capture of some of the mock data I used to test this. The matrix is just the mock data with [Form Name] on the rows and [Status] on the columns. The last table shows the 'summarized' data calculated by the DAX above. You can compare this to the values in the matrix and see that they tie out.

AWS QuickSight aggregate data

I have a dataset like this:
Order
id   expected date
1    11-04-2022
2    10-04-2022
2    14-04-2022

Order Event
Id   Order Id   Order status   Date
1    1          created        01-04-2022
2    1          completed      12-04-2022
3    2          created        01-04-2022
4    2          in progress    07-04-2022
5    2          completed      10-04-2022
6    3          created        10-04-2022
and I need to create a graph that shows, for all orders with completed status, the difference between the expected date and the actual order date.
How can I achieve that?
First, you have to join both of the tables into one because QuickSight can only work with multiple data files if they are merged. You can apply an inner join on the order ID.
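For reference, the inner join you set up in QuickSight's data-prep screen is equivalent to something like the SQL below; the table and column names are assumptions based on the two tables shown above:

SELECT
    o.id            AS order_id,
    o.expected_date,
    e.order_status,
    e.date          AS event_date
FROM orders o
INNER JOIN order_events e
    ON e.order_id = o.id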
Then, you can calculate the difference between the expected date and the order date, and add an if-statement to filter out the orders that are not completed yet. You do this by adding a calculated field to your dataset with the following code:
ifelse(
{Order_status}="completed",
dateDiff({expected_date},{Date},"DD"),
0
)
You can also modify this field. Here I used "DD" to get the date difference in days; you can also select hours, etc. Also, if the order is not completed, I chose 0 as the default value. To find out more about the commands used in this calculated field, see these AWS Docs links:
If-Else Command
Date-Diff Command
Now that the calculated field is created, you can plot it together with the order ID.
BR mylosf

SSIS: flagging ALL the Data Quality issues in each row with Conditional Split

I have been tasked with performing Data Quality checks on data from a SQL table, whereby I export problem rows into a separate SQL table.
So far I've used a main Conditional Split that feeds into derived columns, one per Conditional Split condition. It works by checking for errors and, depending on which condition fails first, outputting the data with a DQ_TYPE column populated with a certain code (e.g. DQ_001 if it had an error with the Hours condition, DQ_002 if it hit an error with the Consultant Code condition, and so on).
The problem is that I need to be able to see all of the errors within each row. For example at the moment, if Patient 101 has a row in the SQL table that has errors in all 5 columns, it'll fail the first condition in Conditional Split and 1 row will get output into my results with the code DQ_001. I would instead need it to be output 5 times, once for each error that it encountered, i.e. 1 row with DQ_001, a 2nd row with DQ_002, a 3rd row with DQ_003 and so on.
The goal is that I will use the DataQualityErrors SQL table to create an SSRS report that groups on DQ_TYPE and we can therefore Pie Chart to show the distribution of which error DQ_00X codes are most prevalent.
Is this possible using straightforward toolbox functions? Or is this only available with complex Script tasks, etc.?
Assuming I understand your problem, I would structure this as a series of columns added to the data flow via Derived Column transformation.
Assume I have inbound like this
SELECT col1, col2, col3, col4;
My business rules
col1 cannot contain nulls DQ_001
col2 must be greater than 5 DQ_002
col3 must be less than 3 DQ_003
col4 has no rules
From my source, I would add a Derived Column Component
New Column named Pass_DQ_001 as a boolean with an expression !isnull([col1])
New Column named Pass_DQ_002 as a boolean with an expression [col2] > 5
New Column named Pass_DQ_003 as a boolean with an expression [col3] < 3
etc
At this point, your data row could look something like
NULL, 4, 4, No Rules, False, False, False
ABC, 7, 2, Still No Rules, True, True, True
...
If you have more than 3 to 4 data quality conditions, I'd add a final Derived Column component into the mix
New column IsValid as yet another boolean with an expression like Pass_DQ_001 && Pass_DQ_002 && Pass_DQ_003 etc
The penalty for adding additional columns - especially bit columns - is trivial compared to the cost of debugging complex expressions in a data flow, so don't try to cram everything into one expression.
At this point, you can put a data viewer in there and verify that yes, all my logic is correct. If it's wrong, you can zip in and figure out why DQ_036 isn't flagging correctly.
Otherwise, you're ready to connect the data flow to a Conditional Split. Use the final IsValid column: rows that match it go out the Output 1 path, and the default/unmatched rows head to your "needs attention/failed validation" destination.
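If you also need the one-row-per-failed-check shape described in the question (so a row failing five rules lands in DataQualityErrors five times), one option outside the data flow is a set-based query with one branch per rule. This is only a sketch; SourceTable, RowKey and the DataQualityErrors columns are assumed names, and the predicates are the negations of the three sample rules above:

-- One branch per data quality rule; a row failing several rules appears once per rule.
INSERT INTO DataQualityErrors (RowKey, DQ_TYPE)
SELECT RowKey, 'DQ_001' FROM SourceTable WHERE col1 IS NULL
UNION ALL
SELECT RowKey, 'DQ_002' FROM SourceTable WHERE col2 <= 5 OR col2 IS NULL
UNION ALL
SELECT RowKey, 'DQ_003' FROM SourceTable WHERE col3 >= 3 OR col3 IS NULL;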

I have two tablix in SSRS report. I am using the same dataset for first tablix which shows details second shows

There are two tablix in an SSRS report, and I am using the same dataset for both. The first tablix shows JOB details and $amnt by Date (5 months' worth of data), and the second tablix shows records grouped by Job and the total of the $amnt values from tablix 1.
Tablix 2 shows the correct sum, but some records have duplicate rows - whenever Tablix 1 has more than one $amnt.
Example Tablix 1: ProjectABC - 1/1/2019 = $2 ; 1/5/2019 = $5
ProjectHTG - 1/1/2019 = $3
Example Tablix 2: ProjectABC - $7
ProjectABC - $7
ProjectHTG - $3
How do I modify my expression "=sum(Fields!units.Value,"project2")"
to print "ProjectABC - $7" as one line?
Assuming that your field name is JOB for the project, you would add the field, along with the dash, to your current expression.
You should NOT group by amount if you want to SUM the amount. You are getting a separate line for each different amount for the same JOB. Only JOBs with the same amounts will be SUMmed as one.
=Fields!JOB.Value & " - " & sum(Fields!units.Value)
A few other issues:
Why are you using the dataset name in your SUM? It sounds like you have a simple table that groups by JOB and Amount. The table is associated with the dataset that you want to use. You should only use the dataset name in a table when you're referring to a different dataset than the one the table is using.
Why do you need two datasets if they have the same info? The second table can do the grouping and summing (and already does) from the same dataset as the first table.
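
For comparison, the row group on JOB alone is doing the same thing as this SQL sketch; "project2" is taken from the expression's dataset reference, and the column names are assumptions:

SELECT JOB, SUM(units) AS total_units
FROM project2
GROUP BY JOB

Grouping by JOB only (not by amount) collapses each project to a single row, so ProjectABC appears once with $7 instead of once per amount.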

OBIEE Merge two queries (join)

I need help.
I am new to OBIEE (I recently moved from Business Objects and am re-creating all reports in OBIEE).
Here is an example I need help with. I have created an analysis where I am listing all orders with their target delivery dates and the number of products in each order.
Order Id......Target Delivery Date...No of products
Abc....1/1/2016.....5
I want to add a column next to No of products called "No of prods delivered on time". I want to compare the delivery date of each product within an order with the target delivery date and
give the count of products delivered within the target date. So the output should be
Abc....1/1/2016....5.....3
where 3 is the number of products delivered on time.
I could do it in BO by running two queries and merging them; however, in OBIEE I am not able to add a second query to my analysis. I did try, at product level, using case when target date >= delivery date then 1 else 0, wrapped in a sum function to aggregate, but it didn't work.
I appreciate your help with this. Searching for this topic only gives me results about running queries from multiple subject areas :(
You also have unions in OBIEE: you union the results of two queries that return the same structure. So you have query A with Order ID, Target Date, No of Products, and a dummy column set to 0 with default aggregation Sum, and a second query with Order ID, Target Date, a dummy column summing 0, and the number of products delivered on time.
You do all of this in the Criteria tab of the analysis. The order in which you put your columns is important, because that is what OBIEE uses to do the union.
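In SQL terms, the shape of that union is roughly the sketch below. The table and column names are assumptions; in OBIEE you build the two criteria and let it generate the actual query rather than writing this yourself:

SELECT order_id, target_delivery_date,
       COUNT(*) AS no_of_products,
       0        AS delivered_on_time
FROM order_lines
GROUP BY order_id, target_delivery_date
UNION ALL
SELECT order_id, target_delivery_date,
       0 AS no_of_products,
       SUM(CASE WHEN delivery_date <= target_delivery_date THEN 1 ELSE 0 END) AS delivered_on_time
FROM order_lines
GROUP BY order_id, target_delivery_date

Summing over the union then yields one row per order with both counts, which is what the default Sum aggregation on the dummy columns achieves in the analysis.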
Regards
