SSRS Matrix - Sorting rows by a specific column

Can somebody explain to me how to properly sort the "[Arrivals] group" by [Count(SearchDate)] for a particular "Departure" in this matrix?
I tried this expression in the Row Group Sorting Properties, but it didn't work.
[Count(SearchDate)]
Then I tried to specify which column I would like to sort by, but had the same problem.
=Count(IIF(Fields!Departures.Value = "PRG", 1, 0))
After deeper inspection I found that I am able to sort the "[Arrivals] group" only by the overall [Count(SearchDate)], not by the count for a particular "Departure".
Following advice on the MSDN forum, I tried this expression:
=IIF(Fields!Departures.Value = "PRG", Count(Fields!SearchDate.Value), 0)
At first glance the result looks good, but only for the first couple of records.
When I tried a pivot query in SQL Server, everything looked fine:
SELECT *
FROM (
    SELECT Arrivals, Departures, SearchDate
    FROM Destination
    WHERE SearchDate > '2016-03-01T00:00:00' AND SearchDate < '2016-03-28T14:03:46'
) AS a
PIVOT (COUNT(SearchDate) FOR Departures IN (PRG, LON)) AS PivotTable
ORDER BY PRG DESC
I have spent a lot of time and tried a lot of solutions, but I really have no idea how to solve it.
Thank you very much for your help, Petr.

I faced the same problem before. Try using this expression:
=COUNT(
    IIF(
        Fields!Departures.Value = "PRG",
        Fields!SearchDate.Value,
        Nothing
    )
)
It can be a pain: depending on the number of rows in the dataset it increases the report processing time and can cause poor performance, but it works.
Let me know if this helps.
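An alternative that is often used for this kind of matrix sort (a sketch, assuming the same field names as above) is to open the [Arrivals] row group's Group Properties > Sorting and sort Z to A on:
=SUM(IIF(Fields!Departures.Value = "PRG", 1, 0))
Because a group sort expression is evaluated once per [Arrivals] group instance, this sums 1 only for the rows whose Departures value is "PRG", i.e. it sorts the arrivals by their PRG count.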

Related

How to customize partial sum in QlikView Pivot table

I'm a beginner with QlikView. I learned that by going to the Presentation tab and ticking the Show Partial Sums box and the Subtotal box, I can get the partial sum and the subtotal. But my task is to calculate the average, which means (subtotal / no. of records) / (partial sum / no. of records). It has been suggested that I could use set analysis for the calculation as well. I have somehow calculated the numerator part but am unable to get the denominator.
This is my numerator :
Let vNumerator = 'SUM({<ACTIVITY={num1,num2}>} BOX_COUNT)/Count (record1 & record2)';
Any help is much appreciated.
From this scrap data
load * inline [
A,B,C,D,E
21,a,b,z,x
22,b,b,z,x
23,a,a,x,z
24,b,b,x,z
25,b,a,x,x
21,a,b,z,x
22,b,b,z,x
23,a,a,x,z
24,b,b,x,z
25,b,a,x,x
21,a,b,z,x
22,b,b,z,x
23,a,a,x,z
24,b,b,x,z
25,b,a,x,x
21,a,b,z,x
22,b,b,z,x
23,a,a,x,z
24,b,b,x,z
25,b,a,x,x
];
Which of these tables is what you are looking for out of the data?
All of these were built by defining a third expression as a combination of the others, rather than by trying to force it into an automatic subtotal.
Or make a table that shows the answers you want and post a picture of it. Even in Excel is fine.
Also please let me know if I'm a million miles off the track here.
If you want to sum over ACTIVITY you need to use the following syntax:
sum( TOTAL<ACTIVITY> _your_expression_with_Set_analysis_ )
/
count( TOTAL<ACTIVITY> record1 & record2 )

Can I compare values in the same column in adjacent rows in PowerPivot?

I have a PowerPivot table for which I need to be able to determine how long an item was in an Error state. My data set looks something like this:
What I need to be able to do is to look at the values in the ID and State columns and see whether the previous row has ERROR in the State column and the same value in the ID column. If it does, I then need to calculate the difference between the Changed Date values in those two rows.
So, for example, when I get to row 4, I would see that the State value in row 3 (the previous row) is ERROR and that the ID value in the previous row is the same as in the current row, so I would then calculate the difference between the Changed Date values in rows 3 and 4 (I don't care about the values in any of the other columns for this particular requirement).
Is there a way to do this in PowerPivot? I've done a fair amount of Internet searching, and it looks like if it can be done, it would use the EARLIER or EARLIEST DAX functions, but I can't find anything that tells me how, or even if, this can be done.
Thanks.
Chris,
I have had similar requirements many times and after a really long time of trial-and-error, I finally understood how EARLIER works. It can be very powerful, but also very slow so always check for the performance of your calculations.
To answer your question, you will need to create 4 calculated columns:
1) Item Rank - used for ranking the issues with the same Item ID
=COUNTROWS(FILTER('ID', EARLIER([Item ID]) = [Item ID] && EARLIER([Date]) >= [Date]))
2) Follows Error - to easily find the issue that follows an EROR issue
=IF([State] = "EROR",[Item Rank]+1)
3) Time of Following Issue - a simple lookup so that you can calculate the difference
=IF([Follows Error]>0,
LOOKUPVALUE([Date], [User], [User], [Item Rank], [Follows Error]),
BLANK()
)
4) Time Diff - calculation of the time difference for the specific issue
=IF([State]="EROR",
DAY([Time of Following Issue])-DAY([Date]),
BLANK()
)
With those calculated columns, you can then easily create a PowerPivot table: drag State and Item ID onto the ROWS pane and then simply add Time Diff to Values. You will get an overview of the issues that contain the string "EROR" and the time it took to resolve them.
This is what it looks like in PowerPivot window:
And the resulting Pivot table:
You can download my Excel file here (2013).
As I mentioned, be careful with the performance as the calculated columns with nested EARLIER and IF conditions might be a bit too performance-demanding. If there is a smarter way, I would be very happy to see it, but for now this works for me just fine.
Also, keep in mind that all calculated columns could be nested into 1, but I kept them separated to make it easier to understand the formulas.
Hope this helps :-)
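If you can shape the data before it reaches PowerPivot, the same previous-row comparison is also straightforward at the source. A minimal T-SQL sketch, assuming a hypothetical ErrorLog table with ID, State, and ChangedDate columns (the names are invented for illustration):
-- For each row, look at the previous row for the same ID (ordered by ChangedDate);
-- when that previous row's State is 'ERROR', return the minutes spent in that state.
SELECT
    ID,
    State,
    ChangedDate,
    CASE
        WHEN LAG(State) OVER (PARTITION BY ID ORDER BY ChangedDate) = 'ERROR'
        THEN DATEDIFF(MINUTE,
                      LAG(ChangedDate) OVER (PARTITION BY ID ORDER BY ChangedDate),
                      ChangedDate)
    END AS MinutesInError
FROM ErrorLog;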

Not getting the correct totals using Cognos Report Studio. Need to get totals that show up in column

newparts_calc
if (([MonthToDateQuery].[G/L Account] = 4200 and [Query1].[G_L_Group] = 'NEW')) THEN ([Credit Amount]-[Debit Amount]) ELSE (0)
Data Item1
total([newparts_calc])
I need Data Item1 to return newparts_calc values only.
So, for example, in the 1st row Data Item1 should be 8,540.8, but it is 34,163.2.
What's wrong? How do I fix it?
REVISED QUESTION
I apologize for not making sense in the original question.
I have many calculations that I'm trying to gather and put on a crosstab. I want to see sales by month (rows) and part category (columns).
[Query2] is the one shown in the picture above.
It joins [MonthToDateQuery] and [Query1].
The join is on 'Invoice' and the cardinality is 1..1 = 1..1.
[MonthToDateQuery] is based on the package I'm working in (General Ledger). It supplies the G/L entries for each sales G/L account.
[Query1] is a SQL query I brought in to be able to break categories out even further, by G/L group.
For example, G/L account 4300 is Rebuilt. However, I needed to break it out even further to see Rebuilt-Production and Rebuilt-New. I can do that with the G/L group.
I saw in my G/L account ledger entries that they referenced the invoice number, so that's how I tied in my SQL.
So as you can see from the table below (which is the View Tabular Data from the query), I need a total. I have tried plugging newparts_calc into my crosstab and setting its aggregation to Total, but the numbers still don't seem right. I don't think I have something set up as it should be.
All the calculations I'm doing are based on single or multiple G/L Accounts and single or multiple G/L Groups.
Any Advice?
As you can see, the problem seems to be duplicate invoice numbers.
How can I fix this?
Couple things come to mind:
-Set the processing order to 2
-Since your calc is always a multiple and you are joining two queries, you may need to check your cardinality. Sometimes it helps to add derived queries to ensure you are working with the correct grain.
I'm obviously missing something, but if you want
I need Data Item1 to return newparts_calc values only.
just use newparts_calc, without total()? That would give you the proper value for row 1 :-)
If you need a running total for days (the sum of values for previous days), you should use the running_total function.
At a guess, one of your two queries is returning multiple rows for each invoice, which will cause this double counting. Look at the output of the two queries and see if that's happening. If so, then you just need to work out how to collapse that down to one row per invoice.
Per your new question: the underlying data has got to be causing the issue. It's clearly not 1:1 (note that even though that is your stated cardinality, Cognos does not enforce 1:1). The invoice number is not unique; G/L Group is at a lower level.
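If the duplication comes from the SQL side, one way to collapse [Query1] to one row per invoice before Cognos joins the two queries is a simple GROUP BY; a minimal sketch, with PartCategorySource standing in for whatever table the SQL query reads (that name is invented for illustration):
-- Collapse the category source to one row per invoice so the 1..1 join
-- to the G/L query no longer multiplies the amounts.
-- MAX() is just one way to pick a single G_L_Group per invoice; use whatever
-- rule actually fits the data.
SELECT Invoice, MAX(G_L_Group) AS G_L_Group
FROM PartCategorySource
GROUP BY Invoice;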

Subselecting with MDX

Greetings, Stack Overflow community.
I've recently started building an OLAP cube in SSAS 2008 and have gotten stuck. I would be grateful if someone could at least point me in the right direction.
Situation: Two fact tables, same cube. FactCalls holds information about calls made by subscribers, FactTopups holds topup data. Both tables have numerous common dimensions one of them being the Subscriber dimension.
FactCalls:  SubscriberKey, CallDuration, CallCost, ...
FactTopups: SubscriberKey, DateKey, Topup Value, ...
What I am trying to achieve is to be able to build FactCalls reports based on distinct subscribers that have topped up their accounts within the last 7 days.
What I am basically looking for is an MDX equivalent of SQL's:
select *
from FactCalls
where SubscriberKey in
( select distinct SubscriberKey from FactTopups where ... );
I've tried creating a degenerate dimension for both tables containing SubscriberKey and doing:
Exist(
[Calls Degenerate].[Subscriber Key].Children,
[Topups Degenerate].[Subscriber Key].Children
)
Without success.
Kind regards,
Vince
You would probably find something like the following would perform better. The filter approach will be forced to iterate through each subscriber, while the NonEmpty() function can take advantage of optimizations in the storage engine.
select non empty{
[Measures].[Count],
[Measures].[Cost],
[Measures].[Topup Value]
} on columns,
{
NonEmpty( [Subscriber].[Subscriber Key].Children,
( [Measures].[Topups Count],
[Topup Date].[Calendar].[Month Name].&[2010]&[3] ) )
} on rows
from [Calls] ;
You know how sometimes it's the simplest and most obvious solutions that somehow elude you? Well, this is apparently one of them. They say "MDX is not SQL" and I now know what they mean. I've been working at this from an entirely SQL point of view, completely overlooking the obvious use of the filter command.
with set [OnlyThoseWithTopupsInMarch2010] as
filter(
[Subscriber].[Subscriber Key].Children,
( [Measures].[Topups Count],
[Topup Date].[Calendar].[Month Name].&[2010]&[3] ) > 0
)
select non empty{
[Measures].[Count],
[Measures].[Cost],
[Measures].[Topup Value]
} on columns,
non empty{ [OnlyThoseWithTopupsInMarch2010] } on rows
from [Calls] ;
Embarrassingly simple.

How to Sort Data Table like FogBugz Cases Table

Has anyone ever seen how FogBugz sorts its tables? When you click to sort a column, they actually break the table up into many small tables, one for each category of info.
Wondering if anyone knows how they do this?
Looking to implement this feature.
If you take a look at the cases page and sort it, you can see what I mean.
Any help would be AWESOME!
Still haven't figured this one out.
EDIT: #Peter, I don't want to post back and recreate a table every time a header title is clicked for a sort. If I click on a header to sort, it should separate the "one" table into many by way of JavaScript, and I want to know if there is a generic solution for this, because it's just a MUCH better way of viewing a sorted table.
EDIT: I do need a JavaScript sorter, but if you look at the actual implementation in FogBugz, it produces a different result...
Yup, Rich got it (I coded this feature into FogBugz a long while back).
If you have to do this on the client, you have no choice but to sort the data, iterate through it generating table row after table row, and every time you hit a new sort value you create a new thead with the appropriate information.
To be honest it would be a pretty cool modification to this jQuery plugin: http://tablesorter.com/docs/ and you'd be able to leverage a lot of their work. If you're going to put in the time and create a general solution, might as well make it accessible to the community.
Without knowing specifically how Fog Creek accomplishes this, the way that I would do it is to output a table header, then iterate through the list, outputting a footer and a new header each time the group value changed.
Not sure what answer you expect. The SQL query for this would simply order by the selected column, and the UI would start a new table each time this value changes.
Here is a screenshot of FogBugz with this sorting, after clicking on the Priority column.
http://img297.imageshack.us/img297/6974/76755363ee3.png
Of course, starting a new table doesn't make sense for every column (title, case #).
Edit: If I understand correctly, you're looking for a way to do this in the browser without loading a new page. If that's the case, I would suggest at least some server-side support, which would return your data in the correct order and properly structured for sub-tables (in XML/JSON/whatever you use). Your JavaScript would use this data to recreate the tables. I am sure others with more web-UI experience will provide better answers.
I've used the Sortable Tables script from Kryogenix with some good results.
I don't know if it is relevant, but we store the results of a query in a temporary table in SQL, and then reference current-row-less-one to see if the Category has changed, and indicate this in the result set.
In some instances we "indicate" this with a column containing
<tr><td colspan=999>Category Heading</td></tr>
so that the web page can just "inject" that into the table it is building.
SELECT Col1, Col2, ...,
       [CATEGORY] = CASE WHEN T1.CategoryCol <> COALESCE(T2.CategoryCol, '')
                         THEN '<tr><td colspan=999>' + T1.CategoryCol + '</td></tr>'
                         ELSE ''
                    END
FROM #MyTempTable AS T1
     LEFT OUTER JOIN #MyTempTable AS T2
         ON T2.ID = T1.ID - 1
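On SQL Server 2012 and later, the same previous-row lookup could also be written with LAG() instead of the self-join; a rough, untested sketch using the same placeholder names as the query above:
-- LAG(CategoryCol) OVER (ORDER BY ID) returns the previous row's category,
-- so the heading markup is emitted whenever the category changes.
SELECT Col1, Col2,
       [CATEGORY] = CASE WHEN CategoryCol <> COALESCE(LAG(CategoryCol) OVER (ORDER BY ID), '')
                         THEN '<tr><td colspan=999>' + CategoryCol + '</td></tr>'
                         ELSE ''
                    END
FROM #MyTempTable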
