I need to have a total for each grouping within a row group in SSRS. Currently, a total is added right at the end of the row group.
So, if I have the following data:
TeamName  BusinessSegment  PaymentPeriod  BusinessArea      ProductType       PolicyCount201501  Premium201501
--------  ---------------  -------------  ----------------  ----------------  -----------------  -------------
Office    Non-property     Monthly        Commercial Lines  Sectional Title   0.00               0.00
Office1   Non-property     Annual         Commercial Lines  C&I Generic (Web  1.00               24025.00
Office1   Non-property     Annual         Commercial Lines  Property Protect  1.00               24025.00
Office1   Non-property     Monthly        Commercial Lines  BizzInsure        1.00               24025.00
Office1   Non-property     Monthly        Commercial Lines  Sectional Title   1.00               24025.00
Office2   Non-property     Annual         Commercial Lines  Property Protect  1.00               24025.00
Office2   Non-property     Annual         Commercial Lines  Sectional Title   1.00               24025.00
Office2   Non-property     Annual         Commercial Lines  Sectional Title   1.00               24025.00
Office2   Non-property     Monthly        Commercial Lines  M&F Commercial B  1.00               24025.00
Office2   Non-property     Monthly        Commercial Lines  Sectional Title   1.00               24025.00
I want the output to be like this:
Team Name Business Segment Payment Period Business Area Product Type Policy Count 201501 Premium 201501
Office Non-property Monthly Commercial Lines Sectional Title 0.00 0.00
Total 0.00 0.00
Monthly Total 0.00 0.00
Non-property Total 0.00 0.00
Office Total 0.00 0.00
Office1 Non-property Annual Commercial Lines Something1 1.00 1.00
Office1 Non-property Annual Commercial Lines Something2 1.00 1.00
Total 2.00 2.00
Annual Total 2.00 2.00
Office1 Non-property Monthly Commercial Lines Something1 0.00 1.00
Office1 Non-property Monthly Commercial Lines Something2 1.00 1.00
Total 1.00 2.00
Monthly Total 1.00 2.00
Non-property Total 3.00 4.00
Office1 Total 3.00 4.00
Office2 Non-property Annual Commercial Lines Something1 0.00 1.00
Office2 Non-property Annual Commercial Lines Something2 1.00 1.00
Total 1.00 2.00
Annual Total 1.00 2.00
Office2 Non-property Monthly Commercial Lines Something1 2.00 1.00
Office2 Non-property Monthly Commercial Lines Something2 1.00 1.00
Total 3.00 2.00
Monthly Total 3.00 2.00
Non-property Total 4.00 4.00
Office2 Total 4.00 4.00
Grand Total 7.00 8.00
Note that PaymentPeriod is grouped into Monthly and Annual, and there should be a total after the Monthly group and a total after the Annual group.
Currently, the following is rendered (note the single Annual Total at the end of each PaymentPeriod grouping):
Team Name Business Segment Payment Period Business Area Product Type Policy Count 201501 Premium 201501
Office Non-property Monthly Commercial Lines Sectional Title 0.00 0.00
Total 0.00 0.00
Monthly Total 0.00 0.00
Non-property Total 0.00 0.00
Office Total 0.00 0.00
Office1 Non-property Annual Commercial Lines Something1 1.00 1.00
Office1 Non-property Annual Commercial Lines Something2 1.00 1.00
Total 2.00 2.00
Office1 Non-property Monthly Commercial Lines Something1 0.00 1.00
Office1 Non-property Monthly Commercial Lines Something2 1.00 1.00
Total 1.00 2.00
Annual Total 3.00 4.00
Non-property Total 3.00 4.00
Office1 Total 3.00 4.00
Office2 Non-property Annual Commercial Lines Something1 0.00 1.00
Office2 Non-property Annual Commercial Lines Something2 1.00 1.00
Total 1.00 2.00
Office2 Non-property Monthly Commercial Lines Something1 2.00 1.00
Office2 Non-property Monthly Commercial Lines Something2 1.00 1.00
Total 3.00 2.00
Annual Total 4.00 4.00
Non-property Total 4.00 4.00
Office2 Total 4.00 4.00
Grand Total 7.00 8.00
How would I achieve this please?
Add another child group under your Payment Period group and add the total to that second group. Remove the column from the display for the first group (do not delete the first group itself).
Output: (screenshot omitted)
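As a side note: if you would rather produce the subtotals in the dataset instead of in the report, the same hierarchy of totals can be generated with GROUPING SETS. This is only an illustration of the structure, not the SSRS fix above; the table name PolicyFacts is assumed.

-- Illustrative only: every subtotal level from the expected output,
-- produced in SQL with GROUPING SETS (table name is assumed).
SELECT TeamName, BusinessSegment, PaymentPeriod, BusinessArea, ProductType,
       SUM(PolicyCount201501) AS PolicyCount201501,
       SUM(Premium201501)     AS Premium201501
FROM   PolicyFacts
GROUP  BY GROUPING SETS (
         (TeamName, BusinessSegment, PaymentPeriod, BusinessArea, ProductType),  -- detail rows
         (TeamName, BusinessSegment, PaymentPeriod, BusinessArea),               -- "Total"
         (TeamName, BusinessSegment, PaymentPeriod),                             -- "Monthly/Annual Total"
         (TeamName, BusinessSegment),                                            -- "Non-property Total"
         (TeamName),                                                             -- "Office Total"
         ()                                                                      -- "Grand Total"
       );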
What is the F1-score of the model in the following? I used the scikit-learn package.
print(classification_report(y_true, y_pred, target_names=target_names))
              precision    recall  f1-score   support

     class 0       0.50      1.00      0.67         1
     class 1       0.00      0.00      0.00         1
     class 2       1.00      0.67      0.80         3

    accuracy                           0.60         5
   macro avg       0.50      0.56      0.49         5
weighted avg       0.70      0.60      0.61         5
This article explains it pretty well
Basically it's
F1 = 2 * precision * recall / (precision + recall)
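For reference, here is a small sketch that reproduces the per-class numbers in the report above. The labels y_true and y_pred are assumptions (they simply match the support and scores shown); sklearn's f1_score is used to confirm the formula:

from sklearn.metrics import f1_score

# Hypothetical labels that reproduce the report above:
# one sample of class 0, one of class 1, three of class 2.
y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]

# Per-class F1, the f1-score column of the report
print(f1_score(y_true, y_pred, average=None))         # [0.667, 0.0, 0.8]

# The same thing by hand for class 0 (precision 0.50, recall 1.00)
precision, recall = 0.50, 1.00
print(2 * precision * recall / (precision + recall))  # 0.6666...

# "macro avg" is the unweighted mean of the per-class scores,
# "weighted avg" weights them by support
print(f1_score(y_true, y_pred, average='macro'))      # ~0.49
print(f1_score(y_true, y_pred, average='weighted'))   # ~0.61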
After successfully adding OpenMP to my code, I am trying to check how much the parallelisation has improved performance, but gprof gives me a totally different flat profile. Below is my main program, which calls all the subroutines.
program main
use my_module
call inputf !to read inputs from a file
! call echo !to check if the inputs are read in correctly, but is muted
call allocv !to allocate dimension to all array variable
call bathyf !to read in the computational domain
call inicon !to setup initial conditions
call comput !computation from iteration 1 to n
call deallv !to deallocate all array variables
end program main
The following are the CPU_TIME and OMP_GET_WTIME() timings for the serial and parallel codes. The OpenMP parallel region is within subroutine comput.
!serial code
CPU time elapsed = 260.5080 seconds.
!parallel code
CPU time elapsed = 153.3600 seconds.
OMP time elapsed = 49.3521 seconds.
And the following are the flat profiles for the serial and parallel codes.
!Serial code
Flat profile:
Each sample counts as 0.01 seconds.
% cumulative self self total
time seconds seconds calls s/call s/call name
96.26 227.63 227.63 1 227.63 236.45 comput_
3.60 236.13 8.50 2001 0.00 0.00 update_
0.08 236.32 0.19 2000 0.00 0.00 openbc_
0.05 236.45 0.13 41 0.00 0.00 output_
0.01 236.47 0.02 1 0.02 0.02 bathyf_
0.01 236.49 0.02 1 0.02 0.03 inicon_
0.00 236.50 0.01 1 0.01 0.01 opwmax_
0.00 236.50 0.00 1001 0.00 0.00 timser_
0.00 236.50 0.00 2 0.00 0.00 timestamp_
0.00 236.50 0.00 1 0.00 0.00 allocv_
0.00 236.50 0.00 1 0.00 0.00 deallv_
0.00 236.50 0.00 1 0.00 0.00 inputf_
!Parallel code
Flat profile:
Each sample counts as 0.01 seconds.
% cumulative self self total
time seconds seconds calls s/call s/call name
95.52 84.90 84.90 openbc_
1.68 86.39 1.49 2001 0.74 0.74 update_
0.10 86.48 0.09 41 2.20 2.20 output_
0.00 86.48 0.00 1001 0.00 0.00 timser_
0.00 86.48 0.00 2 0.00 0.00 timestamp_
0.00 86.48 0.00 1 0.00 0.00 allocv_
0.00 86.48 0.00 1 0.00 0.00 bathyf_
0.00 86.48 0.00 1 0.00 0.00 deallv_
0.00 86.48 0.00 1 0.00 2.20 inicon_
0.00 86.48 0.00 1 0.00 0.00 inputf_
0.00 86.48 0.00 1 0.00 0.00 comput_
0.00 86.48 0.00 1 0.00 0.00 opwmax_
Subroutines update, openbc, output, and timser are called within subroutine comput. As you can see, subroutine comput is supposed to account for most of the runtime, but the flat profile of the parallel code shows otherwise. Please let me know if you need any other information.
gprof is poorly suited for analysis of parallel programs as it doesn't understand the intricacies of OpenMP. You should instead use something like a combination of Score-P and Cube. The former is an instrumentation framework while the latter is a visualisation tool for hierarchical performance data. Both are open-source projects. On the commercial front, Intel VTune Amplifier could be used.
This article says:
One problem with gprof under certain kernels (such as Linux) is that it doesn’t behave correctly with multithreaded applications. It actually only profiles the main thread, which is quite useless.
The article also provides a work-around, but since you don't create your threads manually and instead use OpenMP (which creates them transparently), you will have to adapt it to make it work for you.
You could also choose a profiler that is able to work with parallel programs instead.
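A low-tech cross-check that works regardless of the profiler is to time the individual phases yourself with OMP_GET_WTIME. A minimal sketch, assuming the same module and subroutine names as in the program above:

! Minimal sketch: cross-check the profiler by timing each phase directly
! with omp_get_wtime. Reuses the module and subroutine names shown above.
program main
   use omp_lib
   use my_module
   double precision :: t0, t1

   call inputf
   call allocv
   call bathyf
   call inicon

   t0 = omp_get_wtime()
   call comput                      ! the only routine with OpenMP regions
   t1 = omp_get_wtime()
   print '(a, f10.3, a)', 'comput wall time: ', t1 - t0, ' s'

   call deallv
end program main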
If I run
ruby-prof -p graph -s self aggregate.rb > graph.txt
the first few lines of my graph.txt will look like:
Total Time: 40.092432
%total %self total self wait child calls Name
--------------------------------------------------------------------------------
5.16 5.16 0.00 0.00 98304/98304 Object#totalDurationFromFile
100.00% 100.00% 5.16 5.16 0.00 0.00 98304 IO#read
--------------------------------------------------------------------------------
4.91 4.91 0.00 0.00 98304/98304 <Class::IO>#new
95.17% 95.17% 4.91 4.91 0.00 0.00 98304 File#initialize
--------------------------------------------------------------------------------
0.37 0.19 0.00 0.17 32768/32769 Hash#each
28.89 4.67 0.00 24.22 1/32769 Object#readFiles
566.81% 94.24% 29.26 4.86 0.00 24.39 32769 Array#collect
14.71 1.98 0.00 12.73 98304/98304 Object#totalDurationFromFile
9.11 0.64 0.00 8.48 98304/131072 Class#new
0.39 0.39 0.00 0.00 98304/196609 <Class::File>#basename
0.00 0.17 0.00 0.00 98304/1202331 Object#main
--------------------------------------------------------------------------------
3.76 3.35 0.00 0.42 524288/524288 Module#class_eval
72.94% 64.85% 3.76 3.35 0.00 0.42 524288 Module#define_method
0.42 0.42 0.00 0.00 524288/524288 BasicObject#singleton_method_added
I don't think this is specific to my script aggregate.rb, so I am leaving the source code out for the sake of brevity.
The question is: why are there percentages higher than 100% in the %total column? Is sorting by self not allowed with the graph printer? Is this a bug, or did I overlook something? Help greatly appreciated.
Thanks!
Have you checked whether this change on GitHub resolves the issue? Apparently the released gem version is out of date and/or does not include that change (it would also increase the number of decimal places to three).
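If your installed gem does not include it, one way to try the unreleased fix is to point Bundler at the repository. A hedged sketch (repository URL assumed, branch left at the default):

# Gemfile: use ruby-prof straight from GitHub to pick up unreleased fixes.
# The repository URL is assumed; adjust if the project has moved.
source 'https://rubygems.org'

gem 'ruby-prof', git: 'https://github.com/ruby-prof/ruby-prof.git'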
I have three value columns in my result set:
o.id  o.value_one  o.value_two  o.value_three
----  -----------  -----------  -------------
1     1.00         0.00         0.00
2     1.00         1.00         0.00
3     1.00         1.00         1.00
4     0.00         1.00         1.00
5     0.00         0.00         1.00
6     0.00         0.00         0.00
I want to compare all three value columns and return the value of whichever column is not 0.00.
So I would return:
o.id  o.new_value
----  -----------
1     1.00
2     1.00
3     1.00
4     1.00
5     1.00
Thanks for any help!
Chris
You could use a combination of NULLIF and COALESCE: NULLIF turns each 0.00 into NULL, and COALESCE then returns the first non-NULL value across the three columns:
SELECT o.id, COALESCE(NULLIF(o.value_one, 0.0),
                      NULLIF(o.value_two, 0.0),
                      NULLIF(o.value_three, 0.0)) AS new_value
FROM Foo o
select id,
       decode(value_one, 0,
              decode(value_two, 0,
                     decode(value_three, 0, 0, value_three),
                     value_two),
              value_one) new_value
from ...
This works without evaluating all of the expressions.
You might also consider an expression involving GREATEST, like:
GREATEST(o.value_one, o.value_two, o.value_three)
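That works for the sample data because the values are never negative, so the single non-zero column is also the largest (and row 6, where everything is 0.00, comes back as 0.00 rather than NULL). A hedged sketch of the full query, using the same assumed table name as above:

-- Illustrative only: relies on the values never being negative.
SELECT o.id,
       GREATEST(o.value_one, o.value_two, o.value_three) AS new_value
FROM   Foo o;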
I'm trying to profile some Ruby code I wrote using the ruby-prof gem, and I see that basic operations like i += 1 (listed as Fixnum#+ in the table below) take over 24 seconds in total (in this particular test, the operation is performed 2,199,978 times). Is this normal?
Thread 582936
%Total %Self Total Self Wait Child Calls Name
203.93 81.72 0.00 122.21 100001/100001 InputFile#parse
46.96% 18.82% 203.93 81.72 0.00 122.21 100001 InputFile#split_on_semicolon
24.59 24.59 0.00 0.00 2199978/3200094 Fixnum#+
16.02 16.02 0.00 0.00 100001/399998 String#split
14.72 14.72 0.00 0.00 999990/999991 String#[]
13.12 13.12 0.00 0.00 1199988/1199990 Fixnum#<
10.97 10.97 0.00 0.00 999990/2239978 String#empty?
10.49 10.49 0.00 0.00 1199988/1199988 String#<<
9.75 9.75 0.00 0.00 1199988/1200074 Array#[]
7.77 7.77 0.00 0.00 999990/999990 String#eql?
6.76 6.76 0.00 0.00 599994/599994 Fixnum#-
4.62 4.62 0.00 0.00 599994/599994 Array#delete_at
1.25 1.25 0.00 0.00 100001/1339989 Kernel#nil?
1.14 1.14 0.00 0.00 100001/300003 Array#size
1.01 1.01 0.00 0.00 100001/300002 Fixnum#>
Your results don't say += takes 25 seconds. They say that 2,199,978 calls to + took 24.59 seconds, which comes to about 89.5 calls per millisecond. That's a bit slow, but probably only because the code is being profiled. I don't see anything unusual in that.
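If you want a feel for how much of that time is profiler overhead, a rough sketch is to time the same loop with and without ruby-prof attached (using the RubyProf.profile block form):

# Rough sketch: compare a tight increment loop with and without
# ruby-prof attached, to gauge the profiler's overhead.
require 'benchmark'
require 'ruby-prof'

N = 2_199_978   # same number of Fixnum#+ calls as in the report above

plain = Benchmark.realtime do
  i = 0
  N.times { i += 1 }
end

profiled = Benchmark.realtime do
  RubyProf.profile do
    i = 0
    N.times { i += 1 }
  end
end

puts format('plain:    %.3f s', plain)
puts format('profiled: %.3f s', profiled)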