Sorting Matrix Report Columns in Specified Order - reportingservices-2005

I've written the following T-SQL to use in a report:
SELECT patcnty,
       CASE WHEN CAST(age_yrs AS int) <= 5 THEN 'Ages 5 and Younger'
            WHEN CAST(age_yrs AS int) BETWEEN 6 AND 17 THEN 'Ages 6 to 17'
            WHEN CAST(age_yrs AS int) BETWEEN 18 AND 44 THEN 'Ages 18 to 44'
            WHEN CAST(age_yrs AS int) BETWEEN 45 AND 64 THEN 'Ages 45 to 64'
            WHEN CAST(age_yrs AS int) >= 65 THEN 'Ages 65 and Older'
       END AS AgeGroup,
       CASE WHEN patcnty = '54' THEN 'Tulare'
            WHEN patcnty = '16' THEN 'Kings'
            WHEN patcnty = '15' THEN 'Kern'
            WHEN patcnty = '10' THEN 'Fresno'
       END AS County,
       oshpd_id, age_yrs
FROM OSHPD2009
WHERE patcnty IN ('10', '15', '16', '54') AND age_yrs <> ' '
I've created an SSRS 2005 Matrix report and have placed AgeGroup as my column and County as my row. When the report is displayed, the order of the columns is: Ages 18 to 44, Ages 45 to 64, Ages 5 and Younger, Ages 6 to 17, and Ages 65 and Older. This makes sense, in that I set the sort order for the group to ascending, so the labels sort alphabetically.
How can I change the group sort order for the matrix column to sort in this manner:
Ages 5 and Younger, Ages 6 to 17, Ages 18 to 44, Ages 45 to 64 and Ages 65 and Older?
Thanks much for your help!

After a little digging, I decided that SSRS needed to be told in what order to sort the groups. So, in the column grouping sort screen, I added the following expression:
=iif(Fields!AgeGroup.Value="Ages 5 and Younger","1",
iif(Fields!AgeGroup.Value="Ages 6 to 17","2",
iif(Fields!AgeGroup.Value="Ages 18 to 44","3",
iif(Fields!AgeGroup.Value="Ages 45 to 64","4",
iif(Fields!AgeGroup.Value="Ages 65 and Older","5","Other")))))
This basically forces SSRS to sort the groups in the order I specify by assigning Ages 5 and Younger a value of 1, so that it's the first sort group etc. My report now works like a charm!
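An alternative (not part of the original answer) is to compute a numeric sort key in the query itself, so the matrix column group can sort on it directly. A sketch reusing the same age boundaries, with AgeGroupSort as an assumed column name:

```sql
SELECT patcnty,
       CASE WHEN CAST(age_yrs AS int) <= 5  THEN 'Ages 5 and Younger'
            WHEN CAST(age_yrs AS int) <= 17 THEN 'Ages 6 to 17'
            WHEN CAST(age_yrs AS int) <= 44 THEN 'Ages 18 to 44'
            WHEN CAST(age_yrs AS int) <= 64 THEN 'Ages 45 to 64'
            ELSE 'Ages 65 and Older'
       END AS AgeGroup,
       CASE WHEN CAST(age_yrs AS int) <= 5  THEN 1
            WHEN CAST(age_yrs AS int) <= 17 THEN 2
            WHEN CAST(age_yrs AS int) <= 44 THEN 3
            WHEN CAST(age_yrs AS int) <= 64 THEN 4
            ELSE 5
       END AS AgeGroupSort   -- numeric key matching the label order
FROM OSHPD2009
WHERE patcnty IN ('10', '15', '16', '54') AND age_yrs <> ' '
```

In the matrix, set the column group's sort expression to Fields!AgeGroupSort.Value instead of the label, and the nested IIFs are no longer needed.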

Or you can modify your source table to include a new column for the AgeGroup order, and in SSRS do the sorting based on that new column. To clarify, add an int column NewColumn and set:
SET NewColumn = 1 WHERE AgeGroup = 'Ages 5 and Younger'
SET NewColumn = 2 WHERE AgeGroup = 'Ages 6 to 17'
SET NewColumn = 3 WHERE AgeGroup = 'Ages 18 to 44'
...and so on.


Finding Max Weekly Average in a Month

I am stuck on my query attempt. I have a table that lists test results with their dates. I need to run a query to return the highest weekly average for a particular month.
I have the first part figured out:
SELECT `Effluent BOD5`, WEEK(Date)
FROM bod
WHERE YEAR(Date) = 2020 AND MONTH(Date) = 4
ORDER BY WEEK(Date)
Returns:
Effluent BOD5 / WEEK(Date)
10 14
14 14
9 15
6 16
7 16
11 17
8 17
I need to get the result 12, which is the highest weekly average (for week 14: (10 + 14) / 2 = 12).
Any help would be great!
I messed around with this and figured it out! Here is what I used:
SELECT MAX(Total)
FROM
  (SELECT week, AVG(test) AS Total
   FROM
     (SELECT `Effluent BOD5` AS test, WEEK(Date) AS week
      FROM bod
      WHERE YEAR(Date) = 2020 AND MONTH(Date) = 4
      ORDER BY WEEK(Date), `Effluent BOD5` DESC) ab
   GROUP BY week) ac
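The same result can be had with one less level of nesting, since the ORDER BY inside the innermost derived table has no effect on the aggregate. A sketch, assuming the table and column names from the question:

```sql
SELECT MAX(weekly_avg)
FROM (SELECT WEEK(Date) AS wk, AVG(`Effluent BOD5`) AS weekly_avg
      FROM bod
      WHERE YEAR(Date) = 2020 AND MONTH(Date) = 4
      GROUP BY WEEK(Date)) t;
```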

Oracle Query: Convert rownum in range of (1-7)

I have a table as follows:
Date Line Qty RN
15-03-18 1 10 1
16-03-18 1 10 2
17-03-18 1 10 3
18-03-18 1 10 4
15-03-18 2 10 5
16-03-18 2 10 6
17-03-18 2 10 7
I have a combobox which has options 5, 6 and 7.
If the user selects 5, I wish to make qty for rownums 6 and 7 = 0.
If the user selects 6, I wish to make qty for rownum 7 = 0.
If the user selects 7, I wish to keep qty as it is (for each line).
Meaning: keep qty for the first 5, 6 or 7 days and make the rest 0.
But my row numbers are 1, 2, ... maybe up to 100.
I need to convert them into ranges of 7 (weekly) and then make the qty 0 accordingly.
Trying something like this, if 5 is selected from the combobox:
SELECT date, line,
       (CASE WHEN rn = 6 THEN 0
             WHEN rn = 7 THEN 0
             ELSE qty
        END) AS qty
FROM above table
I am selecting date and line, and changing qty according to rn (rownum). Appreciate any help.
Use the functions mod() and sign(); this construction gave me the desired values:
select tbl.*, decode(sign(5 - mod(rn - 1, 7)), 1, qty, 0)
from tbl
In place of the hardcoded 5, put your parameter.
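To see why this works: mod(rn - 1, 7) maps rownums 1..7 (and 8..14, and so on) to 0..6 within each weekly block, and sign(keep - x) is 1 only while x < keep. A parameterised sketch, with :days_to_keep as an assumed bind-variable name for the combobox value:

```sql
-- :days_to_keep is 5, 6 or 7 (the combobox selection)
SELECT t.*,
       DECODE(SIGN(:days_to_keep - MOD(rn - 1, 7)),  -- 1 => keep qty; 0 or -1 => zero it
              1, qty,
              0) AS qty_adjusted
FROM tbl t;
```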

BIRT interpolate data in crosstab for rows/cols without data

Crosstab interpolate data so graph 'connects the dots'
I have trouble with my crosstab or graph not interpolating the data correctly. I think this can be solved, but I'm not sure. Let me explain what I did.
I have a datacube with the rows grouping data by week number and the columns grouping data per record type. I added two flag bits to the dataset so I can see whether a record is new or old. In the datacube I added a measure to sum these bits per row/column group, thus effectively counting new and old records per week per coltype. I also added a sum of records per coltype.
In the crosstab I added two running sums, for the new and old record counts, and a data cell to calculate the actual records per row/column group. Thus: actual records = totalcoltype - runningsum(old) + runningsum(new)
So lets say there are 20 records in this set for a coltype.
In week 1: 3 old records, 2 new records. The running sums become 3 and 2. Actual = 20 - 3 + 2 = 19 (correct)
In week 2: no data. The running sums are not visible. Actual = 20 - null + null = 20 (wrong)
In week 3: no data. The running sums are not visible. Actual = 20 - null + null = 20 (wrong)
In week 4: 2 old records, 1 new record. The running sums become 5 and 3. Actual = 20 - 5 + 3 = 18 (correct)
In week 5: no data. The running sums are not visible. Actual = 20 - null + null = 20 (wrong)
In week 6: 6 old records, 2 new records. The running sums become 11 and 5. Actual = 20 - 11 + 5 = 14 (correct)
In week 7: no data. The running sums are not visible. Actual = 20 - null + null = 20 (wrong)
So the graph will display:
19
20
20
18
20
14
20
If I add a condition to the actual calculation, it becomes:
19
null
null
18
null
14
null
The graph doesn't ignore the null values, but treats them as 0, so the graph is wrong.
Is there a way to make the graph really ignore the null values?
Another solution would be for the running sums to display the last known value, or just add 0 when there is no data. Any idea how to do this?
Obviously, a correct graph would read:
19
19
19
18
18
14
14

Sum multiple columns using PIG

I have multiple files with the same columns, and I am trying to aggregate the values in two columns using SUM.
The column structure is below
ID first_count second_count name desc
1 10 10 A A_Desc
1 25 45 A A_Desc
1 30 25 A A_Desc
2 20 20 B B_Desc
2 40 10 B B_Desc
How can I sum the first_count and second_count columns to get the output below?
ID first_count second_count name desc
1 65 80 A A_Desc
2 60 30 B B_Desc
Below is the script I wrote, but when I execute it I get the error "Could not infer the matching function for SUM as multiple or none of them fit. Please use an explicit cast."
A = LOAD '/output/*/part*' AS (id:chararray,first_count:chararray,second_count:chararray,name:chararray,desc:chararray);
B = GROUP A BY id;
C = FOREACH B GENERATE group as id,
SUM(A.first_count) as first_count,
SUM(A.second_count) as second_count,
A.name as name,
A.desc as desc;
Your load statement is wrong: first_count and second_count are loaded as chararray, and SUM can't add two strings. If you are sure these columns will contain only numbers, load them as int. Try this:
A = LOAD '/output/*/part*' AS (id:chararray,first_count:int,second_count:int,name:chararray,desc:chararray);
It should work.
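Note that even with the corrected LOAD, the FOREACH in the question emits A.name and A.desc as bags rather than scalar values. One way around that (a sketch, assuming name and desc are constant for a given id) is to include them in the GROUP key:

```pig
A = LOAD '/output/*/part*' AS (id:chararray, first_count:int, second_count:int, name:chararray, desc:chararray);
B = GROUP A BY (id, name, desc);              -- the group key carries the constant columns
C = FOREACH B GENERATE FLATTEN(group) AS (id, name, desc),
                       SUM(A.first_count) AS first_count,
                       SUM(A.second_count) AS second_count;
DUMP C;
```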

SQL / VBScript / Intelligent Algorithm to find sum combinations quickly

I am trying to list all the possible sequential (contiguous, forward-direction only) sum combinations within the same subject, listing the row_id and the number of rows involved in each sum.
Sample :
Input (Source Table :)
DLID Subject Total
1 Science 70
2 Science 70
3 Science 70
4 Science 70
5 Maths 80
6 Maths 80
7 English 90
8 English 90
9 English 90
10 Science 75
Expected Result :
ID Number of Rows Subject Total
1 1 Science 70
2 1 Science 70
3 1 Science 70
4 1 Science 70
5 1 Maths 80
6 1 Maths 80
7 1 English 90
8 1 English 90
9 1 English 90
10 1 Science 75
1 2 Science 140
2 2 Science 140
3 2 Science 140
5 2 Maths 160
7 2 English 180
8 2 English 180
1 3 Science 210
2 3 Science 210
7 3 English 270
1 4 Science 280
VBScript Code:
' myarray - reads the entire table from the Access database
' "i" is the total number of rows read - around 80,000
' "j" is used to access each row one by one
' "m" is the number of subsequent rows with the same subject we are trying to check
' "n" is a counter to start from each row and check up to m - 1 rows for the same subject
' "k" is used to store the results into "resultarray"
' myarray(0,j) holds the row_id
' myarray(1,j) holds the subject
' myarray(2,j) holds the score
' myarray(3,j) to myarray(6,j) hold other details
' There can be continuous records from the same subject, as many as 700 - 800
' m is the number of rows matching / number of rows leading to the sum
For m = 1 To 700
    For j = 0 To i - m
        matchcount = 1
        For n = 1 To m - 1
            If myarray(1, j) = myarray(1, j + n) Then
                matchcount = matchcount + 1
            Else
                Exit For
            End If
        Next
        If matchcount = m Then
            resultarray(2, k) = 0
            For o = 0 To m - 1
                resultarray(2, k) = CDbl(resultarray(2, k)) + CDbl(myarray(2, j + o))
            Next
            resultarray(1, k) = m
            resultarray(0, k) = myarray(0, j)
            resultarray(3, k) = myarray(3, j)
            resultarray(4, k) = myarray(4, j)
            resultarray(5, k) = myarray(1, j)
            resultarray(7, k) = myarray(5, j)
            resultarray(8, k) = myarray(6, j)
            resultarray(2, k) = Round(resultarray(2, k), 0)
            k = k + 1
            ReDim Preserve resultarray(8, k)
        End If
    Next
Next
The code works perfectly, but it is very slow.
I am dealing with 80,000 rows, with anywhere from 5 to 900 continuous rows of the same subject, so the number of combinations runs into a few millions.
It takes a few hours for one set of 80,000 rows, and I have to do many sets daily.
Please suggest how to speed this up: a better algorithm, code improvements, or a different language to code in.
Please assist.
Here are the building blocks for a "real" Access (SQL) solution.
Observation #1
It seems to me that a good first step would be to add two Numeric (Long Integer) columns to the [SourceTable]:
[SubjectBlock] will number the "blocks" of rows where the subject is the same
[SubjectBlockSeq] will sequentially number the rows within each block
They both should be indexed (Duplicates OK). The code to populate these columns would be...
Public Sub UpdateBlocksAndSeqs()
Dim cdb As DAO.Database, rst As DAO.Recordset
Dim BlockNo As Long, SeqNo As Long, PrevSubject As String
Set cdb = CurrentDb
Set rst = cdb.OpenRecordset("SELECT * FROM [SourceTable] ORDER BY [DLID]", dbOpenDynaset)
PrevSubject = "(an impossible value)"
BlockNo = 0
SeqNo = 0
DBEngine.Workspaces(0).BeginTrans ''speeds up bulk updates
Do While Not rst.EOF
If rst!Subject <> PrevSubject Then
BlockNo = BlockNo + 1
SeqNo = 0
End If
SeqNo = SeqNo + 1
rst.Edit
rst!SubjectBlock = BlockNo
rst!SubjectBlockSeq = SeqNo
rst.Update
PrevSubject = rst!Subject
rst.MoveNext
Loop
DBEngine.Workspaces(0).CommitTrans
rst.Close
Set rst = Nothing
End Sub
...and the updated SourceTable would be...
DLID Subject Total SubjectBlock SubjectBlockSeq
1 Science 70 1 1
2 Science 60 1 2
3 Science 75 1 3
4 Science 70 1 4
5 Maths 80 2 1
6 Maths 90 2 2
7 English 90 3 1
8 English 80 3 2
9 English 70 3 3
10 Science 75 4 1
(Note that I tweaked your test data to make it easier to verify the results below.)
Now as we iterate through the ever-increasing "length of sequence to be included in the total" we can quickly identify the "blocks" that are of interest simply by using a query like...
SELECT SubjectBlock FROM SourceTable WHERE SubjectBlockSeq=3
...which will return...
1
3
...indicating that when calculating the totals for a "run of 3" we won't need to look at blocks 2 ("Maths") and 4 (the last "Science" one) at all.
Observation #2
The first time through, when NumRows=1, is a special case: it just copies the rows from [SourceTable] into the [Expected Results] table. We can save time by doing that with a single query:
INSERT INTO ExpectedResult ( DLID, NumRows, Subject, Total, SubjectBlock, NextSubjectBlockSeq )
SELECT SourceTable.DLID, 1 AS Expr1, SourceTable.Subject, SourceTable.Total,
SourceTable.SubjectBlock, SourceTable.SubjectBlockSeq+1 AS Expr2
FROM SourceTable;
You may notice that I have added two columns to the [ExpectedResult] table: [SubjectBlock] (as before) and [NextSubjectBlockSeq] (which is just [SubjectBlockSeq]+1). Again, they should both be indexed, allowing duplicates. We'll use them below.
Observation #3
As we continue looking for longer and longer "runs" to sum, each run is really just an earlier (shorter) run with an additional row tacked onto the end. If we write our results to the [ExpectedResults] table as we go along, we can re-use those values and not bother going back and adding up the individual values for the entire run.
When NumRows=2, the "add-on" rows are the ones where SubjectBlockSeq>=2...
SELECT SourceTable.*
FROM SourceTable
WHERE (((SourceTable.SubjectBlockSeq)>=2))
ORDER BY SourceTable.DLID;
...that is...
DLID Subject Total SubjectBlock SubjectBlockSeq
2 Science 60 1 2
3 Science 75 1 3
4 Science 70 1 4
6 Maths 90 2 2
8 English 80 3 2
9 English 70 3 3
...and the [ExpectedResult] rows with the "earlier (shorter) run" onto which we will be "tacking" the additional row are the ones
from the same [SubjectBlock],
with [NumRows]=1, and
with [ExpectedResult].[NextSubjectBlockSeq] = [SourceTable].[SubjectBlockSeq]
so we can get the new totals and append them to [ExpectedResult] like this
INSERT INTO ExpectedResult ( DLID, NumRows, Subject, Total, SubjectBlock, NextSubjectBlockSeq )
SELECT SourceTable.DLID, 2 AS Expr1, SourceTable.Subject,
[ExpectedResult].[Total]+[SourceTable].[Total] AS NewTotal,
SourceTable.SubjectBlock, [SourceTable].[SubjectBlockSeq]+1 AS Expr2
FROM SourceTable INNER JOIN ExpectedResult
ON (SourceTable.SubjectBlockSeq = ExpectedResult.NextSubjectBlockSeq)
AND (SourceTable.SubjectBlock = ExpectedResult.SubjectBlock)
WHERE (((SourceTable.SubjectBlockSeq)>=2) AND (ExpectedResult.NumRows=1));
The rows appended to [ExpectedResult] are
DLID NumRows Subject Total SubjectBlock NextSubjectBlockSeq
2 2 Science 130 1 3
3 2 Science 135 1 4
4 2 Science 145 1 5
6 2 Maths 170 2 3
8 2 English 170 3 3
9 2 English 150 3 4
Now we're cookin'...
Using the same logic as before, we can now process for NumRows=3. The only differences are that we will be inserting the value 3 into NumRows, and our selection criteria will be
WHERE (((SourceTable.SubjectBlockSeq)>=3) AND (ExpectedResult.NumRows=2))
The complete query is
INSERT INTO ExpectedResult ( DLID, NumRows, Subject, Total, SubjectBlock, NextSubjectBlockSeq )
SELECT SourceTable.DLID, 3 AS Expr1, SourceTable.Subject,
[ExpectedResult].[Total]+[SourceTable].[Total] AS NewTotal,
SourceTable.SubjectBlock, [SourceTable].[SubjectBlockSeq]+1 AS Expr2
FROM SourceTable INNER JOIN ExpectedResult
ON (SourceTable.SubjectBlockSeq = ExpectedResult.NextSubjectBlockSeq)
AND (SourceTable.SubjectBlock = ExpectedResult.SubjectBlock)
WHERE (((SourceTable.SubjectBlockSeq)>=3) AND (ExpectedResult.NumRows=2));
and the rows appended to [ExpectedResult] are
DLID NumRows Subject Total SubjectBlock NextSubjectBlockSeq
3 3 Science 205 1 4
4 3 Science 205 1 5
9 3 English 240 3 4
Parameterization
Since each successive query is so similar, it would be awfully nice if we could just write it once and use it repeatedly. Fortunately, we can, if we turn it into a "Parameter Query":
PARAMETERS TargetNumRows Long;
INSERT INTO ExpectedResult ( DLID, NumRows, Subject, Total, SubjectBlock, NextSubjectBlockSeq )
SELECT SourceTable.DLID, [TargetNumRows] AS Expr1, SourceTable.Subject,
[ExpectedResult].[Total]+[SourceTable].[Total] AS NewTotal,
SourceTable.SubjectBlock, [SourceTable].[SubjectBlockSeq]+1 AS Expr2
FROM SourceTable INNER JOIN ExpectedResult
ON (SourceTable.SubjectBlock = ExpectedResult.SubjectBlock)
AND (SourceTable.SubjectBlockSeq = ExpectedResult.NextSubjectBlockSeq)
WHERE (((SourceTable.SubjectBlockSeq)>=[TargetNumRows])
AND ((ExpectedResult.NumRows)=[TargetNumRows]-1));
Create a new Access query, paste the above into the SQL pane, and then save it as pq_appendToExpectedResult. (The "pq_" is just a visual cue that it's a Parameter Query.)
Invoking a Parameter Query from VBA
You can invoke (execute) a Parameter Query in VBA via a QueryDef object:
Dim cdb As DAO.Database, qdf As DAO.QueryDef
Set cdb = CurrentDb
Set qdf = cdb.QueryDefs("pq_appendToExpectedResult")
qdf!TargetNumRows = 4 '' parameter value
qdf.Execute
Set qdf = Nothing
When to stop
Now you can see that it's simply a matter of incrementing NumRows and re-running the Parameter Query, but when to stop? That's easy:
After incrementing your NumRows variable in VBA, test
DCount("DLID", "SourceTable", "SubjectBlockSeq=" & NumRows)
If it comes back 0 then you're done.
Show me (all) the code
Sorry, not right away. ;) Play around with this and let us know how it goes.
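In the meantime, the driving loop described above might be sketched like this. This is not the answerer's withheld code, just an assembly of the pieces already shown, assuming the table and query names used in this answer:

```vba
Public Sub BuildExpectedResults()
    Dim cdb As DAO.Database, qdf As DAO.QueryDef
    Dim NumRows As Long

    UpdateBlocksAndSeqs              'populate SubjectBlock / SubjectBlockSeq first

    Set cdb = CurrentDb
    cdb.Execute "DELETE FROM ExpectedResult", dbFailOnError

    'NumRows = 1 is the special case: a straight copy from SourceTable
    cdb.Execute "INSERT INTO ExpectedResult ( DLID, NumRows, Subject, Total, SubjectBlock, NextSubjectBlockSeq ) " & _
                "SELECT DLID, 1, Subject, Total, SubjectBlock, SubjectBlockSeq+1 FROM SourceTable", dbFailOnError

    'then keep extending runs by one row until no block is long enough
    NumRows = 2
    Do While DCount("DLID", "SourceTable", "SubjectBlockSeq=" & NumRows) > 0
        Set qdf = cdb.QueryDefs("pq_appendToExpectedResult")
        qdf!TargetNumRows = NumRows
        qdf.Execute dbFailOnError
        NumRows = NumRows + 1
    Loop
    Set qdf = Nothing
End Sub
```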
Your question is:
"Please suggest how to speed this up. Better Algorithm / Code Improvements / Different Language to code Please assist."
I can answer part of your question quickly in short. "Different Language to code" == SQL.
In detail:
Whatever it is you're trying to achieve looks dataset intensive. I'm almost certain this processing would be handled more efficiently within the DBMS that houses your data, as the DBMS is able to take a (reasonably well written) SQL query and optimise it based on its own knowledge of the data you are interrogating, and perform aggregation over large sets/sub-sets of data very quickly and efficiently.
Iterating over large datasets row-by-row to accumulate values is rarely (dare I say never) going to yield acceptable performance. Which is why DBMSes don't do this natively (if you don't force them to by using iterative code, or code that needs to investigate each row, such as your VB code).
Now, for the implementation of Better Algorithm / Code Improvements / Different Language.
I've done this in SQL, but regardless of whether you use my solution or not, I would still highly recommend you migrate your data to e.g. MS SQL Server, Oracle, or MySQL if you find that your use of MS Access is binding you to iterative approaches (which is not to suggest it is doing that... I don't know whether this is the case).
But if this is genuinely not homework, and/or you are genuinely tied to MS Access, then perhaps an investment of effort to convert this to MS Access might be fruitful in terms of performance. The principles should all be the same - it's a relational database and this is all fairly standard SQL, so I would've thought there'd be Access equivalents for what I've done here.
Failing that, you should be able to "point" an MSSQL instance at the MS Access file, as a linked server via an Access provider. If you'd like advice on this, let me know.
There's some code here that is procedural by nature, in order to set up some "helper" tables that will allow the heavy-lifting aggregation on your sequences to be done using set-based operations.
I've called the source table "Your_Source_Table". Do a search-replace on all instances to rename as whatever you've called it.
Note also that I haven't set up indexes on anything... you should do this. Indexes should be created for all the columns involved in joins, I expect. Checking the execution plan to ensure there's no unnecessary table scans would be wise.
I used the following to create Your_Source_Table:
-- Create Your apparent table structure
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[Your_Source_Table]') AND type in (N'U'))
DROP TABLE [dbo].[Your_Source_Table]
GO
CREATE TABLE [dbo].[Your_Source_Table](
[DLID] [int] NOT NULL,
[Subject] [nchar](10) NOT NULL,
[Total] [int] NOT NULL
) ON [PRIMARY]
GO
And populated it as:
DLID Subject Total
----------- ---------- -----------
1 Science 70
2 Science 70
3 Science 70
4 Science 70
5 Maths 80
6 Maths 80
7 English 90
8 English 90
9 English 90
10 Science 75
Then, I created the following "helpers". Explanations in code.
-- Set up helper structures.
-- Build a number table
if object_id('tempdb..##numbers') is not null
BEGIN DROP TABLE ##numbers END
SELECT TOP 10000 IDENTITY(int,1,1) AS Number -- Blocks can be 700, 800, or 900 contiguous rows, depending on which comment I read, so 10,000 gives plenty of headroom
INTO ##numbers
FROM sys.objects s1
CROSS JOIN sys.objects s2
ALTER TABLE ##numbers ADD CONSTRAINT PK_numbers PRIMARY KEY CLUSTERED (Number)
-- Determine where each block starts.
if object_id('tempdb..#tempGroups') is not null
BEGIN DROP TABLE #tempGroups END
GO
CREATE TABLE #tempGroups (
[ID] [int] NOT NULL IDENTITY,
[StartID] [int] NULL,
[Subject] [nchar](10) NULL
) ON [PRIMARY]
INSERT INTO #tempGroups (StartID, Subject) -- column list needed because ID is an IDENTITY column
SELECT t.DLID, t.Subject FROM Your_Source_Table t WHERE DLID=1
UNION
SELECT
t.DLID, t.Subject
FROM
Your_Source_Table t
INNER JOIN Your_Source_Table t2 ON t.DLID = t2.DLID+1 AND t.subject != t2.subject
-- Determine where each block ends
if object_id('tempdb..##groups') is not null
BEGIN DROP TABLE ##groups END
CREATE TABLE ##groups (
[ID] [int] NOT NULL,
[Subject] [nchar](10) NOT NULL,
[StartID] [int] NOT NULL,
[EndID] [int] NOT NULL
) ON [PRIMARY]
INSERT INTO ##groups
SELECT
g1.id as ID,
g1.subject,
g1.startID as startID,
CASE
WHEN g2.id is not null THEN g2.startID-1
ELSE (SELECT max(dlid) FROM Your_Source_Table) -- Boundary case when there is no following group (ie return the last row)
END as endID
FROM
#tempGroups g1
LEFT JOIN #tempGroups g2 ON g1.id = g2.id-1
DROP TABLE #tempGroups;
GO
-- We now have a helper table called ##groups, that identifies the subject, start DLID and end DLID of each continuous block of a particular subject in your dataset.
-- So now, we can build up the possible sequences within each group, by joining to a number table.
if object_id('tempdb..##sequences') is not null
BEGIN DROP TABLE ##sequences END
CREATE TABLE ##sequences (
[seqID] [int] NOT NULL IDENTITY,
[groupID] [int] NOT NULL,
[start_of_sequence] [int] NOT NULL,
[end_of_sequence] [int] NOT NULL
) ON [PRIMARY]
INSERT INTO ##sequences (groupID, start_of_sequence, end_of_sequence) -- column list needed because seqID is an IDENTITY column
SELECT
g.id,
ns.number start_of_sequence,
ne.number end_of_sequence
FROM
##groups g
INNER JOIN ##numbers ns
ON ns.number <= (g.endid - g.startid + 1) -- number is in range for this block
INNER JOIN ##numbers ne
ON ne.number <= (g.endid - g.startid + 1) -- number is in range for this block
and ne.number >= ns.number -- end after start
ORDER BY
1,2,3
Then, the results you're after can be achieved with a single set-based operation:
-- By joining groups to your dataset we can add a group identity to each record.
-- By joining sequences we can generate copies of the rows for aggregation into each sequence.
select
min(t.dlid) as ID, -- equals (s.start_of_sequence + g.startid - 1) (sequence positions offset by group start position)
count(t.dlid) as number_of_rows,
g.subject,
sum(t.total) as total
--select *
from
Your_Source_Table t
inner join ##groups g
on t.dlid >= g.startid and t.dlid <= g.endid -- grouping rows into each group.
inner join ##sequences s
on s.groupid = g.id -- get the sequences for this group.
and t.dlid >= (s.start_of_sequence + g.startid - 1) -- include the rows required for this sequence (sequence positions offset by group start position)
and t.dlid <= (s.end_of_sequence + g.startid - 1)
group by
g.subject,
s.seqid
order by 2, 1
BUT NOTE:
This result is NOT exactly the same as your "Expected Result".
You've incorrectly included a duplicate instance of the 1 row sequence starting at row 1 (for science, sum total 1*70=70), but not included the 4 row sequence starting at row 1 (for science, sum total 4*70 = 280).
The correct results, IMHO are:
ID number_of_rows subject total
----------- -------------- ---------- -----------
1 1 Science 70 <-- You've got this row twice.
2 1 Science 70
3 1 Science 70
4 1 Science 70
5 1 Maths 80
6 1 Maths 80
7 1 English 90
8 1 English 90
9 1 English 90
10 1 Science 75
1 2 Science 140
2 2 Science 140
3 2 Science 140
5 2 Maths 160
7 2 English 180
8 2 English 180
1 3 Science 210
2 3 Science 210
7 3 English 270
1 4 Science 280 <-- You don't have this row.
(20 row(s) affected)
