I have a column containing many price rows. I want to determine the sum of the first 25 rows, then of the next 25 rows, and so on. I've done this, but it only works for the first group:
select sum(substr(UNICOLON,-14,10)) over (order by to_number(:P22_UNI) rows between current row and 24 following) as R
from CNAM_CONCAT;
The report shows the sum of the 25 rows on page 1/2, and TOTAL A PAYER is the sum of all rows.
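One possible way to get one sum per block of 25 rows rather than a moving window is sketched below; it reuses the CNAM_CONCAT table, the SUBSTR extraction and the ordering from the question, while the bucket expression (CEIL of ROW_NUMBER / 25) is only illustrative:

select grp,
       sum(price) as r
from (select to_number(substr(UNICOLON, -14, 10)) as price,
             ceil(row_number() over (order by to_number(:P22_UNI)) / 25) as grp  -- bucket of 25 rows
      from CNAM_CONCAT)
group by grp
order by grp;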
I have a matrix visual in Power BI with two row fields, rf1 and rf2. This groups row field 2 (rf2) by row field 1 (rf1), such that each value in rf1 contains multiple values from rf2. rf1 and rf2 are stored in different tables in the data model, but the tables are connected directly.
I would like to show on the matrix visual the number of unique rf2 values within each rf1, against the corresponding row.
For example (first two columns as collapsible groups, as in the matrix visual):
rf1      rf2   rf2 count   Values
group1         3           10
         a                 3
         b                 1
         c                 6
group2         2           5
         a                 2
         d                 3
--------------------------------
Tot            5           15
What measure do I need to be able to generate this view?
How do I select a random row from the database based on the probability assigned to each row?
Example:
Make        Chance   Value
ALFA ROMEO  0.0024   20000
AUDI        0.0338   35000
BMW         0.0376   40000
CHEVROLET   0.0087   15000
CITROEN     0.016    15000
........
How do I select a random make and its value, based on the probability it has of being chosen?
Would a combination of rand() and ORDER BY work? If so, what is the best way to do this?
You can do this by using rand() and then a cumulative sum. Assuming the chances add up to 100%:
-- @r is drawn once; @cumep accumulates the running total of chance
select t.*
from (select t.*, (@cumep := @cumep + chance) as cumep
      from t cross join
           (select @cumep := 0, @r := rand()) params
     ) t
where @r between cumep - chance and cumep
limit 1;
Notes:
rand() is called once in a subquery to initialize a variable. Multiple calls to rand() are not desirable.
There is a remote chance that the random number will fall exactly on the boundary between two values. The limit 1 arbitrarily chooses one of them.
This could be made more efficient by stopping the subquery when cumep > @r.
The values do not have to be in any particular order.
This can be modified to handle chances where the sum is not equal to 1, but that would be another question.
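On MySQL 8.0 or later, the same cumulative-sum idea can be sketched with a window function instead of user variables; this assumes a table t(make, chance, value) as in the question, and that the chances sum to 1:

set @r := rand();                  -- draw the random number once

select make, value
from (select t.*,
             sum(chance) over (order by make) as cumep   -- running total of chance
      from t
     ) t
where cumep >= @r
order by cumep
limit 1;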
I have a matrix that is calculating percentages, and it works properly for one row but not for multiple rows.
It is calculating the individual departments in the row, by item number, to total 100%.
When using multiple rows it calculates all the rows together for a total of 100%.
This is not what I want.
I want all rows to act like the first picture, with each row calculating across the row.
Like this:

        dept 1   dept 2   dept 3   total
item 1  71%      14%      14%      100%
item 2  50%      25%      25%      100%
I have figured this out. This is how I needed to write my SQL:

SUM(B.RDCQTY) OVER (PARTITION BY RDICDE) AS SMDSTRDCQTY
RDCQTY / SUM(B.RDCQTY) OVER (PARTITION BY RDICDE) AS PER

and in the last CTE:

SUM(PER) OVER (PARTITION BY RDICDE) AS TTLPER

Then, in SSRS, the percentage column is Sum(PER) and the total % column is =(Fields!TTLPER.Value). Now the report is calculating properly per row.
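For reference, the per-item percentage pattern above can be sketched as a complete query; the detail table name (RDDETAIL here) and the DEPTNO column are placeholders, the other identifiers come from the answer:

select RDICDE,
       DEPTNO,
       RDCQTY,
       sum(B.RDCQTY) over (partition by RDICDE) as SMDSTRDCQTY,           -- total quantity for the item
       RDCQTY / sum(B.RDCQTY) over (partition by RDICDE) as PER           -- this row's share of the item total
from RDDETAIL B;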
Could anyone please explain to me how the following code executes, and what is the meaning of the PRECEDING keyword in Oracle?
SUM(WIN_30_DUR) OVER(PARTITION BY AGENT_MASTER_ID
ORDER BY ROW_DT ROWS BETWEEN 30 PRECEDING AND 1 PRECEDING)
Hey, thanks for your clarification. I have a small doubt.
Let's say we have 59 days of data, from 1st Jan to 28th Feb. What data does this function get?
You are evidently querying a table T with columns WIN_30_DUR, AGENT_MASTER_ID and ROW_DT (among others). Keep in mind that keywords like OVER and PARTITION BY show you're using an analytic (window) query: such queries let you bring information from other rows into the current row, which would be complex and verbose to write with GROUP BY or other "standard" clauses.
Here, on a given row, you:
group (PARTITION) by AGENT_MASTER_ID: this gathers all the rows of T with the current AGENT_MASTER_ID
within that partition, you ORDER the rows by ROW_DT
this ordering lets you select the 30 rows before the current one: this is the meaning of the PRECEDING keyword (0 PRECEDING would be the current row; the opposite direction is given by the FOLLOWING clause)
then you SUM the WIN_30_DUR values over those rows
In plain language, this means something like: for each agent, take the sum of the durations over the 30 preceding rows (excluding the current one); with one row per day, that is roughly the preceding 30 days.
select row_dt, win_30_dur,
agent_master_id,
SUM(WIN_30_DUR) OVER(PARTITION BY AGENT_MASTER_ID
ORDER BY ROW_DT ROWS BETWEEN 30 PRECEDING AND 1 PRECEDING) running_sum
from test;
ROWS BETWEEN 0 PRECEDING AND 0 PRECEDING would return just the current row; here the window is partitioned by the AGENT_MASTER_ID column of your table and ordered by ROW_DT.
So, for each row, your query returns the sum of WIN_30_DUR over the rows from 30 rows to 1 row before the current row, within the same AGENT_MASTER_ID.
For a better understanding, see: http://sqlfiddle.com/#!4/ce6b4/4/0
ROWS BETWEEN is the windowing clause. It is used to specify which rows are considered while evaluating the analytic function.
Breaking down the clauses,
PARTITION BY AGENT_MASTER_ID : The rows are partitioned by agent_master_id. That means, while evaluating the function for a particular row, only those rows are considered which have the same agent_master_id as the current row.
ORDER BY ROW_DT : The column by which the rows are ordered within each partition.
ROWS BETWEEN 30 PRECEDING AND 1 PRECEDING : This specifies within each partition, consider only those rows starting from the row which precedes the current row by 30, till the row which precedes the current row by 1. Essentially, 30 previous rows.
For explanation purposes, let's assume this is how your table looks. Under sum_as_analytical I have noted which rows are included when calculating the SUM.
agent_master_id  win_30_dur  row_dt      sum_as_analytical
---------------------------------------------------------------------
1                12          01-01-2013  no preceding rows. Sum is null
1                10          02-01-2013  only 1 preceding row. sum = 12
1                14          03-01-2013  only 2 preceding rows. sum = 12 + 10
1                10          04-01-2013  3 preceding rows. sum = 12 + 10 + 14
.                .
.                .
.                .
1                10          30-01-2013  29 preceding rows. sum = 12 + 10 + 14 .... until value for 29-01-2013
1                10          31-01-2013  30 preceding rows. sum = 12 + 10 + 14 .... until value for 30-01-2013
1                20          01-02-2013  30 preceding rows. sum = 10 + 14 + 10 .... until value for 31-01-2013
.                .
.                .
.                .
1                10          28-02-2013  30 preceding rows. sum = sum of values from 29th Jan to 27th Feb
2                10          01-01-2013  no preceding rows. Sum is null
2                15          02-01-2013  only 1 preceding row. sum = 10
2                14          03-01-2013  only 2 preceding rows. sum = 10 + 15
2                12          04-01-2013  3 preceding rows. sum = 10 + 15 + 14
.                .
.                .
.                .
2                23          31-01-2013  30 preceding rows. sum = 10 + 15 + 14 .... until value for 30-01-2013
2                12          01-02-2013  30 preceding rows. sum = 15 + 14 + 12 .... until value for 31-01-2013
.                .
.                .
.                .
2                25          28-02-2013  30 preceding rows. sum = sum of values from 29th Jan to 27th Feb
A few other examples of the windowing clause:
UNBOUNDED PRECEDING and UNBOUNDED FOLLOWING : All preceding rows, current row, all following rows.
2 PRECEDING and 5 FOLLOWING : 2 preceding rows, current row and 5 following rows.
5 PRECEDING and CURRENT ROW : 5 preceding rows and current row.
CURRENT ROW and 1 FOLLOWING : Current row, 1 following row.
The windowing clause is optional. If you omit it (when an ORDER BY is present), the default in Oracle is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, which essentially gives the cumulative total.
Here's a simple demo.
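For illustration, a sketch that puts several windowing clauses side by side on the same test(agent_master_id, win_30_dur, row_dt) table used above:

select agent_master_id, row_dt, win_30_dur,
       sum(win_30_dur) over (partition by agent_master_id order by row_dt
                             rows between 30 preceding and 1 preceding)        as prev_30_rows,    -- the question's window
       sum(win_30_dur) over (partition by agent_master_id order by row_dt
                             rows between unbounded preceding and current row) as running_total,   -- cumulative total
       sum(win_30_dur) over (partition by agent_master_id order by row_dt
                             rows between 2 preceding and 5 following)         as nearby_window,
       sum(win_30_dur) over (partition by agent_master_id)                     as partition_total  -- whole partition
from test;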
I found a solution by assigning the result to a list:
var BOS = Orders1.ToList();
decimal running_total = 0;
var result_set =
    from x in BOS
    select new
    {
        DESKTOPS = x.NOTEBOOKS,
        // the assignment inside the projection accumulates the running total
        running_total = (running_total = (decimal)(running_total + x.NOTEBOOKS))
    };
I have an NxM matrix with integer elements greater than or equal to 0.
From any cell I can transfer 1 to another one (-1 to the source cell, +1 to the destination).
Using this operation, I have to make the sums of all rows and all columns equal. The question is how to find the minimal number of such operations to achieve this. During the process, cells may become negative.
For example, for
1 1 2 2
1 0 1 1
0 0 1 1
1 1 1 2
The answer is 3.
P.S.: I've tried to solve it on my own, but only came up with a brute-force solution.
First, find the expected sum per row and per column [1].
rowSum = totalSum / numRows
colSum = totalSum / numCols
Then, iterate through the rows and the columns and compute the following values:
rowDelta = 0
for each row r
    if sum(r) > rowSum
        rowDelta += sum(r) - rowSum

colDelta = 0
for each col c
    if sum(c) > colSum
        colDelta += sum(c) - colSum
The minimum number of moves needed to equilibrate all the rows and columns is:
minMoves = max(rowDelta, colDelta)
This works because you have to transfer from rows that exceed rowSum into rows that fall short of it, and from columns that exceed colSum into columns that fall short of it.
If rowDelta is initially lower than colDelta, you will reach a stage where all the rows are balanced but the columns are not yet. In that case you continue transferring between cells within the same row. The same applies if colDelta is initially lower than rowDelta, which is why we take the maximum of the two as the result.
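Applied to the 4x4 example from the question (totalSum = 16, so rowSum = colSum = 4):

rowDelta = (6 - 4) + (5 - 4) = 3     (rows 1 and 4 exceed rowSum)
colDelta = (5 - 4) + (6 - 4) = 3     (columns 3 and 4 exceed colSum)
minMoves = max(3, 3) = 3

which matches the expected answer of 3.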
[1] If totalSum is not a multiple of numRows or of numCols, then the problem has no solution.
Let us consider the one-dimensional case: you have an array of numbers and you are allowed a single operation: take 1 from the value of one element of the array and add it to another element. The goal is to make all elements equal with minimal operations. Here the solution is simple: repeatedly take 1 from any "too big" element and add it to any "too small" element. Let me now describe how this relates to the problem at hand.
You can easily calculate the sum that is needed for every column and every row: it is the total sum of all elements in the matrix divided by the number of columns or rows, respectively. From then on you can calculate which rows and columns need to be reduced and which need to be increased. See here:
 1  1  2  2   -2
 1  0  1  1   +1
 0  0  1  1   +2
 1  1  1  2   -1
+1 +2 -1 -2

Expected sum of a row: 4
Expected sum of a column: 4
So now we generate two arrays: the array of displacements in the rows, -2, +1, +2, -1, and the array of displacements in the columns, +1, +2, -1, -2. For these two arrays we solve the simpler task described above. Obviously we cannot solve the initial problem in fewer steps than the simpler task requires (otherwise the balance in the columns or rows would not be 0).
However, I will show that the initial task can be solved in exactly as many steps as the maximum of the steps needed for the column task and for the row task:
Every step in the simpler task generates two indices i and j: the index from which to subtract and the index to which to add. Let's assume that in a step of the column task we have indices ci and cj, and in the row task we have indices ri and rj. Then we map this to a step in the initial task: take 1 from (ci, ri) and add 1 to (cj, rj). At a certain point we may reach a situation in which there are still steps left in, say, the column task but none left in the row task. So we have ci and cj, but what do we do for ri and rj? We simply choose ri = rj so that we do not disturb the row balance.
In this solution I am making use of the fact that I am allowed to obtain negative numbers in the matrix.
Now let's demonstrate:
Solution for columns:
4->1; 3->2; 4->2
Solution for rows:
1->3; 1->3; 4->2
Total solution:
(4,1)->(1,3); (3,1)->(2,3); (4,4)->(2,2)
Suppose that r1 is the index of a row with maximal sum and r2 the index of a row with minimal sum; c1 is a column with maximal sum and c2 a column with minimal sum.
You need to repeat the following operation:
if Matrix[r1][c1] == Matrix[r2][c2] we're done!
Otherwise, Matrix[r1][c1] -= 1 and Matrix[r2][c2] += 1