I have a table with some positive integer numbers:
n
----
1
2
5
10
For each row of this table I want the value cos(cos(...cos(0)...)) (cos applied n times) to be calculated by means of a SQL statement (PL/SQL stored procedures and functions are not allowed):
 n   coscos
---  --------------
 1   1
 2   0.540302305868
 5   0.793480358743
10   0.731404042423
I can do this in Oracle 11g by using recursive queries.
Is it possible to do the same in Oracle 10g?
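For reference, the Oracle 11gR2 recursive query the question alludes to might look like this (a hedged sketch, not the asker's actual code; it assumes the test1 table created in the answer below):
--A sketch of the 11gR2 recursive CTE version (not available in 10g):
with rec (n, i, coscos) as (
  select n, 1, cos(0) from test1
  union all
  select n, i + 1, cos(coscos) from rec where i < n
)
select n, coscos
from rec
where i = n
order by n;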
The MODEL clause can solve this:
Test data:
create table test1(n number unique);
insert into test1 select * from table(sys.odcinumberlist(1,2,5,10));
commit;
Query:
--The last row for each N has the final coscos value.
select n, coscos
from
(
    --Set each value to the cos() of the previous value.
    select * from
    (
        --Each value of N has N rows, with value rownumber from 1 to N.
        select n, rownumber
        from
        (
            --Maximum number of rows needed (the largest number in the table)
            select level rownumber
            from dual
            connect by level <= (select max(n) from test1)
        ) max_rows
        cross join test1
        where max_rows.rownumber <= test1.n
        order by n, rownumber
    ) n_to_rows
    model
        partition by (n)
        dimension by (rownumber)
        measures (0 as coscos)
    (
        coscos[1] = cos(0),
        coscos[rownumber > 1] = cos(coscos[cv(rownumber)-1])
    )
)
where n = rownumber
order by n;
Results:
 N   COSCOS
 1   1
 2   0.54030230586814
 5   0.793480358742566
10   0.73140404242251
Let the holy wars begin:
Is this query a good idea? I wouldn't run this query in production, but hopefully it is a useful demonstration that any problem can be solved with SQL.
I've seen literally thousands of hours wasted because people are afraid to use SQL. If you're heavily using a database it is foolish not to use SQL as your primary programming language. It's good to occasionally spend a few hours testing the limits of SQL. A few strange queries are a small price to pay to avoid the disastrous row-by-row processing mindset that infects many database programmers.
Using WITH FUNCTION (Oracle 12c):
WITH FUNCTION coscos(n INT) RETURN NUMBER IS
BEGIN
  IF n > 1 THEN
    RETURN cos(coscos(n-1));
  ELSE
    RETURN cos(0);
  END IF;
END;
SELECT n, coscos(n)
FROM t;
db<>fiddle demo
Output:
+-----+-------------------------------------------+
| N | COSCOS(N) |
+-----+-------------------------------------------+
| 1 | 1 |
| 2 | .5403023058681397174009366074429766037354 |
| 5 | .793480358742565591826054230990284002387 |
| 10 | .7314040424225098582924268769524825209688 |
+-----+-------------------------------------------+
So this is my code, which I executed in the SQL Live online IDE:
declare
  fac number := 1;
  n   number := &1;
begin
  while n > 0 loop
    fac := n * fac;
    n   := n - 1;
  end loop;
  dbms_output.put_line(fac);
end;
It's giving me a "PLS-00103: Encountered the symbol "&" when expecting one of the following:" error.
What is wrong with it?
SQL Live is not SQL*Plus. The ampersand is for SQL*Plus substitution variables and does not work in SQL Live. You need to edit your anonymous block and supply a value each time before you run it.
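For example, here is a minimal sketch of the same block with the substitution variable replaced by a literal (5 is just a placeholder; edit it by hand before each run):
declare
  fac number := 1;
  n   number := 5;  -- replaces &1; change this value before running
begin
  while n > 0 loop
    fac := n * fac;
    n   := n - 1;
  end loop;
  dbms_output.put_line(fac);  -- prints 120 for n = 5
end;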
I don't really see the need for a pl/sql loop here. You could very well do this with a recursive query:
with cte (n, fac) as (
  select ? n, 1 fac from dual
  union all
  select n - 1, fac * n from cte where n > 1
)
select max(fac) as result from cte
The question mark represents the parameter for the query, that is, the number whose factorial you want to compute.
Demo on DB Fiddle: when given parameter 4 for example, the query returns:
| RESULT |
| -----: |
|     24 |
... which is the result of computation 1 * 2 * 3 * 4.
You could also phrase this as:
with cte (n, fac) as (
  select 4 n, 4 fac from dual
  union all
  select n - 1, fac * (n - 1) from cte where n > 1
)
select max(fac) as result from cte
I don't know if this is even possible, but I'd like to be able to create a calculated column where each row is dependent on the rows above it.
A classic example of this is the Fibonacci sequence, where the sequence is defined by the recurrence relationship F(n) = F(n-1) + F(n-2) and seeds F(1) = F(2) = 1.
In table form,
Index Fibonacci
----------------
1 1
2 1
3 2
4 3
5 5
6 8
7 13
8 21
9 34
10 55
... ...
I want to be able to construct the Fibonacci column as a calculated column.
Now, I know that the Fibonacci sequence has a nice closed form where I can define
Fibonacci = (((1 + SQRT(5))/2)^[Index] - ((1 - SQRT(5))/2)^[Index])/SQRT(5)
or using the shallow diagonals of Pascal's triangle form:
Fibonacci =
SUMX (
ADDCOLUMNS (
SELECTCOLUMNS (
GENERATESERIES ( 0, FLOOR ( ( [Index] - 1 ) / 2, 1 ) ),
"ID", [Value]
),
"BinomCoeff", IF (
[ID] = 0,
1,
PRODUCTX (
GENERATESERIES ( 1, [ID] ),
DIVIDE ( [Index] - [ID] - [Value], [Value] )
)
)
),
[BinomCoeff]
)
but this is not the case for recursively defined functions in general (or for the purposes I'm actually interested in using this for).
In Excel, this is easy to do. You would write a formula like this
A3 = A2 + A1
or in R1C1 notation,
= R[-1]C + R[-2]C
but I just can't figure out if this is even possible in DAX.
Everything I've tried either doesn't work or gives a circular dependency error. For example,
Fibonacci =
VAR n = [Index]
RETURN
IF(Table1[Index] <= 2,
1,
SUMX(
FILTER(Table1,
Table1[Index] IN {n - 1, n - 2}),
Table1[Fibonacci]
)
)
gives the error message
A circular dependency was detected: Table1[Fibonacci].
Edit:
In the book Tabular Modeling in Microsoft SQL Server Analysis Services by Marco Russo and Alberto Ferrari, DAX is described and includes this paragraph:
As a pure functional language, DAX does not have imperative statements, but it leverages special functions called iterators that execute a certain expression for each row of a given table expression. These arguments are close to the lambda expression in functional languages. However, there are limitations in the way you can combine them, so we cannot say they correspond to a generic lambda expression definition. Despite its functional nature, DAX does not allow you to define new functions and does not provide recursion.
It appears there is no straightforward way to do recursion. I do still wonder if there is a way to do it indirectly using the Parent-Child functions, which appear to be recursive in nature.
Edit 2:
While general recursion doesn't seem feasible, don't forget that recursive formulas may have a nice closed form that can be fairly easily derived.
Here are a couple of examples where I use this workaround to sidestep recursive formulas:
How to perform sum of previous cells of same column in PowerBI
DAX - formula referencing itself
Based on your first sample dataset, it looks to me like a "sort of" cumulative total, which can probably be calculated easily in SQL using a window function -- I tried a couple of things but nothing panned out just yet. I don't work with DAX enough to say whether it can be done there.
Edit: In reviewing the Fibonacci sequence a little closer, it turns out that my SQL code doing a cumulative comparison is not correct. You can read the SO post How to generate Fibonacci Series, which has a few good SQL Fibonacci answers that I tested; in particular the post by N J (answered Feb 13 '14). A recursive-CTE version is sketched below. I'm not sure whether DAX has any Fibonacci recursion capability.
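For illustration, a hedged sketch of a recursive-CTE Fibonacci (my own version, not the exact code from the linked answers; standard T-SQL):
-- Each row carries the current and the next Fibonacci number,
-- so the recursive member only needs the previous row.
WITH fib (Indx, Fib, FibNext) AS (
    SELECT 1, CAST(1 AS bigint), CAST(1 AS bigint)
    UNION ALL
    SELECT Indx + 1, FibNext, Fib + FibNext
    FROM fib
    WHERE Indx < 10
)
SELECT Indx, Fib AS Fibonacci
FROM fib;
This returns 1, 1, 2, 3, 5, 8, 13, 21, 34, 55 for Indx 1 through 10.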
SQL Code (not quite correct):
DECLARE @myTable AS TABLE (Indx int)
INSERT INTO @myTable VALUES
    (1),(2),(3),(4),(5),(6),(7),(8),(9),(10)

SELECT
    Indx
    ,SUM(myTable.Indx) OVER(ORDER BY myTable.Indx ASC ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
        AS [Cumulative]
    ,SUM(myTable.Indx) OVER(ORDER BY myTable.Indx ASC ROWS BETWEEN UNBOUNDED PRECEDING AND 2 PRECEDING)
     + SUM(myTable.Indx) OVER(ORDER BY myTable.Indx ASC ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING)
        AS [Fibonacci]
FROM @myTable myTable
Result Set:
+------+------------+-----------+
| Indx | Cumulative | Fibonacci |
+------+------------+-----------+
|    1 |          1 |      NULL |
|    2 |          3 |      NULL |
|    3 |          6 |         4 |
|    4 |         10 |         9 |
|    5 |         15 |        16 |
|    6 |         21 |        25 |
|    7 |         28 |        36 |
|    8 |         36 |        49 |
|    9 |         45 |        64 |
|   10 |         55 |        81 |
+------+------------+-----------+
DAX Cumulative:
A link that could help calculate cumulative totals with DAX: https://www.daxpatterns.com/cumulative-total/. Here is some sample code from the article.
Cumulative Quantity :=
CALCULATE (
SUM ( Transactions[Quantity] ),
FILTER (
ALL ( 'Date'[Date] ),
'Date'[Date] <= MAX ( 'Date'[Date] )
)
)
The DAX language doesn't support recursion.
This is also stated in SQLBI's article about calculation groups:
DAX is not recursive, so Calculation Groups do not allow recursion. This is a good idea for controlling performance, but it requires a different approach compared to certain techniques that are possible in MDX Script by leveraging recursion.
https://www.sqlbi.com/blog/marco/2019/03/01/calculation-groups-in-dax-first-impressions/
If I declare a column in Oracle as NUMBER, what is the maximum number that can be stored in it?
Based on the documentation:
Positive numbers in the range 1 x 10^-130 to 9.99...9 x 10^125 with up
to 38 significant digits
10^125 is a very big number with more than 38 digits. Will it not be stored? If a number greater than 38 digits is stored, will it fail? Or will it be saved but lose precision when queried?
Thanks
From the Oracle docs:
Positive numbers in the range 1 x 10^-130 to 9.99...9 x 10^125 with up
to 38 significant digits; negative numbers from -1 x 10^-130 to
-9.99...99 x 10^125 with up to 38 significant digits
Test
create table tbl(clm number);
insert into tbl select power(10, -130) from dual;
insert into tbl select 9.9999*power(10, 125) from dual;
insert into tbl select 0.12345678912345678912345678912345678912123456 from dual;
insert into tbl select -1*power(10, -130) from dual;
select clm from tbl;
select to_char(clm) from tbl;
Output
1.000000000000000000000000000000000E-130
9.999900000000000000000000000000000E+125
.123456789123456789123456789123456789121
-1.00000000000000000000000000000000E-130
Numbers (datatype NUMBER) are stored using scientific notation,
i.e. 1000000 is stored as 1 * 10^6: you store only 1 (the mantissa) and 6 (the exponent).
select VSIZE(1000000), VSIZE(1000001) from dual;
VSIZE(1000000) VSIZE(1000001)
-------------- --------------
2 5
For the first number you need only 1 byte for the mantissa; for the second, 4 bytes (2 digits per byte).
So using NUMBER you will not get an exception when you start to lose precision.
select power(2,136) from dual;
87112285931760246646623899502532662132700
This number is not exact and is "filled" with zeroes (via the exponent). This may or may not be harmful - consider e.g. the MOD function:
select mod(power(2,136),2) from dual;
-100
If you want to control the precision exactly, use e.g. datatype NUMBER(38,0):
select cast(power(2,136) as NUMBER(38,0)) from dual;
ORA-01438: value larger than specified precision allowed for this column
For example, take the following into consideration.
100.00 - Original Number
33.33 - 1st divided by 3
33.33 - 2nd divided by 3
33.33 - 3rd divided by 3
99.99 - Is the sum of the 3 division outcomes
But I want it to match the original 100.00.
One way I saw this could be done is by taking the original number minus the first two divisions; the result becomes the third number. Now if I add those 3 numbers, I get my original number.
100.00 - Original Number
33.33 - 1st divided by 3
33.33 - 2nd divided by 3
33.34 - 3rd number
100.00 - Which gives me my original number correctly. (33.33+33.33+33.34 = 100.00)
Is there a formula for this, either in Oracle PL/SQL or as a function, that could be implemented?
Thanks in advance!
This version takes precision as a parameter as well:
with q as (select 100 as val, 3 as parts, 2 as prec from dual)
select rownum as no
      ,case when rownum = parts
            then val - round(val / parts, prec) * (parts - 1)
            else round(val / parts, prec)
       end v
from q
connect by level <= parts
 no     v
=== =====
  1 33.33
  2 33.33
  3 33.34
For example, if you want to split the value among the number of days in the current month, you can do this:
with q as (select 100 as val
                 ,extract(day from last_day(sysdate)) as parts
                 ,2 as prec
           from dual)
select rownum as no
      ,case when rownum = parts
            then val - round(val / parts, prec) * (parts - 1)
            else round(val / parts, prec)
       end v
from q
connect by level <= parts;
1 3.33
2 3.33
3 3.33
4 3.33
...
27 3.33
28 3.33
29 3.33
30 3.43
To apportion the value amongst each month, weighted by the number of days in each month, you could do this instead (change the level <= 3 to change the number of months it is calculated for):
with q as (
select add_months(date '2013-07-01', rownum-1) the_month
,extract(day from last_day(add_months(date '2013-07-01', rownum-1)))
as days_in_month
,100 as val
,2 as prec
from dual
connect by level <= 3)
,q2 as (
select the_month, val, prec
,round(val * days_in_month
/ sum(days_in_month) over (), prec)
as apportioned
,row_number() over (order by the_month desc)
as reverse_rn
from q)
select the_month
,case when reverse_rn = 1
then val - sum(apportioned) over (order by the_month
rows between unbounded preceding and 1 preceding)
else apportioned
end as portion
from q2;
01/JUL/13 33.7
01/AUG/13 33.7
01/SEP/13 32.6
Use rational numbers. You could store the numbers as fractions rather than simple values. That's the only way to assure that the quantity is truly split in 3, and that it adds up to the original number. Sure you can do something hacky with rounding and remainders, as long as you don't care that the portions are not exactly split in 3.
The "algorithm" is simply that
100/3 + 100/3 + 100/3 == 300/3 == 100
Store both the numerator and the denominator in separate fields, then add the numerators. You can always convert to floating point when you display the values.
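As a minimal sketch (the table name portions is hypothetical, not from the question):
--Each share of 100 split 3 ways is stored exactly as the fraction 100/3.
create table portions (numerator integer, denominator integer);
insert into portions
    select 100, 3 from dual connect by level <= 3;
--Adding the numerators gives back exactly 300/3 = 100.
select sum(numerator) / denominator as total
from portions
group by denominator;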
The Oracle docs even have a nice example of how to implement it:
CREATE TYPE rational_type AS OBJECT
( numerator INTEGER,
denominator INTEGER,
MAP MEMBER FUNCTION rat_to_real RETURN REAL,
MEMBER PROCEDURE normalize,
MEMBER FUNCTION plus (x rational_type)
RETURN rational_type);
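The docs show only the type specification; a hedged sketch of one possible type body (my own filling-in, not the documented implementation) could look like this:
CREATE TYPE BODY rational_type AS
  MAP MEMBER FUNCTION rat_to_real RETURN REAL IS
  BEGIN
    RETURN numerator / denominator;
  END;
  MEMBER PROCEDURE normalize IS
    a INTEGER := ABS(numerator);
    b INTEGER := ABS(denominator);
    t INTEGER;
  BEGIN
    --Euclid's algorithm: reduce the fraction by its greatest common divisor.
    WHILE b > 0 LOOP
      t := MOD(a, b);
      a := b;
      b := t;
    END LOOP;
    IF a > 0 THEN
      numerator   := numerator / a;
      denominator := denominator / a;
    END IF;
  END;
  MEMBER FUNCTION plus (x rational_type) RETURN rational_type IS
  BEGIN
    --a/b + c/d = (a*d + c*b) / (b*d)
    RETURN rational_type(numerator * x.denominator + x.numerator * denominator,
                         denominator * x.denominator);
  END;
END;
/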
Here is a parameterized SQL version:
SELECT COUNT(*), grp
FROM (WITH input AS (SELECT 100 p_number, 3 p_buckets FROM DUAL),
           data AS (SELECT LEVEL id, (p_number / p_buckets) group_size
                    FROM input
                    CONNECT BY LEVEL <= p_number)
      SELECT id, CEIL(ROW_NUMBER() OVER (ORDER BY id) / group_size) grp
      FROM data)
GROUP BY grp
Output:
COUNT(*)  GRP
      33    1
      33    2
      34    3
If you edit the input parameters (p_number and p_buckets), the SQL essentially distributes p_number as evenly as possible among the number of buckets requested (p_buckets).
I solved this problem yesterday by subtracting 2 of the 3 parts from the starting number, e.g. 100 - 33.33 - 33.33 = 33.34, and the result of summing them up is still 100.
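In SQL terms, a minimal sketch of that approach for a fixed 3-way split:
--The last part absorbs the rounding remainder.
select round(100/3, 2)           as part_1,  --33.33
       round(100/3, 2)           as part_2,  --33.33
       100 - 2 * round(100/3, 2) as part_3   --33.34; parts sum to 100.00
from dual;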
I had the following query:
SELECT nvl(sum(adjust1),0)
FROM (
    SELECT
        ManyOperationsOnFieldX adjust1,
        a, b, c, d, e
    FROM (
        SELECT
            a, b, c, d, e,
            SubStr(balance, INSTR(balance, '[&&2~', 1, 1)) X
        FROM
            table
        WHERE
            a >= To_Date('&&1','YYYYMMDD')
            AND a < To_Date('&&1','YYYYMMDD')+1
    )
)
WHERE
    b LIKE ...
    AND e IS NULL
    AND adjust1>0
    AND (b NOT IN ('...','...','...'))
    OR (b = '...' AND c <> NULL)
I tried to change it to this:
SELECT nvl(sum(adjust1),0)
FROM (
    SELECT
        ManyOperationsOnFieldX adjust1
    FROM (
        SELECT
            SubStr(balance, INSTR(balance, '[&&2~', 1, 1)) X
        FROM
            table
        WHERE
            a >= To_Date('&&1','YYYYMMDD')
            AND a < To_Date('&&1','YYYYMMDD')+1
            AND b LIKE '..'
            AND e IS NULL
            AND (b NOT IN ('..','..','..'))
            OR (b='..' AND c <> NULL)
    )
)
WHERE
    adjust1>0
My intention was to have all the filtering in the innermost query, and only give the outer ones the field X, which is the one I have to operate on a lot. However, the first (original) query takes a couple of seconds to execute, while the second one won't even finish: I waited for almost 20 minutes and still didn't get a result.
Is there an obvious reason for this to happen that I might be overlooking?
These are the plans for each of them:
SELECT STATEMENT optimizer=all_rows (cost = 973 Card = 1 bytes = 288)
SORT (aggregate)
PARTITION RANGE (single) (cost=973 Card = 3 bytes = 864)
TABLE ACCESS (full) OF "table" #3 TABLE Optimizer = analyzed(cost=973 Card = 3 bytes=564)
SELECT STATEMENT optimizer=all_rows (cost = 750.354 Card = 1 bytes = 288)
SORT (aggregate)
PARTITION RANGE (ALL) (cost=759.354 Card = 64.339 bytes = 18.529.632)
TABLE ACCESS (full) OF "table" #3 TABLE Optimizer = analyzed(cost=750.354 Card = 64.339 bytes=18.529.632)
Your two queries are not identical.
The logical operator AND is evaluated before the operator OR:
SQL> WITH data AS
2 (SELECT rownum id
3 FROM dual
4 CONNECT BY level <= 10)
5 SELECT *
6 FROM data
7 WHERE id = 2
8 AND id = 3
9 OR id = 5;
ID
----------
5
So your first query means: Give me the big SUM over this partition when the data is this way.
Your second query means: give me the big SUM over (this partition when the data is this way) or (when the data is this other way [no partition elimination hence big full scan])
Be careful when mixing the logical operators AND and OR. My advice would be to use brackets so as to avoid any confusion.
It is all about your OR... Try this:
SELECT nvl(sum(adjust1),0)
FROM (
    SELECT
        ManyOperationsOnFieldX adjust1
    FROM (
        SELECT
            SubStr(balance, INSTR(balance, '[&&2~', 1, 1)) X
        FROM
            table
        WHERE
            a >= To_Date('&&1','YYYYMMDD')
            AND a < To_Date('&&1','YYYYMMDD')+1
            AND (
                b LIKE '..'
                AND e IS NULL
                AND (b NOT IN ('..','..','..'))
                OR (b='..' AND c <> NULL)
            )
    )
)
WHERE
    adjust1>0
Because you have the OR inline with the rest of your AND conditions with no parentheses, the 2nd version isn't limiting the data checked to just the rows that fall within the date filter. For more info, see the documentation on condition precedence.