INSERT ALL INTO ... SELECT sequence.NEXTVAL: sequence not allowed? - Oracle

I'm trying to copy data from one table into 3 tables, and I need to insert sequence values into 1 or more of them.
But I got an error: "ORA-02287: sequence number not allowed here".
Here is my SQL:
INSERT ALL
INTO COM_BOARD(BOD_UID, MNU_UID, BOD_NOTICE, BOD_SUBJECT, BOD_READCNT, BOD_COMMENTCNT, BOD_REF, BOD_LEVEL, BOD_ORDER, BOD_REPLYCNT, BOD_PARENTUID,
BOD_TAG, BOD_OPEN, BOD_STATE, BOD_DELETE)
VALUES(BOD_UID, MNU_UID, 0, BOD_SUBJECT, 0, 0, BOD_UID, 0, 0, 0, 0, 0, 0, 9, 0)
INTO COM_BODCONTENT(CON_UID, BOD_UID, MEM_UID, CON_PW, CON_NM, CON_IP, CON_TY, CON_REGYMD, CON_MODYMD, CON_CONTENT)
VALUES(con_uid, bod_uid, 1, 'adm!!##!11', 'Admin', '127.0.0.1', 0, input_dt, update_dt, bod_subject)
INTO COM_BODDATA(DAT_UID, BOD_UID, DAT_FILETY, DAT_FILEEXT, DAT_FILENM, DAT_ORGFILENM, DAT_FILESIZE, DAT_DOWNCNT, DAT_STATE)
VALUES(DAT_UID, BOD_UID, 0, FILE_EXT, IMG, IMG, IMG_SIZE, 0, 1)
SELECT SEQ_BODUID.NEXTVAL BOD_UID, SEQ_CONUID.NEXTVAL CON_UID, SEQ_CONDAT_UID.NEXTVAL DAT_UID, 3141 MNU_UID
, DECODE(STATE, 'A', 9, 1) BOD_STATE, DECODE(STATE, 'A', 9, 1) BOD_DELETE, SUBJECT/*_kr .. */ bod_subject
, IMG, IMG_SIZE, SUBSTR(REGEXP_SUBSTR(IMG, '\.\w+'), 2) FILE_EXT, INPUT_DT, INPUT_WRITER, UPDATE_DT, UPDATE_WRITER, CNT
from system.t_near_photo
Is it not possible to use INSERT ALL INTO ~ INTO ~ SELECT sequence.NEXTVAL, or am I using it wrongly?
I'd appreciate any help.
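For reference (this is documented Oracle behavior, not an answer from this thread): the error is ORA-02287, raised because a sequence cannot be referenced in the subquery of a multitable insert. The usual rework is to move the sequence calls into the VALUES clauses; in a multitable insert, NEXTVAL is incremented once per row of the subquery, so repeating the same NEXTVAL in several INTO clauses returns the same value for that row and can still link the child rows. A trimmed sketch with abbreviated column lists (not the full statement above):

```sql
INSERT ALL
  INTO COM_BOARD (BOD_UID, MNU_UID, BOD_SUBJECT /* ... */)
    VALUES (SEQ_BODUID.NEXTVAL, MNU_UID, BOD_SUBJECT /* ... */)
  INTO COM_BODCONTENT (CON_UID, BOD_UID /* ... */)
    VALUES (SEQ_CONUID.NEXTVAL, SEQ_BODUID.NEXTVAL /* ... */)
SELECT 3141 AS MNU_UID, SUBJECT AS BOD_SUBJECT /* ... */
FROM system.t_near_photo
```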

Related

SSRS expression for date difference to a number

I have this expression:
=COUNT(Fields!RecId.Value) -
IIF(Fields!Status.Value="Assigned",
DATEDIFF("d", Fields!CreatedDateTime.Value,Fields!ResolvedDateTime.Value),
DATEDIFF("d", Fields!CreatedDateTime.Value,Fields!AssignedDateTime.Value))
- IIF(Weekday(Parameters!StartDate.Value, 1) = 1, 1, 0)
- IIF(Weekday(Parameters!StartDate.Value, 1) = 7, 1, 0)
- IIF(Weekday(Parameters!EndDate.Value, 1) = 1, 1, 0)
- IIF(Weekday(Parameters!EndDate.Value, 1) = 7, 1, 0)
What I want to return is the count of RecID values minus the date difference when the difference is more than 1 day.
From the comment, it sounds like you want the count of records minus the number of records where more than one working day elapsed between the Created Date and either the Resolved Date (if Status is "Assigned") or the Assigned Date.
=COUNT(Fields!RecId.Value) -
SUM(
IIF(Fields!Status.Value = "Assigned",
IIF(DATEDIFF("d", Fields!CreatedDateTime.Value, Fields!ResolvedDateTime.Value)
- (DateDiff(DateInterval.WeekOfYear, Fields!CreatedDateTime.Value, Fields!ResolvedDateTime.Value)*2)
- (IIF(WEEKDAY(Fields!CreatedDateTime.Value) = 7, 1, 0)
- (IIF(WEEKDAY(Fields!ResolvedDateTime.Value) = 6, 1, 0))
- 1) > 1, 0, 1)
,
IIF(DATEDIFF("d", Fields!CreatedDateTime.Value, Fields!AssignedDateTime.Value)
- (DateDiff(DateInterval.WeekOfYear, Fields!CreatedDateTime.Value, Fields!AssignedDateTime.Value) * 2)
- (IIF(WEEKDAY(Fields!CreatedDateTime.Value) = 7, 1, 0)
- (IIF(WEEKDAY(Fields!AssignedDateTime.Value) = 6, 1, 0))
- 1) > 1, 0, 1))
)
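The weekend arithmetic in these expressions (total days, minus two per intervening week, minus boundary Saturdays and Sundays) is easy to get subtly wrong. A small Python sketch (not SSRS code, just my illustration) of the working-days idea it approximates, useful for sanity-checking expected values:

```python
from datetime import date, timedelta

def working_days(start: date, end: date) -> int:
    """Weekdays (Mon-Fri) strictly after `start`, up to and including `end`."""
    days = 0
    d = start
    while d < end:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            days += 1
    return days

# 2024-01-01 is a Monday: Tue..Fri plus the following Monday = 5 working days
print(working_days(date(2024, 1, 1), date(2024, 1, 8)))  # 5
# Friday to Monday spans a weekend: only Monday counts
print(working_days(date(2024, 1, 5), date(2024, 1, 8)))  # 1
```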

Pattern recognition in binary numbers (pseudo code or MQL5)

index: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
value: . . 1 1 1 1 1 0 0 0  1  1  1  1  1  0  0  0
Recognition starts at index 17 and goes backwards to 0. What can be seen above is the simplest possible pattern:
The pattern starts with at least three 0s or three 1s. There can be more of each, but not mixed!
The first run is then followed by at least five 0s or five 1s, depending on what came first: since run one contains 0s, there must be at least five 1s, and vice versa.
Then we want to see the first run again: at least three 0s or three 1s, again depending on whether 1s or 0s came before.
Finally we want to see the second run again, i.e. at least five 0s or five 1s, depending on which run was seen before.
I tried using for loops and counters but did not manage to work it out. What I'm struggling with is that the pattern is not of fixed size, as there can be more than three or five 0s and 1s in succession.
Is anybody able to provide some pseudo code for implementing this, or even some MQL5 code?
The following Swift code is anything but optimal. It should just give you hints about how you could implement it.
A function to match a single pattern:
func matchPattern(numbers: [Int], startIndex: Int, number: Int) -> Int {
    var actualIndex = startIndex
    // Walk backwards while the value matches; >= 0 so a run reaching index 0 is fully counted
    while actualIndex >= 0 && numbers[actualIndex] == number {
        actualIndex -= 1
    }
    return startIndex - actualIndex
}
A function to match the 4 patterns:
func match(binNrs: [Int]) -> Bool {
    let firstPatternNr = binNrs[17]
    let secondPatternNr = firstPatternNr == 0 ? 1 : 0

    let pattern1Length = matchPattern(numbers: binNrs,
                                      startIndex: 17,
                                      number: firstPatternNr)
    if pattern1Length < 3 { return false }

    let pattern2Length = matchPattern(numbers: binNrs,
                                      startIndex: 17 - pattern1Length,
                                      number: secondPatternNr)
    if pattern2Length < 5 { return false }

    let pattern3Length = matchPattern(numbers: binNrs,
                                      startIndex: 17 - pattern1Length - pattern2Length,
                                      number: firstPatternNr)
    if pattern3Length < 3 { return false }

    let pattern4Length = matchPattern(numbers: binNrs,
                                      startIndex: 17 - pattern1Length - pattern2Length - pattern3Length,
                                      number: secondPatternNr)
    return pattern4Length >= 5
}
Some test patterns with results:
let match1 = match(binNrs: [0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0]) // true
let match2 = match(binNrs: [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0]) // false (4th sequence < 5)
let match3 = match(binNrs: [0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0]) // false (1st sequence < 3)
let match4 = match(binNrs: [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1]) // false (2nd sequence < 5)
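For completeness, here is the same backward-scanning idea in Python (my translation, not part of the answer). The run lengths and minimums are table-driven, so the pattern sequence is easy to change:

```python
def run_length(bits, start, value):
    """Length of the run of `value` scanning backwards from index `start`."""
    i = start
    while i >= 0 and bits[i] == value:
        i -= 1
    return start - i

def match(bits):
    """Backward scan for alternating runs of >= 3, >= 5, >= 3, >= 5 elements."""
    i = len(bits) - 1
    first = bits[i]
    runs = ((first, 3), (1 - first, 5), (first, 3), (1 - first, 5))
    for value, minimum in runs:
        n = run_length(bits, i, value)
        if n < minimum:
            return False
        i -= n
    return True

print(match([0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0]))  # True
print(match([1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0]))  # False
```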

Creating XOR across several IN clauses within the WHERE clause

I am trying to create an exclusive-or condition across several IN clauses. For example:
WHERE ACCOUNT IN (1,2,3) XOR ACCOUNT IN (3,4) XOR ACCOUNT IN (5,6)
The only reference materials I can find do not cover using an IN clause. Thanks in advance.
Edit - Clarification :
DDL:
CREATE TABLE EXAMPLE
(
CONTRACT VARCHAR2(1),
ID_NUMBER NUMBER,
ACCOUNT NUMBER,
AMOUNT_1 NUMBER,
AMOUNT_2 NUMBER
);
INSERT INTO EXAMPLE (CONTRACT, ID_NUMBER, ACCOUNT, AMOUNT_1, AMOUNT_2)
VALUES ('A', 1, 100, 5, NULL);
INSERT INTO EXAMPLE (CONTRACT, ID_NUMBER, ACCOUNT, AMOUNT_1, AMOUNT_2)
VALUES ('A', 2, 101, NULL, 5);
INSERT INTO EXAMPLE (CONTRACT, ID_NUMBER, ACCOUNT, AMOUNT_1, AMOUNT_2)
VALUES ('A', 3, 200, 2, NULL);
INSERT INTO EXAMPLE (CONTRACT, ID_NUMBER, ACCOUNT, AMOUNT_1, AMOUNT_2)
VALUES ('B', 4, 100, 7, NULL);
INSERT INTO EXAMPLE (CONTRACT, ID_NUMBER, ACCOUNT, AMOUNT_1, AMOUNT_2)
VALUES ('B', 5, 100, 3, NULL);
INSERT INTO EXAMPLE (CONTRACT, ID_NUMBER, ACCOUNT, AMOUNT_1, AMOUNT_2)
VALUES ('B', 6, 101, NULL, 10);
INSERT INTO EXAMPLE (CONTRACT, ID_NUMBER, ACCOUNT, AMOUNT_1, AMOUNT_2)
VALUES ('B', 7, 200, 2, NULL);
INSERT INTO EXAMPLE (CONTRACT, ID_NUMBER, ACCOUNT, AMOUNT_1, AMOUNT_2)
VALUES ('C', 8, 200, 10, NULL);
INSERT INTO EXAMPLE (CONTRACT, ID_NUMBER, ACCOUNT, AMOUNT_1, AMOUNT_2)
VALUES ('C', 9, 200, 5, NULL);
INSERT INTO EXAMPLE (CONTRACT, ID_NUMBER, ACCOUNT, AMOUNT_1, AMOUNT_2)
VALUES ('C', 10, 201, NULL, 15);
INSERT INTO EXAMPLE (CONTRACT, ID_NUMBER, ACCOUNT, AMOUNT_1, AMOUNT_2)
VALUES ('C', 11, 300, 6, NULL);
INSERT INTO EXAMPLE (CONTRACT, ID_NUMBER, ACCOUNT, AMOUNT_1, AMOUNT_2)
VALUES ('C', 12, 301, NULL, 6);
INSERT INTO EXAMPLE (CONTRACT, ID_NUMBER, ACCOUNT, AMOUNT_1, AMOUNT_2)
VALUES ('D', 13, 100, NULL, -5);
INSERT INTO EXAMPLE (CONTRACT, ID_NUMBER, ACCOUNT, AMOUNT_1, AMOUNT_2)
VALUES ('D', 14, 100, NULL, 5);
INSERT INTO EXAMPLE (CONTRACT, ID_NUMBER, ACCOUNT, AMOUNT_1, AMOUNT_2)
VALUES ('D', 15, 300, 7, 3);
INSERT INTO EXAMPLE (CONTRACT, ID_NUMBER, ACCOUNT, AMOUNT_1, AMOUNT_2)
VALUES ('D', 16, 200, NULL, 4);
My query:
SELECT *
FROM (
    SELECT A.CONTRACT,
           COUNT(NVL(ID_NUMBER, 1)) AS ID_NUMBER_COUNT,
           LISTAGG(ID_NUMBER, ', ') WITHIN GROUP (ORDER BY CONTRACT) AS ID_NUMBERS,
           SUM(NVL(AMOUNT_1, 0)) AS AMOUNT_1_SUM,
           SUM(NVL(AMOUNT_2, 0)) AS AMOUNT_2_SUM
    FROM EXAMPLE A
    WHERE 1=1
      AND NOT (NVL(AMOUNT_1, 0) = NVL(AMOUNT_2, 0))
    GROUP BY CUBE(CONTRACT, ACCOUNT)
) A
WHERE 1=1
  AND NVL(A.AMOUNT_1_SUM, 0) = NVL(A.AMOUNT_2_SUM, 0)
  AND CONTRACT IS NOT NULL
The CUBE function may seem like overkill for this example, but my actual table has several more descriptor columns that necessitate searching across the combinations.
If you run the query on the above table without any IN clause to limit the accounts, you will not receive the true population of records that are offsets. (I should clarify that they only sum to zero if they are in the same column; otherwise an offset will occur across both columns where the aggregated amounts are equal.)
The true population of records that I am aiming to capture is:
-On contract A, ID Numbers 1 and 2
-On contract B, ID Number 4,5, and 6
-On contract C, all ID Numbers
-On contract D, all ID Numbers
The query as it currently stands can capture all ID numbers across contracts C and D; however, there are records in contracts A and B that will not come back as valid results unless the accounts are limited.
-Limiting account to IN (100,101) will yield the ID numbers from A and B that I aim to capture. The caveat is that there are ~20 combinations of accounts in my full population that must be searched.
-There will never be an offset that occurs between two different contracts. I handle this in the query on the full population by using GROUPING_ID, then just excluding anywhere the Contract field is blank.
-As a last resort, I can use a UNION statement, but would like to do without using one.
-The only other thing I can currently think to do is to define the sets of accounts somewhere before I run the query, then just run a FOR loop for each set.
Thank you!
The equivalent of A XOR B is ( A AND NOT B ) OR ( B AND NOT A ) which would make your query something like this:
WHERE ( ACCOUNT IN (1,2,3) AND ACCOUNT NOT IN (3,4,5,6) )
OR ( ACCOUNT IN (3,4) AND ACCOUNT NOT IN (1,2,3,5,6) )
OR ( ACCOUNT IN (5,6) AND ACCOUNT NOT IN (1,2,3,3,4) )
However, the question does not entirely make sense: a single ACCOUNT value cannot be in multiple of these sets at once (apart from 3, which appears in two of them), so you appear to be testing the equivalent of A XOR NOT A, which is always true (when ACCOUNT <> 3).
Given this, the logic above will simplify to:
WHERE ACCOUNT IN (1,2,4,5,6)
Edit - Following the clarification of the question:
Oracle Setup:
I renamed the Amount_1 and Amount_2 columns to Credit and Debit
CREATE TABLE EXAMPLE( CONTRACT, ID_NUMBER, ACCOUNT, CREDIT, DEBIT ) AS
SELECT 'A', 1, 100, 5, NULL FROM DUAL UNION ALL
SELECT 'A', 2, 101, NULL, 5 FROM DUAL UNION ALL
SELECT 'A', 3, 200, 2, NULL FROM DUAL UNION ALL
SELECT 'B', 4, 100, 7, NULL FROM DUAL UNION ALL
SELECT 'B', 5, 100, 3, NULL FROM DUAL UNION ALL
SELECT 'B', 6, 101, NULL, 10 FROM DUAL UNION ALL
SELECT 'B', 7, 200, 2, NULL FROM DUAL UNION ALL
SELECT 'C', 8, 200, 10, NULL FROM DUAL UNION ALL
SELECT 'C', 9, 200, 5, NULL FROM DUAL UNION ALL
SELECT 'C', 10, 201, NULL, 15 FROM DUAL UNION ALL
SELECT 'C', 11, 300, 6, NULL FROM DUAL UNION ALL
SELECT 'C', 12, 301, NULL, 6 FROM DUAL UNION ALL
SELECT 'D', 13, 100, NULL, -5 FROM DUAL UNION ALL
SELECT 'D', 14, 100, NULL, 5 FROM DUAL UNION ALL
SELECT 'D', 15, 300, 7, 3 FROM DUAL UNION ALL
SELECT 'D', 16, 200, NULL, 4 FROM DUAL UNION ALL
SELECT 'E', 17, 100, 3, NULL FROM DUAL UNION ALL
SELECT 'E', 18, 200, NULL, 4 FROM DUAL;
CREATE OR REPLACE TYPE TransactionObj AS OBJECT(
ID_NUMBER INT,
ACCOUNT INT,
VALUE INT
);
/
CREATE OR REPLACE TYPE TransactionTable AS TABLE OF TransactionObj;
/
CREATE OR REPLACE FUNCTION getMaxZeroSum(
  Transactions TransactionTable
) RETURN TransactionTable
AS
  zeroSumTransactions TransactionTable := TransactionTable();
  bitCount            INT;
  valueSum            INT;
  maxBitCount         INT := 0;
  valueMax            INT := 0;
BEGIN
  IF Transactions IS NULL OR Transactions IS EMPTY THEN
    RETURN zeroSumTransactions;
  END IF;
  FOR i IN 1 .. POWER( 2, Transactions.COUNT ) - 1 LOOP
    bitCount := 0;
    valueSum := 0;
    FOR j IN 1 .. Transactions.COUNT LOOP
      IF BITAND( i, POWER( 2, j - 1 ) ) > 0 THEN
        valueSum := valueSum + Transactions(j).VALUE;
        bitCount := bitCount + 1;
      END IF;
    END LOOP;
    IF valueSum = 0 AND bitCount > maxBitCount THEN
      maxBitCount := bitCount;
      valueMax := i;
    END IF;
  END LOOP;
  IF maxBitCount > 0 THEN
    zeroSumTransactions.EXTEND( maxBitCount );
    bitCount := 0;
    FOR j IN 1 .. Transactions.COUNT LOOP
      IF BITAND( valueMax, POWER( 2, j - 1 ) ) > 0 THEN
        bitCount := bitCount + 1;
        zeroSumTransactions(bitCount) := Transactions(j);
      END IF;
    END LOOP;
  END IF;
  RETURN zeroSumTransactions;
END;
/
Query:
SELECT zs.Contract,
LISTAGG( t.ID_NUMBER, ',' ) WITHIN GROUP ( ORDER BY ID_NUMBER ) AS ids,
LISTAGG( t.ACCOUNT, ',' ) WITHIN GROUP ( ORDER BY ID_NUMBER ) AS accounts
FROM (
SELECT CONTRACT,
getMaxZeroSum( CAST( COLLECT( TransactionObj( ID_NUMBER, ACCOUNT, NVL( CREDIT, 0 ) - NVL( DEBIT, 0 ) ) ) AS TransactionTable ) ) AS Transactions
FROM EXAMPLE
WHERE NVL( CREDIT, 0 ) <> NVL( DEBIT, 0 )
GROUP BY CONTRACT
) zs,
TABLE( zs.Transactions ) (+) t
GROUP BY Contract;
Output:
CONTRACT IDS ACCOUNTS
-------- -------------- --------------------
A 1,2 100,101
B 4,5,6 100,100,101
C 8,9,10,11,12 200,200,201,300,301
D 13,14,15,16 100,100,300,200
E NULL NULL
The getMaxZeroSum function could almost certainly be improved: for example, by considering the subsets in order from fewest items excluded through to all-but-two excluded, and returning as soon as it finds a zero sum. (I went for a function that is easy to write, as a demonstration of how it could be done, rather than a performant one.) However you write it, though, I can't see a way that isn't O(n·2^n), where n is the number of transactions for a given contract.
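The core of getMaxZeroSum — find the largest subset of signed amounts that sums to zero — can be prototyped outside the database. A Python sketch of the same O(n·2^n) brute force (my addition, not part of the original answer), enumerating larger subsets first so it can stop at the first hit:

```python
from itertools import combinations

def max_zero_sum(values):
    """Indices of the largest subset of `values` that sums to zero.

    Same O(n * 2**n) brute force as the PL/SQL function, but enumerating
    larger subsets first so it can stop at the first zero sum found."""
    n = len(values)
    for size in range(n, 0, -1):
        for subset in combinations(range(n), size):
            if sum(values[i] for i in subset) == 0:
                return list(subset)
    return []

# Contract A: net amounts 5, -5, 2 -> rows 0 and 1 offset each other
print(max_zero_sum([5, -5, 2]))      # [0, 1]
# Contract B: 7, 3, -10, 2 -> rows 0, 1 and 2 offset
print(max_zero_sum([7, 3, -10, 2]))  # [0, 1, 2]
```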

SSRS Divide by zero error

I am getting NaN in 3 places in my SSRS report. I believe it is because I am dividing by 0. I am trying to find the average days for prescriptions filled on time, late, and not filled. The 3 expressions I was given are below. What would I need to insert, and where, to add an IIF statement addressing the zero issue? I am new to this.
=sum(iif(Fields!DaysDifference.Value >= -1 and Fields!DaysDifference.Value <= 1 and Fields!ActualNextFillDateKey.Value <> 0, Fields!DaysDifference.Value,0))/
sum(iif(Fields!DaysDifference.Value >= -1 and Fields!DaysDifference.Value <= 1 and Fields!ActualNextFillDateKey.Value <> 0, 1,0))
=sum(iif(Fields!DaysDifference.Value > 1 and Fields!ActualNextFillDateKey.Value <> 0, Fields!DaysDifference.Value,0))/
sum(iif(Fields!DaysDifference.Value > 1 and Fields!ActualNextFillDateKey.Value <> 0, 1,0))
=sum(iif(Fields!ActualNextFillDateKey.Value = 0, Fields!DaysDifference.Value, 0))/
sum(iif(Fields!ActualNextFillDateKey.Value = 0, 1, 0))
Instead of using 0 in the false branch, you should be dividing by your field, so the denominator cannot sum to zero:
=SUM(IIF(Fields!ActualNextFillDateKey.Value = 0, Fields!DaysDifference.Value, 0))/
SUM(IIF(Fields!ActualNextFillDateKey.Value = 0, 1, Fields!ActualNextFillDateKey.Value))
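The underlying pattern — make sure the conditional denominator can never be zero, since 0/0 is what produces the NaN — can be illustrated outside SSRS. A hypothetical Python analogue (my illustration, not SSRS syntax) of the SUM(IIF(...)) / SUM(IIF(...)) construction:

```python
def average_where(values, predicate):
    """Average of the values matching predicate, or 0 when none match.

    Mirrors the SUM(IIF(cond, x, 0)) / SUM(IIF(cond, 1, 0)) construction:
    the NaN appears when the conditional count in the denominator is 0."""
    total = sum(v for v in values if predicate(v))
    count = sum(1 for v in values if predicate(v))
    return total / count if count else 0

print(average_where([2, 4, 9], lambda v: v < 5))  # 3.0
print(average_where([2, 4], lambda v: v > 5))     # 0  (would otherwise be 0/0 -> NaN)
```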

How can you tell which columns are unused in ALL_TAB_COLS?

When you query the ALL_TAB_COLS view on Oracle 9i, it lists columns marked as UNUSED as well as the 'active' table columns. There doesn't seem to be a field that explicitly says whether a column is UNUSED, or any view I can join to that lists the unused columns in a table. How can I easily find out which are the unused columns, so I can filter them out of ALL_TAB_COLS?
Try using ALL_TAB_COLUMNS instead of ALL_TAB_COLS. In Oracle 11.2 I find that unused columns appear in ALL_TAB_COLS (though renamed) but not in ALL_TAB_COLUMNS.
I created a table like this:
create table t1 (c1 varchar2(30), c2 varchar2(30));
Then set c2 unused:
alter table t1 set unused column c2;
Then I see:
select column_name from all_tab_cols where owner='ME' and table_name='T1';
COLUMN_NAME
-----------
C1
SYS_C00002_10060107:25:40$
select column_name from all_tab_columns where owner='ME' and table_name='T1';
COLUMN_NAME
-----------
C1
The only filter in the definition of ALL_TAB_COLUMNS is "where hidden_column = 'NO'", so it seems that UNUSED columns are flagged in the HIDDEN_COLUMN field.
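Two further checks that may help here (my addition, not part of the original answer): Oracle also provides USER_UNUSED_COL_TABS / DBA_UNUSED_COL_TABS, which list the tables that contain unused columns (a count per table, not the column names), and the set difference between the two views discussed above yields the hidden column names directly, which include the UNUSED ones:

```sql
-- Tables in your schema that currently have unused columns (count only):
SELECT * FROM user_unused_col_tabs;

-- Column names present in ALL_TAB_COLS but filtered out of ALL_TAB_COLUMNS,
-- i.e. hidden columns, which include the renamed UNUSED ones:
SELECT column_name FROM all_tab_cols    WHERE owner = 'ME' AND table_name = 'T1'
MINUS
SELECT column_name FROM all_tab_columns WHERE owner = 'ME' AND table_name = 'T1';
```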
Looking further into the data dictionary views, it looks like COL$.PROPERTY is set to 32800 (bits 2^5 and 2^15) when a column is marked UNUSED. Bit 2^5 marks hidden columns, so it seems likely that 2^15 is the UNUSED flag. You could create a custom version of ALL_TAB_COLS based on that, which should work for what you need, such as this.
CREATE OR REPLACE FORCE VIEW all_tab_cols_rev (owner,
table_name,
column_name,
data_type,
data_type_mod,
data_type_owner,
data_length,
data_precision,
data_scale,
nullable,
column_id,
default_length,
data_default,
num_distinct,
low_value,
high_value,
density,
num_nulls,
num_buckets,
last_analyzed,
sample_size,
character_set_name,
char_col_decl_length,
global_stats,
user_stats,
avg_col_len,
char_length,
char_used,
v80_fmt_image,
data_upgraded,
hidden_column,
virtual_column,
segment_column_id,
internal_column_id,
histogram,
qualified_col_name,
unused_column)
AS
SELECT u.NAME,
o.NAME,
c.NAME,
DECODE (c.type#,
1, DECODE (c.CHARSETFORM, 2, 'NVARCHAR2', 'VARCHAR2'),
2, DECODE (c.scale, NULL, DECODE (c.precision#, NULL, 'NUMBER', 'FLOAT'), 'NUMBER'),
8, 'LONG',
9, DECODE (c.CHARSETFORM, 2, 'NCHAR VARYING', 'VARCHAR'),
12, 'DATE',
23, 'RAW',
24, 'LONG RAW',
58, NVL2 (ac.synobj#, (SELECT o.NAME
FROM obj$ o
WHERE o.obj# = ac.synobj#), ot.NAME),
69, 'ROWID',
96, DECODE (c.CHARSETFORM, 2, 'NCHAR', 'CHAR'),
100, 'BINARY_FLOAT',
101, 'BINARY_DOUBLE',
105, 'MLSLABEL',
106, 'MLSLABEL',
111, NVL2 (ac.synobj#, (SELECT o.NAME
FROM obj$ o
WHERE o.obj# = ac.synobj#), ot.NAME),
112, DECODE (c.CHARSETFORM, 2, 'NCLOB', 'CLOB'),
113, 'BLOB',
114, 'BFILE',
115, 'CFILE',
121, NVL2 (ac.synobj#, (SELECT o.NAME
FROM obj$ o
WHERE o.obj# = ac.synobj#), ot.NAME),
122, NVL2 (ac.synobj#, (SELECT o.NAME
FROM obj$ o
WHERE o.obj# = ac.synobj#), ot.NAME),
123, NVL2 (ac.synobj#, (SELECT o.NAME
FROM obj$ o
WHERE o.obj# = ac.synobj#), ot.NAME),
178, 'TIME(' || c.scale || ')',
179, 'TIME(' || c.scale || ')' || ' WITH TIME ZONE',
180, 'TIMESTAMP(' || c.scale || ')',
181, 'TIMESTAMP(' || c.scale || ')' || ' WITH TIME ZONE',
231, 'TIMESTAMP(' || c.scale || ')' || ' WITH LOCAL TIME ZONE',
182, 'INTERVAL YEAR(' || c.precision# || ') TO MONTH',
183, 'INTERVAL DAY(' || c.precision# || ') TO SECOND(' || c.scale || ')',
208, 'UROWID',
'UNDEFINED'),
DECODE (c.type#, 111, 'REF'),
NVL2 (ac.synobj#, (SELECT u.NAME
FROM user$ u, obj$ o
WHERE o.owner# = u.user#
AND o.obj# = ac.synobj#), ut.NAME),
c.LENGTH,
c.precision#,
c.scale,
DECODE (SIGN (c.null$), -1, 'D', 0, 'Y', 'N'),
DECODE (c.col#, 0, TO_NUMBER (NULL), c.col#),
c.deflength,
c.default$,
h.distcnt,
h.lowval,
h.hival,
h.density,
h.null_cnt,
CASE
WHEN NVL (h.distcnt, 0) = 0
THEN h.distcnt
WHEN h.row_cnt = 0
THEN 1
WHEN ( h.bucket_cnt > 255
OR ( h.bucket_cnt > h.distcnt
AND h.row_cnt = h.distcnt
AND h.density * h.bucket_cnt <= 1) )
THEN h.row_cnt
ELSE h.bucket_cnt
END,
h.timestamp#,
h.sample_size,
DECODE (c.CHARSETFORM,
1, 'CHAR_CS',
2, 'NCHAR_CS',
3, NLS_CHARSET_NAME (c.CHARSETID),
4, 'ARG:' || c.CHARSETID),
DECODE (c.CHARSETID, 0, TO_NUMBER (NULL), NLS_CHARSET_DECL_LEN (c.LENGTH, c.CHARSETID) ),
DECODE (BITAND (h.spare2, 2), 2, 'YES', 'NO'),
DECODE (BITAND (h.spare2, 1), 1, 'YES', 'NO'),
h.avgcln,
c.spare3,
DECODE (c.type#,
1, DECODE (BITAND (c.property, 8388608), 0, 'B', 'C'),
96, DECODE (BITAND (c.property, 8388608), 0, 'B', 'C'),
NULL),
DECODE (BITAND (ac.flags, 128), 128, 'YES', 'NO'),
DECODE (o.status,
1, DECODE (BITAND (ac.flags, 256), 256, 'NO', 'YES'),
DECODE (BITAND (ac.flags, 2),
2, 'NO',
DECODE (BITAND (ac.flags, 4), 4, 'NO', DECODE (BITAND (ac.flags, 8), 8, 'NO', 'N/A') ) ) ),
DECODE (c.property, 0, 'NO', DECODE (BITAND (c.property, 32), 32, 'YES', 'NO') ),
DECODE (c.property, 0, 'NO', DECODE (BITAND (c.property, 8), 8, 'YES', 'NO') ),
DECODE (c.segcol#, 0, TO_NUMBER (NULL), c.segcol#),
c.intcol#,
CASE
WHEN NVL (h.row_cnt, 0) = 0
THEN 'NONE'
WHEN ( h.bucket_cnt > 255
OR ( h.bucket_cnt > h.distcnt
AND h.row_cnt = h.distcnt
AND h.density * h.bucket_cnt <= 1) )
THEN 'FREQUENCY'
ELSE 'HEIGHT BALANCED'
END,
DECODE (BITAND (c.property, 1024),
1024, (SELECT DECODE (BITAND (cl.property, 1), 1, rc.NAME, cl.NAME)
FROM SYS.col$ cl, attrcol$ rc
WHERE cl.intcol# = c.intcol# - 1
AND cl.obj# = c.obj#
AND c.obj# = rc.obj#(+)
AND cl.intcol# = rc.intcol#(+)),
DECODE (BITAND (c.property, 1), 0, c.NAME, (SELECT tc.NAME
FROM SYS.attrcol$ tc
WHERE c.obj# = tc.obj#
AND c.intcol# = tc.intcol#) ) ),
DECODE (c.property, 0, 'NO', DECODE (BITAND (c.property, 32768), 32768, 'YES', 'NO') )
FROM SYS.col$ c, SYS.obj$ o, SYS.hist_head$ h, SYS.user$ u, SYS.coltype$ ac, SYS.obj$ ot, SYS.user$ ut
WHERE o.obj# = c.obj#
AND o.owner# = u.user#
AND c.obj# = h.obj#(+)
AND c.intcol# = h.intcol#(+)
AND c.obj# = ac.obj#(+)
AND c.intcol# = ac.intcol#(+)
AND ac.toid = ot.oid$(+)
AND ot.type#(+) = 13
AND ot.owner# = ut.user#(+)
AND ( o.type# IN (3, 4) /* cluster, view */
OR ( o.type# = 2 /* tables, excluding iot - overflow and nested tables */
AND NOT EXISTS (
SELECT NULL
FROM SYS.tab$ t
WHERE t.obj# = o.obj#
AND ( BITAND (t.property, 512) = 512
OR BITAND (t.property, 8192) = 8192) ) ) )
AND ( o.owner# = USERENV ('SCHEMAID')
OR o.obj# IN (SELECT obj#
FROM SYS.objauth$
WHERE grantee# IN (SELECT kzsrorol
FROM x$kzsro) )
OR /* user has system privileges */
EXISTS (
SELECT NULL
FROM v$enabledprivs
WHERE priv_number IN
(-45 /* LOCK ANY TABLE */,
-47 /* SELECT ANY TABLE */,
-48 /* INSERT ANY TABLE */,
-49 /* UPDATE ANY TABLE */,
-50 /* DELETE ANY TABLE */) ) );
I'd put the view in a separate, locked schema that has the SELECT ANY DICTIONARY privilege, then create a public synonym for it. That way, all of your users would be able to see the UNUSED_COLUMN column for only the tables that they have permissions on.
