CHAR_USED query returned '0 rows fetched from 1 column' - Oracle

I created the following table:
Create table temp.test(c1 VARCHAR2(10 BYTE));
I was trying to use CHAR_USED to determine whether the column size is in BYTES or CHARS, but all I am getting back is '0 rows fetched from 1 column'. The database version I am using is Oracle 11g. Does anyone have a clue as to why it is not returning the semantic length information for this table?
The queries used are as follows:
select CHAR_USED from all_tab_columns where table_name='temp.test'
select CHAR_USED from all_tab_columns where table_name='test' and owner = 'temp'

Assuming that you are not using case-sensitive identifiers (which you are not, and should not), object names are stored in the data dictionary in upper case. So when you query a dictionary view like all_tab_columns, you need to use upper case:
SELECT column_name, char_used
FROM all_tab_columns
WHERE table_name = 'TEST'
AND owner = 'TEMP'
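For the table in the question, the corrected query should then return a row. CHAR_USED is B for byte semantics and C for character semantics, so the expected output (a sketch) looks like:
COLUMN_NAME CHAR_USED
----------- ---------
C1          B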

Related

Select the values after _ from the table name using Oracle SQL

Suppose the table name is ABC_XYZ_123. I want to extract the integer value after the last _.
The output should be the integer value after the _.
In the above case, the output should be 123.
I have used the below SQL query:
select from table_name like 'XXX_%';
But I am not getting the required output. Can anyone help me with this query?
Thanks
Using REGEXP_SUBSTR with a capture group we can try:
SELECT REGEXP_SUBSTR(name, '_(\d+)$', 1, 1, NULL, 1)
FROM yourTable;
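For example, run against the sample name from the question (via DUAL, since we only have a literal here), this returns 123:
SELECT REGEXP_SUBSTR('ABC_XYZ_123', '_(\d+)$', 1, 1, NULL, 1) AS result
FROM dual;
-- RESULT: 123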
The question is somewhat unclear:
it looks as if you're looking for table names that contain a number at the end, while
the query you posted suggests that you're trying to select those numbers from one of the table's columns.
I'll stick to:
Suppose the table name is ABC_XYZ_123
If that's so, it is the data dictionary you'll query; USER_TABLES contains that information.
Let's create that table:
SQL> create table abc_xyz_123 (id number);
Table created.
The query selects the numbers at the end of the table names, for all of my tables whose names end in digits.
SQL> select table_name,
2 regexp_substr(table_name, '\d+$') result
3 from user_tables
4 where regexp_like(table_name, '\d+$');
TABLE_NAME           RESULT
-------------------- ----------
TABLE1               1
TABLE2               2
restore_point-001    001
ABC_XYZ_123          123        --> here's your table
SQL>
Apparently, I have a few of them.

Query to check the full schema scan for tables in Oracle DB

Hi, I have a requirement to scan through the schema and identify tables which are redundant (candidates for dropping). So I did a select on DBA_DEPENDENCIES to check whether the tables are being used in any DB object types (procedure, package body, view, materialized view, ...), and I was able to find and exclude some tables. Since I also need to capture the total counts and when each table was last loaded/used: is there an automated way to select only the remaining tables (those not found in the dependencies list) and capture their counts, as well as when they were last used/loaded?
Difficulty: there are a lot of tables (500+).
I have used the below queries.
Query 1
select table_name,
       -- runs a live count(*) against each table via dbms_xmlgen and extracts the result
       to_number(extractvalue(xmltype(dbms_xmlgen.getxml('select count(*) c from '||owner||'.'||table_name)),'/ROWSET/ROW/C')) as count
from all_tables
where owner = 'SCHEMA_NAME'
Query 2
select owner, table_name, num_rows, sample_size, last_analyzed from all_tables;
Query 1 Result
Filter Table_name=CUST_ORDER
OWNER TABLE_NAME COUNT SAMPLE_SIZE LAST_ANALYZED
ABCD  CUST_ORDER 1083  1023        01.01.2020
Query 2 Result
Filter Table_name=CUST_ORDER
OWNER TABLE_NAME NUM_ROWS SAMPLE_SIZE LAST_ANALYZED
ABCD  CUST_ORDER 1023     1023        01.01.2020
Question
Query 1's results do not match query 2's, even though the same table and the same filter are used in both queries. Why are the results not matching?
When I randomly checked other filters, the counts did match. Does anyone know the reason?
Upon further testing I encountered an error. What does this error signify? Permissions?
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04040: file **-**.csv in ****_***_***_***** not found
29913. 00000 - "error in executing %s callout"
*Cause: The execution of the specified callout caused an error.
*Action: Examine the error messages take appropriate action.
The number you see in all_tables is a point-in-time capture of the number of rows. It will only be updated when statistics are gathered again for that table.
Here is an example:
CREATE TABLE t1 AS
SELECT *
FROM all_objects;
SELECT t.num_rows
FROM all_tables t
WHERE t.table_name = 'T1';
-- 78570
SELECT COUNT(*)
FROM t1;
-- 78570
The stats and the physical number of rows match!
INSERT INTO t1
SELECT *
FROM all_objects ao
WHERE rownum <= 5;
-- 5 rows inserted
SELECT t.num_rows
FROM all_tables t
WHERE t.table_name = 'T1';
-- 78570
SELECT COUNT(*)
FROM t1;
-- 78575
Here we have the mismatch, because rows were inserted (or maybe even deleted) but the stats for the table have not been updated. Let's update them:
BEGIN
    dbms_stats.gather_table_stats(ownname => 'SCHEMA',
                                  tabname => 'T1');
END;
/
SELECT t.num_rows
FROM all_tables t
WHERE t.table_name = 'T1';
-- 78575
Now you can see the rows match. Using the value from all_tables may be good enough for your research (and will certainly be faster to query than counting every table).
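To tie this back to the original ask (looking only at the tables not found in the dependencies list), a minimal sketch, assuming SCHEMA_NAME is your schema and that you can read DBA_DEPENDENCIES, could be:
SELECT t.owner, t.table_name, t.num_rows, t.last_analyzed
FROM all_tables t
WHERE t.owner = 'SCHEMA_NAME'
AND NOT EXISTS (
      SELECT 1
      FROM dba_dependencies d
      WHERE d.referenced_owner = t.owner
      AND d.referenced_name = t.table_name
      AND d.referenced_type = 'TABLE'
);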
Query 1 reads the actual data of the table and hence returns an accurate count. One can rely on this query's output.
Query 2 does not read the actual data. It shows the figures captured when the table was last analyzed, and one should not depend on this query for finding the number of records in the table.
If you gather the stats on the table and then execute query 2, you will find the same count as query 1 returns.
If no records are inserted into or deleted from the table after the stats are gathered, then the query 1 and query 2 counts will match for that table.
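If you want num_rows to be trustworthy across all 500+ tables at once, one option (a sketch; it assumes you have the privilege to gather statistics and that SCHEMA_NAME is the schema in question) is to refresh statistics for the whole schema first:
BEGIN
    -- refresh optimizer statistics, including num_rows, for every table in the schema
    dbms_stats.gather_schema_stats(ownname => 'SCHEMA_NAME');
END;
/
After that, num_rows in all_tables reflects the state as of gathering time, which is usually far cheaper than running count(*) against every table.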

How to copy all constraints and data from one schema to another in Oracle

I am using Toad for Oracle 12c. I need to copy a table and its data (40M rows) from one schema to another (prod to test). However, there is a unique key (not the PK for this table) on a column called record_id, which has data like 3.000*******19E15. About 2M rows have the same numbers (I believe it's because the numbers are very large), although they are unique in prod. When I try to copy, it violates the unique key on that column. I am using Toad's "export data to another schema" function to copy the data.
when I execute query in prod
select count(*) from table_name
OR
select count(distinct record_id) from table_name
Both queries give exactly the same count.
I don't have DBA permission. How do I copy all data without violating unique key of the table.
Thanks in advance!
You can use an upsert (a decisional INSERT or UPDATE), or you may write a small procedure for this.
You may also consider NOT EXISTS, but your data set is big and it might not be resource-efficient:
insert into prod_tab
select * from other_tab t1 where not exists (
    select 1 from prod_tab t2 where t1.id = t2.id  -- id stands in for your unique key column (record_id)
);
In Oracle you can use a MERGE statement for that.
The following query proceeds as follows for each source row:
if the source record_id does not yet exist in the target table, a new record is inserted;
otherwise, the existing record is updated with the source values.
For the sake of the example, I assumed that there are two other columns in the table: column1 and column2.
MERGE INTO target_table t1
USING (SELECT * FROM source_table) t2
ON (t1.record_id = t2.record_id)
WHEN MATCHED THEN UPDATE SET
    t1.column1 = t2.column1,
    t1.column2 = t2.column2
WHEN NOT MATCHED THEN INSERT
    (record_id, column1, column2) VALUES (t2.record_id, t2.column1, t2.column2)
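One design note: MERGE requires each record_id to appear at most once in the source query; if the source contains duplicates for the same key, Oracle raises ORA-30926 (unable to get a stable set of rows in the source tables). Given the duplicated record_id values described in the question, that is worth checking first.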

Oracle CLOB column and LAG

I'm facing a problem when I try to use the LAG function on a CLOB column.
So let's assume we have a table
create table test (
id number primary key,
not_clob varchar2(255),
this_is_clob clob
);
insert into test values (1, 'test1', to_clob('clob1'));
insert into test values (2, 'test2', to_clob('clob2'));
DECLARE
    x CLOB := 'C';
BEGIN
    FOR i IN 1..32767
    LOOP
        x := x || 'C';
    END LOOP;
    INSERT INTO test(id, not_clob, this_is_clob) VALUES (3, 'test3', x);
END;
/
commit;
Now let's do a select using non-clob columns
select id, lag(not_clob) over (order by id) from test;
It works fine as expected, but when I try the same with clob column
select id, lag(this_is_clob) over (order by id) from test;
I get
ORA-00932: inconsistent datatypes: expected - got CLOB
00932. 00000 - "inconsistent datatypes: expected %s got %s"
*Cause:
*Action:
Error at Line: 1 Column: 16
Can you tell me what the solution to this problem is, as I couldn't find anything on it?
The documentation says the argument for any analytic function can be of any datatype, but it seems an unrestricted CLOB is not supported.
However, there is a workaround:
select id, lag(dbms_lob.substr(this_is_clob, 4000, 1)) over (order by id)
from test;
This is not the whole CLOB, but 4K should be good enough in many cases.
I'm still wondering what the proper way to overcome the problem is.
Is upgrading to 12c an option? The problem has nothing to do with CLOBs as such; it's the fact that Oracle has a hard limit for strings in SQL of 4000 characters. In 12c we have the option to use extended data types (provided we can persuade our DBAs to turn it on!).
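For reference, a rough sketch of what enabling extended data types involves (the parameter is real; the exact steps vary by platform, so treat this as an outline and check the 12c docs):
-- as SYSDBA, with the database restarted in UPGRADE mode
ALTER SYSTEM SET max_string_size = EXTENDED SCOPE = SPFILE;
-- then run @?/rdbms/admin/utl32k.sql and restart normally
With max_string_size = EXTENDED, VARCHAR2 in SQL can hold up to 32767 bytes, which lifts the 4000-character ceiling that the CLOB workarounds above run into.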
Some features may not work properly in SQL when using CLOBs (like DISTINCT, ORDER BY, GROUP BY, etc.). It looks like LAG is also one of them, but I couldn't find that stated anywhere in the docs.
If the values in your CLOB column are always less than 4000 characters, you may use TO_CHAR:
select id, lag(TO_CHAR(this_is_clob)) over (order by id) from test;
OR
convert it into an equivalent self join (which may not be as efficient as LAG):
SELECT a.id,
       b.this_is_clob AS lagging
FROM test a
LEFT JOIN test b ON b.id = a.id - 1;  -- matches the previous row; assumes the ids are consecutive
I know this is an old question, but I think I found an answer which eliminates the need to restrict the CLOB length and wanted to share it. Utilizing a CTE and a recursive subquery, we can replicate the LAG functionality with CLOB columns.
First, let's take a look at my "original" query:
WITH TEST_TABLE AS
(
SELECT LEVEL ORDER_BY_COL,
TO_CLOB(LEVEL) AS CLOB_COL
FROM DUAL
CONNECT BY LEVEL <= 10
)
SELECT tt.order_by_col,
tt.clob_col,
LAG(tt.clob_col) OVER (ORDER BY tt.order_by_col)
FROM test_table tt;
As expected, I get the following error:
ORA-00932: inconsistent datatypes: expected - got CLOB
Now, let's look at the modified query:
WITH TEST_TABLE AS
(
SELECT LEVEL ORDER_BY_COL,
TO_CLOB(LEVEL) AS CLOB_COL
FROM DUAL
CONNECT BY LEVEL <= 10
),
initial_pull AS
(
SELECT tt.order_by_col,
LAG(tt.order_by_col) OVER (ORDER BY tt.order_by_col) AS PREV_ROW,
tt.clob_col
FROM test_table tt
),
recursive_subquery (order_by_col, prev_row, clob_col, prev_clob_col) AS
(
SELECT ip.order_by_col, ip.prev_row, ip.clob_col, NULL
FROM initial_pull ip
WHERE ip.prev_row IS NULL
UNION ALL
SELECT ip.order_by_col, ip.prev_row, ip.clob_col, rs.clob_col
FROM initial_pull ip
INNER JOIN recursive_subquery rs ON ip.prev_row = rs.order_by_col
)
SELECT rs.order_by_col, rs.clob_col, rs.prev_clob_col
FROM recursive_subquery rs;
So here is how it works:
I create TEST_TABLE; this really is only for the example, as you should already have this table somewhere in your schema.
I create a CTE of the data I want to pull, plus a LAG function on the primary key (or a unique column) of the table, partitioned and ordered in the same way I would have in my original query.
I create a recursive subquery using the initial row as the root and descending row by row, joining on the lagged column, and returning both the CLOB column from the current row and the CLOB column from its parent row.
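One portability note: recursive subquery factoring (the recursive WITH used above) requires Oracle 11g Release 2 or later.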

INSERT INTO TARGET_TABLE SELECT * FROM SOURCE_TABLE;

I would like to do an INSERT / SELECT; this means INSERT into the TARGET_TABLE the records of the SOURCE_TABLE, with this assumption:
The SOURCE and the TARGET table have only a SUBSET of common columns, for example:
==> The SOURCE TABLE has ALPHA, BETA and GAMMA columns;
==> The TARGET TABLE has BETA, GAMMA and DELTA columns.
What is the most efficient way to produce the INSERT / SELECT statements, respecting the assumption that not all the target columns are present in the source table?
The idea is that the PL/SQL script CHECKS the columns in the source table and in the target table, takes the INTERSECTION, and then produces dynamic SQL with the correct list of columns.
Please assume that the columns present in the target table, but not present in the source table, have to be left NULL.
I wish to extract the data from SOURCE into a set of INSERT statements for later insertion into the TARGET table.
You can assume that the TARGET table has more columns than the SOURCE table, and that all the columns in the SOURCE table are present in the TARGET table in the same order.
Thank you in advance for your useful suggestions!
In Oracle, you can get the common columns with this SQL query:
select column_name
from user_tab_columns
where table_name = 'TABLE_1'
intersect
select column_name
from user_tab_columns
where table_name = 'TABLE_2'
Then you iterate a cursor over the mentioned query to generate a comma-separated list of all the values returned. Put that comma-separated string into a VARCHAR2 variable named common_fields. Then, you can:
sql_sentence := 'insert into TABLE_1 (' ||
common_fields ||
') select ' ||
common_fields ||
' from TABLE_2';
execute immediate sql_sentence;
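Putting it all together, here is a minimal sketch of the whole procedure; it uses LISTAGG (available from 11gR2) instead of an explicit cursor loop, and keeps the answer's assumption that TABLE_1 is the target and TABLE_2 is the source:
DECLARE
    common_fields VARCHAR2(4000);
    sql_sentence  VARCHAR2(4000);
BEGIN
    -- build a comma-separated list of the columns common to both tables
    SELECT LISTAGG(column_name, ', ') WITHIN GROUP (ORDER BY column_name)
      INTO common_fields
      FROM (SELECT column_name FROM user_tab_columns WHERE table_name = 'TABLE_1'
            INTERSECT
            SELECT column_name FROM user_tab_columns WHERE table_name = 'TABLE_2');

    sql_sentence := 'insert into TABLE_1 (' || common_fields ||
                    ') select ' || common_fields || ' from TABLE_2';
    EXECUTE IMMEDIATE sql_sentence;
END;
/
Columns present only in TABLE_1 simply stay out of the column list, so they are left NULL, as the question requires.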
