Column definition incompatible with clustered column definition - Oracle

I have created a cluster in Oracle
CREATE CLUSTER myLovelyCluster (clust_id NUMBER(38,0))
SIZE 1024 SINGLE TABLE HASHKEYS 11;
Then a table for the cluster:
CREATE TABLE Table_cluster
CLUSTER myLovelyCluster (columnRandom)
AS SELECT * FROM myTable ;
The column columnRandom is defined as NUMBER(38,0), so why am I getting an error about an incompatible column definition?
Thanks

Are you sure that columnRandom is NUMBER(38,0)? In Oracle, NUMBER != NUMBER(38,0).
Let's create two tables:
create table src_table ( a number);
create table src_table2( a number(38,0));
select column_name,data_precision,Data_scale from user_tab_cols where table_name like 'SRC_TABLE%';
The result of the query is below; the column definitions are different:
+-------------+----------------+------------+
| Column_name | Data_Precision | Data_scale |
+-------------+----------------+------------+
| A           |                |            |
| A           | 38             | 0          |
+-------------+----------------+------------+
And if I try to create the clustered table from the first table:
CREATE TABLE Table_cluster
CLUSTER myLovelyCluster (a)
AS SELECT * FROM src_table ;
ORA-01753: column definition incompatible with clustered column definition
For the second table everything is OK:
CREATE TABLE Table_cluster
CLUSTER myLovelyCluster (a)
AS SELECT * FROM src_table2 ;
If you add a cast into the select, execution is also correct:
CREATE TABLE Table_cluster CLUSTER myLovelyCluster (a)
AS SELECT cast(a as number(38,0)) FROM src_table;
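If the source table has more columns than the cluster key, the same trick applies; only the key column needs the explicit cast and an alias matching the cluster column (b and c below are hypothetical extra columns, not part of the demo table):
CREATE TABLE Table_cluster CLUSTER myLovelyCluster (a)
AS SELECT cast(a as number(38,0)) as a, b, c FROM src_table;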


View Performance

I have a requirement to perform some calculation on a column of a table with a large data set (300 GB) and return that value.
Basically I need to create a view on that table. The table has 21 years of data and is partitioned on a date column (daily). We cannot put a date condition in the view's query; the user will apply a filter at runtime when executing the view.
For example:
Create view v_view as
select * from table;
Now I want to query the view like:
Select * from v_view where ts_date between '1-Jan-19' and '1-Jan-20'
How does Oracle execute the above statement internally? Will it execute the view query first and then apply the date filter on that?
If so, won't there be a performance issue? And how do I resolve it?
Oracle first generates the view and then applies the filter. You can create a function whose input is supplied by the user; the function returns a CREATE VIEW statement, and if you run that statement the view is created. Just run:
create or replace function fnc_x(where_condition in varchar2)
return varchar2
as
begin
  -- append the user-supplied WHERE clause to a fixed CREATE VIEW statement
  return ' CREATE OR REPLACE VIEW sup_orders AS
  SELECT suppliers.supplier_id, orders.quantity, orders.price
  FROM suppliers
  INNER JOIN orders
  ON suppliers.supplier_id = orders.supplier_id
  '||where_condition||' ';
end fnc_x;
Run this function with an input string like the following (note that the string literal needs its own escaped quotes):
'WHERE suppliers.supplier_name = ''Microsoft'''
Then run a block like this to execute the function's result:
cl scr
set SERVEROUTPUT ON
declare
  szSql varchar2(3000);
  crte_vw varchar2(3000);
begin
  szSql := q'[select fnc_x('WHERE suppliers.supplier_name = ''Microsoft''') from dual]';
  dbms_output.put_line(szSql);
  execute immediate szSql into crte_vw; -- generate the CREATE VIEW command based on the user's where_condition
  dbms_output.put_line(crte_vw);
  execute immediate crte_vw; -- create the view
end;
/
In this manner, you just need to receive the where_condition from the user.
Oracle can "push" the predicates inside simple views and can then use those predicates to enable partition pruning for optimal performance. You almost never need to worry about what Oracle will run first - it will figure out the optimal order for you. Oracle does not need to mindlessly build the first step of a query, and then send all of the results to the second step. The below sample schema and queries demonstrate how only the minimal amount of partitions are used when a view on a partitioned table is queried.
--drop table table1;
--Create a daily-partitioned table.
create table table1(id number, ts_date date)
partition by range(ts_date)
interval (numtodsinterval(1, 'day'))
(
partition p1 values less than (date '2000-01-01')
);
--Insert 1000 values, each in a separate day and partition.
insert into table1
select level, date '2000-01-01' + level
from dual
connect by level <= 1000;
--Create a simple view on the partitioned table.
create or replace view v_view as select * from table1;
The following explain plan shows "Pstart" and "Pstop" set to 3 and 4, which means that only 2 of the many partitions are used for this query.
--Generate an explain plan for a simple query on the view.
explain plan for
select * from v_view where ts_date between date '2000-01-02' and date '2000-01-03';
--Show the explain plan.
select * from table(dbms_xplan.display(format => 'basic +partition'));
Plan hash value: 434062308
-----------------------------------------------------------
| Id | Operation | Name | Pstart| Pstop |
-----------------------------------------------------------
| 0 | SELECT STATEMENT | | | |
| 1 | PARTITION RANGE ITERATOR| | 3 | 4 |
| 2 | TABLE ACCESS FULL | TABLE1 | 3 | 4 |
-----------------------------------------------------------
However, partition pruning and predicate pushing do not always work when we may think they should. One thing we can do to help the optimizer is to use date literals instead of strings that look like dates. For example, replace
'1-Jan-19' with date '2019-01-01'. When we use ANSI date literals, there is no ambiguity and Oracle is more likely to use partition pruning.
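For example, the query from the question could be written like this (same v_view as above; the bounds are the date-literal equivalents of '1-Jan-19' and '1-Jan-20'):
select *
from v_view
where ts_date between date '2019-01-01' and date '2020-01-01';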

Change data type of column in Laravel

I have a column which I set to enum earlier:
$table->enum('column_name', ['value1', 'value2']);
Now I want to change that into a string without losing data.
I am using a Postgres database.
Please help me,
thanks
You actually can't change the type of an enum column with a simple migration.
The way I achieved it was using DB::statement to alter the column type:
DB::statement('ALTER TABLE <table_name> MODIFY <column_name> VARCHAR(200)');
I'm not sure about Postgres; you can modify the query according to your need. Do make a backup first, since we aren't sure whether it will make you lose your data.
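For Postgres specifically, the MODIFY keyword above is MySQL syntax; a hedged sketch of the same DB::statement approach using Postgres's ALTER COLUMN syntax (table and column names are placeholders):
DB::statement('ALTER TABLE <table_name> ALTER COLUMN <column_name> TYPE VARCHAR(200)');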
In plain SQL the solution is something like ALTER TABLE table_name ALTER COLUMN column_name SET DATA TYPE VARCHAR(20).
For example:
postgres=# create type enumval as enum ('val1', 'val2');
CREATE TYPE
postgres=# create table tenum(x serial, y enumval);
CREATE TABLE
postgres=# select relfilenode from pg_class where relname='tenum';
relfilenode
-------------
61038
(1 row)
postgres=# insert into tenum(y) values ('val1');
INSERT 0 1
postgres=# insert into tenum(y) values ('val2');
INSERT 0 1
postgres=# select * from tenum;
x | y
---+------
1 | val1
2 | val2
(2 rows)
postgres=# alter table tenum alter column y set data type varchar(20);
ALTER TABLE
postgres=# select relfilenode from pg_class where relname='tenum';
relfilenode
-------------
61042
(1 row)
postgres=# select * from tenum;
x | y
---+------
1 | val1
2 | val2
(2 rows)
Note that PostgreSQL will rewrite the table due to data type change.
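To confirm the new type you can check the standard information_schema catalog; the output looks something like:
postgres=# select data_type from information_schema.columns where table_name = 'tenum' and column_name = 'y';
     data_type
-------------------
 character varying
(1 row)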

Change one key-value in a map column in a Hive table via HQL

I have a Hive table whose schema is as below; the column col is of map type:
select
col
from table
col
{"name":"abc", "value":"val_1"}
What I need to do is change the val_1 to val_2 and create another table from it.
create table table_2 as
select
col -- TODO: need to do something here
from table
Any suggestions? Thanks!
with t as (select map("name","abc","value","val_1") as col)
select map("name",col["name"],"value","val_2") as col
from t
+--------------------------------+
|              col               |
+--------------------------------+
| {"name":"abc","value":"val_2"} |
+--------------------------------+
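Applied to the question's table, the same map rebuild goes directly into the CTAS (this assumes every row should get val_2, as in the demo above, and that the map's keys are exactly name and value; table names are from the question):
create table table_2 as
select map("name", col["name"], "value", "val_2") as col
from table;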

Optimizer using an index not present in the current schema

CONNECT alll/all
SELECT /*+ FIRST_ROWS(25) */ employee_id, department_id
FROM hr.employees
WHERE department_id > 50;
Execution Plan
Plan hash value: 2056577954
--------------------------------------------------------------------------
| Id  | Operation                   | Name              | Rows  | Bytes |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                   |    25 |   200 |
|   1 | TABLE ACCESS BY INDEX ROWID | EMPLOYEES         |    25 |   200 |
|*  2 |  INDEX RANGE SCAN           | EMP_DEPARTMENT_IX |       |       |
--------------------------------------------------------------------------
SQL> select * from user_indexes where index_name = 'EMP_DEPARTMENT_IX';
no rows selected
NOTE: There is an index with the same name on the DEPARTMENT_ID column of the EMPLOYEES table in some other schema. And when that index is dropped, a FULL TABLE SCAN on EMPLOYEES is performed.
Can the optimizer use that other index from some other schema over here?
You're connected as user ALLL, but you're querying a table in the HR schema:
SELECT /*+ FIRST_ROWS(25) */ employee_id, department_id
FROM hr.employees
WHERE department_id > 50;
You stressed other schema in the question, but seem to have overlooked that the table you're querying is also in another schema. The employees table won't appear in user_tables either.
The index being used is associated with that table, so it's likely to be in the same HR schema. You can see it in all_indexes or dba_indexes; the optimiser will use it even if you can't see it though. And it doesn't have to be in the same schema as the table, though it usually will be; in those views you might notice separate owner and table owner columns.
The schema model would break down if you could only utilise indexes in your own schema when accessing a table in someone else's. Every user would have to create their own copies of the indexes, which would be untenable.
You don't even necessarily have to be able to see the table - if you query a view that hides the underlying table from you (so you have select privs on the view only) the index will still be used in the background. And you might not always be explicitly using the schema prefix, if there is a synonym for the table, or you change your default schema.
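For example, a quick way to locate the index and its owner via all_indexes (it lists indexes on any table you can access; note the separate owner and table_owner columns mentioned above):
select owner, index_name, table_owner, table_name
from all_indexes
where index_name = 'EMP_DEPARTMENT_IX';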
Try looking in DBA_INDEXES (unlike USER_INDEXES, it is not restricted to your own schema):
select owner, index_name from dba_indexes where index_name = 'EMP_DEPARTMENT_IX';
Sounds like you are not the owner of the index, as you have noted. As long as your user can access the table data, then the index should be used by the optimizer.

Oracle partition key

I have many tables with large amounts of data. The PK is the column TAB_ID, which has data type RAW(16). I created hash partitions with TAB_ID as the partition key.
My issue is: the SQL statement (select * from my_table where tab_id = 'aas1df') does not use partition pruning. If I change the column datatype to varchar2(32), partition pruning works.
Why doesn't partition pruning work when the partition key has datatype RAW(16)?
I'm just guessing: try select * from my_table where 'aas1df' = tab_id.
Probably the datatype conversion works the other way than expected. In any case, you should use the function UTL_RAW.CAST_TO_RAW.
Edited:
Is your table partitioned by TAB_ID? If yes, then there is something wrong with your design; you usually partition a table by some useful business value, not by a surrogate key.
If you know the PK value you do not need partition pruning at all. When Oracle traverses the PK index it gets a ROWID value. This ROWID contains the file number, block ID, and row number within the block, so Oracle can access the row directly.
HEXTORAW enables partition pruning.
In the sample below the Pstart and Pstop are literal numbers, implying partition pruning occurs.
create table my_table
(
TAB_ID raw(16),
a number,
constraint my_table_pk primary key (tab_id)
)
partition by hash(tab_id) partitions 16;
explain plan for
select *
from my_table
where tab_id = hextoraw('1BCDB0E06E7C498CBE42B72A1758B432');
select * from table(dbms_xplan.display(format => 'basic +partition'));
Plan hash value: 1204448714
--------------------------------------------------------------------------
| Id | Operation | Name | Pstart| Pstop |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | |
| 1 | TABLE ACCESS BY GLOBAL INDEX ROWID| MY_TABLE | 2 | 2 |
| 2 | INDEX UNIQUE SCAN | MY_TABLE_PK | | |
--------------------------------------------------------------------------
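For contrast, you can generate the plan for a character-literal predicate the same way (hex value reused from above); per the question, the implicit conversion on the RAW column prevents pruning in that case:
explain plan for
select *
from my_table
where tab_id = '1BCDB0E06E7C498CBE42B72A1758B432';
select * from table(dbms_xplan.display(format => 'basic +partition'));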
