Oracle: need to use CONTAINS to find a string that contains a period

I have a field in a table that is domain indexed, so I can use CONTAINS in a query. I am trying to create a query that can find an acronym (e.g. J.A.V.A.) exactly. The problem is that the period '.' is treated as punctuation and gets ignored. I need to figure out how to handle this, perhaps by escaping the '.' somehow. I can't find any reference online about how to do this, or maybe I am not fully understanding what I am reading.
I have tried
SELECT * FROM table WHERE CONTAINS(CLOB_column, '{J.A.V.A}')>0;
SELECT * FROM table WHERE CONTAINS(CLOB_column, '{J_A_V_A_}')>0;
SELECT * FROM table WHERE CONTAINS(CLOB_column, '{J\.A\.V\.A\.}')>0;
I created the domain index by:
CREATE INDEX table_txt_idx ON table(CLOB_column) INDEXTYPE IS CTXSYS.CONTEXT;
I saw something (https://oracle-base.com/articles/9i/full-text-indexing-using-oracle-text-9i) (https://docs.oracle.com/database/121/CCREF/csql.htm#CCREF0100) about adding parameters to the index, but I don't think it applies to a CTXSYS index; I am not fully sure. I just know that what I tried didn't work.
BEGIN
  CTX_DDL.OPTIMIZE_INDEX('IDX_COLUMN_TXT', 'FAST');
END;
/
This page didn't mention CTXSYS either (https://docs.oracle.com/cd/B28359_01/text.111/b28303/ind.htm#BEIIEAFD).
I have created similar queries using LIKE and REGEXP_INSTR and need to create the same query using CONTAINS, so I can compare them and run the one that works the quickest.
SELECT * from table WHERE CLOB_column LIKE '%J.A.V.A.%'; --4.441 seconds
SELECT * from table WHERE REGEXP_INSTR(CLOB_column, '(J\.A\.V\.A\.)')>0; --23.528 seconds
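One approach that may help (a sketch only, not tested here; the preference name my_dot_lexer is made up) is to tell Oracle Text to treat '.' as part of a token by setting the printjoins attribute on a BASIC_LEXER, then rebuilding the domain index with that lexer:
-- Sketch: make '.' a printjoin character so tokens like J.A.V.A. are indexed whole.
-- The preference name my_dot_lexer is illustrative.
BEGIN
  CTX_DDL.CREATE_PREFERENCE('my_dot_lexer', 'BASIC_LEXER');
  CTX_DDL.SET_ATTRIBUTE('my_dot_lexer', 'printjoins', '.');
END;
/

DROP INDEX table_txt_idx;
CREATE INDEX table_txt_idx ON table(CLOB_column)
  INDEXTYPE IS CTXSYS.CONTEXT
  PARAMETERS ('LEXER my_dot_lexer');

-- With the period indexed, the braces form should then match the acronym.
SELECT * FROM table WHERE CONTAINS(CLOB_column, '{J.A.V.A.}') > 0;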

Any reason you can't use LIKE?
CREATE TABLE suppliers (
  supplier_id   NUMBER(10)   NOT NULL,
  supplier_name VARCHAR2(50) NOT NULL
);
INSERT INTO suppliers (supplier_id, supplier_name) VALUES (5000, 'Apple');
INSERT INTO suppliers (supplier_id, supplier_name) VALUES (5000, 'J.A.V.A');
SELECT * FROM suppliers WHERE supplier_name LIKE '%.%';
SELECT * FROM suppliers WHERE supplier_name LIKE 'J.A.V.A';

Related

Right way to create index organized table

I am trying to create an index-organized table in Oracle 11. I create the index-organized table and insert the rows from another table.
create table salIOT (
  mypk,
  cid,
  sale_date,
  CONSTRAINT sal_pk PRIMARY KEY (mypk)
) ORGANIZATION INDEX
AS SELECT * FROM another_table;
But LEAF_BLOCKS is empty when I query:
SQL> Select owner, index_name, table_name, leaf_blocks from all_indexes where table_name like 'SALIOT';
Am I missing something here?
You still need to gather statistics on the table, something like:
exec dbms_stats.gather_table_stats('ABC', 'IOTTableName')
(assuming 'ABC' is your schema name; change as needed). Then re-run your SELECT against ALL_INDEXES and you will see how many leaf blocks you have.
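A minimal end-to-end sketch (assuming the schema is ABC and the table is the SALIOT from the question):
-- Gather optimizer statistics so LEAF_BLOCKS in ALL_INDEXES is populated.
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'ABC', tabname => 'SALIOT');

-- Re-check the leaf block count for the IOT's primary key index.
SELECT owner, index_name, table_name, leaf_blocks
FROM   all_indexes
WHERE  table_name = 'SALIOT';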

Inserting Record Type

I have a source table, and a history table. They are identical, except the history table has a notes column as the last column.
The table is large (55 columns), and I do not want to list all the columns. We do not want to use a trigger for this, but rather create the history entry in the code itself.
If I simply do an INSERT INTO <history> SELECT * FROM <source> WHERE...... I will get "not enough values".
I'm hoping to do something of this nature: (Note this is just an anonymous block)
DECLARE
  v_old_rec company_table%ROWTYPE;
BEGIN
  SELECT * INTO v_old_rec
  FROM company_table
  WHERE company_id = 32789;

  INSERT INTO company_table_hist
  VALUES v_old_rec || ',MONTHLY UPDATE';
END;
Anything like this possible, so I do not have to list 55 columns?
Thank you.
It's quite simple actually:
INSERT INTO company_table_hist
  SELECT t.*, :monthly_upd
  FROM   company_table t
  WHERE  company_id = 32789;
Of course :monthly_upd must be bound somehow.
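For the specific monthly update in the question, a literal also works (a sketch using the table names from the question; the extra value lands in the trailing notes column of the history table):
INSERT INTO company_table_hist
  SELECT t.*, 'MONTHLY UPDATE'
  FROM   company_table t
  WHERE  company_id = 32789;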

"Create table as select" does not preserve not null

I am trying to use the "Create Table As Select" feature from Oracle to do a fast update. The problem I am seeing is that the NOT NULL constraint is not being preserved.
I defined the following table:
create table mytable(
accountname varchar2(40) not null,
username varchar2(40)
);
When I do a raw CTAS, the NOT NULL on account is preserved:
create table ctamytable as select * from mytable;
describe ctamytable;
Name        Null     Type
----------- -------- ------------
ACCOUNTNAME NOT NULL VARCHAR2(40)
USERNAME             VARCHAR2(40)
However, when I do a replace on accountname, the NOT NULL is not preserved.
create table ctamytable as
select replace(accountname, 'foo', 'foo2') accountname,
username
from mytable;
describe ctamytable;
Name        Null Type
----------- ---- -------------
ACCOUNTNAME      VARCHAR2(160)
USERNAME         VARCHAR2(40)
Notice that the ACCOUNTNAME column no longer has the NOT NULL constraint, and its VARCHAR2 length went from 40 to 160 characters. Has anyone seen this before?
This is because you are no longer selecting ACCOUNTNAME, which has a column definition and meta-data. Rather you are selecting a STRING, the result of the replace function, which doesn't have any meta-data. This is a different data type entirely.
A (potentially) better way that might work is to create the table using a query with the original columns, but with a WHERE clause that guarantees 0 rows.
Then you can insert in to the table normally with your actual SELECT.
By having a query of 0 rows, you'll still get the column meta-data, so the table will be created, but no rows will be inserted. Make sure your WHERE clause is something fast, like WHERE primary_key = -999999, some value you know would never exist.
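A sketch of that approach, using the tables from the question (WHERE 1 = 0 is used here as the guaranteed-empty predicate):
-- Empty CTAS: column names, lengths, and NOT NULL constraints carry over.
create table ctamytable as
  select * from mytable where 1 = 0;

-- Then load the transformed data separately.
-- Note: replaced values must still fit the original VARCHAR2(40).
insert into ctamytable
  select replace(accountname, 'foo', 'foo2'), username
  from mytable;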
Another option here is to define the columns when you call the CREATE TABLE AS SELECT. It is possible to list the column names and include constraints while excluding the data types.
An example is shown below:
create table ctamytable (
  accountname not null,
  username
)
as
select
  replace(accountname, 'foo', 'foo2') accountname,
  username
from mytable;
Be aware that although this syntax is valid, you cannot include the data type. Also, explicitly declaring all the columns somewhat defeats the purpose of using CREATE TABLE AS SELECT.

Oracle: use index for searching null values

I've done some searching, but I'd prefer something like a hint or similar. For reference:
http://www.dba-oracle.com/oracle_tips_null_idx.htm
http://www.oracloid.com/2006/05/using-index-for-is-null/
What about a function-based index using NVL2, like:
CREATE TABLE foo (bar INTEGER);
INSERT INTO foo VALUES (1);
INSERT INTO foo VALUES (NULL);
CREATE INDEX baz ON foo (NVL2(bar,0,1));
and then:
DELETE plan_table;
EXPLAIN PLAN FOR SELECT * FROM foo WHERE NVL2(bar,0,1) = 1;
SELECT operation, object_name FROM plan_table;
should give you
OPERATION        OBJECT_NAME
---------------- -----------
SELECT STATEMENT
TABLE ACCESS     FOO
INDEX            BAZ          << yep
If you're asking, "How can I create an index that would allow it to be used when searching for NULL values on a particular field", my suggestion is to create an index on the field you're interested in PLUS the primary key field(s). Thus, if you've got a table called A_TABLE, with field VAL that you want to search for NULLs, and a primary key named PK, I'd create an index on (VAL, PK).
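A sketch of that suggestion, using the A_TABLE/VAL/PK names from the paragraph above (the index name is illustrative):
-- Because PK is NOT NULL, every row gets an index entry, even when VAL is NULL,
-- so the optimizer can use the index for an IS NULL predicate.
CREATE INDEX a_table_val_pk_idx ON a_table (val, pk);

SELECT * FROM a_table WHERE val IS NULL;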
Share and enjoy.
I'm going to "answer" the non-question above.
The articles you link to are kinda right - Oracle's b-tree indexes do not include rows in which all of the indexed columns are NULL. Take this example:
CREATE TABLE MYTABLE (
ID NUMBER(8) NOT NULL,
DAT VARCHAR2(100)
);
CREATE INDEX MYTABLE_IDX_1 ON MYTABLE (DAT);
/* Perform inserts into MYTABLE where some DAT are null */
SELECT COUNT(*) FROM MYTABLE WHERE DAT IS NULL;
The final SELECT will not be able to use the index, because rows where DAT is NULL are not stored in it at all. Burleson's solution is stupid, because now you have to use NVL in all your queries and you have compromised the data in the tables. Gorbachev's method adds a known NOT NULL column to the index, but this widens the index for no reason. Maybe in his case the index made sense that way for tuning other queries, but if all you want to do is find the NULLs, then the easiest solution is to add a constant as the trailing column.
DROP INDEX MYTABLE_IDX_1;
CREATE INDEX MYTABLE_IDX_1 ON MYTABLE (DAT, 1);
Now the trailing column is the constant 1, so every row (including those with a NULL DAT) gets an index entry, and by default the NULLs will all be grouped together (either at the top or bottom of the index, but it doesn't really matter, as Oracle can use the index forwards or backwards). There is a slight storage penalty for that constant, but a single number is smaller than most other data fields in a typical table. Now the database can use the index when querying for NULLs, if the optimizer decides that is the best way to get the data.
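A quick check (a sketch, reusing the EXPLAIN PLAN pattern from the earlier answer):
DELETE plan_table;
EXPLAIN PLAN FOR SELECT COUNT(*) FROM MYTABLE WHERE DAT IS NULL;
SELECT operation, object_name FROM plan_table;
-- An INDEX row referencing MYTABLE_IDX_1 (rather than a full TABLE ACCESS) confirms the index is used.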

How to duplicate all data in a table except for a single column that should be changed

I have a question regarding a unified insert query against tables with different data structures (Oracle). Let me elaborate with an example:
tb_customers (
id NUMBER(3), name VARCHAR2(40), archive_id NUMBER(3)
)
tb_suppliers (
id NUMBER(3), name VARCHAR2(40), contact VARCHAR2(40), xxx, xxx,
archive_id NUMBER(3)
)
The only column that is present in all tables is [archive_id]. The plan is to create a new archive of the dataset by copying (duplicating) all records to a different database partition and incrementing the archive_id for those records accordingly. [archive_id] is always part of the primary key.
My problem is with select statements to do the actual duplication of the data. Because the columns are variable, I am struggling to come up with a unified select statement that will copy the data and update the archive_id.
One solution (that works) is to iterate over all the tables in a stored procedure and do:
CREATE TABLE temp as (SELECT * from ORIGINAL_TABLE);
UPDATE temp SET archive_id=something;
INSERT INTO ORIGINAL_TABLE (select * from temp);
DROP TABLE temp;
I do not like this solution very much as the DDL commands muck up all restore points.
Does anyone else have any solution?
How about creating a global temporary table for each base table?
create global temporary table tb_customers$ as select * from tb_customers where 1 = 0;
create global temporary table tb_suppliers$ as select * from tb_suppliers where 1 = 0;
You don't need to create and drop these each time, just leave them as-is.
Your archive process is then a single transaction...
insert into tb_customers$ select * from tb_customers;
update tb_customers$ set archive_id = :v_new_archive_id;
insert into tb_customers select * from tb_customers$;
insert into tb_suppliers$ select * from tb_suppliers;
update tb_suppliers$ set archive_id = :v_new_archive_id;
insert into tb_suppliers select * from tb_suppliers$;
commit; -- this will clear the global temporary tables
Hope this helps.
I would suggest not having a single SQL statement for all tables and just using an insert per table.
insert into tb_customers_2
  select id, name, :new_archive_id from tb_customers;
insert into tb_suppliers_2
  select id, name, contact, xxx, xxx, :new_archive_id from tb_suppliers;
Or, if you really need a single SQL statement for all of them, at least pre-create all the temp tables (as temporary tables) and leave them in place for next time. Then just use dynamic SQL to refer to the temp table:
INSERT INTO ORIGINAL_TABLE_TEMP SELECT * FROM ORIGINAL_TABLE;
UPDATE ORIGINAL_TABLE_TEMP SET archive_id = something;
INSERT INTO NEW_TABLE SELECT * FROM ORIGINAL_TABLE_TEMP;
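A rough sketch of that dynamic SQL loop (a sketch only; the table list, the _TEMP naming convention, and the archive id value are assumptions made for illustration):
DECLARE
  TYPE t_names IS TABLE OF VARCHAR2(30);
  v_tables         t_names := t_names('TB_CUSTOMERS', 'TB_SUPPLIERS');  -- illustrative list
  v_new_archive_id NUMBER  := 42;                                       -- illustrative value
BEGIN
  FOR i IN 1 .. v_tables.COUNT LOOP
    -- Stage a copy of the table in its pre-created temp twin.
    EXECUTE IMMEDIATE 'INSERT INTO ' || v_tables(i) || '_TEMP SELECT * FROM ' || v_tables(i);
    -- Bump the archive id on the staged rows.
    EXECUTE IMMEDIATE 'UPDATE ' || v_tables(i) || '_TEMP SET archive_id = :1'
      USING v_new_archive_id;
    -- Copy the staged rows back into the original table.
    EXECUTE IMMEDIATE 'INSERT INTO ' || v_tables(i) || ' SELECT * FROM ' || v_tables(i) || '_TEMP';
  END LOOP;
  COMMIT;
END;
/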
