How to get the value of CLOB data which has a date field and compare it with a timestamp in Oracle

I have a table named master_registry that contains all the info and business logic, and the data type of the value column is CLOB.
desc master_registry:
id number not null,
name varchar2(100),
value clob
select value from master_registry where name='REG_DATE';
o/p
11-10-17
This date is common across all the business logic. I need to query my table, which has:
desc get_employee
====================
id number not null,
first_name varchar2(100),
last_name varchar2(100),
last_mod_dt timestamp
Now I need to get all the rows from get_employee whose last_mod_dt is greater than the value from master_registry where name='REG_DATE'. The value in the latter table is CLOB data; how do I fetch the date from a CLOB and compare it against the timestamp from another table? Please help.

Maybe you need something like this.
SELECT *
FROM get_employee e
WHERE last_mod_dt > (SELECT TO_TIMESTAMP (TO_CHAR (VALUE), 'DD-MM-YY')
FROM master_registry m
WHERE m.id = e.id);
Note that I have used the column VALUE directly in TO_CHAR. You may have to use TRIM, SUBSTR or whatever is required to get only the date component.
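For example, if the CLOB holds just the date text (possibly with stray whitespace), a sketch along these lines might work; the use of DBMS_LOB.SUBSTR, the 'DD-MM-RR' mask and the assumption of a single REG_DATE row are guesses about how the value is actually stored:
SELECT *
FROM get_employee e
WHERE e.last_mod_dt > (SELECT TO_TIMESTAMP(TRIM(DBMS_LOB.SUBSTR(m.value, 4000, 1)), 'DD-MM-RR')
                       FROM master_registry m
                       WHERE m.name = 'REG_DATE');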

Related

Different Output with same Input for ORACLE MD5 Function

At a given time I stored the result of the following Oracle SQL query:
SELECT col, TO_CHAR(LOWER(STANDARD_HASH(col, 'MD5'))) AS hash_col FROM MyTable;
A week later, I executed the same query on the same data ( same values for column col ).
I thought the resulting hash_col column would have the same values as the values from the former execution but it was not the case.
Is it possible for the Oracle STANDARD_HASH function to deliver the same result over time for identical input data?
It does if the function is called twice the same day.
All we have about the data changing (or not) and the hash changing (or not) is your assertion.
You could create and populate a log table:
create table hash_log (
sample_time timestamp,
hashed_string varchar2(200),
hashed_string_dump varchar2(200),
hash_value varchar2(200)
);
Then on a daily basis:
insert into hash_log
select systimestamp,
       source_column,
       dump(source_column),
       STANDARD_HASH(source_column, 'MD5')
from source_table;
Then, to spot changes:
select distinct hashed_string ||
hashed_string_dump ||
hash_value
from hash_log;
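Once the log has rows from a few runs, a query along these lines (just a sketch against the hash_log table above) would list any string whose hash changed between runs:
select hashed_string
from hash_log
group by hashed_string
having count(distinct hash_value) > 1;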

Combine Oracle CONTEXT index and function-based index

I have created the following PRODUCTS table in the Oracle 11g EE database:
CREATE TABLE PRODUCTS
( ID NUMBER(10) NOT NULL PRIMARY KEY
,NAME VARCHAR2(100) NOT NULL
,DESCRIPTION VARCHAR2(4000) NOT NULL
,CREATED DATE NOT NULL
,CHANGED DATE );
For this table, the following index of type CTXSYS.CONTEXT was created on the DESCRIPTION field:
BEGIN
CTX_DDL.CREATE_PREFERENCE ('MY_LEXER', 'BASIC_LEXER');
CTX_DDL.SET_ATTRIBUTE ('MY_LEXER', 'BASE_LETTER', 'YES');
END;
/
CREATE INDEX PROD_DESCR_TXT_IDX ON PRODUCTS(DESCRIPTION) INDEXTYPE IS CTXSYS.CONTEXT PARAMETERS ('LEXER MY_LEXER');
BEGIN
CTX_DDL.SYNC_INDEX('PROD_DESCR_TXT_IDX', '2M');
END;
/
ANALYZE TABLE PRODUCTS COMPUTE STATISTICS;
This table has many thousands of rows, and I need to query the CONTEXT index to return the rows in descending order by the date of the last change (field CHANGED) and, if this field is null, by the date of creation (field CREATED).
I would like to know if anyone can help me create an index that meets the search criteria for CONTEXT and at the same time brings the results ordered by the "NVL(CHANGED, CREATED) DESC" expression, in order to make the result of the following command instantaneous:
SELECT *
FROM PRODUCTS
WHERE CONTAINS (DESCRIPTION, 'kitchen AND accessories') > 0
ORDER BY NVL(CHANGED, CREATED) DESC;
Thanks in advance for your help
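For reference, the ordering expression on its own can be indexed with an ordinary function-based (descending) index; this is only a sketch, the index name is made up, and by itself it does not combine with the CONTEXT index:
CREATE INDEX PROD_CHG_CRE_IDX ON PRODUCTS (NVL(CHANGED, CREATED) DESC);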

Is it possible to load data of various formats into a DATE column in Oracle?

Table :
SQL> DESC EMO_SRC;
Name Null? Type
----------------------------------------------------- -------- ------------------------------------
EMPLOYEE_NAME VARCHAR2(30)
EMPLOYEE_NUMBER NUMBER
STATE VARCHAR2(10)
ZIP NUMBER
DOB DATE
AGE NUMBER
SEX VARCHAR2(7)
MARITALDESC VARCHAR2(15)
CITIZENDESC VARCHAR2(15)
HISPANIC_LATINO_RACEDESC VARCHAR2(15)
DATE_OF_HIRE DATE
DATE_OF_TERMINATION DATE
REASON_FOR_TERM VARCHAR2(50)
EMPLOYMENT_STATUS VARCHAR2(15)
DEPARTMENT VARCHAR2(30)
POSITION VARCHAR2(20)
PAY_RATE NUMBER(6,2)
MANAGER_NAME VARCHAR2(20)
EMPLOYEE_SOURCE VARCHAR2(30)
PERFORMANCE_SCORE VARCHAR2(40)
Table EMO_SRC has 3 date columns: DOB, DATE_OF_HIRE and DATE_OF_TERMINATION.
If the data looks like the following:
==========================================================
DOB | Date_of_hire | date_of_termination
============================================================
11/24/1987 10/27/2008 10/28/2016
4-26-1984 1-06-2014
02-26-1984 09/29/2014 4-15-2017
Like the above, the data has random date formats for these 3 columns. Oracle is not allowing me to load the '01-06-2014' format.
Please let me know: is there any way to load date values of different formats, or do I need to convert the data to a single '00/00/0000' format first? There is a huge amount of mismatched data to load. How can I change and load the data into the table?
SQL> show parameters nls_date_format;
NAME TYPE VALUE
------------------------------------ ---------- ------------------------------
nls_date_format string DD-MON-RR
Please let me know the solution.
You didn't mention which tool you use.
Anyway, you have a problem. "Date" formats are different within columns and even within rows, which makes it worse.
I'd suggest you create a function that accepts the input values - the "dates" (actually, strings) you find in the source data - and tries to convert them to a valid DATE using TO_DATE with different format masks: mm/dd/yyyy, dd-mm-yyyy, etc., whichever you find in the source. Use inner BEGIN-EXCEPTION-END blocks so that the first failure doesn't terminate the function's execution. If you manage to find a correct date value, fine - load it. If not, log the error and try to fix it with another TO_DATE format mask. Optionally, you might use REGEXP_LIKE to verify the input format.
A problem you can't solve is a string that looks like 10-08-20. Which is which? Is 10 the day, the month or the year? The same goes for other values.
Also, calling a function for all those values - if the source is large - will certainly take a lot of time.
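A minimal sketch of such a function; the function name and the list of format masks are assumptions and should be adjusted to whatever formats actually appear in the source:
create or replace function try_to_date (p_str in varchar2) return date
is
  -- hypothetical helper: try each mask in turn, return the first successful conversion
  type mask_list is table of varchar2(20);
  l_masks mask_list := mask_list('mm/dd/yyyy', 'mm-dd-yyyy', 'dd-mm-yyyy', 'mm/dd/rr');
begin
  for i in 1 .. l_masks.count loop
    begin
      return to_date(trim(p_str), l_masks(i));
    exception
      when others then
        null;   -- this mask failed; try the next one
    end;
  end loop;
  return null;  -- nothing matched; caller should log p_str for manual review
end try_to_date;
/
You would then load through it, e.g. SELECT try_to_date(dob_text) FROM a staging table that holds the raw strings.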

SQL Error: ORA-01722: invalid number

I have enclosed the table and the insert statement I am working on.
CREATE TABLE EMP (
DRIVER_ID INTEGER NOT NULL
, FNAME VARCHAR(30) NOT NULL
, LNAME VARCHAR(30) NOT NULL
, ADDRESS VARCHAR(50) NOT NULL
, SALARY VARCHAR(50) NOT NULL
, DOB DATE NOT NULL
, SHIFTS VARCHAR2(20) NOT NULL
, SSN CHAR(11) NOT NULL
, PHONE INTEGER NOT NULL
, HIRING_DATE DATE NOT NULL
, EMAIL VARCHAR2(50) NOT NULL
);
When I run this insert statement:
INSERT INTO EMP (DRIVER_ID, FNAME, LNAME, ADDRESS, SALARY, DOB, SHIFTS, SSN, PHONE, HIRING_DATE, EMAIL)
VALUES (SEQ_EMP.NEXTVAL,'Emma', 'Johnson', '123 Main Street', 'DIRECT DEPOSIT', '31 JANUARY,1988', 'MORNING', '579-45-6666', '410-555-1112', '16 DECEMBER,2013', 'ejohnson@fakemail.com');
I get this error message
SQL Error: ORA-01722: invalid number
01722. 00000 - "invalid number"
*Cause: The specified number was invalid.
*Action: Specify a valid number.
Let's look at this logically. The error message is saying "invalid number". Oracle is saying to you "I am expecting a number, but you gave me something that isn't a number".
Looking at the table SQL, you can see that the table has two columns whose type is a number type (actually INTEGER). These are DRIVER_ID and PHONE. (The other columns don't matter now ... because they won't expect a number as the value.)
Now look at the insert SQL, and the values corresponding to those columns.
The value inserted into the DRIVER_ID column comes from SEQ_EMP.NEXTVAL ... which I would assume has type INTEGER. That means, you won't get an error from there.
The value inserted into the PHONE column is '410-555-1112'. But, hey, that isn't a number. It's a string! And besides, a (mathematical) number doesn't have hyphen characters embedded in it!
In short, if you are going to store phone numbers with - (or + or space) characters embedded in them, you can't use INTEGER as the column type.
In your table DDL change
PHONE INTEGER NOT NULL
to
PHONE CHAR(12) NOT NULL
as previously suggested by user Stephen C, using CHAR instead of INTEGER.
Also, those DATE fields will have to be converted to CHAR datatypes as well, or you'll have to format your INSERT to handle the date format properly. Those fields are:
, DOB DATE NOT NULL
, HIRING_DATE DATE NOT NULL
If you chose to keep the columns as DATEs, then in your insert use (for the above-mentioned columns) the following...
,to_date('31-JANUARY-1988','DD-MONTH-YYYY')
,to_date('16-DECEMBER-2013','DD-MONTH-YYYY')
The above won't give you exactly what you are looking for -- e.g. 16 DECEMBER,2013 -- but it will work and format the date with hyphens -- e.g. 16-DECEMBER-2013. If you need further information on the to_date() function for Oracle for formatting and for other sundry information, please refer to the Oracle docs or search on "Oracle TO_DATE" via your favorite browser.
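Putting both changes together, a sketch of the corrected insert (assuming PHONE has been changed to CHAR(12) as above, and keeping SALARY as the literal string the question used):
INSERT INTO EMP (DRIVER_ID, FNAME, LNAME, ADDRESS, SALARY, DOB, SHIFTS, SSN, PHONE, HIRING_DATE, EMAIL)
VALUES (SEQ_EMP.NEXTVAL, 'Emma', 'Johnson', '123 Main Street', 'DIRECT DEPOSIT',
        TO_DATE('31-JANUARY-1988', 'DD-MONTH-YYYY'), 'MORNING', '579-45-6666',
        '410-555-1112', TO_DATE('16-DECEMBER-2013', 'DD-MONTH-YYYY'), 'ejohnson@fakemail.com');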
HTH

"Create table as select" does not preserve not null

I am trying to use the "Create Table As Select" feature in Oracle to do a fast update. The problem I am seeing is that the NOT NULL constraint is not being preserved.
I defined the following table:
create table mytable(
accountname varchar2(40) not null,
username varchar2(40)
);
When I do a raw CTAS, the NOT NULL on account is preserved:
create table ctamytable as select * from mytable;
describe ctamytable;
Name Null Type
----------- -------- ------------
ACCOUNTNAME NOT NULL VARCHAR2(40)
USERNAME VARCHAR2(40)
However, when I do a replace on accountname, the NOT NULL is not preserved.
create table ctamytable as
select replace(accountname, 'foo', 'foo2') accountname,
username
from mytable;
describe ctamytable;
Name Null Type
----------- ---- -------------
ACCOUNTNAME VARCHAR2(160)
USERNAME VARCHAR2(40)
Notice that the accountname column no longer shows NOT NULL, and the varchar2 column went from 40 to 160 characters. Has anyone seen this before?
This is because you are no longer selecting ACCOUNTNAME, which has a column definition and meta-data. Rather you are selecting a STRING, the result of the replace function, which doesn't have any meta-data. This is a different data type entirely.
A (potentially) better way that might work is to create the table using a query with the original columns, but with a WHERE clause that guarantees 0 rows.
Then you can insert in to the table normally with your actual SELECT.
By having a query of 0 rows, you'll still get the column meta-data, so the table will be created, but no rows will be inserted. Make sure your WHERE clause is something fast, like WHERE primary_key = -999999, some number you know would never exist.
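A sketch of that approach, using WHERE 1 = 0 as the always-false predicate:
-- empty copy: column meta-data, including NOT NULL, is taken from mytable
create table ctamytable as
select * from mytable
where 1 = 0;

-- then load the transformed rows
insert into ctamytable (accountname, username)
select replace(accountname, 'foo', 'foo2'),
       username
from mytable;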
Another option here is to define the columns when you call the CREATE TABLE AS SELECT. It is possible to list the column names and include constraints while excluding the data types.
An example is shown below:
create table ctamytable (
accountname not null,
username
)
as
select
replace(accountname, 'foo', 'foo2') accountname,
username
from mytable;
Be aware that although this syntax is valid, you cannot include the data type. Also, explicitly declaring all the columns somewhat defeats the purpose of using CREATE TABLE AS SELECT.
