When I do an ORDER BY on a varchar field in an Oracle database, here is the result:
DATA:
1>33
1>31>33
1>31
112
11
1
Is there any way to achieve the desired result below?
DATA:
112
11
1>33
1>31>33
1>31
1
For PostgreSQL it works perfectly, but for other databases it does not sort as it should.
If someone can help me, thank you very much.
Oracle_19
Postgres_13
source:
create table test(
  data char(50)
);
insert into test values('112');
insert into test values('11');
insert into test values('1>33');
insert into test values('1>31>33');
insert into test values('1>31');
insert into test values('1');
select * from test order by data desc
I don't pretend to have a lot of knowledge of collation, but you can achieve the result you want in Oracle with a UCA collation and variable-character weighting, either for a specific query with nlssort():
select * from test order by nlssort(data, 'NLS_SORT=UCA0700_DUCET_VN') desc
or by setting the NLS_SORT session (or system) parameter:
alter session set nls_sort = UCA0700_DUCET_VN;
select * from test order by data desc
Both give the result you want:
DATA
----------
112
11
1>33
1>31>33
1>31
1
db<>fiddle
Presumably your PostgreSQL environment is configured to do something similar.
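If you want to check what your PostgreSQL database is collating with, here is a quick sketch using standard PostgreSQL commands (catalog names only, nothing specific to the question):
SHOW lc_collate;
SELECT datname, datcollate FROM pg_database WHERE datname = current_database();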
Never used PostgreSQL but looks like this collation does the same:
select * from test order by data collate "vi-VN-x-icu" desc
db<>fiddle
Related
I have a requirement to perform some calculation on a column of a table with a large data set (300 GB) and return that value.
Basically I need to create a view on that table. The table has 21 years of data and is partitioned on the date column (daily). We cannot put a date condition in the view's query; the user will apply the filter at runtime when executing the view.
For example:
Create view v_view as
select * from table;
Now I want to query the view like:
Select * from v_view where ts_date between '1-Jan-19' and '1-Jan-20'
How does Oracle execute the above statement internally? Will it execute the view query first and then apply the date filter to the result?
If so, won't there be a performance issue, and how can it be resolved?
Oracle first generates the view and then applies the filter. You can create a function whose input is supplied by the user; the function returns a CREATE VIEW statement, and if you run that statement the view is created. Just run:
create or replace function fnc_x(where_condition in varchar2)
return varchar2
as
begin
  return 'CREATE OR REPLACE VIEW sup_orders AS
          SELECT suppliers.supplier_id, orders.quantity, orders.price
          FROM suppliers
          INNER JOIN orders
            ON suppliers.supplier_id = orders.supplier_id
          ' || where_condition;
end fnc_x;
/
This function should be compiled first. Its input is a string like this:
WHERE suppliers.supplier_name = 'Microsoft'
Then run a block like this to execute the function's result:
cl scr
set SERVEROUTPUT ON
declare
szSql varchar2(3000);
crte_vw varchar2(3000);
begin
szSql := 'select fnc_x(''WHERE suppliers.supplier_name = ''''Microsoft'''''') from dual';
dbms_output.put_line(szSql);
execute immediate szSql into crte_vw; -- generate the CREATE VIEW command based on the user's where_condition
dbms_output.put_line(crte_vw);
execute immediate crte_vw ; -- create the view
end;
In this manner, you just need to receive the where_condition from the user.
Oracle can "push" the predicates inside simple views and can then use those predicates to enable partition pruning for optimal performance. You almost never need to worry about what Oracle will run first - it will figure out the optimal order for you. Oracle does not need to mindlessly build the first step of a query, and then send all of the results to the second step. The below sample schema and queries demonstrate how only the minimal amount of partitions are used when a view on a partitioned table is queried.
--drop table table1;
--Create a daily-partitioned table.
create table table1(id number, ts_date date)
partition by range(ts_date)
interval (numtodsinterval(1, 'day'))
(
partition p1 values less than (date '2000-01-01')
);
--Insert 1000 values, each in a separate day and partition.
insert into table1
select level, date '2000-01-01' + level
from dual
connect by level <= 1000;
--Create a simple view on the partitioned table.
create or replace view v_view as select * from table1;
The following explain plan shows "Pstart" and "Pstop" set to 3 and 4, which means that only 2 of the many partitions are used for this query.
--Generate an explain plan for a simple query on the view.
explain plan for
select * from v_view where ts_date between date '2000-01-02' and date '2000-01-03';
--Show the explain plan.
select * from table(dbms_xplan.display(format => 'basic +partition'));
Plan hash value: 434062308
-----------------------------------------------------------
| Id | Operation | Name | Pstart| Pstop |
-----------------------------------------------------------
| 0 | SELECT STATEMENT | | | |
| 1 | PARTITION RANGE ITERATOR| | 3 | 4 |
| 2 | TABLE ACCESS FULL | TABLE1 | 3 | 4 |
-----------------------------------------------------------
However, partition pruning and predicate pushing do not always work when we may think they should. One thing we can do to help the optimizer is to use date literals instead of strings that look like dates. For example, replace
'1-Jan-19' with date '2019-01-01'. When we use ANSI date literals, there is no ambiguity and Oracle is more likely to use partition pruning.
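For example, the runtime filter from the question could be rewritten against v_view with date literals (a sketch; the question's '1-Jan-19' and '1-Jan-20' are assumed to mean 2019 and 2020):
SELECT *
FROM v_view
WHERE ts_date BETWEEN date '2019-01-01' AND date '2020-01-01';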
We have Oracle as the source and HANA 1.0 SPS12 as the target. We are mirroring Oracle to HANA with Informatica CDC through real-time replication. In Oracle, many columns have the CHAR datatype, i.e. fixed length. As HANA officially doesn't support the CHAR datatype, we are using NVARCHAR instead. The problem we are facing: since Oracle's CHAR datatype is fixed-length and appends spaces whenever the actual string is shorter than the declared length, we have a lot of extra spaces in the target HANA DB for such columns.
For example, if column col1 has data type
CHAR(5)
and the value 'A', it is replicated in HANA as 'A    ', i.e. 'A' followed by four extra spaces, causing a lot of problems in queries and data interpretation.
Is it possible to implement CHAR like datatype in HANA?
You can use the RPAD function in Informatica while transferring data to HANA. Just make sure HANA doesn't trim the value automatically.
So, for the CHAR(5) source column you should use:
out_Column = RPAD(input_Column, 5)
This is pretty much exactly what the documentation describes.
I don't know HANA and this is more a comment than an answer, but I chose to put it here as there's some code I'd like you to see.
Here's a table whose column is of a CHAR datatype:
SQL> create table test (col char(10));
Table created.
SQL> insert into test values ('abc');
1 row created.
Column's length is 10 (which you already know):
SQL> select length(col) from test;
LENGTH(COL)
-----------
10
But, if you TRIM it, you get a better result, the one you're looking for:
SQL> select length( TRIM (col)) from test;
LENGTH(TRIM(COL))
-----------------
3
SQL>
So: if you can persuade the mirroring process to apply the TRIM function to those columns, you might get what you want.
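For instance, if the mirroring can be driven by a query on the Oracle side, a minimal sketch would be the following (the exact hook depends on your replication tooling):
SELECT TRIM(col) AS col
FROM test;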
[EDIT, after seeing Lars' comment and re-reading the question]
Right; the problem seems to be just the opposite of what I initially understood. If that's the point, maybe RPAD would help. Here's an example:
SQL> create table test (col varchar2(10));
Table created.
SQL> insert into test values ('abc');
1 row created.
SQL> select length(col) from test;
LENGTH(COL)
-----------
3
SQL> insert into test values (rpad('def', 10, ' '));
1 row created.
SQL> select col, length(col) len from test;
COL LEN
---------- ----------
abc 3
def 10
SQL>
I'm exploring cx_Oracle's JSON features within a CLOB. I have an index on the table that allows me to query for direct equality:
SELECT * FROM mytable m WHERE m.jsonclob.jsonattribute = 'foo';
I'd like to be able to do the same thing with a LIKE statement.
SELECT * FROM mytable m WHERE m.jsonclob.jsonattribute LIKE 'foo.%';
This works for me with Oracle DB 12.2:
SQL> CREATE TABLE j_purchaseorder_b (po_document CLOB CHECK (po_document IS JSON)) LOB (po_document) STORE AS (CACHE);
Table created.
SQL> INSERT INTO j_purchaseorder_b VALUES ('{"userId":2,"userName":"Bob","location":"USA"}');
1 row created.
SQL> SELECT pob.po_document.location FROM j_purchaseorder_b pob where pob.po_document.location LIKE 'US%';
LOCATION
--------------------------------------------------------------------------------
USA
For reference check the Oracle JSON manual chapter Query JSON Data.
A side note: the JSON team recommends BLOB storage for performance reasons; check the documentation for details.
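For illustration, a BLOB-based variant of the table above might look like this (a sketch following the documented pattern; the FORMAT JSON clause tells Oracle the binary column holds textual JSON):
CREATE TABLE j_purchaseorder_blob
  (po_document BLOB CHECK (po_document IS JSON FORMAT JSON))
  LOB (po_document) STORE AS (CACHE);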
Hi, I want to capture all the Oracle errors for my DML operations in a manually created table with columns ErrorID and Error_Descr.
How do I get the ORA_ERR_NUMBER$ and ORA_ERR_MESG$ values into those columns?
The table contains user-defined errors as well, so I do not want to limit it to Oracle errors.
Is there any way of capturing both Oracle and user-defined errors in user-defined tables?
Thanks in advance!
As per the documentation,
Oracle allows you to use a manually created table for error logging only if you have included these mandatory columns:
ORA_ERR_NUMBER$
ORA_ERR_MESG$
ORA_ERR_ROWID$
ORA_ERR_OPTYP$
ORA_ERR_TAG$
If you want your own columns (ErrorID, Error_Descr) to capture the information from ORA_ERR_NUMBER$ and ORA_ERR_MESG$, you could define them as virtual columns.
CREATE TABLE my_log_table (
ORA_ERR_NUMBER$ NUMBER
,ORA_ERR_MESG$ VARCHAR2(2000)
,ORA_ERR_ROWID$ ROWID
,ORA_ERR_OPTYP$ VARCHAR2(2)
,ORA_ERR_TAG$ VARCHAR2(2000)
,ErrorID NUMBER AS (COALESCE(ORA_ERR_NUMBER$, ORA_ERR_NUMBER$))
,Error_Descr VARCHAR2(2000) AS (COALESCE(ORA_ERR_MESG$, ORA_ERR_MESG$))
);
Using COALESCE is a hack because Oracle doesn't allow you to have one column default to another directly.
Now you can run your error-logging DML normally, naming the logging table:
INSERT INTO t_emp
SELECT employee_id * 10000
,first_name
,last_name
,hire_date
,salary
,department_id
FROM hr.employees
WHERE salary > 10000
LOG ERRORS INTO my_log_table ('ERR_SAL_LOAD') REJECT LIMIT 25;
0 row(s) inserted.
select ORA_ERR_TAG$, ErrorID, Error_Descr FROM my_log_table;
ORA_ERR_TAG$ ERRORID ERROR_DESCR
ERR_SAL_LOAD 1438 ORA-01438: value larger than specified precision allowed for this column
ERR_SAL_LOAD 1438 ORA-01438: value larger than specified precision allowed for this column
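For comparison, if you only needed the mandatory columns, the standard way is to let Oracle generate the log table with DBMS_ERRLOG (a sketch; assumes the t_emp target from the example above):
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'T_EMP', err_log_table_name => 'ERR$_T_EMP');
END;
/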
In Oracle 12, if I create a very simple table, TEST_TABLE, with a single varchar2(128) column 'name' and populate that column with lots of strings of '20170831', and my sysdate shows:
SELECT sysdate FROM dual;
29-SEP-17
then why does this SQL query return 0 rows:
SELECT TO_DATE(name,'YYYYMMDD'),
TO_DATE(TRUNC(SYSDATE),'DD-MM-YYYY')
FROM TEST_TABLE
WHERE TO_DATE(name,'YYYYMMDD') < TO_DATE(TRUNC(SYSDATE),'DD-MM-YYYY');
(This is a very simplified example of a problem I'm facing in my partition maintenance script and have not been able to solve for the last week).
Thank you in advance for any assistance related to the above query.
TRUNC(SYSDATE) already returns midnight (time part 00:00:00), so compare against it directly:
SELECT TO_DATE(name,'YYYYMMDD'), TRUNC(SYSDATE)
FROM TEST_TABLE
WHERE TO_DATE(name,'YYYYMMDD') <= TRUNC(SYSDATE);
You could also try:
ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS';
Just don't apply to_date() to a field that is already a date: Oracle will implicitly convert the date to a varchar and then apply to_date() to it. For example, the part of your query TO_DATE(TRUNC(SYSDATE),'DD-MM-YYYY') is interpreted like this:
TO_DATE(TO_CHAR(TRUNC(SYSDATE)),'DD-MM-YYYY')
TO_CHAR(TRUNC(SYSDATE)) produces a string like '29-SEP-17', which is not in 'DD-MM-YYYY' format.
Because of that, TO_DATE(TRUNC(SYSDATE),'DD-MM-YYYY') yields something like 29/09/0017 (year 17, not 2017), so your filter evaluates to FALSE and returns no rows.
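You can see the effect directly (a sketch, assuming the default NLS_DATE_FORMAT of DD-MON-RR):
SELECT TO_CHAR(TO_DATE(TRUNC(SYSDATE), 'DD-MM-YYYY'), 'DD-MM-YYYY') AS implicit_result
FROM dual;
-- returns something like 29-09-0017: the two-digit year '17' is taken literally as year 17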