This is really strange. I run the following query on an 11g database, and the results are what I expect:
SELECT REGEXP_REPLACE(json_content, '<acronym class=\\\"ticker\\\">([a-zA-Z0-9]{1,5})</acronym>','\1'),json_content
FROM message_lobs
WHERE revision_id = 211576;
Works fine and gives proper output. But, if I run this in 12c, I get this as output:
"<CONNECTION><VERSIONSTRING>Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production</VERSIONSTRING><RELEASE>12.2.0.1.0</RELEASE><USERNAME>SYSTEM</USERNAME><SERVICENAME>VOLGA.EXAMPLE.COM</SERVICENAME></CONNECTION>"
And actually, in some cases, unprintable characters:
Ŏ}᭄#ਜᩔ᪂᪴ᨖ᪄ᨠ᪐ᩢଊᬾ;Ѽ᩼Fь
I am doing this query in the Toad Editor. Note this is a CLOB column. Anyone know why this would be? It almost seems as though something is incompatible, or has changed, in 12c.
Something very strange has happened since we migrated our schemas/database from 12c to 19c
When I insert records into a table and check the row count under my Oracle user - say SMITH_J - I see 4 records. Good, I am happy.
When my Java application looks at the same table - which I will call QUEUE_TAB - using the application Oracle user, say APP_TOMCAT, it sees ZERO records. How can that be?
I checked the GRANTS for APP_TOMCAT - it has everything it should have for that table: SELECT, INSERT, UPDATE, DELETE - which it had before.
What is really perplexing is why the record counts are different despite all the privileges being the same. Is there something here that I have overlooked or cannot see at the moment? Is it something to do with privileges changing from 12c to 19c?
I owe the correct answer to Alex Poole in the comments above.
I was using a procedure to populate the table in question, and foolishly assumed that the procedure would commit. Of course, it would have if it DID have a COMMIT at the end of its code. So, after EXECUTING the procedure, I issued a COMMIT and it worked.
It's best practice NOT to have a COMMIT statement within your procedure, even if it is a single standalone procedure; leave the commit to the caller. See the comments from @MTO and @Alex Poole below.
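A minimal sketch of that sequence, with a hypothetical procedure name:
BEGIN
  load_queue_rows;  -- hypothetical procedure that inserts into QUEUE_TAB but contains no COMMIT
END;
/
COMMIT;  -- until the calling session commits, other sessions (e.g. APP_TOMCAT) see zero rows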
We recently uplifted a major application that was written under JDK6 using ojdbc6 against an 11g Oracle database. It was uplifted to use JDK8 against the same 11g database, but we were also in the midst of upgrading to 12c. The uplifted code has been running in production against the 11g database, but slower than it was before. Against the 12c database in our QA environment, we're noticing jobs either throwing exceptions or running VERY slowly. When I looked at the code, I noticed that the project team assigned to uplift the code had failed to upgrade the ojdbc driver from 6 to at least 8. I have since done that work, but now we are getting errors from running the following code:
Calendar endModDate = Calendar.getInstance();
// get the timestamp from the db
Query qry = em.createNativeQuery("select SYSTIMESTAMP from dual");
TIMESTAMPTZWrapper tsTZWrapper = (TIMESTAMPTZWrapper)qry.getSingleResult();
The em is our entity manager. But when the code calls the qry.getSingleResult() member function, we get this error:
oracle.sql.TIMESTAMPTZ cannot be cast to org.eclipse.persistence.internal.platform.database.oracle.TIMESTAMPTZWrapper
I've searched high and low for an answer, and anything that resembles an answer doesn't appear to fix my problem. This same logic is used in one other area of the code and produces the same issue. If we switch back to ojdbc6 it works, but we can't keep using ojdbc6 (and we really shouldn't, since we're on JDK8) because we need to upgrade to Oracle 12c in the coming month.
Thanks for any assistance in this matter.
Just a quick comment about the TIMESTAMP datatype:
There is a bug in the JDBC drivers, "Bug 21891493 : JDBC SENDS TIMESTAMP SCALE OF NULL WHEN NANOSECONDS ARE ZERO", which can cause the creation of an excessive number of child cursors in the Oracle database. This bug was fixed in the 12.2 JDBC drivers.
The datatype TIMESTAMP WITH TIME ZONE internally uses a function to convert the value into GMT. When you create an index on such a column, in some cases this function-based index is not used, especially when you compare the column with a TIMESTAMP value of a different subtype. You should compare execution plans between 11g and 12c.
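For example, a quick way to compare the plans on each version (the table and column names here are purely illustrative):
EXPLAIN PLAN FOR
  SELECT *
  FROM event_log
  WHERE event_ts >= TIMESTAMP '2017-01-01 00:00:00 +00:00';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- run on both 11g and 12c and check whether the index on the TIMESTAMP WITH TIME ZONE column is used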
I'm facing an issue with an update statement that worked earlier in Oracle 9i but now does not update any rows in Oracle 11g. Here is the statement that I'm using.
update account
set
days_to_validate = validated_date - val_requested_Date
where
validated_date >= val_requested_date
The validated_date and val_requested_date are both date fields in the format: dd-mmm-yyyy (18-Mar-2015). This was working earlier in Oracle 9i before we did an upgrade.
Please advise on how we can fix this.
Thanks
Prashanth
I was able to fix this. I tried including the "to_date" function and it worked in Oracle 11g. Here is the change I made to the query.
update account
set days_to_validate = to_date(validated_date) - to_date(val_requested_Date)
where to_date(validated_date) >= to_date(val_requested_date)
I have to change the character set from AL32UTF8 to WE8MSWIN1252 in an Oracle 11g R2 Express instance... I tried to use the command:
ALTER DATABASE CHARACTER SET WE8MSWIN1252;
But it fails saying that MSWIN1252 isn't a superset of AL32UTF8. Then I found some articles talking about CSSCAN, and that tool doesn't seem to be available in Oracle 11 Express.
http://www.oracle-base.com/articles/10g/CharacterSetMigration.php
Does anyone have an idea how to do that? Thanks in advance.
Edit
Clarifying a little bit: the real issue is that I'm trying to import data into a table that has a column defined as VARCHAR2(6 BYTE). The string causing the issue is 'eq.mês'; it needs 6 bytes in WE8MSWIN1252 and 7 bytes in AL32UTF8.
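For what it's worth, the byte-versus-character difference can be reproduced like this (demo table names are made up):
-- In an AL32UTF8 database, 'eq.mês' is 6 characters but 7 bytes (ê takes 2 bytes)
CREATE TABLE demo_byte (val VARCHAR2(6 BYTE));
INSERT INTO demo_byte VALUES ('eq.mês');  -- fails with ORA-12899: value too large for column

CREATE TABLE demo_char (val VARCHAR2(6 CHAR));
INSERT INTO demo_char VALUES ('eq.mês');  -- fits, because the limit is 6 characters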
You can't.
The Express Edition of 11g is only available using a UTF-8 character set. If you want to go back to the express edition of 10g, there was a Western European version that used the Windows-1252 character set. Unlike with the other editions, Oracle doesn't support the full range of character sets in the Express Edition nor does it support changing the character set of an existing XE database.
Why do you believe you need to change the database character set? Other than potentially taking a bit more storage space to support the characters in the upper half of the Windows-1252 range, which generally aren't particularly heavily used, there aren't many downsides to a UTF-8 database.
I would say that when you want to go to a character set that supports only a subset of the original characters, your best option is to export the data and import it back (with exp and imp, or expdp and impdp).
Are you sure that no table will contain any character not found in the 1252 code page?
The problem with only executing that ALTER DATABASE command is that the Data Dictionary is not converted, and it can end up corrupted.
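Roughly, that approach looks like this (the schema, directory, and file names are placeholders):
expdp system/password schemas=APP_USER directory=DATA_PUMP_DIR dumpfile=app_user.dmp logfile=app_user_exp.log
Recreate the database with the WE8MSWIN1252 character set, then:
impdp system/password schemas=APP_USER directory=DATA_PUMP_DIR dumpfile=app_user.dmp logfile=app_user_imp.log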
I had the same problem. In my case, we are using an Oracle 11g Express Edition (11.2.0.2.0) and we really need it to run on the WE8MSWIN1252 character set, but I cannot change the character set at installation time (it always installs with AL32UTF8).
With an Oracle Client 11g installed as Administrator, we ran csscan full=y (check this link: https://oracle-base.com/articles/10g/character-set-migration) and noticed that there were lossy and convertible data problems in our database. But the problems were with the MDSYS (Oracle Spatial) and APEX_040000 (Oracle Application Express) schemas, so, as we don't need these products, we removed them (check this link: http://fast-dba.blogspot.com.br/2014/04/how-to-remove-unwanted-components-from.html).
Then we exported the user schemas with expdp and dropped the users (they must be recreated at the end of the process).
Executing csscan again with full=y capture=y, it reported: "The data dictionary can be safely migrated using the CSALTER script." If the report doesn't say this, the csalter.plb script will not work, because some of these conditions are not satisfied:
changeless for all CHAR, VARCHAR2, and LONG data (Data Dictionary and Application Data)
changeless for all Application Data CLOB
changeless and/or convertible for all Data Dictionary CLOB
In our case, these conditions were satisfied and we could run the CSALTER script successfully. Moreover, this script executes the ALTER DATABASE command you are trying to run, and it converts the Data Dictionary CLOB data that is convertible.
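For reference, the csalter step itself is run from SQL*Plus as SYSDBA, roughly along these lines (a sketch based on the oracle-base article linked above; adjust for your environment):
SHUTDOWN IMMEDIATE;
STARTUP RESTRICT;
@?/rdbms/admin/csalter.plb
SHUTDOWN IMMEDIATE;
STARTUP;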
Finally, we recreated the users and tablespaces of our application and imported the dump of the user data successfully.
I'm using Oracle 10g (XE 10.2.0.1.0), and I found a behavior that I don't understand:
select *
from employees manager
join employees worker on MANAGER.EMPLOYEE_ID = WORKER.MANAGER_ID
join departments on DEPARTMENTS.manager_id = 108
where
department_id = 100
;
The problem is that I think Oracle should complain about the ambiguity of department_id in the where clause, since it's a column in both employees and departments. The fact is that in Oracle 10g it doesn't, and the result shows that it interprets department_id as the one in departments. However, if I comment out the second join (the 4th line above), Oracle does complain with “ORA-00918: column ambiguously defined” as expected.
So, can somebody help explain how ambiguity is resolved in Oracle 10g? Or perhaps this is a bug in 10g?
BTW: The tables are defined in the default HR schema bundled with Oracle 10g.
Update: Just found a related post:
Why does Oracle SQL mysteriously resolve ambiguity in one joins and does not in others
I believe it is a bug in Oracle 10g that Oracle chose not to fix. When we were upgrading our applications from 10g to 11gR2, we found a couple of queries that were written "loosely" with respect to ambiguous column names but worked in Oracle 10g. They all stopped working in 11gR2. We contacted Oracle, but they pretty much said that the tolerant behavior toward ambiguous column names is the correct behavior for Oracle 10g and the stringent behavior is the correct behavior for 11g.
I think it is because departments has no alias. Therefore, everything not qualified by an <alias>. is first treated as coming from departments.
So I also think that when you give departments an alias, you should get ORA-00918 again. I cannot test it here though...
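For illustration, the aliased version would look roughly like this (untested sketch):
select *
from employees manager
join employees worker on manager.employee_id = worker.manager_id
join departments dept on dept.manager_id = 108
where department_id = 100;  -- the reasoning above predicts ORA-00918 here, since department_id exists in both employees and departments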