Getting "ORA-01482: unsupported character set" when using WHERE IN pattern - oracle

I have a table with the following structure in Oracle database:
CREATE TABLE PASSENGERS
(ID VARCHAR2(6),
PASSPORTNO VARCHAR2(14));
I want to get the IDs of the passengers who have been registered more than once. For that I run the following query.
SELECT ID FROM PASSENGERS WHERE PASSPORTNO IN
(SELECT PASSPORTNO FROM PASSENGERS
GROUP BY PASSPORTNO
HAVING COUNT(*)>1);
But I get "unsuported character set" error. What's the point I'm missing?

Since all queries related to PASSPORTNO are running fine, you have at least two more things to try:
Run SELECT ID FROM PASSENGERS and check for errors; if the error comes up, it may be related to the content stored in your table.
Try another SQL tool to execute your queries; your client OS may be using a system encoding which the database can't understand, either when processing your query or when displaying the returned rows.
Since both ID and PASSPORTNO are VARCHAR2 fields, there's a big chance that one of them holds data in an encoding which Oracle can't decode properly.
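One way to inspect what is actually stored is Oracle's DUMP function (a hedged aside, not part of the original answer): format code 1016 returns the value's character set name plus its raw bytes in hex, which makes encoding problems visible.
SELECT ID, DUMP(PASSPORTNO, 1016) AS raw_bytes
FROM PASSENGERS
WHERE ROWNUM <= 10;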

This mostly seems like a data issue. Try to pin down the exact data row that is causing the problem.
You can use DML error logging for that: http://www.oracle-base.com/articles/10g/dml-error-logging-10gr2.php
By the way, you are doing GROUP BY PASSPORTNO. Is that correct? (This implies multiple passengers can have the same passport number.) I guess it should be GROUP BY ID.
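A minimal sketch of the DML error logging approach (PASSENGERS_COPY is a hypothetical empty scratch table with the same columns, used only to force every row through an INSERT so that bad rows land in the error log instead of aborting the statement):
-- Create the error log table (named ERR$_PASSENGERS by default)
EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('PASSENGERS');
-- Re-insert all rows; offending rows are diverted to ERR$_PASSENGERS
INSERT INTO PASSENGERS_COPY
SELECT ID, PASSPORTNO FROM PASSENGERS
LOG ERRORS INTO ERR$_PASSENGERS ('find bad rows') REJECT LIMIT UNLIMITED;
-- Inspect the rows that could not be processed
SELECT * FROM ERR$_PASSENGERS;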

Related

How to create a VIEW in Oracle

So I'm supposed to create a view product_view that presents the information about how many products of a particular type are in each warehouse: product ID, product name, category_id, warehouse id, total quantity on hand for this warehouse.
So I used this query and tried to change it many times, but I keep getting errors:
CREATE OR REPLACE VIEW PRODUCT_VIEW AS
SELECT p.product_id, p.product_name,
COUNT(p.product_id), SUM(i.quantity_on_hand)
FROM oe.product_information p JOIN oe.inventories i
ON p.product_id=i.product_id
ORDER BY i.warehouse_id;
ERROR at line 2:
ORA-00928: missing SELECT keyword
Please help... Thanks
[Image showing the tables in the OE schema]
[Image showing the error that occurs]
When I get errors creating a view, I first drop the CREATE ... AS line and fix the query until it works. Then you need to name all the columns: for instance, COUNT(p.product_id) won't work; you'll need to write something like COUNT(p.product_id) AS product_count, or specify a list of column aliases, as in the sketch below.
I'm not sure what the output of your query should look like. You'll get better answers quicker on Stack Exchange if you post a minimal example including the CREATE statements, some input data, and your desired output, leaving out columns that are not essential.
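A hedged sketch of a working version, assuming the standard OE sample schema (the aliases product_count and total_quantity and the GROUP BY list are my additions): every aggregate gets an alias, non-aggregated columns go into GROUP BY, and the ORDER BY is dropped from the view definition.
CREATE OR REPLACE VIEW product_view AS
SELECT p.product_id,
       p.product_name,
       p.category_id,
       i.warehouse_id,
       COUNT(p.product_id) AS product_count,
       SUM(i.quantity_on_hand) AS total_quantity
FROM oe.product_information p
JOIN oe.inventories i ON p.product_id = i.product_id
GROUP BY p.product_id, p.product_name, p.category_id, i.warehouse_id;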

Import massive table from Oracle to PostgreSQL with oracle-fdw returns ORA-01406

I am working on a project to transfer data from an Oracle database to a PostgreSQL database to build a data warehouse with bash & SQL scripts. To access the Oracle database, I use the PostgreSQL extension oracle-fdw.
One of my scripts imports data from a massive table (~100,000,000 new rows/day). This table is partitioned and each partition contains one day of data. The query I use to import data looks like this:
INSERT INTO postgre_target_table (some_fields)
SELECT some_aggregated_fields -- (~150 fields)
FROM oracle_source_table
WHERE partition_id = :v_partition_id AND some_others_filters
GROUP BY primary_key;
On the DEV server, the query works fine (there is much less data on this server), but on PREPROD it returns the error ORA-01406: fetched column value was truncated.
In some posts, people say that the output fields may be too small, but if I send a simple SELECT query without INSERT or GROUP BY I get the same error.
Another idea I found in another post is to create an Oracle-side view, but my query uses multiple parameters that I cannot use in a view.
The last idea I found is to create an Oracle stored procedure that fills a table with aggregated data and then import the data from that table, but the Oracle database is critical and my customer prefers to avoid adding more data to it.
Now I'm starting to think there's no solution, and that's not good...
PostgreSQL version: 12.4 / Oracle version: 11.2
UPDATE
It seems my problem is more complicated than I thought.
After applying the modification given by Laurenz Albe, the query runs correctly in pgAdmin, but the problem still appears when I use the psql command.
Moreover, another query seems to have the same problem. This other query does not use the same source table as the first one; it uses 4 joined tables without any partition. The common point between these queries is their structure.
The detail I omitted to mention in the original post is that the purpose of both queries is to pivot a table. They look like this:
SELECT osr.id,
       MIN(CASE osr.category WHEN 123 THEN 1 END) AS field1,
       MIN(CASE osr.category WHEN 264 THEN 1 END) AS field2,
       MIN(CASE osr.category WHEN 975 THEN 1 END) AS field3,
       ...
FROM oracle_source_table osr
WHERE osr.category IN (123, 264, 975, ...)
GROUP BY osr.id;
Now that I have detailed what the queries look like, I can give you some results I got with the second one without changing the value of max_long (this query is lighter than the first one):
Sometimes it works (~10%), sometimes it fails (~90%) in pgAdmin, but it never works with the psql command.
If I delete the WHERE clause, it always works.
I don't understand why deleting the WHERE clause changes anything: the field used in this clause is a NUMBER(6, 0) between 0 and 2500, and it is still used in the SELECT clause... Oh, and in the 4 Oracle tables used by this query there is no LONG datatype, only NUMBER.
Among the 20 queries I have, only these two have a problem. Their structure is similar and I don't believe in coincidences.
Don't despair!
Set the max_long option on the foreign table big enough that all your oversized data fit.
The documentation has the details:
max_long (optional, defaults to "32767")
The maximal length of any LONG, LONG RAW and XMLTYPE columns in the Oracle table. Possible values are integers between 1 and 1073741823 (the maximal size of a bytea in PostgreSQL). This amount of memory will be allocated at least twice, so large values will consume a lot of memory.
If max_long is less than the length of the longest value retrieved, you will receive the error message
ORA-01406: fetched column value was truncated
Example:
ALTER FOREIGN TABLE my_tab OPTIONS (ADD max_long '1000000');

Oracle ALTER command to rename existing column erroring

alter table tablename rename column zl_divn_nbr to div_loc_nbr;
I get an error while executing the above statement. Please help.
SQL Error: ORA-54032: column to be renamed is used in a virtual column expression
54032. 0000 - "column to be renamed is used in a virtual column expression"
*Cause: Attempted to rename a column that was used in a virtual column
expression.
*Action: Drop the virtual column first or change the virtual column
expression to eliminate dependency on the column to be renamed
Run the following SQL query in your database using the table name mentioned in the error message. For example, in the error message shown in this article, the table name is 'tablename'. Note that whilst the table name appears in lower case in the error message, it may be upper case in your DB. This query is case sensitive so if you receive no results, check whether the table name is upper case inside your database.
SELECT COLUMN_NAME, DATA_DEFAULT, HIDDEN_COLUMN
FROM USER_TAB_COLS
WHERE TABLE_NAME = 'tablename';
Before proceeding, make sure the Bitbucket Server process is not running. If Extended Statistics has been enabled, contact your database administrator to have them drop the Extended Statistics metadata from the table, and proceed with your upgrade. If you wish to enable Extended Statistics again after the upgrade you may do so, however be aware that you may need to repeat this process again for subsequent upgrades otherwise you risk running into this issue again.
Removing columns created by Extended Statistics requires using a built-in stored procedure,
DBMS_STATS.DROP_EXTENDED_STATS().
Usage of this stored procedure is covered further in ORA-54033 and the Hidden Virtual Column Mystery, and looks similar to the following:
EXEC DBMS_STATS.DROP_EXTENDED_STATS(ownname=>'<YOUR_DB_USERNAME>', tabname=>'tablename', extension=>'("PR_ROLE", "USER_ID", "PR_APPROVED")')
References
Database Upgrade Error: column to be renamed
Thanks.
Probably, you have a table like this:
CREATE TABLE tablename(
id NUMBER,
zl_divn_nbr NUMBER,
zl_divn_percent NUMBER GENERATED ALWAYS AS (ROUND(zl_divn_nbr/100,2)) VIRTUAL
);
where the zl_divn_nbr column is used in the computation of the virtual column zl_divn_percent.
To rename zl_divn_nbr, all virtual columns referencing it must be dropped first; they may be re-created afterwards.
The syntax for defining a virtual column is this :
column_name [datatype] [GENERATED ALWAYS] AS (expression) [VIRTUAL]
This feature has been available since version 11g Release 1.
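A minimal sketch of that drop-rename-recreate workaround, using the hypothetical table above (note the re-created expression references the new column name):
-- 1. Drop the dependent virtual column
ALTER TABLE tablename DROP COLUMN zl_divn_percent;
-- 2. The rename now succeeds
ALTER TABLE tablename RENAME COLUMN zl_divn_nbr TO div_loc_nbr;
-- 3. Re-create the virtual column against the new name
ALTER TABLE tablename ADD (
  zl_divn_percent NUMBER GENERATED ALWAYS AS (ROUND(div_loc_nbr/100, 2)) VIRTUAL
);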
ALTER TABLE ... RENAME COLUMN ... TO ...
In the case of tables with virtual or 'group extension' columns, the above statement returns an error before Oracle 12cR2. On Oracle 12cR2 or newer versions the statement runs fine, because the RENAME COLUMN command is decoupled from the group-extension aspect.

oracle select and concurrent insert :: To check email availability

We have a simple case: a table with an emailId column that is unique, using an Oracle DB.
Question#1
Multiple concurrent users can check whether some email id is available. For example, two users check the availability of abc@test.com at the same time:
session1: SELECT emailid FROM user_table;
-- if not present, allow the user to complete the rest of the process and insert the info
session2: SELECT emailid FROM user_table;
Now both sessions will see that this email id (abc@test.com) is available and both will try to insert. I know one of them will get an error upon insertion, BUT how can we make sure that only one user sees it as available and the other is told it is not available at SELECT time?
Question#2
Also, in case both sessions insert the same value, the first will succeed. Is there a way for the 2nd session to update that row instead of throwing an error? For example, we have another column for a timestamp; could the 2nd session simply update the timestamp column instead of failing?
As this is a rather abstract question, here are only some general guidelines:
To deal with concurrent inserts in a table, you need a unique index, and you must be prepared in your code to deal with the error ORA-00001: unique constraint violated. Never rely only on a check before the insert (unless you somehow have exclusive access to your table -- and even then... as for myself, I would still add a unique index: it doesn't cost much and makes me sleep better).
Oracle has a MERGE statement that allows you to update or insert based on a condition. This operation is sometimes called an upsert; by searching for that keyword you should be able to find more information. See Oracle: how to UPSERT (update or insert into a table?) for example, and the sketch below.
Now for some thoughts about your specific case (maybe):
The only way for the system to work as you suggested would be to make some kind of reservation when you check for availability (i.e.: immediately insert the row instead of just selecting), and then update the row when the user confirms. But that means: (1) you will have to somehow deal with never-confirmed reservations; (2) it doesn't exempt you from having a unique index and dealing with ORA-00001.
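A minimal sketch of the MERGE/upsert idea for Question #2 (user_table and emailid are from the question; the last_seen timestamp column is a hypothetical stand-in for your extra column):
MERGE INTO user_table t
USING (SELECT 'abc@test.com' AS emailid FROM dual) s
ON (t.emailid = s.emailid)
WHEN MATCHED THEN
  -- the row already exists: just refresh the timestamp
  UPDATE SET t.last_seen = SYSTIMESTAMP
WHEN NOT MATCHED THEN
  INSERT (emailid, last_seen)
  VALUES (s.emailid, SYSTIMESTAMP);
Note that two sessions running this MERGE concurrently can still both take the NOT MATCHED branch, so one of them can still hit ORA-00001; the unique index and the error handling remain necessary.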

DB2 duplicate key error when inserting, BUT working after select count(*)

I have an issue that is unknown to me, and I don't know the logic/cause behind it. When I try to insert a record into a table I get a DB2 error saying:
[SQL0803] Duplicate key value specified: A unique index or unique constraint *N in *N
exists over one or more columns of table TABLEXXX in SCHEMAYYY. The operation cannot
be performed because one or more values would have produced a duplicate key in
the unique index or constraint.
That is quite a clear message to me. But there would actually be no duplicate key if I inserted my new record, judging by the records already in there. When I do a SELECT COUNT(*) FROM SCHEMAYYY.TABLEXXX and then try to insert the record, it works flawlessly.
How can it be that after performing the SELECT COUNT(*) I can suddenly insert the records? Is there some sort of index associated with it which might give issues because it is out of sync? I didn't design the data model, so I don't have deep knowledge of the system yet.
The original DB2 SQL is:
-- Generate SQL
-- Version: V6R1M0 080215
-- Generated on: 19/12/12 10:28:39
-- Relational Database: S656C89D
-- Standards Option: DB2 for i
CREATE TABLE TZVDB.PRODUCTCOSTS (
ID INTEGER GENERATED BY DEFAULT AS IDENTITY (
START WITH 1 INCREMENT BY 1
MINVALUE 1 MAXVALUE 2147483647
NO CYCLE NO ORDER
CACHE 20 )
,
PRODUCT_ID INTEGER DEFAULT NULL ,
STARTPRICE DECIMAL(7, 2) DEFAULT NULL ,
FROMDATE TIMESTAMP DEFAULT NULL ,
TILLDATE TIMESTAMP DEFAULT NULL ,
CONSTRAINT TZVDB.PRODUCTCOSTS_PK PRIMARY KEY( ID ) ) ;
ALTER TABLE TZVDB.PRODUCTCOSTS
ADD CONSTRAINT TZVDB.PRODCSTS_PRDCT_FK
FOREIGN KEY( PRODUCT_ID )
REFERENCES TZVDB.PRODUCT ( ID )
ON DELETE RESTRICT
ON UPDATE NO ACTION;
I'd like to see the statements... but since this question is a year old... I won't hold my breath.
I'm thinking the problem may be the
GENERATED BY DEFAULT
and that instead of passing NULL for the identity column, you're accidentally passing zero or some other duplicate value the first time around.
Either always pass NULL, pass a non-duplicate value, or switch to GENERATED ALWAYS.
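A sketch of the difference, using the DDL from the question (the values 42 and 9.99 are made up for illustration):
-- With GENERATED BY DEFAULT, an explicit value bypasses the identity
-- sequence, so a later generated value can collide with it:
INSERT INTO TZVDB.PRODUCTCOSTS (ID, PRODUCT_ID, STARTPRICE)
VALUES (1, 42, 9.99);
-- Safer: omit the column so DB2 generates the key...
INSERT INTO TZVDB.PRODUCTCOSTS (PRODUCT_ID, STARTPRICE)
VALUES (42, 9.99);
-- ...or pass DEFAULT explicitly to force generation
INSERT INTO TZVDB.PRODUCTCOSTS (ID, PRODUCT_ID, STARTPRICE)
VALUES (DEFAULT, 42, 9.99);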
Look at preceding messages in the joblog for specifics as to what caused this. I don't understand how the INSERT can suddenly work after the COUNT(*). Please let us know what you find.
Since it shows *N (i.e., n/a) as the name of the index or constraint, this suggests to me that it is not a standard DB2 object, and therefore may be a "logical file" [LF] defined with DDS rather than SQL, with a key structure different from what you were doing your COUNT(*) on.
Your shop may have better tools to view keys on dependent files, but the method below will work anywhere.
If your table might not be the actual "physical file", check this using Display File Description, DSPFD TZVDB.PRODUCTCOSTS, in a 5250 ("green screen") session.
Use the Display Database Relations command, DSPDBR TZVDB.PRODUCTCOSTS, to find what files are defined over your table. You can then DSPFD on each of these files to see the definition of the index key. Also check there that each of these indexes is maintained *IMMED, rather than *REBUILD or *DELAY. (A wild longshot guess as to a remotely possible cause of your strange anomaly.)
You will find the DB2 for i message finder here in the IBM i 7.1 Information Center or other releases
Is it a paging issue? We seem to get -0803 on inserts occasionally when a row is being held for update and it locks a page that probably contains the index needed for the insert. This is only a guess, but it appears to me that is what is happening.
I know it is an old topic, but this is what Google showed me in the first place.
I had the same issue yesterday, and it caused me a lot of headache. I did the same as above: checked the table definitions, keys, existing items...
Then I found out the problem was with my INSERT statement. It was trying to insert two identical records at once, but as the constraint prevented the commit, I could not find anything in the database.
Advice: review your INSERT statement carefully! :)
