Inserting char variables into a table in Oracle SQL Developer

I have a table CMP with two fields: CMP_CODE VARCHAR2(20) and CMP_NAME VARCHAR2(50).
When I try to insert an entry like '001' into CMP_CODE, it always ends up inserted as '1'.
My statement was like:
INSERT INTO CMP (CMP_CODE, CMP_NAME) VALUES ('007', 'test');
The problem was not there previously, but I re-installed our XE database recently; could that be the cause?
Your valuable help in this regard is highly appreciated. Thanks in advance.

The fieldtype "VARCHAR2" itself is probably NOT responsible for snipping of your zeros.
The error seems to be in your application. In case, if you use an Numeric-Variable-Type (eg. Integer, Long, Float, Decimal) this behavior is very basic and in most cases desirable.
But due very less information about your situation it is kind of hard, to tell whats is really going on.
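To illustrate (a minimal sketch of the suspected cause): if the application binds the value through a numeric variable, the leading zeros are gone before Oracle ever sees a string, exactly as if you had written the first statement below rather than the second:
INSERT INTO CMP (CMP_CODE, CMP_NAME) VALUES (007, 'test');   -- numeric literal: stored as '7'
INSERT INTO CMP (CMP_CODE, CMP_NAME) VALUES ('007', 'test'); -- string literal: stored as '007'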

Related

Error in tSQLStoredProc in Delphi XE2 when importing stored proc from Oracle XE 11g?

In an Oracle database I have some stored procedures in a package. When the parameter list contains integers and I try to bind a tSQLStoredProc and navigate to the correct procedure, the Params list shows the integers in the parameter list as ftFMTBcd instead of ftInteger.
As long as the integer parameters are declared as OUT in Oracle, the transfer goes OK.
But if the integer is in the IN list, I get ORA-05602 when calling the procedure.
The reason seems to be that the conversion in the DataSnap server of the ftFMTBcd field into an integer simply fails, and the DataSnap server sends a blank string to Oracle instead of an integer or a number (both IN INTEGER and IN NUMBER in the parameter list end up as ftFMTBcd, which seems to end up as a blank string at transfer).
Using the following table:
CREATE TABLE achristo_adm.paalogget(
  paaloggetID NUMBER(38, 0) NOT NULL,
  utstyrID NUMBER(38, 0),
  BrukerID NUMBER(38, 0),
  sist_paalogget TIMESTAMP(6),
  CONSTRAINT PK22 PRIMARY KEY (paaloggetID)
    USING INDEX LOGGING)
LOGGING;
And the following package procedure:
PROCEDURE registrer_paalogget(
  FUTSTYRID       IN INTEGER,
  FBRUKERID       IN INTEGER,
  FSIST_PAALOGGET IN TIMESTAMP,
  fpaaloggetID    OUT INTEGER)
AS
  paaloggetC hl_recur_typ;
BEGIN
  OPEN paaloggetC FOR
    SELECT paaloggetID FROM paalogget WHERE utstyrID = FUTSTYRID;
  FETCH paaloggetC INTO fpaaloggetID;
  IF paaloggetC%NOTFOUND THEN
    fpaaloggetID := PAALOGGET_SEQ.NEXTVAL;
    INSERT INTO paalogget
      (paaloggetID, utstyrID, BrukerID, sist_paalogget)
    VALUES
      (fpaaloggetID, FUTSTYRID, FBRUKERID, FSIST_PAALOGGET);
  ELSE
    UPDATE paalogget
       SET sist_paalogget = FSIST_PAALOGGET
     WHERE utstyrID = FUTSTYRID;
  END IF;
  CLOSE paaloggetC;
EXCEPTION
  WHEN OTHERS THEN
    raise_application_error(-20001, 'An error was encountered in registrer_paalogget - '
      || SQLCODE || ' -ERROR- ' || SQLERRM);
END registrer_paalogget;
Will these have the same problem in Delphi XE4? (I didn't find anything resembling it in QC.)
If it is still the same problem in Delphi XE4, I will have to make an entry in QC :-)
I hope someone with access to both Delphi XE4 (XE3) and Oracle XE could test this.
For me this possible error is a show stopper right now.
Addendum:
Could this question (which I forgot to mark as a question :-( ) be involved?
https://stackoverflow.com/questions/17567604/is-there-something-essential-missing-or-wrong-in-this-datasnap-server-method
Problem solved!
If anyone read the included routines twice (or maybe thrice :-) a huge bell should have rung, bigger than the Liberty Bell :-D
NUMBER(38, 0) is in reality a 128-bit integer, which is not supported in 32-bit Delphi.
The solution is to redesign the database to use BIGINT instead, which in Oracle is declared as NUMBER(19, 0), using ER/Studio Data Architect DE which is bundled with RAD Studio Architect.
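For illustration, a sketch of what the reworked DDL from the original question could look like (the assumption here being that NUMBER(19, 0) is what the access components accept as a 64-bit integer):
CREATE TABLE achristo_adm.paalogget(
  paaloggetID NUMBER(19, 0) NOT NULL, -- BIGINT-sized instead of NUMBER(38, 0)
  utstyrID NUMBER(19, 0),
  BrukerID NUMBER(19, 0),
  sist_paalogget TIMESTAMP(6),
  CONSTRAINT PK22 PRIMARY KEY (paaloggetID));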
A warning should also be implemented, saying that 128-bit integers are not supported and that the field was converted to ftFMTBcd.
Now, why updating a record having a 128-bit integer fails in tSQLStoredProc is something that maybe Embarcadero should look into. As I have understood from someone else, the same error about an erroneous integer export, ORA-01722: invalid number, seems to happen with tFDStoredProc too.
A message about the problem has been sent to the developers.
The problem arises if you model a database in ER/Studio, declare fields as INTEGER instead of BIGINT or lower-precision integers, and export the model with stored procedures to an Oracle database. That ends up, as shown in the original question, as NUMBER(38, 0).
The culprit also appears if you install Oracle XE 11g on your local computer and don't install the latest Instant Client and copy the files that are also stored in the server directory, like oci.dll etc.
All the DLL files must be the latest version, not the ones distributed with the server.
Transfers of integers, BCDs and dates all fail with the original server installation.
A workaround (which is of course not sustainable in the long run) is to rewrite all procedures to use only strings as parameters instead of integers, and to apply to_number before calling the original procedure :-(
The way I do it is to rename the interfaced procedure by adding _string to the procedure name, convert integer or number parameters to strings, and in the implementation part convert the string parameters back to numbers and call the original method.
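A hedged sketch of that wrapper (the _string name and the conversions follow the description above; treat this as illustrative, not the exact original code):
PROCEDURE registrer_paalogget_string(
  FUTSTYRID       IN VARCHAR2,
  FBRUKERID       IN VARCHAR2,
  FSIST_PAALOGGET IN TIMESTAMP,
  fpaaloggetID    OUT VARCHAR2) AS
  v_id INTEGER;
BEGIN
  -- Convert the string parameters to numbers and delegate to the original procedure
  registrer_paalogget(TO_NUMBER(FUTSTYRID), TO_NUMBER(FBRUKERID),
                      FSIST_PAALOGGET, v_id);
  fpaaloggetID := TO_CHAR(v_id);
END registrer_paalogget_string;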

VarChar2 to Char comparison: can Oracle settings allow this?

I have just a quick question to see how it is that I get two different results for the same thing.
We have two databases which are built up exactly the same in terms of structure.
In both, there is a view which does a comparison between a VARCHAR2(10) and a CHAR(10), where the fields are only filled to a length of 7 (+3 spaces for the CHAR, of course).
Of course this points to something wrong in our structure, but that's a different matter from my question.
How is it possible that one database is able to do the comparison (varchar2=char) and the other one not?
Is there some Oracle setting which can allow this?
Thanks for the help,
Grts,
Maarten
It's probably bug 11726301 "Wrong Result with query_rewrite_enabled=false and Joins of CHAR with Other CHAR and VARCHAR2 Columns"
Fixed in 11.2.0.3.
The workaround is to set query_rewrite_enabled=true.
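If you cannot patch to 11.2.0.3 right away, the setting can be changed at session level, e.g.:
ALTER SESSION SET query_rewrite_enabled = TRUE;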

I get an ORA-01775: looping chain of synonyms error when I use sqlldr

I apologize for posting a question that seems to have been asked numerous times on the internet, but I can't quite fix it for some reason.
I was trying to populate some tables using Oracle's magical sqlldr utility, but it throws an ORA-01775 error for some reason.
Everywhere I go on Google, people say something along the lines of: "Amateur, get your synonyms sorted out" (that was paraphrased) and that's nice and all, but I did not make any synonyms.
Here, the following does not work on my system:
SQLPLUS user/password
SQL>CREATE TABLE test (name varchar(10), id number);
SQL>exit
Then, I have a .ctl file with the following contents:
load data
characterset utf16
infile *
append
into table test
(name,
id
)
begindata
"GURRR" 4567
Then I run this command:
sqlldr user#localhost/password control=/tmp/controlfiles/test.ctl
The result:
SQL*Loader-702: Internal error - ulndotvcol: OCIStmtExecute()
ORA-01775: looping chain of synonyms
Part of test.log:
Table TEST, loaded from every logical record.
Insert option in effect for this table: APPEND
Column Name Position Len Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ---------------------
NAME FIRST 2 CHARACTER
ID NEXT 2 CHARACTER
SQL*Loader-702: Internal error - ulndotvcol: OCIStmtExecute()
ORA-01775: looping chain of synonyms
And, if I try to do a manual insert:
SQL> insert into test values ('aa', 56);
1 row created.
There is no problem.
So, yeah, I am stuck!
If it helps, I am using Oracle 11g XE on CentOS.
Thanks for the help guys, I appreciate it.
EDIT:
I kind of, sort of figured out part of the problem. The problem was that somewhere along the line, maybe during a failed load or something, Oracle had given itself corrupt views and synonyms.
The affected views were: GV_$LOADISTAT, GV_$LOADPSTAT, V_$LOADISTAT and V_$LOADPSTAT. I am not quite sure why the views got corrupt, but recompiling them resulted in "compiled with errors". The synonyms used in the queries themselves were corrupt, namely the gv$loadistat, gv$loadpstat, v$loadistat and v$loadpstat synonyms.
I wasn't sure about why this was happening and I didn't quite understand anything. So, I decided to drop the synonyms and recreate them. Unfortunately, I couldn't recreate them, as the view they pointed to (there is a bit of weird recursion going on here...) was corrupt. These views were the aforementioned GV_$LOADISTAT and other views. In other words, the synonyms pointed to the views that used those synonyms. Talk about a looping chain.
So...I recreated the public synonyms but instead of specifying the view as GV_$LOADISTAT, I specified them as sys.GV_$LOADISTAT. e.g.
DROP PUBLIC synonym GV$LOADISTAT;
CREATE PUBLIC synonym GV$LOADISTAT for sys.GV_$LOADISTAT;
Then, I recreated the user views to point to those public synonyms.
CREATE OR REPLACE FORCE VIEW "USER"."GV_$LOADISTAT"
  ("INST_ID", "OWNER", "TABNAME", "INDEXNAME", "SUBNAME", "MESSAGE_NUM", "MESSAGE")
AS
  SELECT "INST_ID", "OWNER", "TABNAME", "INDEXNAME", "SUBNAME", "MESSAGE_NUM", "MESSAGE"
  FROM gv$loadistat;
That seemed to fix the views/synonyms. Yeah, it is a bit of a hack, but it somehow worked. Unfortunately, this was not enough to run SQL*Loader; I got a "table or view does not exist" error.
I tried granting more permissions to my regular user, but it didn't work. So, I gave up and ran SQL*Loader as SYSDBA. It worked! It is not a good thing to do, but this is a development-only system made for testing purposes, so I didn't care.
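For anyone hitting the same thing, a quick sanity check (a sketch; run from a privileged account) is to list the invalid objects and see which views and synonyms point at each other:
SELECT owner, object_name, object_type, status
FROM dba_objects
WHERE status = 'INVALID'
ORDER BY owner, object_name;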
I could not repeat your looping synonym chain error, but it appears the control file needed a bit of work, at least for my environment.
I was able to get your example to work by modifying it thusly:
load data
infile *
append
into table test
fields terminated by "," optionally enclosed by '"'
(name,
id
)
begindata
"GURRR",4567

Oracle DBMS package command to export table content as INSERT statement

Is there any subprogram similar to DBMS_METADATA.GET_DDL that can actually export the table data as INSERT statements?
For example, using DBMS_METADATA.GET_DDL('TABLE', 'MYTABLE', 'MYOWNER') will export the CREATE TABLE script for MYOWNER.MYTABLE. Is there any such thing to generate all the data from MYOWNER.MYTABLE as INSERT statements?
I know that, for instance, TOAD for Oracle or SQL Developer can export as INSERT statements pretty fast, but I need a more programmatic way of doing it. Also, I cannot create any procedures or functions in the database I'm working on.
Thanks.
As far as I know, there is no Oracle supplied package to do this. And I would be skeptical of any 3rd party tool that claims to accomplish this goal, because it's basically impossible.
I once wrote a package like this, and quickly regretted it. It's easy to get something that works 99% of the time, but that last 1% will kill you.
If you really need something like this, and need it to be very accurate, you must tightly control what data is allowed and what tools can be used to run the script. Below is a small fraction of the issues you will face:
Escaping
Single inserts are very slow (especially if it goes over a network)
Combining inserts is faster, but can run into some nasty parsing bugs when you start inserting hundreds of rows
There are many potential data types, including custom ones. You may only have NUMBER, VARCHAR2, and DATE now, but what happens if someone adds RAW, BLOB, BFILE, nested tables, etc.?
Storing LOBs requires breaking the data into chunks because of VARCHAR2 size limitations (4000 or 32767, depending on how you do it).
Character set issues - This will drive you ¿¿¿¿¿¿¿ insane.
Environment limitations - For example, SQL*Plus does not allow more than 2500 characters per line, and will drop whitespace at the end of your line.
Referential Integrity - You'll need to disable these constraints or insert data in the right order.
"Fake" columns - virtual columns, XML lobs, etc. - don't import these.
Missing partitions - If you're not using INTERVAL partitioning you may need to manually create them.
Novalidated data - Just about any constraint can be violated, so you may need to disable everything.
If you want your data to be accurate you just have to use the Oracle utilities, like data pump and export.
Why don't you use regular export?
If you must, you can generate the export script:
Let's assume a table myTable(Name VARCHAR(30), AGE NUMBER, Address VARCHAR(60)).
SELECT 'INSERT INTO myTable VALUES (''' || Name || ''', ' || AGE || ', ''' || Address || ''');' FROM myTable;
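Note that the quoting is easy to get wrong; if the text columns can themselves contain single quotes, a slightly more defensive variant (still just a sketch) doubles them up:
SELECT 'INSERT INTO myTable VALUES (''' ||
       REPLACE(Name, '''', '''''') || ''', ' ||
       AGE || ', ''' ||
       REPLACE(Address, '''', '''''') || ''');'
FROM myTable;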
Oracle SQL Developer does that with its Export feature, DDL as well as the data itself.
It can be a bit inconvenient for huge tables and is likely to hit the cases mentioned above, but it works well 99% of the time.

PL/SQL to insert history row with long raw column in Oracle

I have a LONG RAW column in an Oracle table. INSERT with SELECT is not working because of the LONG RAW column, which is part of my SELECT statement as well. Basically I am trying to insert a history row with a couple of parameters changed. Hence I was thinking of using PL/SQL in Oracle. I have no experience with PL/SQL, nor did I find anything after googling for a couple of days. Can anyone help me with a sample PL/SQL block for my problem? Thanks in advance!
LONG and LONG RAW datatypes are deprecated, and have been for many years now. You really are much better off getting away from them.
Having said that, if you're using PL/SQL, you will be limited to 32,760 bytes of data, which is the max that the LONG RAW PL/SQL datatype will hold. However, the LONG RAW database datatype can hold up to 2GB of data. So, if any rows in your table contain data longer than 32,760 bytes, you will not be able to retrieve it using PL/SQL. This is a fundamental limitation of LONG and LONG RAW datatypes, and one of the reasons Oracle has deprecated their use.
In that case, the only options are Pro*C or OCI.
More information can be found here:
http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14261/datatypes.htm#CJAEGDEB
Hope that helps.
You can work with a LONG RAW column directly in PL/SQL if your data is limited to 32kB:
-- Loop over the source rows; each LONG RAW value is fetched into a
-- PL/SQL variable, so it must fit within the 32,760-byte PL/SQL limit
FOR cc IN (SELECT col1, col2... col_raw FROM your_table) LOOP
  INSERT INTO your_other_table (col1, col2... col_raw)
  VALUES (cc.col1, cc.col2... cc.col_raw);
END LOOP;
This will fail if any LONG RAW is larger than 32k.
In that case you will have to use another language. You could use Java, since it is included in the DB. I already answered a couple of questions on SO about LONG RAW and Java:
Copying data from LOB Column to Long Raw Column (will work with LONG RAW to LONG RAW too, just replace the UPDATE with an INSERT)
Get the LENGTH of a LONG RAW
In any case as you have noticed it is a pain to work with this data type. If converting to LOB is not possible you will have to use a workaround.
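For completeness: when converting is an option, Oracle can migrate a LONG RAW column to a BLOB in place with ALTER TABLE (a sketch; the table and column names are illustrative):
ALTER TABLE your_table MODIFY (col_raw BLOB);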
