Oracle SQL: Stored procedure - object invalid

I have just created a stored procedure, and so far that seemed to work.
(For learning purposes I want to hand a credit card number over to a stored procedure which should return the associated customer identification number.)
But when I wanted to test this procedure using
BEGIN CC_TO_CID(:p1, :p2);
END;
(the input data was submitted via a dialogue of my SQL IDE)
it just returned:
SQL Error [6550][65000]: ORA-06550: Row 1, Column 7: PLS-00905
Object xyz.CC_TO_CID is invalid ORA-06550: Row 1, Column 7:
PL/SQL: Statement ignored
This basically means that my stored procedure isn't well formed, but I really don't have a clue what's wrong.
My stored procedure:
CREATE OR REPLACE PROCEDURE CC_TO_CID(in_cc_nr IN NUMBER(16,0), out_cid OUT NUMBER) IS
BEGIN
SELECT PM.CUSTOMER_ID INTO cid FROM "2_PAYMENT_M" PM,
"2_CREDITCARD" CC
WHERE CC.CC_NR=in_cc_nr AND CC.PAYMENT_M_NR=PM.PAYMENT_M_NR;
END;
My table structure with some test data:
Table: "2_CREDITCARD"
CC_NR PAYMENT_M_NR NAME CVV EXPIRES
------------------ -------------- -------------- ----- ---------------------
5307458270409047 1 Haley Harrah 52 2019-11-01 00:00:00
Table: "2_PAYMENT_M"
PAYMENT_M_NR CUSTOMER_ID CREATED TRANSACTION_LIMIT
-------------- ------------- --------------------- -------------------
1 100 2018-01-21 00:00:00 1.000
Thanks in advance!
I appreciate any useful hints.

You will have seen an error when you compiled the procedure, but it would probably have been quite generic. Your client may support show errors, or you can query the user_errors view to see the details.
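For example, a query along these lines (filtering on the procedure name) lists each error with its line and column:

select line, position, text
from user_errors
where name = 'CC_TO_CID'
and type = 'PROCEDURE'
order by sequence;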
You can’t give a size or precision restriction for the data type of a formal parameter to a function or procedure, so NUMBER(16,0) should just be NUMBER; and you have the name of the variable wrong in your INTO clause, as it should be out_cid, not cid.
CREATE OR REPLACE PROCEDURE CC_TO_CID(in_cc_nr IN NUMBER, out_cid OUT NUMBER) IS
BEGIN
SELECT PM.CUSTOMER_ID
INTO out_cid
FROM "2_PAYMENT_M" PM
JOIN "2_CREDITCARD" CC
ON CC.PAYMENT_M_NR=PM.PAYMENT_M_NR
WHERE CC.CC_NR=in_cc_nr;
END;
I’ve switched to ANSI join syntax because... well, just because. Untested as I don’t have your tables; if it still gets errors then check user_errors again.
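Once it does compile, a minimal test block, using the sample data from the question (where card 5307458270409047 belongs to customer 100), would be:

SET SERVEROUTPUT ON

DECLARE
  l_cid NUMBER;
BEGIN
  CC_TO_CID(5307458270409047, l_cid);
  DBMS_OUTPUT.PUT_LINE('Customer ID: ' || l_cid);  -- expect 100 for the sample data
END;
/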

Related

PLSQL Trigger - Update another table before inserting a new record

I have 3 tables that are related to each other:
ACCOUNTS
CARDS
TRANSACTIONS
I want to change the amount of money in an account every time I execute a new transaction, decreasing the account balance with each new movement.
I tried writing this trigger:
create or replace trigger ceva_trig1
before insert on miscari
for each row
declare
new_val micari.valoare%tipe := new.valoare;
begin
update conturi
set sold = sold - new_val
where nrcont = (select nrcont
from conturi
join carti_de_credit on conturi.nrcont = carti_de_credit.nrcont
join miscari on carti_de_credit.nr_card = miscari.nrcard)
and sold >= new_val;
end;
Can anyone help me correct the syntax that fails here?
I've created those tables with a minimal number of columns, just to make the trigger compile.
SQL> create table conturi
2 (sold number,
3 nrcont number
4 );
Table created.
SQL> create table miscari
2 (valoare number,
3 nrcard number
4 );
Table created.
SQL> create table carti_de_credit
2 (nrcont number,
3 nr_card number
4 );
Table created.
Trigger:
SQL> create or replace trigger ceva_trig1
2 before insert on miscari
3 for each row
4 begin
5 update conturi c
6 set c.sold = c.sold - :new.valoare
7 where c.nrcont = (select r.nrcont
8 from carti_de_credit r
9 where r.nrcont = c.nrcont
10 and r.nr_card = :new.nrcard
11 )
12 and c.sold >= :new.valoare;
13 end;
14 /
Trigger created.
SQL>
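To see it fire, here is a quick smoke test (the sample rows are invented for this illustration):

SQL> insert into conturi (nrcont, sold) values (1, 100);

1 row created.

SQL> insert into carti_de_credit (nrcont, nr_card) values (1, 55);

1 row created.

SQL> insert into miscari (valoare, nrcard) values (40, 55);

1 row created.

SQL> select sold from conturi;

      SOLD
----------
        60

SQL>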
How does it differ from your code? Like this:
SQL> create or replace trigger ceva_trig1
2 before insert on miscari
3 for each row
4 declare
5 new_val micari.valoare%tipe := new.valoare;
6 begin
7 update conturi
8 set sold = sold - new_val
9 where nrcont = (select nrcont
10 from conturi
11 join carti_de_credit on conturi.nrcont = carti_de_credit.nrcont
12 join miscari on carti_de_credit.nr_card = miscari.nrcard)
13 and sold >= new_val;
14 end;
15 /
Warning: Trigger created with compilation errors.
SQL> show err
Errors for TRIGGER CEVA_TRIG1:
LINE/COL ERROR
-------- -----------------------------------------------------------------
2/11 PL/SQL: Item ignored
2/26 PLS-00208: identifier 'TIPE' is not a legal cursor attribute
4/3 PL/SQL: SQL Statement ignored
10/15 PL/SQL: ORA-00904: "NEW_VAL": invalid identifier
10/15 PLS-00320: the declaration of the type of this expression is incomplete or malformed
SQL>
Explained:
it isn't tipe but type
new column values are referenced with a colon, i.e. :new.valoare
you shouldn't make typos regarding table & column names; it is miscari, not micari
it is bad practice to write a query which references the same table (miscari, line #12) the trigger is created for; as that table is being changed, you can't select values from it while it is mutating
lucky you, you don't have to do that at all. How? Have a look at my code.
Attempting to maintain an ongoing balance for transactions in one table in another table is always a bad idea. Admittedly, in a very few cases it's necessary, but it should be the design of last resort, not an initial one; even when necessary it's still a bad idea and therefore requires much more processing and complexity.
In this instance, after you correct all the errors @Littlefoot points out, your real problems begin. What do you do (using Littlefoot's table definitions) when:
I delete a row from miscari?
I update a row in miscari?
The subselect for nrcont returns 0 rows?
The condition sold >= new_val is False?
If any of these conditions occurs, the value for sold in conturi is incorrect, and it may not be correctable from values in the source table, miscari. And that list may be just the beginning of the issues you face.
Suggestion: abandon the idea of keeping a running account of transaction values. Instead, derive it when needed. You can create a view that does that and select from the view.
So maybe, instead of "create table conturi ...", derive the current balance in a view.
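A minimal sketch of that idea, assuming sold in conturi holds the opening balance and valoare in miscari is the movement amount (the view name is invented):

create or replace view v_sold_curent as
select c.nrcont,
       c.sold - nvl(sum(m.valoare), 0) as sold_curent
from conturi c
left join carti_de_credit r on r.nrcont = c.nrcont
left join miscari m on m.nrcard = r.nr_card
group by c.nrcont, c.sold;

Selecting from the view always reflects the rows currently in miscari, so there is nothing to keep in sync and nothing to get out of step.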

insert a String into timestamp(6) column

I have timestamps looking like this: 2019-06-13 13:22:30.521000000
I am using Spark/Scala scripts to insert them into an Oracle table. Column in Oracle is Timestamp(6) and should stay like that.
This is what I do:
what I have in Spark is a df containing a column with my timestamps:
+-----------------------------+
| time |
+-----------------------------+
|2019-06-13 13:22:30.521000000|
+-----------------------------+
I do the following:
df.withColumn("time", (unix_timestamp(substring(col("time"), 1, 23), "yyyy-MM-dd HH:mm:ss.SSS") + substring(col("time"), -6, 6).cast("float") / 1000000).cast(TimestampType))
and I insert it using a connection to Oracle (the insert script was tested and works fine).
But in Oracle I only see the following in my table:
+--------------------------+
| time |
+--------------------------+
|2019-06-13 13:22:30.000000|
+--------------------------+
The milliseconds aren't included. Any help please? Thank you!
If your time column is a timestamp type, you can try date_format:
https://sparkbyexamples.com/spark/spark-sql-how-to-convert-date-to-string-format/
Thanks to everyone who tried to help me.
This is what I did to get the desired output:
df.withColumn("time", (unix_timestamp(substring(col("time"), 1, 23), "yyyy-MM-dd HH:mm:ss.SSS") + substring(col("time"), -9, 9).cast("float") / 1000000000).cast(TimestampType))
All other solutions kept returning null or timestamps without milliseconds.
Hope it helps someone.
I don't know the tools you use, but if it were only Oracle, then to_timestamp with an appropriate format mask does the job. See if it helps.
SQL> create table test (col timestamp(6));
Table created.
SQL> insert into test (col) values
2 (to_timestamp('2019-06-13 13:22:30.521000000', 'yyyy-mm-dd hh24:mi:ss.ff'));
1 row created.
SQL> select * From test;
COL
---------------------------------------------------------------------------
13.06.19 13:22:30,521000
SQL>
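That display comes from my NLS settings; to confirm the fractional seconds really were stored, use an explicit format mask (FF6 shows six digits):

SQL> select to_char(col, 'yyyy-mm-dd hh24:mi:ss.ff6') from test;

TO_CHAR(COL,'YYYY-MM-DDHH24:MI:SS.FF6')
---------------------------------------
2019-06-13 13:22:30.521000

SQL>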
[EDIT, as you can't read my mind (at least, I hope so)]
As you (AbderrahmenM) said that you have a string but still want to insert a timestamp, perhaps you could use a stored procedure. Here's an example:
SQL> create or replace procedure p_test (par_time in varchar2)
2 is
3 begin
4 insert into test (col) values
5 (to_timestamp(par_time, 'yyyy-mm-dd hh24:mi:ss.ff'));
6 end;
7 /
Procedure created.
SQL> exec p_test('2019-06-13 13:22:30.521000000');
PL/SQL procedure successfully completed.
SQL> select * from test;
COL
-------------------------------------------------------------------
13.06.19 13:22:30,521000
SQL>
Now, the only thing I can't help with is how to call a procedure from Spark. If you know how, then simply pass that string you have and it should be properly inserted into the database; pay attention to the correct format mask!

Oracle vs HANA char data type handling

We have Oracle as the source and HANA 1.0 SPS12 as the target. We are mirroring Oracle to HANA with Informatica CDC through real-time replication. In Oracle, many columns have the CHAR datatype, i.e. fixed length. As HANA officially doesn't support the CHAR datatype, we are using NVARCHAR instead. The problem we are facing: since Oracle's CHAR datatype is fixed length and appends spaces whenever the actual string is shorter than the declared length, we have a lot of extra spaces in the target HANA db for such columns.
For eg. If column col1 has data type
CHAR(5)
and a value of 'A', it is replicated in HANA as 'A    ', i.e. 'A' followed by four extra spaces, causing a lot of problems in queries and data interpretation.
Is it possible to implement CHAR like datatype in HANA?
You can use the RPAD function in Informatica while transferring data to HANA. Just check whether HANA trims the trailing spaces automatically.
So, for the CHAR(5) source column you should use:
out_Column = RPAD(input_Column, 5)
Pretty much exactly as the documentation says.
I don't know HANA and this is more a comment than an answer, but I chose to put it here as there's some code I'd like you to see.
Here's a table whose column is of a CHAR datatype:
SQL> create table test (col char(10));
Table created.
SQL> insert into test values ('abc');
1 row created.
Column's length is 10 (which you already know):
SQL> select length(col) from test;
LENGTH(COL)
-----------
10
But, if you TRIM it, you get a better result, the one you're looking for:
SQL> select length( TRIM (col)) from test;
LENGTH(TRIM(COL))
-----------------
3
SQL>
So: if you can persuade the mirroring process to apply the TRIM function to those columns, you might get what you want.
[EDIT, after seeing Lars' comment and re-reading the question]
Right; the problem seems to be just the opposite of what I initially understood. If that's the point, maybe RPAD would help. Here's an example:
SQL> create table test (col varchar2(10));
Table created.
SQL> insert into test values ('abc');
1 row created.
SQL> select length(col) from test;
LENGTH(COL)
-----------
3
SQL> insert into test values (rpad('def', 10, ' '));
1 row created.
SQL> select col, length(col) len from test;
COL LEN
---------- ----------
abc 3
def 10
SQL>

Type body created with compilation error

The customer_ty object I have created has a nested table, built as follows:
CREATE TYPE deposit_ty as object(
depNo number,
depCategory ref depcategory_ty,
amount number,
period number
)
/
CREATE TYPE deposit_tbl as table of deposit_ty
/
CREATE TYPE customer_ty as object(
custId varchar2(4),
custName varchar2(10),
address address_ty,
dob date,
deposits deposit_tbl
)
/
I have written code to compute the total amount deposited by each client. Here it is:
alter type customer_ty
add member function totDeposits return number cascade
/
create or replace type body customer_ty as
member function totDeposits
return number is
total number;
BEGIN
select sum(d.amount) into total
from table(self.deposits) d;
group by self.custId,self.custName
return total;
END totDeposits;
END;
/
But I get a warning saying "type body created with compilation errors". What can I do to get rid of this?
If you do show errors immediately after you get the 'created with compilation errors' message, you'll see something like:
LINE/COL ERROR
-------- ------------------------------------------------------------------------------
8/5 PLS-00103: Encountered the symbol "GROUP" when expecting one of the following:
You can also query the user_errors or all_errors view to see the outstanding errors against any PL/SQL objects.
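For example (note the type filter is 'TYPE BODY' rather than 'TYPE', which is easy to miss):

select line, position, text
from user_errors
where name = 'CUSTOMER_TY'
and type = 'TYPE BODY'
order by sequence;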
In this case it's a simple typo; you have the semicolon in the wrong place; instead of:
select sum(d.amount) into total
from table(self.deposits) d;
group by self.custId,self.custName
it should be:
select sum(d.amount) into total
from table(self.deposits) d
group by self.custId,self.custName;
In your version, the ... d; terminates that SQL statement, which would now be invalid because the truncated statement doesn't have a group by clause; but it doesn't get as far as complaining about that, because it sees the group by ... as a separate statement, and that isn't the start of anything PL/SQL recognises as a query, statement, control loop etc., so it gives up.
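Once the body compiles cleanly, the member function can be called from SQL; for a hypothetical object table of customer_ty (with the nested-table storage clause the deposits attribute requires), it would look like:

create table customers of customer_ty
  nested table deposits store as deposits_nt;

select c.custId, c.custName, c.totDeposits() as total_deposits
from customers c;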

Trying to export Oracle data via PL/SQL gives a date of 0000-00-00

I have inherited an Oracle .dmp file which I'm trying to get into CSV so that I can load it into MySQL.
The general approach I'm using is described here. I'm having a problem with one row though. It contains a date of 5544-09-14 like so:
alter session set nls_date_format = 'dd-MON-yyyy';
select OID, REF, TRADING_DATE From LOAN WHERE REF = 'XXXX';
OID REF TRADING_DATE
--- -------------------- ------------
1523 XXXX 14-SEP-5544
This is garbage data from the legacy system which didn't validate the input dates. I'm wondering why my PL/SQL function to export the data chokes on this value though?
It exports that row with a TRADING_DATE value of '0000-00-00T00:00:00' and I'm not sure why?
SELECT dump(TRADING_DATE) FROM LOAN WHERE REF = 'XXXX';
DUMP(TRADING_DATE)
--------------------------------------------------------------------------------
Typ=12 Len=7: 44,156,9,14,1,1,1
and
SELECT to_char(trading_date, 'YYYYMMDDHH24MISS') FROM LOAN WHERE REF = 'XXXX';
TO_CHAR(TRADIN
--------------
00000000000000
The value stored in that column is not a valid date. The first byte of the dump should be the century, which according to Oracle support note 69028.1 is stored in 'excess-100' notation, which means it should have a value of 100 + the actual century; so 1900 would be 119, 2000 would be 120, and 5500 would be 155. So 44 would represent -5600; the date you have stored appears to actually represent 5544-09-14 BC. As Oracle only supports dates with years between -4713 and +9999, this isn't recognised.
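You can see the excess-100 scheme with any valid date; for example, 2019-06-13 stores the century as 120 (100 + 20) and the year as 119 (100 + 19):

select dump(date '2019-06-13') from dual;

DUMP(DATE'2019-06-13')
--------------------------------------------------------------------------------
Typ=12 Len=7: 120,119,6,13,1,1,1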
You can recreate this fairly easily; the trickiest bit is getting the invalid date into the database in the first place:
create table t42(dt date);
Table created.
declare
d date;
begin
dbms_stats.convert_raw_value('2c9c090e010101', d);
insert into t42 (dt) values (d);
end;
/
PL/SQL procedure successfully completed.
select dump(dt), dump(dt, 1016) from t42;
DUMP(DT)
--------------------------------------------------------------------------------
DUMP(DT,1016)
--------------------------------------------------------------------------------
Typ=12 Len=7: 44,156,9,14,1,1,1
Typ=12 Len=7: 2c,9c,9,e,1,1,1
So this has a single row with the same data you do. Using alter session I can see what looks like a valid date:
alter session set nls_date_format = 'DD-Mon-YYYY';
select dt from t42;
DT
-----------
14-Sep-5544
alter session set nls_date_format = 'YYYYMMDDHH24MISS';
select dt from t42;
DT
--------------
55440914000000
But if I use an explicit date mask, I just get zeros:
select to_char(dt, 'DD-Mon-YYYY'), to_char(dt, 'YYYYMMDDHH24MISS') from t42;
TO_CHAR(DT,'DD-MON-Y TO_CHAR(DT,'YY
-------------------- --------------
00-000-0000 00000000000000
And if I run your procedure:
exec dump_table_to_csv('T42');
The resultant CSV has:
"DT"
"0000-00-00T00:00:00"
I think the difference is that those that attempt to show the date are sticking with internal date data type 12, while those that show zeros are using external data type 13, as mentioned in note 69028.1.
So in short, your procedure isn't doing anything wrong; the date it's trying to export is invalid internally. Unless you know what date it was supposed to be, which seems unlikely given your starting point, I don't think there's much you can do about it other than guess or ignore it. Unless, perhaps, you know how the data was inserted and can work out how it got corrupted.
I think it's more likely to be from an OCI program than what I did here; this 'raw' trick was originally from here. You might also want to look at note 331831.1. And this previous question is somewhat related.
