I have a function that returns a SELECT statement with a variable number of columns (from 2 to 31). I then need to insert rows into a table, pairing the first column with each of the others. For example, if my SELECT returns ('A', '1', '2', '3'), I need to insert ('A','1'), ('A','2') and ('A','3') into a given table. The problem is that I can't know in advance how many columns the original SELECT will have.
I tried using the SELECT statement to open a cursor but, is there any way I can find out how many columns the cursor has and then fetch them separately? Is there any other way to do this?
Thanks a lot in advance,
Ander.
You are in need of dbms_sql.describe_columns.
Thanks,
I finally managed to solve it. As Sanders says, using dbms_sql I open a cursor, parse the statement, get the column count with describe_columns, and define each column's format in a loop (luckily for me, all of them are varchar2(6)).
Finally, I use dbms_sql.column_value to get each column's value while fetching the rows.
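A minimal sketch of that approach (the target table, its columns, and the dynamic SELECT are illustrative placeholders, not from the original post):
DECLARE
  l_cursor  INTEGER := DBMS_SQL.OPEN_CURSOR;
  l_col_cnt INTEGER;
  l_desc    DBMS_SQL.DESC_TAB;
  l_key     VARCHAR2(6);
  l_value   VARCHAR2(6);
  l_status  INTEGER;
BEGIN
  DBMS_SQL.PARSE(l_cursor, 'SELECT ... FROM ...', DBMS_SQL.NATIVE);  -- the dynamic SELECT
  DBMS_SQL.DESCRIBE_COLUMNS(l_cursor, l_col_cnt, l_desc);
  FOR i IN 1 .. l_col_cnt LOOP
    DBMS_SQL.DEFINE_COLUMN(l_cursor, i, l_value, 6);  -- every column is varchar2(6) here
  END LOOP;
  l_status := DBMS_SQL.EXECUTE(l_cursor);
  WHILE DBMS_SQL.FETCH_ROWS(l_cursor) > 0 LOOP
    DBMS_SQL.COLUMN_VALUE(l_cursor, 1, l_key);        -- first column
    FOR i IN 2 .. l_col_cnt LOOP
      DBMS_SQL.COLUMN_VALUE(l_cursor, i, l_value);
      INSERT INTO target_table (key_col, val_col) VALUES (l_key, l_value);
    END LOOP;
  END LOOP;
  DBMS_SQL.CLOSE_CURSOR(l_cursor);
END;
/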
Is there a way to turn a CLOB containing a JSON object into a table?
For example, I have a CLOB containing [{"a":"1","b":"1"; "a":"2", "b":"2"; "a":"2","b":"2"}]
I want to turn this into a table so I can join it with other tables in my database.
Is there a way to do it?
Thank you!
Your JSON is definitely not well formatted. However, once that is cleaned up, you can use JSON_TABLE to achieve your goals:
WITH test_data (json) AS
(
  SELECT '{"rows":[{"a":"1","b":"1"},{"a":"2", "b":"2"},{"a":"2","b":"2"}]}' FROM DUAL
)
SELECT jt.*
FROM   test_data td,
       JSON_TABLE(td.json,
                  '$.rows[*]'
                  COLUMNS (row_number FOR ORDINALITY,
                           a INTEGER PATH '$.a',
                           b INTEGER PATH '$.b')) jt
Produces the following results:
ROW_NUMBER  A  B
----------  -  -
         1  1  1
         2  2  2
         3  2  2
Here is a DBFiddle showing how this works (Link)
See if this can help.
PLSQL looping through JSON object
It answers more or less what you are asking, although I'm not sure whether it can handle not knowing the column names in advance, i.e. figuring them out and creating a table from them.
Otherwise you could probably do some REGEXP parsing to figure out the distinct column names first, then either go through the document with the json package, or just loop through it manually, as sketched below.
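A minimal sketch of the REGEXP idea, assuming well-formed JSON and the 6-argument REGEXP_SUBSTR available from 11g onward (the sample input and all names are illustrative):
DECLARE
  j    CLOB := '[{"a":"1","b":"1"},{"a":"2","b":"2"}]';  -- sample, cleaned-up JSON
  TYPE t_seen IS TABLE OF BOOLEAN INDEX BY VARCHAR2(100);
  seen t_seen;
  key  VARCHAR2(100);
  i    PLS_INTEGER := 1;
BEGIN
  LOOP
    -- capture group 1 of the i-th occurrence of a "key": pattern
    key := REGEXP_SUBSTR(j, '"([^"]+)"\s*:', 1, i, NULL, 1);
    EXIT WHEN key IS NULL;
    IF NOT seen.EXISTS(key) THEN
      seen(key) := TRUE;
      DBMS_OUTPUT.PUT_LINE(key);  -- each distinct column name, once
    END IF;
    i := i + 1;
  END LOOP;
END;
/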
I read somewhere that 99% of the time you don't need to use a cursor.
But I can't think of any other way beside using a cursor in this following situation.
Select t.flag
From Dual t;
Let's say this returns 4 rows of either 'Y' or 'N'. I want the procedure to trigger something if it finds a 'Y'. I usually declare a cursor and loop until %NOTFOUND. Please tell me if there is a better way.
Also, if you have any idea, when is the best time to use a cursor?
EDIT: Instead of inserting the flags, what if I want to do "If 'Y' then trigger something"?
Your case definitely falls into the 99%.
You can easily do the conditional insert using insert into ... select .... It's just a matter of making a select that returns the result you want to insert.
If you want to insert one record for each 'Y', use a query with where flag = 'Y'. If you only want to insert a single record when there is at least one 'Y', you can add distinct to the query.
A cursor is useful when you do something more complicated. I, for example, use a cursor when I need to insert or update records in one table and, for each record, insert or update one or more records in several other tables.
Something like this:
INSERT INTO TBL_FLAG (col)
SELECT ID FROM Dual where flag = 'Y'
You will usually see a performance gain when using set-based instead of procedural operations, because most modern DBMSs are set up to perform set-based operations. You can read more here.
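For the edited question ("if 'Y' then trigger something"), you still don't need an explicit cursor. A minimal sketch, with illustrative table and procedure names:
DECLARE
  l_found INTEGER;
BEGIN
  SELECT COUNT(*) INTO l_found
  FROM   some_table t
  WHERE  t.flag = 'Y'
  AND    ROWNUM = 1;   -- stop after the first 'Y'
  IF l_found > 0 THEN
    do_something;      -- hypothetical procedure to trigger
  END IF;
END;
/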
Well, the example doesn't quite make sense, but you can always write an INSERT ... SELECT statement instead of what I think you are describing.
Cursors are best used when a column value from one table will be used repeatedly in multiple queries on different tables.
Suppose the values of the id_test column are fetched from MY_TEST_TBL using a cursor CUR_TEST, and id_test is referenced as a foreign key elsewhere. If we want to use id_test to insert or update rows in tables A_TBL, B_TBL and C_TBL, then it is best to use a cursor instead of repeating complex queries, as sketched below.
Hope this helps to understand the purpose of cursors.
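A minimal sketch of that pattern, reusing the names above (the column lists and the last_id column are illustrative):
BEGIN
  FOR rec IN (SELECT id_test FROM my_test_tbl) LOOP  -- CUR_TEST as an implicit cursor
    -- reuse the fetched value across several tables
    INSERT INTO a_tbl (id_test) VALUES (rec.id_test);
    INSERT INTO b_tbl (id_test) VALUES (rec.id_test);
    UPDATE c_tbl SET last_id = rec.id_test WHERE last_id IS NULL;
  END LOOP;
END;
/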
We are trying to copy the current row of a table to a mirror table using a trigger before delete/update. Below is the working trigger:
CREATE OR REPLACE TRIGGER mirror_trg  -- trigger name is illustrative
BEFORE UPDATE OR DELETE
ON CurrentTable FOR EACH ROW
BEGIN
INSERT INTO MirrorTable
( EMPFIRSTNAME,
EMPLASTNAME,
CELLNO,
SALARY
)
VALUES
( :old.EMPFIRSTNAME,
:old.EMPLASTNAME,
:old.CELLNO,
:old.SALARY
);
END;
But the problem is we have more than 50 columns in the current table and we don't want to list all those column names. Is there a way to select all columns, like
:old.*
or
SELECT * INTO MirrorTable FROM CurrentTable
Any suggestions would be helpful.
Thanks,
Realistically, no. You'll need to list all the columns.
You could, of course, dynamically generate the trigger code pulling the column names from DBA_TAB_COLUMNS. But that is going to be dramatically more work than simply typing in 50 column names.
If your table happens to be an object table, :new would be an instance of that object so you could insert that. But it would be rather rare to have an object table.
If your 'current' and 'mirror' tables have EXACTLY the same structure you may be able to use something like
INSERT INTO MirrorTable
SELECT *
FROM CurrentTable
WHERE CurrentTable.primary_key_column = :old.primary_key_column
Honestly, I think that this is a poor choice and wouldn't do it (for one thing, a row-level trigger that queries its own table will generally raise an ORA-04091 mutating-table error), but it's a more-or-less free world and you're free (more or less :-) to make your own choices.
Share and enjoy.
For what it's worth, I've been writing the same stuff and used this to generate the code:
SQL> set pagesize 0
SQL> select ':old.'||COLUMN_NAME||',' from all_tab_columns where table_name='BIGTABLE' and owner='BOB' order by COLUMN_ID;
:old.COL1,
:old.COL2,
:old.COL3,
:old.COL4,
:old.COL5,
...
If you supply values for all columns, there is no need to mention the column list at all (and you may use NULL for the columns you leave empty):
INSERT INTO bigtable VALUES (
:old.COL1,
:old.COL2,
:old.COL3,
:old.COL4,
:old.COL5,
NULL,
NULL);
people writing tables with that many columns should have no desserts ;-)
I am using ODP.NET and run the PL/SQL command below to merge a table in an Oracle 10g database.
My command is as follows:
MERGE INTO TestTable t
USING (SELECT 2911 AS AR_ID FROM dual) s
ON (t.AR_ID = s.AR_ID)
WHEN MATCHED THEN
UPDATE SET t.AR_VIUAL_IMPAIRMENT = 1
WHEN NOT MATCHED THEN
INSERT (AR_S_REF)
VALUES ('abcdef');
SELECT sql%ROWCOUNT FROM dual;
The Merge command runs successfully and update/insert as I want. The problem is I want to know how many records are updated.
When I run the above statement, I get "ORA-00911: invalid character".
Please advise how I can get the affected row count back. Thanks a million.
You're mixing up a few things: a MERGE statement is a plain SQL command, while PL/SQL code is always delimited by BEGIN/END (and an optional DECLARE). Furthermore, SQL%ROWCOUNT is a PL/SQL cursor attribute that cannot occur outside of PL/SQL.
And I don't quite understand whether you ran the MERGE and the SELECT statement with two separate ODP.NET calls or a single one.
Anyway, the solution is straightforward with ODP.NET: execute the MERGE command with OracleCommand.ExecuteNonQuery(). This method returns the number of affected rows.
One thing you could do is put your code in a PL/SQL function that returns SQL%ROWCOUNT.
Then call this function from ODP.NET, setting the command type to stored procedure and using the ExecuteScalar method, which returns the row count as an object instance you can cast to an int.
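A minimal sketch of such a wrapper function (the function name is illustrative; the MERGE is the one from the question):
CREATE OR REPLACE FUNCTION merge_testtable RETURN NUMBER IS
BEGIN
  MERGE INTO TestTable t
  USING (SELECT 2911 AS AR_ID FROM dual) s
  ON (t.AR_ID = s.AR_ID)
  WHEN MATCHED THEN
    UPDATE SET t.AR_VIUAL_IMPAIRMENT = 1
  WHEN NOT MATCHED THEN
    INSERT (AR_S_REF) VALUES ('abcdef');
  RETURN SQL%ROWCOUNT;  -- rows inserted plus updated by the MERGE
END;
/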
It is not possible to return just the "updated" row count; as already mentioned, the row count is the number of affected (inserted plus updated) rows.
There is a good discussion on Ask Tom: http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:122741200346595110
I am getting comma-separated values passed to a stored procedure in Oracle. I want to treat these values as a table so that I can use them in a query like:
select * from tabl_a where column_b in (<csv values passed in>)
What is the best way to do this in 11g?
Right now we are looping through the values one by one and inserting them into a GTT (global temporary table), which I think is inefficient.
Any pointers?
This solves exactly the same problem:
Ask Tom
Oracle does not come with a built-in tokenizer, but it is possible to roll your own using SQL types and PL/SQL. I have posted a sample solution in this other SO thread.
That would enable a solution like this:
select * from tabl_a
where column_b in ( select *
from table (str_to_number_tokens (<csv values passed in>)))
/
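A minimal sketch of such a tokenizer, assuming numeric CSV input (the type and function names mirror the query above but are otherwise illustrative):
CREATE OR REPLACE TYPE number_tokens AS TABLE OF NUMBER;
/
CREATE OR REPLACE FUNCTION str_to_number_tokens (p_csv IN VARCHAR2)
  RETURN number_tokens PIPELINED
IS
  l_start PLS_INTEGER := 1;
  l_comma PLS_INTEGER;
BEGIN
  LOOP
    l_comma := INSTR(p_csv, ',', l_start);
    IF l_comma = 0 THEN
      PIPE ROW (TO_NUMBER(TRIM(SUBSTR(p_csv, l_start))));  -- last token
      EXIT;
    END IF;
    PIPE ROW (TO_NUMBER(TRIM(SUBSTR(p_csv, l_start, l_comma - l_start))));
    l_start := l_comma + 1;
  END LOOP;
  RETURN;
END;
/
For example, select * from table(str_to_number_tokens('1,2,3')) returns three rows.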
In 11g you can use the "occurrence" parameter of REGEXP_SUBSTR to select the values directly in SQL:
select regexp_substr(<csv values passed in>,'[^,]+',1,level) val
from dual
connect by level < regexp_count(<csv values passed in>,',')+2;
But since REGEXP_SUBSTR is somewhat expensive, I am not sure it is the fastest option.