Oracle: Update multiple columns with the same value

Let's suppose I created one table:
Create table t1 (aa varchar2(5),bb varchar2(5),cc varchar2(5));
Then I inserted values into it:
insert into T1 values ('a','b','c');
commit;
Now, in one scenario, if I want to update all columns with the same value, I do it this way:
UPDATE T1 SET AA='x',BB='x',CC='x';
Is there another way to accomplish this update, considering that in a real system there may be quite a large number of columns, all of which have to be updated with the same value in one go?
I am using Oracle 11.2.0.1.0 - 64bit Production
Note: Admittedly there are very few scenarios where the same value is updated into every column. But, for example, consider a school database where a good student scores 10/10 marks in all subjects. :-)
Thanks.

There is no way to do it in pure SQL. You must list all the columns explicitly in the UPDATE statement.
And, believe me, it is not a difficult task using a good text editor. Using the metadata you can get the list of column names in a few seconds; all you need to do is prepare the SQL statement according to the syntax.
If you really want to do it dynamically, then you need to do it in PL/SQL and (ab)use EXECUTE IMMEDIATE. I would personally not suggest it unless you are just doing it for learning purposes.
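If you do go down that road, a minimal sketch might look like this (illustrative only; it assumes the T1 table from the question, the literal value 'x', and 11.2's LISTAGG):
DECLARE
  v_set VARCHAR2(4000);
BEGIN
  -- build "COL1 = 'x', COL2 = 'x', ..." from the data dictionary
  SELECT LISTAGG(column_name || ' = ''x''', ', ')
           WITHIN GROUP (ORDER BY column_id)
  INTO   v_set
  FROM   user_tab_columns
  WHERE  table_name = 'T1';

  EXECUTE IMMEDIATE 'UPDATE t1 SET ' || v_set;
END;
/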

You could try this:
UPDATE T1 SET AA='x', BB=AA, CC=AA;
Be careful, though: the right-hand side of each SET expression sees the values from before the update, so BB and CC would end up with the old value of AA ('a'), not 'x'.

Related

Oracle: Monitoring changes in v_$parameter

Long time user, first time "asker".
I am attempting to construct an Oracle procedure and/or trigger that will compare two tables with the MINUS operation and then insert any resulting rows into another table. I understand how to do the query in standard SQL, but I am having trouble coming up with an efficient way to do this using PL/SQL.
Admittedly, I am very new to Oracle and pretty green with SQL in general. This may be a silly way to go about accomplishing my goal, so allow me to explain what I am attempting to do.
I need to create some sort of alert that will be triggered when the V_$PARAMETER view is changed. Apparently triggers cannot respond to changes to a view but, instead, can only replace actions on views... which I do not wish to do. So what I did was create a table to mirror that view, essentially saving it as a "snapshot".
create table mirror_v_$parameter as select * from v_$parameter;
Then I attempted to make a procedure that would MINUS these two so that, whenever a change is made to v_$parameter, it returns the difference from the snapshot, mirror_v_$parameter. I am trying to create a cursor with the command:
select * from v_$parameter minus select * from mirror_v_$parameter;
to be used inside a procedure, so that it could fetch any returned rows and insert them into another table called alerts_v_$parameter. The intent is that, when something is added to the "alert" table, a trigger can be used to somehow (I haven't gotten this far yet) notify my team that there has been a change to the v_$parameter table, and that they can refer to alerts_v_$parameter to see what has been changed. I would use some kind of script to run this procedure at a regular interval. And maybe, some day down the line when I understand all this better, I can manipulate what goes into the alerts_v_$parameter table so that it provides better information, such as specifically which column was changed, what its previous value was, etc.
Any advice or pointers?
Thank you for taking the time to read this. Any thoughts will be very appreciated.
I would create a table based on the exact structure of v_$parameter with an additional timestamp column for "last_update", and periodically (via DBMS_Scheduler) merge into it any changes from the real v_$parameter table and capture the timestamp of any detected change.
You might also populate a history table at the same time, either using triggers on update of your table or with SQL.
PL/SQL is unlikely to be required, except as a procedural wrapper to the SQL code.
Examples of Merge are in the documentation here: http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_9016.htm#SQLRF01606
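For illustration, a minimal sketch of that approach (table and column names here are assumptions, not tested against your environment):
-- one-off snapshot table with a change timestamp
CREATE TABLE parameter_snapshot AS
  SELECT name, value, CAST(NULL AS TIMESTAMP) AS last_update
  FROM   v$parameter;

-- run periodically, e.g. from a DBMS_SCHEDULER job
MERGE INTO parameter_snapshot s
USING (SELECT name, value FROM v$parameter) p
ON    (s.name = p.name)
WHEN MATCHED THEN
  UPDATE SET s.value = p.value,
             s.last_update = SYSTIMESTAMP
  WHERE  DECODE(s.value, p.value, 0, 1) = 1   -- only rows whose value actually changed
WHEN NOT MATCHED THEN
  INSERT (name, value, last_update)
  VALUES (p.name, p.value, SYSTIMESTAMP);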

Oracle: difference between max(id)+1 and sequence.nextval

I am using Oracle.
What is the difference between creating an ID using max(id)+1 and using sequence.nextval? Where should each be used, and when?
Like:
insert into student (id, name) select max(id) + 1, 'abc' from student;
and
insert into student (id,name) values (SQ_STUDENT.nextval, 'abc');
SQ_STUDENT.nextval sometimes gives an error that the record is a duplicate...
Please help me understand this.
With the select max(id) + 1 approach, two sessions inserting simultaneously will see the same current max ID from the table, and both insert the same new ID value. The only way to use this safely is to lock the table before starting the transaction, which is painful and serialises the transactions. (And as Stijn points out, values can be reused if the highest record is deleted). Basically, never use this approach. (There may very occasionally be a compelling reason to do so, but I'm not sure I've ever seen one).
The sequence guarantees that the two sessions will get different values, and no serialisation is needed. It will perform better and be safer, easier to code and easier to maintain.
The only way you can get duplicate errors using the sequence is if records already exist in the table with IDs above the sequence value, or if something is still inserting records without using the sequence. So if you had an existing table with manually entered IDs, say 1 to 10, and you created a sequence with a default start-with value of 1, the first insert using the sequence would try to insert an ID of 1 - which already exists. After trying that 10 times the sequence would give you 11, which would work. If you then used the max-ID approach to do the next insert that would use 12, but the sequence would still be on 11 and would also give you 12 next time you called nextval.
The sequence and table are not related. The sequence is not automatically updated if a manually-generated ID value is inserted into the table, so the two approaches don't mix. (Among other things, the same sequence can be used to generate IDs for multiple tables, as mentioned in the docs).
If you're changing from a manual approach to a sequence approach, you need to make sure the sequence is created with a start-with value that is higher than all existing IDs in the table, and that everything that does an insert uses the sequence only in the future.
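For example (purely illustrative, using the student table from the question):
-- find the highest existing ID
SELECT NVL(MAX(id), 0) + 1 AS start_with FROM student;

-- if the query above returned 11, create the sequence above all existing IDs
CREATE SEQUENCE sq_student START WITH 11;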
Using a sequence works if you intend to have multiple users. Using a max does not.
If you do a max(id) + 1 and you allow multiple users, then multiple sessions that are both operating at the same time will regularly see the same max and, thus, will generate the same new key. Assuming you've configured your constraints correctly, that will generate an error that you'll have to handle. You'll handle it by retrying the INSERT which may fail again and again if other sessions block you before your session retries but that's a lot of extra code for every INSERT operation.
It will also serialize your code. If I insert a new row in my session and go off to lunch before I remember to commit (or my client application crashes before I can commit), every other user will be prevented from inserting a new row until I get back and commit or the DBA kills my session, forcing a reboot.
To add to the other answers, a couple of issues.
Your max(id)+1 syntax will also fail if there are no rows in the table already, so use:
Coalesce(Max(id),0) + 1
There's nothing wrong with this technique if you only have a single process that inserts into the table, as might be the case with a data warehouse load, and if max(id) is fast (which it probably is).
It also avoids the need for code to synchronise values between tables and sequences if you are moving or restoring data to a test system, for example.
You can extend this method to multi-row inserts by using:
Coalesce(max(id),0) + rownum
I expect that might serialise a parallel insert, though.
Some loading techniques don't work well with these methods; they rely, of course, on being able to issue the SELECT statement, so SQL*Loader might be ruled out. However, SQL*Loader does support this approach in general through the SEQUENCE parameter of the column specification: http://docs.oracle.com/cd/E11882_01/server.112/e22490/ldr_field_list.htm#i1008234
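As a sketch, the multi-row form might look like this (new_students is an assumed staging table, not something from the question):
INSERT INTO student (id, name)
SELECT (SELECT COALESCE(MAX(id), 0) FROM student) + ROWNUM,  -- MAX read once, ROWNUM per row
       s.name
FROM   new_students s;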
Assuming MAX(ID) is actually fast enough, wouldn't it be possible to:
First get MAX(ID)+1
Then get NEXTVAL
Compare those two and increase the sequence in case NEXTVAL is smaller than MAX(ID)+1
Use NEXTVAL in INSERT statement
In that case I would have a fully stable procedure, and manual inserts would also be allowed without worrying about updating the sequence.
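A rough PL/SQL rendering of those steps (table and sequence names from the question; note it is still not safe against another session inserting between the check and the INSERT):
DECLARE
  v_max  NUMBER;
  v_next NUMBER;
BEGIN
  SELECT NVL(MAX(id), 0) + 1 INTO v_max FROM student;
  LOOP
    SELECT sq_student.NEXTVAL INTO v_next FROM dual;
    EXIT WHEN v_next >= v_max;   -- advance the sequence until it passes MAX(id)
  END LOOP;
  INSERT INTO student (id, name) VALUES (v_next, 'abc');
END;
/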

trigger insert and update oracle error

Friends, I have a question about a cascading trigger.
I have 2 tables: table data, which has 3 attributes (id_data, sum, and id_tool), and table tool, which has 3 attributes (id_tool, name, sum_total). Tables data and tool are joined using id_tool.
I want to create a trigger to update sum_total. So, when I insert into table data, sum_total in table tool where tool.id_tool = data.id_tool should be updated too.
I created this trigger, but I get error ORA-04090.
create or replace trigger aft_ins_tool
after insert on data
for each row
declare
  v_stok number;
  v_jum  number;
begin
  select sum into v_jum
  from   data
  where  id_data = :new.id_data;

  select sum_total into v_stok
  from   tool
  where  id_tool = (select id_tool
                    from   data
                    where  id_data = :new.id_data);

  if inserting then
    v_stok := v_stok + v_jum;

    update tool
    set    sum_total = v_stok
    where  id_tool = (select id_tool
                      from   data
                      where  id_data = :new.id_data);
  end if;
end;
/
Please give me your opinion.
Thanks.
The ora-04090 indicates that you already have an AFTER INSERT ... FOR EACH ROW trigger on that table. Oracle doesn't like that, because the order in which the triggers fire is unpredictable, which may lead to unpredictable results, and Oracle really doesn't like those.
So, your first step is to merge the two sets of code into a single trigger. Then the real fun begins.
Presumably there is only one row in data matching the current value of id_data (if not, your data model is really messed up and there's no hope for your situation). Anyway, that means the current row already gives you access to the values of :new.sum and :new.id_tool. So you don't need those queries on the data table: removing those selects will remove the possibility of "mutating table" errors.
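Stripped down along those lines, the trigger could be as simple as this (a sketch only, using the names from the question, and ignoring whatever your other existing trigger does, which you would need to fold in):
create or replace trigger aft_ins_tool
after insert on data
for each row
begin
  -- :new.sum and :new.id_tool come straight from the row being inserted,
  -- so there is no need to query the data table at all
  update tool
  set    sum_total = sum_total + :new.sum
  where  id_tool   = :new.id_tool;
end;
/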
As a general observation, maintaining aggregate or summary tables like this is generally a bad idea. Usually it is better just to query the information when it is needed. If you really have huge volumes of data then you should use a materialized view to maintain the summary, rather than hand-rolling something.

Oracle Populate backup table from primary table

The program that I am currently assigned to has a requirement that I copy the contents of a table to a backup table, prior to the real processing.
During code review, a coworker pointed out that
INSERT INTO BACKUP_TABLE
SELECT *
FROM PRIMARY_TABLE
is unduly risky, as it is possible for the tables to have different columns, and different column orders.
I am also under the constraint to not create/delete/rename tables. ~Sigh~
The columns in the table are expected to change, so simply hard-coding the column names is not really the solution I am looking for.
I am looking for ideas on a reasonable non-risky way to get this job done.
Does the backup table stay around? Does it keep the data permanently, or is it just a copy of the current values?
Too bad about not being able to create/delete/rename/copy. Otherwise, if it's short term, just used in case something goes wrong, then you could drop it at the start of processing and do something like
create table backup_table as select * from primary_table;
Your best option may be to make the select explicit, as
insert into backup_table (<list of columns>) select <list of columns> from primary_table;
You could generate that by building a SQL string from the data dictionary, then doing execute immediate. But you'll still be at risk if the backup_table doesn't contain all the important columns from the primary_table.
Might just want to make it explicit, and raise a major error if backup_table doesn't exist, or any of the columns in primary_table aren't in backup_table.
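A rough sketch of that dictionary-driven generation, restricted to the columns the two tables have in common (illustrative only, using 11g's LISTAGG; not production code):
DECLARE
  v_cols VARCHAR2(4000);
BEGIN
  -- columns of PRIMARY_TABLE that also exist in BACKUP_TABLE
  SELECT LISTAGG(column_name, ', ') WITHIN GROUP (ORDER BY column_id)
  INTO   v_cols
  FROM   user_tab_columns
  WHERE  table_name = 'PRIMARY_TABLE'
  AND    column_name IN (SELECT column_name
                         FROM   user_tab_columns
                         WHERE  table_name = 'BACKUP_TABLE');

  EXECUTE IMMEDIATE 'INSERT INTO backup_table (' || v_cols || ') '
                 || 'SELECT ' || v_cols || ' FROM primary_table';
END;
/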
How often do you change the structure of your tables? Your method should work just fine provided the structure doesn't change. Personally I think your DBAs should give you a mechanism for dropping the backup table and recreating it, such as a stored procedure. We had something similar at my last job for truncating certain tables, since truncating is frequently much faster than DELETE FROM TABLE;.
Is there a reason that you can't just list out the columns in the tables? So
INSERT INTO backup_table( col1, col2, col3, ... colN )
SELECT col1, col2, col3, ..., colN
FROM primary_table
Of course, this requires that you revisit the code when you change the definition of one of the tables to determine if you need to make code changes, but that's generally a small price to pay for insulating yourself from differences in column order, differences in column names, and irrelevant differences in table definitions.
If I had this situation, I would retrieve the column definitions for the two tables right at the beginning of the problem. Then, if they were identical, I would proceed with the simple:
INSERT INTO BACKUP_TABLE
SELECT *
FROM PRIMARY_TABLE
If they were different, I would only proceed if there were no critical columns missing from the backup table. In this case I would use this form for the backup copy:
INSERT INTO BACKUP_TABLE (<list of columns>)
SELECT <list of columns>
FROM PRIMARY_TABLE
But I'd also worry about what would happen if I simply stopped the program with an error, so I might even have a backup plan where I would use the second form for the columns that are in both tables, and also dump a text file with the PK and any columns that are missing from the backup. Also log an error even though it appears that the program completed normally. That way, you could recover the data if the worst happened.
Really, this is a symptom of bad processes somewhere which should be addressed, but defensive programming can help to make it someone else's problem, not yours. If they don't notice the log error message which tells them about the text dump with the missing columns, then it's not your fault.
But, if you don't code defensively, and the worst happens, it will be partly your fault.
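The column comparison itself is easy to do from the data dictionary; for example (illustrative):
SELECT column_name
FROM   user_tab_columns
WHERE  table_name = 'PRIMARY_TABLE'
MINUS
SELECT column_name
FROM   user_tab_columns
WHERE  table_name = 'BACKUP_TABLE';
Any rows returned are columns of the primary table that the backup table cannot hold.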
You could try something like:
CREATE TABLE secondary_table AS SELECT * FROM primary_table;
Not sure if that automatically copies data. If not (Oracle has no LIMIT clause, so use a false predicate to copy just the structure):
CREATE TABLE secondary_table AS SELECT * FROM primary_table WHERE 1 = 0;
INSERT INTO secondary_table SELECT * FROM primary_table;
Edit:
Sorry, didn't read your post completely: especially the constraints part. I'm afraid I don't know how. My guess would be using a procedure that first describes both tables and compares them, before creating a lengthy insert / select query.
Still, if you're using a backup-table, I think it's pretty important it matches the original one exactly.

Manually inserting data in a table(s) with primary key populated with sequence

I have a number of tables that use a trigger/sequence combination to simulate auto_increment on their primary keys, which has worked great for some time.
In order to reduce the time needed to perform regression testing against software that uses the db, I created control files with some sample data and added loading them to the build process.
This change is causing most of the tests to crash, though, as the testing process installs the schema from scratch and the sequences return values that already exist in the tables. Is there any way to programmatically say "update each sequence to the max value in its column", or do I need to write out a whole script by hand that updates all these sequences? Or can/should I change the trigger that substitutes the null value with the sequence to somehow check this (though I think this might cause the mutating table problem)?
You can generate a script to create the sequences with the start values you need (based on their existing values)....
SELECT 'CREATE SEQUENCE ' || sequence_name || ' START WITH ' || last_number || ';'
FROM   all_sequences
WHERE  sequence_owner = 'YOUR_SCHEMA';
(If I understand the question correctly)
Here's a simple way to update a sequence value - in this case setting the sequence to 1000 if it is currently 50:
alter sequence MYSEQUENCE increment by 950 nocache;
select MYSEQUENCE.nextval from dual;
alter sequence MYSEQUENCE increment by 1;
Kudos to the creators of PL/SQL Developer for including this technique in their tool.
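To tie that back to the original question, here is a sketch that syncs one sequence to its table's current maximum using the same trick (treat MYSEQUENCE, mytable, and the id column as placeholder names):
DECLARE
  v_max  NUMBER;
  v_next NUMBER;
BEGIN
  SELECT NVL(MAX(id), 0) INTO v_max FROM mytable;
  SELECT mysequence.NEXTVAL INTO v_next FROM dual;

  IF v_max > v_next THEN
    EXECUTE IMMEDIATE 'ALTER SEQUENCE mysequence INCREMENT BY '
                      || (v_max - v_next) || ' NOCACHE';
    SELECT mysequence.NEXTVAL INTO v_next FROM dual;   -- consume the jump
    EXECUTE IMMEDIATE 'ALTER SEQUENCE mysequence INCREMENT BY 1';
  END IF;
END;
/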
As part of your schema rebuild, why not drop and recreate the sequence?
