If it is possible... how can I do that? I'm new to databases, please help me!
I'm using Oracle and I want a new database user to be created whenever I insert a new row into a table "users".
As JGreenwell has mentioned, yes, you can do it, but it is not a good idea. The main problem is not the clear-text password; it is the transactional side effects.
Let's look at the clear-text password problem first:
It is easy to solve (do not store the password in clear text; encrypt or hash it). If you have DBMS_CRYPTO installed and have privileges on it, then you can create a procedure (I assume you have a table USERS with two columns, USERNAME and USERPASS):
CREATE OR REPLACE PROCEDURE MYCREATEUSER( NAME VARCHAR, PASS VARCHAR )
IS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- runs in its own transaction (see below)
BEGIN
  -- CREATE USER is DDL, so it implicitly commits this autonomous transaction.
  -- Note: NAME and PASS are concatenated into DDL; validate them in real code.
  EXECUTE IMMEDIATE 'CREATE USER ' || NAME || ' IDENTIFIED BY "' || PASS || '"';
END;
and a trigger:
CREATE OR REPLACE TRIGGER TUSERS BEFORE INSERT ON USERS FOR EACH ROW
BEGIN
  -- Create the database user while we still have the clear-text password...
  MYCREATEUSER(:NEW.USERNAME, :NEW.USERPASS);
  -- ...then store only a hex SHA-1 hash (CAST_TO_RAW avoids VARCHAR2/RAW ambiguity).
  :NEW.USERPASS := RAWTOHEX(DBMS_CRYPTO.HASH(UTL_RAW.CAST_TO_RAW(:NEW.USERPASS),
                                             DBMS_CRYPTO.HASH_SH1));
END;
Your INSERT statement supplies the username and the password; the trigger creates the user, but before the row is inserted it replaces the password with its SHA-1 hash (stored as a hex string). You must remember this when you query the table.
One example:
INSERT INTO USERS VALUES('Mary', 'Mary123');
To check:
SELECT COUNT(*)
FROM USERS
WHERE USERNAME = 'Mary'
AND USERPASS = RAWTOHEX(DBMS_CRYPTO.HASH(UTL_RAW.CAST_TO_RAW('Mary123'),
                                         DBMS_CRYPTO.HASH_SH1));
If the count is 1, the user/password combination exists; if it is 0, it does not.
This example only works if you have DBMS_CRYPTO installed and have enough privileges on it, and if you have the CREATE USER privilege granted directly (remember that when a procedure executes statements, it only has explicitly granted privileges; privileges that come from roles are disabled, so holding a role is not enough: you need the explicit CREATE USER grant).
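For instance, the grants would be something along these lines (run by a DBA; APP_OWNER is a hypothetical schema that owns the USERS table and the trigger):
GRANT EXECUTE ON DBMS_CRYPTO TO app_owner;  -- needed for the hashing in the trigger
GRANT CREATE USER TO app_owner;             -- must be a direct grant, not via a role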
But, as I have mentioned, it is not a good idea, because you can be hit by transactional side effects. Let's look at the main problem:
To maintain the ACID properties (Atomicity, Consistency, Isolation and Durability), Oracle needs to ensure that each statement is seen as an atomic operation. If I execute INSERT INTO USERS SELECT * FROM OTHER_USER_TABLES, this statement must be treated as an atomic unit. The atomic unit starts when you send the statement and ends when Oracle returns the error/ok code.
But a trigger executes inside the statement (inside its temporal scope). The trigger takes part in the statement's outcome: if the trigger fails, the whole statement fails; if it succeeds, the statement can succeed. Note that the statement only ends after its triggers have finished. This means it is not possible to commit or roll back the current transaction before the statement ends (otherwise atomicity would not be ensured). So a trigger cannot COMMIT or ROLLBACK the current transaction, and it cannot call a procedure that does so either; nobody can do it before the statement ends.
DDL statements such as CREATE USER auto-commit (they force a commit), so they cannot be executed directly inside the trigger's scope.
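You can verify the restriction with a minimal sketch (the trigger compiles, but any insert then fails at run time):
CREATE OR REPLACE TRIGGER TBAD BEFORE INSERT ON USERS FOR EACH ROW
BEGIN
  COMMIT;  -- firing this raises ORA-04092: cannot COMMIT in a trigger
END;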
But we said "you can", and we have a working example. How?
The answer is the PRAGMA AUTONOMOUS_TRANSACTION.
This pragma forces Oracle to execute the affected PL/SQL block in a new, separate transaction. So, in our example, Oracle executes the MYCREATEUSER procedure in a new transaction.
Oracle does not have a nested transaction model, so the inner autonomous transaction does not depend on the outer trigger's transaction; they are both flat transactions. This means that if the procedure completes (the user is created) but the insert ultimately fails (for whatever reason), the row is not in the table, yet the new user exists.
Let's see an example:
INSERT INTO USERS VALUES ('Anne', 'An232131');
INSERT INTO USERS VALUES ('Mike', 'ABC123');
ROLLBACK;
This example inserts two rows (creating two users) and then rolls back the transaction. So the inserts are cancelled and the rows (Anne and Mike) are not in the USERS table. But the Oracle users Anne and Mike exist: the rollback does not affect the user creation, because the users were created by a different transaction that has already committed (the autonomous transaction).
This problem is not easy to solve without a nested transactional model (in a nested transactional model, an inner transaction only finally commits when its outer transaction does).
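You can see the side effect immediately after the ROLLBACK above (assuming you can read DBA_USERS):
SELECT COUNT(*) FROM USERS WHERE USERNAME IN ('Anne', 'Mike');
-- 0: the rows were rolled back
SELECT USERNAME FROM DBA_USERS WHERE USERNAME IN ('ANNE', 'MIKE');
-- ANNE and MIKE: the accounts still exist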
What would happen when this database trigger is fired?
Command (as user SYS):
SQL> CREATE OR REPLACE TRIGGER audit-table-DEPT AFTER
INSERT OR UPDATE OR DELETE ON DEPT FOR EACH ROW
declare
audit_data DEPT$audit%ROWTYPE;
begin
if inserting then audit_data.change_type := 'I';
elsif updating then audit_data.change_type :='U';
else audit_data.change_type := 'D';
end if;
audit_data.changed_by := user;
audit_data.changed_time := sysdate;
case audit_data.change_type
when 'I' then
audit_data.DEPTNO := :new.DEPTNO;
audit_data.DNAME := :new.DNAME;
audit_data.LOC := :new.LOC;
else
audit_data.DEPTNO := :old.DEPTNO;
audit_data.DNAME := :old.DNAME;
audit_data.LOC := :old.LOC;
end case;
insert into DEPT$audit values audit_data;
end;
/
How can this affect normal database operations?
What will happen if you run this command? Nothing. The trigger won't compile, because you have given it an invalid object name. (Replace those dashes with underscores.)
After that, you have a trigger which inserts an audit record for DML activity on the DEPT table. When inserting, you get a DEPT$audit record with the values of the inserted DEPT row. When deleting, you get a DEPT$audit record with the values of the deleted row. When updating, you get a DEPT$audit record with the old values of the updated row, so you can see which row changed but not what it was changed to.
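If you want update audit rows to capture the new values while still identifying the row by its key, you could give updates their own branch; a sketch (assuming DEPTNO identifies the row):
case audit_data.change_type
  when 'I' then
    audit_data.DEPTNO := :new.DEPTNO;
    audit_data.DNAME  := :new.DNAME;
    audit_data.LOC    := :new.LOC;
  when 'U' then
    audit_data.DEPTNO := :old.DEPTNO;  -- key of the updated row
    audit_data.DNAME  := :new.DNAME;   -- values after the update
    audit_data.LOC    := :new.LOC;
  else
    audit_data.DEPTNO := :old.DEPTNO;
    audit_data.DNAME  := :old.DNAME;
    audit_data.LOC    := :old.LOC;
end case;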
How can this affect normal database operations?
It won't cause anything to fail. However, you are executing an additional insert statement every time you execute DML on the DEPT table. You probably won't notice the impact on single-row statements, but you may notice slower response times when you insert, update or delete a large number of DEPT records. You will need to benchmark it.
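A crude way to benchmark it is to time the same bulk DML with the trigger disabled and then enabled (SET TIMING ON is SQL*Plus/SQL Developer syntax; the trigger name is taken from the corrected statement above, and DEPT_STAGING is a hypothetical source table):
SET TIMING ON
ALTER TRIGGER audit_table_DEPT DISABLE;
INSERT INTO DEPT SELECT * FROM DEPT_STAGING;  -- baseline timing
ROLLBACK;
ALTER TRIGGER audit_table_DEPT ENABLE;
INSERT INTO DEPT SELECT * FROM DEPT_STAGING;  -- same load, with auditing
ROLLBACK;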
One last observation:
Command (as user SYS):
Uh oh.
The better interpretation of this statement is that the trigger won't compile, because the SYS schema doesn't have a DEPT table. Connect as the user which owns the DEPT table, then run the CREATE TRIGGER statement.
The worrying option is that the trigger compiles because you have put a DEPT table in the SYS schema. This is bad practice. The SYS schema is maintained by Oracle to run the database's internal software. Changing the SYS schema without authorisation from Oracle could corrupt your database and invalidate any support contract you have. What you should do is use SYS (or SYSTEM) to create a user to host your application's objects, then connect as that user to build tables, triggers and whatever else you need.
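A minimal sketch of that setup (user name, password and tablespace are placeholders):
-- As SYSTEM: create a schema to own the application objects.
CREATE USER app_owner IDENTIFIED BY "ChangeMe123" QUOTA UNLIMITED ON users;
GRANT CREATE SESSION, CREATE TABLE, CREATE TRIGGER TO app_owner;
-- Then connect as app_owner and run the CREATE TRIGGER statement there.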
I see that the concept of a temporary table in Oracle is quite different from other databases like SQL Server. In Oracle we have global temporary tables: we create the table only once, and each session fills it with its own data, which is not how other databases work.
In 18c, Oracle has introduced private temporary tables, which are dropped automatically at the end of the transaction or session, as in other databases. But how do we use one in a PL/SQL block?
I tried using it with dynamic SQL (EXECUTE IMMEDIATE), but it gives me a "table must be declared" error. What do I do here?
But how do we use it in a PL/SQL block?
If what you mean is: how can we use private temporary tables in a PL/SQL program (procedure or function), the answer is simple: we can't. PL/SQL programs need to be compiled before we can call them, which means any table referenced statically in the program must exist at compile time. Private temporary tables don't change that.
The private temporary table is intended for use in ad hoc SQL work. It allows us to create a data structure we can use in SQL statements for the duration of a session, to make life easier for ourselves.
For instance, suppose I have a massive table of sales data - low level transactions - and my task is to investigate monthly trends. So I only need the total sales by month. Unfortunately, there is no materialized view providing this summary. I don't want to include the aggregating query in my select statements. In previous versions I would have had to create a permanent table (and had to remember to drop it afterwards) but in 18c I can use a private temporary table to stage my summary just for the session.
create private temporary table ora$ptt_sales_summary (
sales_month date
, total_value number )
/
insert into ora$ptt_sales_summary
select trunc(sales_date, 'MM')
, sum (qty*price)
from massive_sales_table
group by trunc(sales_date, 'MM')
/
select *
from ora$ptt_sales_summary
order by sales_month
/
Obviously we can write anonymous PL/SQL blocks in our session, but let's continue on the assumption that's not what you need. So what is the equivalent of a private temporary table in a permanent PL/SQL program? The same as it has been for several versions now: a PL/SQL collection or a SQL nested table type.
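For instance, the sales summary above could be held in a collection inside a stored program; a sketch (same hypothetical table and columns as before):
declare
  type t_summary_rec is record (sales_month date, total_value number);
  type t_summary_tab is table of t_summary_rec;
  v_summary t_summary_tab;
begin
  -- Aggregate straight into the collection; it plays the role of the
  -- private temporary table for the rest of the program.
  select trunc(sales_date, 'MM'), sum(qty * price)
  bulk collect into v_summary
  from massive_sales_table
  group by trunc(sales_date, 'MM');
  dbms_output.put_line(v_summary.count || ' months summarized');
end;
/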
Private temporary tables (available from Oracle 18c) are dropped at the end of the transaction or session, depending on how the PTT is defined:
The ON COMMIT DROP DEFINITION option creates a private temporary table that is transaction-specific. At the end of the transaction, Oracle drops both the table definition and its data.
The ON COMMIT PRESERVE DEFINITION option creates a private temporary table that is session-specific. Oracle removes all data and drops the table at the end of the session.
You do not need to drop it manually. Oracle will do it for you.
CREATE PRIVATE TEMPORARY TABLE ora$ptt_temp_table (
......
)
ON COMMIT DROP DEFINITION;
-- or
-- ON COMMIT PRESERVE DEFINITION;
Example of ON COMMIT DROP DEFINITION: the table is dropped as soon as COMMIT is executed.
Example of ON COMMIT PRESERVE DEFINITION: the table is retained after COMMIT but is dropped at the end of the session.
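A quick sketch you can run to see both behaviors (table names are placeholders; the ORA$PTT_ prefix is required by the default PRIVATE_TEMP_TABLE_PREFIX setting):
CREATE PRIVATE TEMPORARY TABLE ora$ptt_txn (id NUMBER) ON COMMIT DROP DEFINITION;
INSERT INTO ora$ptt_txn VALUES (1);
COMMIT;
SELECT * FROM ora$ptt_txn;  -- fails with ORA-00942: the table vanished at COMMIT

CREATE PRIVATE TEMPORARY TABLE ora$ptt_ses (id NUMBER) ON COMMIT PRESERVE DEFINITION;
INSERT INTO ora$ptt_ses VALUES (1);
COMMIT;
SELECT * FROM ora$ptt_ses;  -- still works; dropped when the session ends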
Cheers!!
It works with dynamic SQL:
declare
cnt int;
begin
execute immediate 'create private temporary table ora$ptt_tmp (id int)';
execute immediate 'insert into ora$ptt_tmp values (55)';
execute immediate 'insert into ora$ptt_tmp values (66)';
execute immediate 'insert into ora$ptt_tmp values (77)';
execute immediate 'select count(*) from ora$ptt_tmp' into cnt;
dbms_output.put_line(cnt);
execute immediate 'delete from ora$ptt_tmp where id = 66';
cnt := 0;
execute immediate 'select count(*) from ora$ptt_tmp' into cnt;
dbms_output.put_line(cnt);
end;
Example here:
https://livesql.oracle.com/apex/livesql/s/l7lrzxpulhtj3hfea0wml09yg
I recently started working on a number of large Oracle PL/SQL stored procedures with Toad for Oracle. A number of these procedures update and insert data into tables. My question is: is there a way to "safely" execute PL/SQL procedures without permanently modifying any of the tables? Also, how do I safely modify and execute stored procedures for experimentation without actually making changes to the database?
It doesn't matter if you have Toad or SQL*Plus or anything else, really; it's all about the code.
First - does your program have any COMMITs or ROLLBACKs IN the stored procedures?
Second - does your program do any DDL work, such as creating a table? That forces an implicit COMMIT. Mind you, if your program calls another program and THAT program has a COMMIT or DDL, you're committed, as it's all in one session.
Third - when you go to execute your stored procedure, does your anonymous block have a COMMIT or ROLLBACK in it?
Your tool comes into play for the third point. Inspect the code behind the 'Execute' button.
In SQL Developer (similar to Toad in this regard)...
In this case my stored procedure has a COMMIT in the code, so barring an exception before that line, it's a permanent change.
In the generated anonymous block there's a ROLLBACK, but it's commented out. When you hit the Execute button in your GUI, look at the code there and change it if necessary.
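The generated block looks roughly like this sketch (the exact text varies by tool and version; MY_PROC is a placeholder):
DECLARE
BEGIN
  MY_PROC();
--ROLLBACK;  -- commented out by default; uncomment it to undo the test run
END;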
You can create a copy of your database and play there. Alternatively, you can create copies of the procedures/functions, packages and tables involved and play with those.
Let's say you have this procedure:
CREATE PROCEDURE proc1
IS
BEGIN
INSERT INTO table1
(col1, col2)
VALUES
('actual data', 'hello');
UPDATE table2
SET col1 = 'actual'
WHERE col2 = 1;
COMMIT;
END;
You would then create a new procedure with the same logic inside it, pointing at the copied tables:
CREATE PROCEDURE proc1_test
IS
BEGIN
INSERT INTO table1_test
(col1, col2)
VALUES ('test', 'hello');
UPDATE table2_test
SET col1 = 'test2'
WHERE col2 = 1;
COMMIT;
END;
/
Doing this lets you compare your actual data with your test data.
You can also put a ROLLBACK at the end of the procedure and comment out any COMMIT/DDL statements. You need to be careful with PRAGMA AUTONOMOUS_TRANSACTION declarations too, if there are any.
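Applied to the proc1 example above, an experimental copy might look like this sketch:
CREATE PROCEDURE proc1_experiment
IS
BEGIN
  INSERT INTO table1 (col1, col2) VALUES ('actual data', 'hello');
  UPDATE table2 SET col1 = 'actual' WHERE col2 = 1;
  -- COMMIT;  -- commented out while experimenting
  ROLLBACK;   -- undo everything so nothing is permanently changed
END;
/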
The PostgreSQL documentation (http://www.postgresql.org/docs/8.3/interactive/populate.html) says that to disable autocommit in PostgreSQL you can simply place all the insert statements between BEGIN; and COMMIT;.
However, I am having difficulty capturing any exceptions that happen between BEGIN; and COMMIT;, and if an error occurs (like trying to insert a duplicate PK) I have no way to explicitly call ROLLBACK or COMMIT. Although all the insert statements are automatically rolled back, PostgreSQL still expects an explicit call to either COMMIT or ROLLBACK before it considers the transaction terminated. Otherwise, the script has to wait for the transaction to time out, and any statements executed in the meantime raise an error.
In a stored procedure you can use the EXCEPTION clause for this, but the same does not apply to my case of performing bulk inserts. I tried it, and the exception block did not work for me: every statement executed after the error fails with:
ERROR: current transaction is aborted, commands ignored until end of transaction block
The transaction remains open because it has not been explicitly finalised with a call to COMMIT or ROLLBACK.
Here is a sample of the code I used to test this:
BEGIN;
SET search_path TO testing;
INSERT INTO friends (id, name) VALUES (1, 'asd');
INSERT INTO friends (id, name) VALUES (2, 'abcd');
INSERT INTO friends (id, nsame) VALUES (2, 'abcd'); /*note the deliberate mistake in attribute name and also the deliberately repeated pk value number 2*/
EXCEPTION /* this part does not work for me */
WHEN OTHERS THEN
ROLLBACK;
COMMIT;
When using such a technique, do I really have to guarantee that all statements will succeed? Why is this so? Isn't there a way to trap errors and explicitly call a rollback?
Thank you
If you do it between BEGIN and COMMIT, then everything is automatically rolled back in case of an error.
Excerpt from the url you posted:
"An additional benefit of doing all insertions in one transaction is that if the insertion of one row were to fail then the insertion of all rows inserted up to that point would be rolled back, so you won't be stuck with partially loaded data."
When I initialize databases, i.e. create a series of tables/views/functions/triggers/etc. and/or load the initial data, I always use psql and its variables to control the flow. I always add:
\set ON_ERROR_STOP
to the top of my scripts, so whenever any statement fails, psql will abort. It looks like this might help in your case too.
And in cases when I need to do some exception handling, I use anonymous code blocks like this:
DO $$
DECLARE _rec record;
BEGIN
  -- Drop every schema except 'master'. (The original read "FROM schema";
  -- information_schema.schemata is assumed here.)
  FOR _rec IN SELECT schema_name FROM information_schema.schemata
              WHERE schema_name != 'master' LOOP
    EXECUTE 'DROP SCHEMA '||_rec.schema_name||' CASCADE';
  END LOOP;
EXCEPTION WHEN others THEN
  NULL;  -- ignore errors so the script keeps going
END;$$;
DROP SCHEMA master CASCADE;
We have an Oracle database, and the customer account table has about a million rows. Over the years, we've built four different UIs (two in Oracle Forms, two in .Net), all of which remain in use. We have a number of background tasks (both persistent and scheduled) as well.
Something is occasionally holding a long lock (say, more than 30 seconds) on a row in the account table, which causes one of the persistent background tasks to fail. The background task in question restarts itself once the update times out. We find out about it a few minutes after it happens, but by then the lock has been released.
We have reason to believe that it might be a misbehaving UI, but haven't been able to find a "smoking gun".
I've found some queries that list blocks, but those are for when you've got two jobs contending for a row. I want to know which rows are locked even when there isn't necessarily a second job trying to acquire the lock.
We're on 11g, but have been experiencing the problem since 8i.
Oracle's locking concept is quite different from that of other systems.
When a row in Oracle gets locked, the record itself is updated with the new value (if any) and, in addition, a lock (essentially a pointer to the transaction lock that resides in the rollback segment) is placed right into the record.
This means that locking a record in Oracle means updating the record's metadata and issuing a logical page write. This is why, for instance, you cannot do SELECT FOR UPDATE on a read-only tablespace.
More than that, the records themselves are not updated after commit: instead, the rollback segment is updated.
This means that each record holds some information about the transaction that last updated it, even if that transaction has long since ended. To find out whether the transaction is still alive (and hence whether the record is still locked), you have to visit the rollback segment.
Because Oracle does not have a traditional lock manager, obtaining a list of all locks would require scanning every record in every object. That would take too long.
You can obtain some specific lock information, such as locked objects (using v$locked_object) and lock waits (using v$session), but not a list of all row locks on all objects in the database.
You can find the locked objects in Oracle, and the sessions holding them, with the following query:
select
c.owner,
c.object_name,
c.object_type,
b.sid,
b.serial#,
b.status,
b.osuser,
b.machine
from
v$locked_object a ,
v$session b,
dba_objects c
where
b.sid = a.session_id
and
a.object_id = c.object_id;
Rather than locks, I suggest you look at long-running transactions, using v$transaction. From there you can join to v$session, which should give you an idea of the UI involved (try the PROGRAM and MACHINE columns) as well as the user.
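Something along these lines (a sketch; it requires SELECT privileges on the V$ views):
select s.sid, s.serial#, s.username, s.program, s.machine, t.start_time
from v$transaction t
join v$session s on s.taddr = t.addr
order by t.start_time;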
Look at the dba_blockers, dba_waiters and dba_locks for locking. The names should be self explanatory.
You could create a job that runs, say, once a minute and logs the values in dba_blockers together with the currently active sql_id for each such session (via v$session and v$sqlstats).
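A sketch of such a job with DBMS_SCHEDULER (the BLOCKER_LOG table and its columns are assumptions):
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'LOG_BLOCKERS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN
                          INSERT INTO blocker_log (snap_time, holding_session)
                          SELECT SYSDATE, holding_session FROM dba_blockers;
                          COMMIT;
                        END;',
    repeat_interval => 'FREQ=MINUTELY',
    enabled         => TRUE);
END;
/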
You may also want to look at v$sql_monitor. By default this logs all SQL that takes longer than 5 seconds. It is also visible on the "SQL Monitoring" page in Enterprise Manager.
The PL/SQL block below finds all locked rows in a table. The other answers only find the blocking session; finding the actual locked rows requires reading and testing each row.
(However, you probably do not need to run this code. If you're having a locking problem, it's usually easier to find the culprit using GV$SESSION.BLOCKING_SESSION and other related data dictionary views. Please try another approach before you run this abysmally slow code.)
First, let's create a sample table and some data. Run this in session #1.
--Sample schema.
create table test_locking(a number);
insert into test_locking values(1);
insert into test_locking values(2);
commit;
update test_locking set a = a+1 where a = 1;
In session #2, create a table to hold the locked ROWIDs.
--Create table to hold locked ROWIDs.
create table locked_rowids(the_rowid rowid);
--Remove old rows if table is already created:
--delete from locked_rowids;
--commit;
In session #2, run this PL/SQL block to read the entire table, probe each row, and store the locked ROWIDs. Be warned, this may be ridiculously slow. In your real version of this query, change both references to TEST_LOCKING to your own table.
--Save all locked ROWIDs from a table.
--WARNING: This PL/SQL block will be slow and will temporarily lock rows.
--You probably don't need this information - it's usually good enough to know
--what other sessions are locking a statement, which you can find in
--GV$SESSION.BLOCKING_SESSION.
declare
v_resource_busy exception;
pragma exception_init(v_resource_busy, -00054);
v_throwaway number;
type rowid_nt is table of rowid;
v_rowids rowid_nt := rowid_nt();
begin
--Loop through all the rows in the table.
for all_rows in
(
select rowid
from test_locking
) loop
--Try to lock each row.
begin
select 1
into v_throwaway
from test_locking
where rowid = all_rows.rowid
for update nowait;
--If we cannot lock it, another session holds it: record the ROWID.
exception when v_resource_busy then
v_rowids.extend;
v_rowids(v_rowids.count) := all_rows.rowid;
end;
rollback;
end loop;
--Display count:
dbms_output.put_line('Rows locked: '||v_rowids.count);
--Save all the ROWIDs.
--(Row-by-row because ROWID type is weird and doesn't work in types.)
for i in 1 .. v_rowids.count loop
insert into locked_rowids values(v_rowids(i));
end loop;
commit;
end;
/
Finally, we can view the locked rows by joining to the LOCKED_ROWIDS table.
--Display locked rows.
select *
from test_locking
where rowid in (select the_rowid from locked_rowids);
A
-
1
Given some table, you can find which rows are not locked with SELECT ... FOR UPDATE SKIP LOCKED.
For example, this query will lock (and return) every unlocked row:
SELECT * FROM mytable FOR UPDATE SKIP LOCKED
References
Ask TOM "How to get ROWID for locked rows in oracle".