I am using Oracle 11g and I am pretty new to it.
The problem is that when I create a table and insert a value in the SQL*Plus terminal, it does not show up in Navicat Lite (a graphical client).
Let me make this clearer with some screenshots.
Here are the rows that get printed in the SQL*Plus terminal.
But take a look at the rows in Navicat.
Here the table name is TIME.
Can anyone explain what the problem is?
Looks like you didn't
commit;
in SQL*Plus after inserting the rows. Until you do, those rows are visible only to your own session, not to other sessions (which is what Navicat uses). So - commit.
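For example, a minimal sketch in SQL*Plus (the table and column are placeholders, not the TIME table from the question):
SQL> insert into your_table (id) values (1);

1 row created.

SQL> commit;

Commit complete.
After the commit, the row becomes visible to other sessions, including the one Navicat uses.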
As for letter case and double quotes:
This is how you should be doing it: don't use double quotes, and you can then reference the table any way you want:
SQL> create table test (id number);
Table created.
SQL> insert into test (id) values (1);
1 row created.
SQL> select * from test;
ID
----------
1
SQL> select * from TEST;
ID
----------
1
SQL> select * from tEsT;
ID
----------
1
SQL> drop table test;
Table dropped.
If you use double quotes, you have to reference the table exactly the same way as you created it:
SQL> create table "test" (id number);
Table created.
SQL> insert into test (id) values (1);
insert into test (id) values (1)
*
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> insert into TEST (id) values (1);
insert into TEST (id) values (1)
*
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> insert into "TEST" (id) values (1);
insert into "TEST" (id) values (1)
*
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> insert into "test" (id) values (1);
1 row created.
SQL>
I have been trying to use different functions on the CLOB datatype in Oracle 19.3.0.0 and none of them return values, e.g.:
dbms_lob.getlength(clob_data)
length(clob_data)
In fact, no function on the CLOB/BLOB datatype returns a value. These functions worked fine previously in Oracle 12c; I recently upgraded to Oracle 19.3.0.0. Please educate me if there is any workaround for this.
If you don't want to insert anything into a CLOB column, you should use the EMPTY_CLOB() function rather than NULL; functions such as DBMS_LOB.GETLENGTH return NULL for a NULL LOB but 0 for an empty one.
Test case
SQL> create table test1 (id number,a clob);
Table created.
SQL> insert into test1 values (&id,&a);
Enter value for id: 1
Enter value for a: null
1 row created.
SQL> /
Enter value for id: 2
Enter value for a: empty_clob()
1 row created.
SQL> commit;
Commit complete.
SQL> select id,dbms_lob.getlength(a) length from test1;
        ID     LENGTH
---------- ----------
         1
         2          0
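If the column may already contain NULLs and you want the length query to show 0 instead of a blank, one possible variation (my assumption, not part of the original answer) is to wrap the call in NVL:
select id, nvl(dbms_lob.getlength(a), 0) length from test1;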
I've made a trigger in SQL and need him to write an output after inserting a new row in the table. Please see the example:
CREATE OR REPLACE TRIGGER GAS_CODES AFTER
INSERT ON blablatable
FOR EACH ROW
BEGIN
insert into blabla2table (...,...,...,...)
values (:new...,...,...,..);
---output:
dbms_output.put_line('New row has been added.');
END;
/
When I compile the trigger, it shows in the Script Output, but if I add a new row into the table, there's nothing.
You are missing SET SERVEROUTPUT ON. This command is also understood by SQL Developer.
Let's do a quick test inside the SQLDeveloper.
CREATE USER "TEST_SCHEMA" IDENTIFIED BY "TEST";
User "TEST_SCHEMA" created.
GRANT UNLIMITED TABLESPACE TO "TEST_SCHEMA";
Grant succeeded.
CREATE TABLE "TEST_SCHEMA"."NAMES" ("ID" NUMBER, "NAME" VARCHAR2(25), PRIMARY KEY("ID"));
Table "TEST_SCHEMA"."NAMES" created.
CREATE OR REPLACE TRIGGER "TEST_SCHEMA"."NAMES_TRG_1" AFTER
INSERT ON "TEST_SCHEMA"."NAMES"
FOR EACH ROW
BEGIN
DBMS_OUTPUT.PUT_LINE('New row has been added.');
END;
/
Trigger NAMES_TRG_1 compiled
SET SERVEROUTPUT ON
This command won't print anything in SQL Developer. No worries here.
INSERT INTO "TEST_SCHEMA"."NAMES" VALUES (1, 'Mark Smith');
1 row inserted.
New row has been added.
As you can see, the output was there and it was inserted after the actual row was inserted into the table. Works fine.
To cleanup the testcase, run this:
DROP USER "TEST_SCHEMA" CASCADE;
EDIT 1:
When you are working with the Table Data Editor, this behaves differently. The Table Data Editor has its own Oracle session and a different way of capturing DBMS Output.
To open the DBMS capture window, you need to click on "VIEW" menu and select "DBMS Output" option.
Then click the green plus button and set the database, that will be captured.
Now you can see the output.
Beware that the output here is not "realtime": this window will show something only when there is a buffer flush, and the buffer flush cannot be invoked manually/directly.
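For context, what a client actually does when it "reads the DBMS Output buffer" is call DBMS_OUTPUT.GET_LINES. Here is a minimal standalone sketch of that mechanism (an illustration under my own assumptions, not code from the answer above):
set serveroutput off
declare
  l_lines dbms_output.chararr;   -- index-by table of VARCHAR2 lines
  l_count integer := 10;         -- fetch at most 10 buffered lines
begin
  dbms_output.enable(null);      -- NULL = unlimited buffer
  dbms_output.put_line('buffered line');
  dbms_output.get_lines(l_lines, l_count);
  -- l_count now holds the number of lines fetched; l_lines(1) is 'buffered line'
end;
/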
Most likely the client (SQL Developer) doesn't read the output buffer.
To enable this, choose "View" -> "DBMS Output" from the menu and then click the green "+" in the DBMS Output window to read the output buffer for your connection.
In sqlplus you can do it like this:
SQL> drop table tst purge;
Table dropped.
SQL> drop table tst2 purge;
Table dropped.
SQL> create table tst ( tst_no integer);
Table created.
SQL> create table tst2 ( tst_no integer);
Table created.
SQL> create or replace trigger tst_trg after insert on tst
for each row
begin
insert into tst2 (tst_no) values (:new.tst_no);
dbms_output.put_line('new row with tst_no='|| :new.tst_no);
end;
/
Trigger created.
SQL> set serveroutput on;
SQL> exec dbms_output.enable;

PL/SQL procedure successfully completed.

SQL> insert into tst values (1);
new row with tst_no=1

1 row created.
SQL> r
1* insert into tst values (1)
new row with tst_no=1
1 row created.
SQL> select * from tst2;
TST_NO
----------
1
1
SQL>
As you can see, the output is read and printed in SQL*Plus, and the rows are inserted into the target table tst2.
Hope it helps...
I have created a user (say DROP_PARTITION_USER) to drop partitions of a table. The table is owned by different user (say NORMAL_USER).
Currently, I have granted the DROP ANY TABLE and ALTER ANY TABLE privileges to DROP_PARTITION_USER. When DROP_PARTITION_USER executes the following statement, it gets executed successfully.
ALTER TABLE SCHEMANAME.TABLENAME DROP PARTITION <PARTITION_NAME> UPDATE GLOBAL INDEXES;
However, DROP ANY TABLE and ALTER ANY TABLE allow DROP_PARTITION_USER to drop and alter any table under any schema [https://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_9013.htm].
Is there any way in Oracle to restrict drop and alter table under specific schema?
The common way to solve this is to create a procedure owned by NORMAL_USER to drop one of the partitions of one of its tables.
Then you GRANT EXECUTE on this procedure to DROP_PARTITION_USER.
You'll need no extra privileges.
-- Created in NORMAL_USER's schema; being a definer's-rights procedure, it runs
-- with NORMAL_USER's privileges against NORMAL_USER's own tables.
CREATE OR REPLACE PROCEDURE my_drop_partition (p_table_name VARCHAR2, p_partition_name VARCHAR2)
IS
BEGIN
   EXECUTE IMMEDIATE 'ALTER TABLE '||p_table_name||' DROP PARTITION '||p_partition_name;
END my_drop_partition;
/
GRANT EXECUTE ON my_drop_partition TO drop_partition_user;
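Connected as DROP_PARTITION_USER, the call could then look like this (the procedure is assumed to live in NORMAL_USER's schema; the table and partition names below are the placeholders from the question):
begin
   normal_user.my_drop_partition('TABLENAME', 'PARTITION_NAME');
end;
/
Since it is a definer's-rights procedure, the dynamic ALTER TABLE runs with NORMAL_USER's privileges and resolves against NORMAL_USER's own tables.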
You can use a DDL trigger to capture such attempts and take whatever action you like. For example
SQL> CREATE OR REPLACE TRIGGER STOP_THAT_STUFF
2 before create or alter or drop on database
3 begin
4 if ora_dict_obj_owner in ('SCOTT') and
5 ora_sysevent in ('DROP','ALTER') and
6 ora_dict_obj_name = 'MY_TABLE'
7 then
8 raise_application_error(-20000,'What the hell are you thinking!!!!');
9 end if;
10 end;
11 /
Trigger created.
SQL>
SQL> conn scott/tiger
Connected.
SQL> create table scott.my_table(x int );
Table created.
SQL> create table scott.my_other_table(x int);
Table created.
SQL> drop table my_other_table;
Table dropped.
SQL> drop table my_table;
drop table my_table
*
ERROR at line 1:
ORA-04088: error during execution of trigger 'SYS.STOP_THAT_STUFF'
ORA-00604: error occurred at recursive SQL level 1
ORA-20000: What the hell are you thinking!!!!
ORA-06512: at line 6
SQL> desc my_table
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------
 X                                                  NUMBER(38)
Is it possible to deactivate the implicit commit that is issued after CREATE, DROP, RENAME and ALTER statements on Oracle databases?
A simple example:
CREATE TABLE TEST.test2x (id NUMBER(10,0));
ALTER TABLE TEST1.test2x ADD PRIMARY KEY (id);
This will fail for the alter statement because the schema is wrong, but the table is now already created.
So is it somehow possible to bypass this behavior and only commit all or nothing, while using create, alter etc.?
There is a way to do "all-or-nothing" DDL but only in a very limited way -- the CREATE SCHEMA statement.
For example, the following CREATE SCHEMA statement tries to create two tables, T1 and T2. However, the DDL for T2 is incorrect. Neither table ends up getting created.
SQL> REM Verify the tables do not already exist.
SQL> SELECT * FROM T1;
SELECT * FROM T1
*
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> SELECT * FROM T2;
SELECT * FROM T2
*
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> CREATE SCHEMA AUTHORIZATION TEST
2 CREATE TABLE T1
3 (
4 X NUMBER PRIMARY KEY
5 )
6 CREATE TABLE T2
7 (
8 -- Try to reference a column that does not exist.
9 X NUMBER REFERENCES T1(Y)
10 );
X NUMBER REFERENCES T1(Y)
*
ERROR at line 9:
ORA-02428: could not add foreign key reference
ORA-00904: "Y": invalid identifier
SQL> REM Verify the tables still don't exist.
SQL> SELECT * FROM T1;
SELECT * FROM T1
*
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> SELECT * FROM T2;
SELECT * FROM T2
*
ERROR at line 1:
ORA-00942: table or view does not exist
However, CREATE SCHEMA is limited in that it only supports CREATE TABLE, CREATE VIEW and GRANT statements.
You could run these commands inside a PL/SQL BEGIN/END block using EXECUTE IMMEDIATE, and then catch and handle the errors yourself, i.e.:
begin
   execute immediate 'create table xyz (x number)';
   -- the second statement fails (wrong schema, incomplete ADD clause), mirroring the question
   execute immediate 'alter table bad.xyz add';
exception
   when others then
      -- DDL cannot be rolled back, so undo the CREATE manually
      execute immediate 'drop table xyz';
end;
I want to create a trigger that executes on update of a table.
In particular, on update of a table I want to update another table via a trigger, but if the trigger fails (referential integrity, entity integrity) I do not want the update to be executed any more.
Any suggestion on how to perform this?
Is it better to use a trigger, or to do it programmatically via a stored procedure?
Thanks
The DML in the trigger is part of the same action as the triggering DML. Both have to succeed or both fail. If the trigger raises an unhandled exception, the entire statement gets rolled back.
Here is a trigger on T23 which copies the row into T42.
SQL> create or replace trigger t23_trg
2 before insert or update on t23 for each row
3 begin
4 insert into t42 values (:new.id, :new.col1);
5 end;
6 /
Trigger created.
SQL>
A successful insert into T23...
SQL> insert into t23 values (1, 'ABC')
2 /
1 row created.
SQL> select * from t42
2 /
ID COL
---------- ---
1 ABC
SQL>
But this one will fail because of a unique constraint on T42.ID. As you can see the triggering statement is rolled back too ...
SQL> insert into t23 values (1, 'XYZ')
2 /
insert into t23 values (1, 'XYZ')
*
ERROR at line 1:
ORA-00001: unique constraint (APC.T24_PK) violated
ORA-06512: at "APC.T23_TRG", line 2
ORA-04088: error during execution of trigger 'APC.T23_TRG'
SQL> select * from t42
2 /
ID COL
---------- ---
1 ABC
SQL> select * from t23
2 /
ID COL
---------- ---
1 ABC
SQL>
If the trigger fails, it will raise an exception (unless you specifically tell it not to), in which case you would have the client roll back. It doesn't really matter whether it's done via a trigger or a stored procedure (although it's often a good idea to keep a logical transaction within a stored procedure, rather than spread it around triggers).
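To illustrate the "unless you specifically tell it not to" part, here is a hedged sketch (reusing the T23/T42 tables from the example above) of a trigger that swallows the error, so the triggering statement succeeds even when the secondary insert fails, which is usually a bad idea:
create or replace trigger t23_trg
  before insert or update on t23 for each row
begin
  insert into t42 values (:new.id, :new.col1);
exception
  when others then
    null;  -- error suppressed: the triggering insert/update is NOT rolled back
end;
/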