Trigger in multiple schemas - Oracle

I have one database with two schemas: schema1 and schema2.
I want to create a trigger on the SCHEMA1.CLIENT table. Whenever an UPDATE is performed on SCHEMA1.CLIENT, rows holding the values before and after the change should be added to the SCHEMA2.HISTORY table.
Example:
SCHEMA1.CLIENT
NAME AGE DEPT
KRANTHI 21 CSE
KUMAR 22 ME
If I update the table above, changing KRANTHI's age from 21 to 33, the rows should be stored in SCHEMA2.HISTORY like this:
SCHEMA2.HISTORY
MODIFIED NAME AGE DEPT
BEFORE KRANTHI 21 CSE
AFTER KRANTHI 33 CSE
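A minimal sketch of such a trigger, assuming SCHEMA2.HISTORY has columns MODIFIED, NAME, AGE and DEPT, and that SCHEMA1 has been granted INSERT on SCHEMA2.HISTORY (e.g. GRANT INSERT ON history TO schema1):

```sql
-- Row-level AFTER UPDATE trigger: :OLD holds the values before the
-- update, :NEW the values after it. One row is written for each.
CREATE OR REPLACE TRIGGER schema1.client_history_trg
AFTER UPDATE ON schema1.client
FOR EACH ROW
BEGIN
  INSERT INTO schema2.history (modified, name, age, dept)
  VALUES ('BEFORE', :OLD.name, :OLD.age, :OLD.dept);

  INSERT INTO schema2.history (modified, name, age, dept)
  VALUES ('AFTER', :NEW.name, :NEW.age, :NEW.dept);
END;
/
```

The column and trigger names here are assumptions; adjust them to the real HISTORY table definition.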

Related

FRM-40350 Query Caused No Records to be retrieved

I am new to Oracle Forms. I have connected my form to the database, then created a data block with the Data Block Wizard, choosing my database table ORDERS. I inserted values into this ORDERS table in SQL*Plus. After creating the ORDERS data block I run my form in the browser and click Execute Query on the menu, but it does not fetch the data from my database table. It says:
FRM-40350: Query caused no records to be retrieved. But I have checked in SQL*Plus that the row was created successfully and the columns are filled with values, and my form is connected to my database. So why is Execute Query not working? How can I fix this?
If you're connected as the same user in both SQL*Plus and Forms (you probably are, otherwise Forms wouldn't see that table and you wouldn't be able to base a data block on it), you probably didn't COMMIT after inserting the row(s) in SQL*Plus.
The code you posted should also be fixed (invalid datatype, missing column name, and don't insert strings into a DATE column):
SQL> CREATE TABLE ORDERS(
2 order_id NUMBER(20),
3 customer_id NUMBER(20),
4 order_date DATE, -- NUMBER(20), --> date, not number
5 order_status VARCHAR2(20),
6 order_mode VARCHAR2 (200),
7 SALES_REP_ID NUMBER (20) );
Table created.
SQL> INSERT INTO orders (
2 order_id,
3 customer_id,
4 order_date,
5 order_status,
6 order_mode,
7 sales_rep_id
8 ) VALUES (
9 0001,
10 01,
11 date '2008-02-11', -- not '11-FEB-2008', it is a STRING
12 '5',
13 'direct',
14 158
15 );
1 row created.
This is what you're missing:
SQL> COMMIT;
Commit complete.
SQL>

Convert value while inserting into HIVE table

I have created a bucketed table called emp_bucket with 4 buckets, clustered on the salary column. The structure of the table is as below:
hive> describe Consultant_Table_Bucket;
OK
id int
age int
gender string
role string
salary double
Time taken: 0.069 seconds, Fetched: 5 row(s)
I also have a staging table from which I insert data into the bucketed table above. Below is sample data from the staging table:
id age Gender role salary
-----------------------------------------------------
938 38 F consultant 55038.0
939 26 F student 33319.0
941 20 M student 97229.0
942 48 F consultant 78209.0
943 22 M consultant 77841.0
My requirement is to load into the bucketed table only those employees whose salary is greater than 10,000, and while loading I have to convert the "consultant" role to "BigData consultant".
I know how to insert data into my bucketed table using a SELECT, but I need some guidance on how the consultant value in the role column can be changed to BigData consultant while inserting.
Any help appreciated.
Based on your insert, you just need to work on the role part of your select:
INSERT INTO TABLE emp_bucket
select
    id
  , age
  , gender
  , if(role = 'consultant', 'BigData consultant', role) as role
  , salary
FROM
  stage_table
where
  salary > 10000
;
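The same substitution can also be written with a standard CASE expression, which behaves identically in Hive and is more portable than if() (a sketch against the same staging table):

```sql
INSERT INTO TABLE emp_bucket
SELECT
    id
  , age
  , gender
  , CASE WHEN role = 'consultant' THEN 'BigData consultant' ELSE role END AS role
  , salary
FROM stage_table
WHERE salary > 10000;
```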

Update of two same tables on different oracle schemas using primary key

I am scratching my head trying to resolve this with good performance. We managed to find a solution in Java using a hash map, but as the table contains 1L (100,000) records it is pretty tough to manage that way.
I am looking for the best possible option.
I have two schemas in the same Oracle database. I need to update a table in one schema from the same table in another schema using the primary key (we only update if a row with that primary key exists; we should not insert).
Suppose My oracle database is TEST and i have two schema's SCHEMA1 & SCHEMA2.
SCHEMA1 & SCHEMA2 CONTAINS THE TABLE SAMPLE1
Structure:
ID NUMBER ==> PRIMARY KEY
NAME VARCHAR ==> PRIMARY KEY
LASTNAME VARCHAR ==> NORMAL COLUMN
NOW SCHEMA1 SAMPLE1 CONTAINS DATA BELOW
1) 123 'TEMP' 'TEMPOARY1'
2) 234 'TEMP2' 'TEMPORARY2'
3) 345 'TEMP3' 'TEMPORARY3'
SCHEMA2 SAMPLE1 CONTAINS DATA BELOW
1) 123 'TEMP' 'TEMP1'
2) 23 'TEMP23' 'TEMP2'
3) 235 'TEMP2' 'TEMP3'
Now my target is to sync table SAMPLE1 of SCHEMA1 with table SAMPLE1 of SCHEMA2, and the result should be as below.
1) 123 'TEMP' 'TEMP1'
2) 234 'TEMP2' 'TEMPORARY2'
3) 345 'TEMP3' 'TEMPORARY3'
Thank you for your help
Try something like this (note that, per your expected output, SCHEMA1.SAMPLE1 is the target being refreshed from SCHEMA2; only the non-key column needs updating, since the key columns already match):
declare
  procedure fncUpdate(pId PLS_INTEGER, pName VARCHAR2, pLastname VARCHAR2) as
  begin
    UPDATE SCHEMA1.SAMPLE1
       SET lastname = pLastname
     WHERE id = pId
       AND name = pName;
    DBMS_OUTPUT.PUT_LINE('rows updated: ' || SQL%ROWCOUNT);
  end fncUpdate;
begin
  for cur in (
    SELECT id, name, lastname
      FROM SCHEMA2.SAMPLE1
  )
  loop
    fncUpdate(cur.id, cur.name, cur.lastname);
  end loop;
end;
/
Update of two same tables on different oracle databases
I have two schemas
I have edited your question title and changed "databases" to "schemas", since you clearly mention schemas in your question body. Do not confuse a DATABASE with a SCHEMA. I have seen SQL Server developers often treat "schema" as a relative term for "database". A schema is the set of objects (tables, views, indexes, etc.) that belong to a user.
No need of PL/SQL. Do it in plain SQL.
You could use a MERGE statement.
For example,
MERGE INTO schema2.table2 t2
USING (SELECT * FROM schema1.table1) t1
ON (t2.primarykey = t1.key)
WHEN MATCHED THEN
UPDATE SET
t2.column2 = t1.column2,
t2.column3 = t1.column3
/
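Applied to the tables in the question, the MERGE would look something like this (a sketch, assuming SCHEMA1.SAMPLE1 is the target to be refreshed from SCHEMA2, as the expected output suggests, and that (ID, NAME) is the composite primary key):

```sql
MERGE INTO schema1.sample1 t1
USING schema2.sample1 t2
ON (t1.id = t2.id AND t1.name = t2.name)
WHEN MATCHED THEN
  UPDATE SET t1.lastname = t2.lastname;
```

Because there is no WHEN NOT MATCHED clause, rows that exist only in the source are ignored, which satisfies the "update only if the primary key exists, never insert" requirement.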

Formatting output in Oracle database

I have created a table in an Oracle database as shown below:
create table employee(eid int, enameemp varchar(1), emgrid int);
insert into employee values(101,'A',103);
insert into employee values(102,'B',103);
insert into employee values(103,'C',104);
insert into employee values(104,'D',101);
-- Displaying the table contents
select * from employee;
EID E EMGRID
--- - ------
101 A 103
102 B 103
103 C 104
104 D 101
The column name is not displayed completely, unlike in MySQL. Is there any way around this without increasing the column size of enameemp, i.e. enameemp varchar(10)?
If you use SQL*Plus, the output column width defaults to the column's own defined size. If you want a custom width, use the COLUMN command:
COL COLUMN_NAME FORMAT A<your column width>
Example: COL ENAMEEMP FORMAT A10
More SQL*Plus options are in the Oracle docs.

Using Oracle temp table for multiple async HTTP calls

I have a Silverlight app that makes multiple (often concurrent) asynchronous calls to an Oracle database. The largest database table stores around 5 million records. Below is a summary of how the Silverlight app works, followed by my question.
The user sets query criteria to select a particular group of records, usually 500 to 5000 records in a group.
An asynchronous WCF call is made to the database to retrieve the values in four fields (latitude, longitude, heading, and time offset) over the selected group of records (meaning the call returns anywhere from 2k to 20k floating-point numbers). These values are used to plot points on a map in the browser.
From here, the user can choose to graph the values in one or more of an additional twenty or so fields associated with the initial group of records. The user clicks on a field name, and another async WCF call is made to retrieve the field values.
My question is this: does it make sense in this case to store the records selected in step one in a temp table (or materialized view) in order to speed up and simplify the data access in step three?
If so, can anyone give me a hint regarding a good way to maintain the browser-to-temp-table link for a user's session?
Right now, I am just re-querying the 5 million points each time the user selects a new field to graph, which works until the user selects three or more fields at once; then the async calls time out before they can return.
We can do this using a CONTEXT. This is a namespace in session memory which we can use to store values. Oracle comes with a default namespace, 'USERENV', but we can define our own. The context has to be created by a user with the CREATE ANY CONTEXT privilege; this is usually a DBA. The statement references a PACKAGE which sets and gets values in the namespace, but this package does not have to exist in order for the statement to succeed:
SQL> create context user_ctx using apc.ctx_pkg
2 /
Context created.
SQL>
Now let's create the package:
SQL> create or replace package ctx_pkg
2 as
3 procedure set_user_id(p_userid in varchar2);
4 function get_user_id return varchar2;
5 procedure clear_user_id;
6 end ctx_pkg;
7 /
Package created.
SQL>
There are three methods, to set, get and unset a value in the namespace. Note that we can use one namespace to hold different variables. I am just using this package to set one variable (USER_ID) in the USER_CTX namespace.
SQL> create or replace package body ctx_pkg
2 as
3 procedure set_user_id(p_userid in varchar2)
4 is
5 begin
6 DBMS_SESSION.SET_CONTEXT(
7 namespace => 'USER_CTX',
8 attribute => 'USER_ID',
9 value => p_userid);
10 end set_user_id;
11
12 function get_user_id return varchar2
13 is
14 begin
15 return sys_context('USER_CTX', 'USER_ID');
16 end get_user_id;
17
18 procedure clear_user_id
19 is
20 begin
21 DBMS_SESSION.CLEAR_CONTEXT(
22 namespace => 'USER_CTX',
23 attribute => 'USER_ID');
24 end clear_user_id;
25
26 end ctx_pkg;
27 /
Package body created.
SQL>
So, how does this solve anything? Here is a table for the temporary storage of data. I'm going to add a column which will hold a token to identify the user. When we populate the table the value for this column will be provided by CTX_PKG.GET_USER_ID():
SQL> create table temp_23 as select * from big_table
2 where 1=0
3 /
Table created.
SQL> alter table temp_23 add (user_id varchar2(30))
2 /
Table altered.
SQL> create unique index t23_pk on temp_23(user_id, id)
2 /
Index created.
SQL>
... and over that table I create a view:
create or replace view v_23 as
select
id
, col1
, col2
, col3
, col4
from temp_23
where user_id = ctx_pkg.get_user_id
/
Now, when I want to store some data in the table I need to set the context with a value with uniquely identifies my user.
SQL> exec ctx_pkg.set_user_id('APC')
PL/SQL procedure successfully completed.
SQL>
This statement populates the temporary table with twenty random rows:
SQL> insert into temp_23
2 select * from
3 ( select b.*, ctx_pkg.get_user_id
4 from big_table b
5 order by dbms_random.random )
6 where rownum <= 20
7 /
20 rows created.
SQL>
I can retrieve those rows by querying the view. But when I change my USER_ID and run the same query I cannot see them any more:
SQL> select * from v_23
2 /
ID COL1 COL2 COL3 COL4
---------- ---------- ------------------------------ --------- ----------
277834 1880 GV_$MAP_EXT_ELEMENT 15-OCT-07 4081
304540 36227 /375c3e3_TCPChannelReaper 15-OCT-07 36
1111897 17944 /8334094a_CGCast 15-OCT-07 17
1364675 42323 java/security/PublicKey 15-OCT-07 42
1555115 3379 ALL_TYPE_VERSIONS 15-OCT-07 3
2073178 3355 ALL_TYPE_METHODS 15-OCT-07 3
2286361 68816 NV 15-OCT-07 68
2513770 59414 /5c3965c8_DicomUidDoc 15-OCT-07 59
2560277 66973 MGMT_MNTR_CA 15-OCT-07 66
2700309 45890 /6cc68a64_TrustManagerSSLSocke 15-OCT-07 45
2749978 1852 V_$SQLSTATS 15-OCT-07 6395
2829080 24832 /6bcb6225_TypesTypePair 15-OCT-07 24
3205157 55063 SYS_NTsxSe84BlRX2HiXujasKy/w== 15-OCT-07 55
3236186 23830 /de0b4d45_BaseExecutableMember 15-OCT-07 23
3276764 31296 /a729f2c6_SunJCE_n 15-OCT-07 31
3447961 60129 HHGROUP 15-OCT-07 60
3517106 38204 java/awt/im/spi/InputMethod 15-OCT-07 38
3723931 30332 /32a30e8e_EventRequestManagerI 15-OCT-07 30
3877332 53700 EXF$XPVARCLST 15-OCT-07 53
4630976 21193 oracle/net/nl/NetStrings 15-OCT-07 21
20 rows selected.
SQL> exec ctx_pkg.set_user_id('FOX_IN_SOCKS')
PL/SQL procedure successfully completed.
SQL> select * from v_23
2 /
no rows selected
SQL>
So, the challenges are:
to establish a token which you can use automatically to uniquely identify a user
to find a hook in your connecting code which can set the context each time the user gets a session
just as importantly, to find a hook in your dis-connecting code which can unset the context each time the user leaves a session
Also, remember to clear out the table once the user has finished with it.
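That cleanup can be as simple as deleting the current user's rows and then unsetting the context when the session ends (a sketch using the same hypothetical temp_23 and ctx_pkg names as above):

```sql
-- Drop this user's working set, then detach the context value.
begin
  delete from temp_23
   where user_id = ctx_pkg.get_user_id;
  ctx_pkg.clear_user_id;
  commit;
end;
/
```

For abandoned sessions that never run this, a scheduled job that purges rows older than some threshold would still be needed.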
When I first read this I thought "global temporary table" (GTT), and realized that would not help you in the slightest! This is because the data in a GTT is visible only within a session, and with a stateless web app, probably using connection pooling, there is no guaranteed relationship between application user and database session (one user might be handed different sessions on successive connections; one session will be handed to several different users). A regular temp table, as above, should do the trick.
It seems that on each iterative hit, the person (via Silverlight) is polling the same data (and a large amount, to boot).
I do believe that a temp table would suffice. Here is an AskTom thread that shows how to do this in a web environment. Keep in mind that the moment the data is stored it is aging and possibly stale, and there will need to be a cleanup job.
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:76812348057
Now, to tie it back to the user: I'm not 100% sure how to do this in Silverlight (assuming via ASP.NET?), but is the user authenticated before proceeding? If so, you ought to be able to take their credentials and use them as the key to query against (use their user name and/or SID as the primary key, and foreign-key it against the data table as described in the AskTom link).
http://www.codeproject.com/KB/silverlight/SL3WindowsIdentityName.aspx
This link appears to show how to get the current Silverlight user in a Windows-authenticated scheme.
hth

Resources