I have a Silverlight app that makes multiple (often concurrent) asynchronous calls to an Oracle database. The largest database table stores around 5 million records. Below is a summary of how the Silverlight app works, followed by my question.
The user sets query criteria to select a particular group of records, usually 500 to 5000 records in a group.
An asynchronous WCF call is made to the database to retrieve the values in four fields (latitude, longitude, heading, and time offset) over the selected group of records (meaning the call returns anywhere from 2k to 20k floating-point numbers). These values are used to plot points on a map in the browser.
From here, the user can choose to graph the values in one or more of an additional twenty or so fields associated with the initial group of records. The user clicks on a field name, and another async WCF call is made to retrieve the field values.
My question is this: does it make sense in this case to store the records selected in step one in a temp table (or materialized view) in order to speed up and simplify the data access in step three?
If so, can anyone give me a hint regarding a good way to maintain the browser-to-temp-table link for a user's session?
Right now, I am just re-querying the 5 million points each time the user selects a new field to graph, which works until the user selects three or more fields at once; at that point the async calls time out before they can return.
We can do this using a CONTEXT. This is a namespace in session memory which we can use to store values. Oracle comes with a default namespace, 'USERENV', but we can define our own. The context has to be created by a user with the CREATE ANY CONTEXT privilege; this is usually a DBA. The CREATE CONTEXT statement references a PACKAGE which sets and gets values in the namespace, but the package does not have to exist yet in order for the statement to succeed:
SQL> create context user_ctx using apc.ctx_pkg
2 /
Context created.
SQL>
Now let's create the package:
SQL> create or replace package ctx_pkg
2 as
3 procedure set_user_id(p_userid in varchar2);
4 function get_user_id return varchar2;
5 procedure clear_user_id;
6 end ctx_pkg;
7 /
Package created.
SQL>
There are three methods: one each to set, get and unset a value in the namespace. Note that we can use one namespace to hold different variables. I am just using this package to set one variable (USER_ID) in the USER_CTX namespace.
SQL> create or replace package body ctx_pkg
2 as
3 procedure set_user_id(p_userid in varchar2)
4 is
5 begin
6 DBMS_SESSION.SET_CONTEXT(
7 namespace => 'USER_CTX',
8 attribute => 'USER_ID',
9 value => p_userid);
10 end set_user_id;
11
12 function get_user_id return varchar2
13 is
14 begin
15 return sys_context('USER_CTX', 'USER_ID');
16 end get_user_id;
17
18 procedure clear_user_id
19 is
20 begin
21 DBMS_SESSION.CLEAR_CONTEXT(
22 namespace => 'USER_CTX',
23 attribute => 'USER_ID');
24 end clear_user_id;
25
26 end ctx_pkg;
27 /
Package body created.
SQL>
So, how does this solve anything? Here is a table for the temporary storage of data. I'm going to add a column which will hold a token to identify the user. When we populate the table the value for this column will be provided by CTX_PKG.GET_USER_ID():
SQL> create table temp_23 as select * from big_table
2 where 1=0
3 /
Table created.
SQL> alter table temp_23 add (user_id varchar2(30))
2 /
Table altered.
SQL> create unique index t23_pk on temp_23(user_id, id)
2 /
Index created.
SQL>
... and over that table I create a view:
create or replace view v_23 as
select
id
, col1
, col2
, col3
, col4
from temp_23
where user_id = ctx_pkg.get_user_id
/
Now, when I want to store some data in the table I need to set the context with a value which uniquely identifies my user.
SQL> exec ctx_pkg.set_user_id('APC')
PL/SQL procedure successfully completed.
SQL>
This statement populates the temporary table with twenty random rows:
SQL> insert into temp_23
2 select * from
3 ( select b.*, ctx_pkg.get_user_id
4 from big_table b
5 order by dbms_random.random )
6 where rownum <= 20
7 /
20 rows created.
SQL>
I can retrieve those rows by querying the view. But when I change my USER_ID and run the same query I cannot see them any more:
SQL> select * from v_23
2 /
ID COL1 COL2 COL3 COL4
---------- ---------- ------------------------------ --------- ----------
277834 1880 GV_$MAP_EXT_ELEMENT 15-OCT-07 4081
304540 36227 /375c3e3_TCPChannelReaper 15-OCT-07 36
1111897 17944 /8334094a_CGCast 15-OCT-07 17
1364675 42323 java/security/PublicKey 15-OCT-07 42
1555115 3379 ALL_TYPE_VERSIONS 15-OCT-07 3
2073178 3355 ALL_TYPE_METHODS 15-OCT-07 3
2286361 68816 NV 15-OCT-07 68
2513770 59414 /5c3965c8_DicomUidDoc 15-OCT-07 59
2560277 66973 MGMT_MNTR_CA 15-OCT-07 66
2700309 45890 /6cc68a64_TrustManagerSSLSocke 15-OCT-07 45
2749978 1852 V_$SQLSTATS 15-OCT-07 6395
2829080 24832 /6bcb6225_TypesTypePair 15-OCT-07 24
3205157 55063 SYS_NTsxSe84BlRX2HiXujasKy/w== 15-OCT-07 55
3236186 23830 /de0b4d45_BaseExecutableMember 15-OCT-07 23
3276764 31296 /a729f2c6_SunJCE_n 15-OCT-07 31
3447961 60129 HHGROUP 15-OCT-07 60
3517106 38204 java/awt/im/spi/InputMethod 15-OCT-07 38
3723931 30332 /32a30e8e_EventRequestManagerI 15-OCT-07 30
3877332 53700 EXF$XPVARCLST 15-OCT-07 53
4630976 21193 oracle/net/nl/NetStrings 15-OCT-07 21
20 rows selected.
SQL> exec ctx_pkg.set_user_id('FOX_IN_SOCKS')
PL/SQL procedure successfully completed.
SQL> select * from v_23
2 /
no rows selected
SQL>
So, the challenges are:
to establish a token which you can use automatically to uniquely identify a user
to find a hook in your connecting code which can set the context each time the user gets a session
just as importantly, to find a hook in your disconnecting code which can unset the context each time the user leaves a session
Also, remember to clear out the table once the user has finished with it.
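For what it's worth, here is a minimal sketch of how the per-session wiring might look, reusing CTX_PKG, TEMP_23 and V_23 from above; where exactly these calls go depends on your connection-pooling setup:
-- when a pooled session is handed to a user (e.g. at the start of a request)
begin
  ctx_pkg.set_user_id('APC');            -- whatever token identifies the application user
end;
/
-- queries during the request see only that user's rows
select * from v_23;
-- when the user is finished, remove their rows and clear the context
begin
  delete from temp_23
   where user_id = ctx_pkg.get_user_id;
  ctx_pkg.clear_user_id;
end;
/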
When I first read this I thought 'global temporary table' (GTT), then realized that would not help you in the slightest! This is because the data in a GTT is visible only within a single session, and with a stateless web app, probably using connection pooling, there is no guaranteed relationship between application user and database session (one user might be handed different sessions on successive connections, and one session will be handed to several different users). A permanent table used for temporary storage, as shown above, should do the trick.
It seems that on each iterative hit, the person (via Silverlight) is polling the same data (and a large amount of it, to boot).
I do believe that a temp table would suffice. Here is an AskTom thread that shows how to do this in a web environment. Keep in mind that the moment the data is stored it starts aging and may go stale, so there will need to be a cleanup job.
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:76812348057
Now, to tie it back to the user: I'm not 100% sure how to do this in Silverlight (assuming it is hosted via ASP.NET?). Is the user authenticated prior to proceeding? If so, you ought to be able to take their credentials and use those as the source to query against (use their user name and/or SID as the primary key and foreign-key it against the data table, as described in the AskTom link).
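For what it's worth, a minimal sketch of that keyed approach; every table and column name below is illustrative rather than taken from your schema, and I'm assuming the 5-million-row table has a numeric primary key called id:
create table user_selections
( user_name  varchar2(128)                     -- the authenticated user name or SID
, record_id  number
, constraint user_selections_pk primary key (user_name, record_id)
);
-- step 1: save the ids of the records the user's criteria selected
insert into user_selections (user_name, record_id)
select :user_name, id
  from map_points                              -- stand-in for the real 5-million-row table
 where latitude between :lat_min and :lat_max; -- stand-in for the real criteria
-- later graphing calls join back to the saved selection instead of re-querying everything
select p.id, p.some_field                      -- some_field = whichever field the user graphs
  from map_points p
  join user_selections s on s.record_id = p.id
 where s.user_name = :user_name;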
http://www.codeproject.com/KB/silverlight/SL3WindowsIdentityName.aspx
This link appears to show how to get the current Silverlight user in a Windows-authenticated scheme.
hth
Related
I am new to Oracle Forms. I have connected my form to the database, then created a data block using the Data Block Wizard, choosing my database table ORDERS. I have inserted values into this ORDERS table in SQL*Plus. After creating the ORDERS data block I run my form through the browser and click Execute Query on the menu, but it is not fetching the data from my database table. It says,
FRM-40350: Query caused no records to be retrieved. But in SQL*Plus I have checked that the row was created successfully and the columns are filled with values, and my form is connected to my database. So why is Execute Query not working? How can I fix this?
If you're connected to the same user in both SQL*Plus and Forms (you probably are, otherwise Forms wouldn't see that table and you wouldn't be able to base a data block on it), you probably didn't COMMIT after inserting the row(s) in SQL*Plus.
The code you posted should also be fixed (invalid datatype, missing column name, don't insert strings into a DATE column):
SQL> CREATE TABLE ORDERS(
2 order_id NUMBER(20),
3 customer_id NUMBER(20),
4 order_date DATE, -- NUMBER(20), --> date, not number
5 order_status VARCHAR2(20),
6 order_mode VARCHAR2 (200),
7 SALES_REP_ID NUMBER (20) );
Table created.
SQL> INSERT INTO orders (
2 order_id,
3 customer_id,
4 order_date,
5 order_status,
6 order_mode,
7 sales_rep_id
8 ) VALUES (
9 0001,
10 01,
11 date '2008-02-11', -- not '11-FEB-2008', it is a STRING
12 '5',
13 'direct',
14 158
15 );
1 row created.
This is what you're missing:
SQL> COMMIT;
Commit complete.
SQL>
I created a table but I forgot to add a sequence to one of the primary keys; it's a sequence on a form page. I just can't find anything about it. Is it possible, or do I have to do the form all over again?
I tried to replace the PK, but it doesn't give me the option to add the sequence when creating a new one.
I searched everywhere and asked the support in chat (which didn't really help, since it's not their job).
All I could find was this and this.
I'd suggest you skip Apex in this matter and do the following. Presume this is your table:
SQL> create table test
2 (id number constraint pk_test primary key,
3 name varchar2(20)
4 );
Table created.
This is the sequence:
SQL> create sequence myseq;
Sequence created.
As you forgot to specify the PK source while creating the Apex form page, never mind; let the database handle it. How? Create a BEFORE INSERT trigger:
SQL> create or replace trigger trg_bi_test
2 before insert on test
3 for each row
4 when (new.id is null)
5 begin
6 :new.id := myseq.nextval;
7 end trg_bi_test;
8 /
Trigger created.
Let's test it: I'm inserting only the NAME (which is what your Apex Form will be doing):
SQL> insert into test (name) values ('Littlefoot');
1 row created.
What are the table's contents?
SQL> select * from test;
ID NAME
---------- --------------------
1 Littlefoot
SQL>
See? The trigger automatically inserted the ID (primary key) column value.
If it were an Interactive Grid (which lets you insert several records at a time):
SQL> insert into test (name)
2 select 'Bigfoot' from dual union all
3 select 'FAD' from dual;
2 rows created.
SQL> select * from test;
ID NAME
---------- --------------------
1 Littlefoot
2 Bigfoot
3 FAD
SQL>
Works just fine.
And here's another benefit: you don't have to modify the Apex application at all.
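As an aside, if the database is Oracle 12c or later, an identity column is an alternative that needs neither the sequence nor the trigger. A minimal sketch on the same TEST table (you would drop the earlier TEST table first, or pick another name):
create table test
( id   number generated by default on null as identity
       constraint pk_test primary key
, name varchar2(20)
);
insert into test (name) values ('Littlefoot');   -- id is assigned automatically
The Apex form still only has to supply NAME, just as with the trigger-based approach.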
I have 3 tables that are related to each other:
ACCOUNTS
CARDS
TRANSACTIONS
I want to change the amount of money in the account every time I execute a new transaction; the account value should decrease with each new movement.
I tried writing this trigger:
create or replace trigger ceva_trig1
before insert on miscari
for each row
declare
new_val micari.valoare%tipe := new.valoare;
begin
update conturi
set sold = sold - new_val
where nrcont = (select nrcont
from conturi
join carti_de_credit on conturi.nrcont = carti_de_credit.nrcont
join miscari on carti_de_credit.nr_card = miscari.nrcard)
and sold >= new_val;
end;
Can anyone help me correct the syntax that crashes here?
I've created those tables with a minimal number of columns, just to make the trigger compile.
SQL> create table conturi
2 (sold number,
3 nrcont number
4 );
Table created.
SQL> create table miscari
2 (valoare number,
3 nrcard number
4 );
Table created.
SQL> create table carti_de_credit
2 (nrcont number,
3 nr_card number
4 );
Table created.
Trigger:
SQL> create or replace trigger ceva_trig1
2 before insert on miscari
3 for each row
4 begin
5 update conturi c
6 set c.sold = c.sold - :new.valoare
7 where c.nrcont = (select r.nrcont
8 from carti_de_credit r
9 where r.nrcont = c.nrcont
10 and r.nr_card = :new.nrcard
11 )
12 and c.sold >= :new.valoare;
13 end;
14 /
Trigger created.
SQL>
How does it differ from your code? Like this:
SQL> create or replace trigger ceva_trig1
2 before insert on miscari
3 for each row
4 declare
5 new_val micari.valoare%tipe := new.valoare;
6 begin
7 update conturi
8 set sold = sold - new_val
9 where nrcont = (select nrcont
10 from conturi
11 join carti_de_credit on conturi.nrcont = carti_de_credit.nrcont
12 join miscari on carti_de_credit.nr_card = miscari.nrcard)
13 and sold >= new_val;
14 end;
15 /
Warning: Trigger created with compilation errors.
SQL> show err
Errors for TRIGGER CEVA_TRIG1:
LINE/COL ERROR
-------- -----------------------------------------------------------------
2/11 PL/SQL: Item ignored
2/26 PLS-00208: identifier 'TIPE' is not a legal cursor attribute
4/3 PL/SQL: SQL Statement ignored
10/15 PL/SQL: ORA-00904: "NEW_VAL": invalid identifier
10/15 PLS-00320: the declaration of the type of this expression is incomplete or malformed
SQL>
Explained:
it isn't tipe but type
new column values are referenced with a colon, i.e. :new.valoare
you shouldn't make typos regarding table & column names; it is miscari, not micari
it is bad practice to write a query which references the same table (miscari, line #12) that the trigger is created for. As it is being changed, you can't select values from it while it is mutating
lucky you, you don't have to do that at all. How? Have a look at my code.
Attempting to maintain a running total for the transactions in one table in another table is always a bad idea. Admittedly, in a very few cases it is necessary, but it should be the design of last resort, not an initial one; even when necessary it's still a bad idea and therefore requires much more processing and complexity.
In this instance, after you correct all the errors @Littlefoot points out, your real problems begin. What do you do when (using Littlefoot's table definitions):
I delete a row from miscari?
I update a row in miscari?
The subselect for nrcont returns 0 rows?
The condition sold >= new_val is False?
If any of these conditions occurs, the value of sold in conturi is incorrect and may not be correctable from the values in the source table, miscari. And that list may be just the beginning of the issues you face.
Suggestion: abandon the idea of keeping a running account of transaction values. Instead, derive it when needed. You can create a view that does that and select from the view.
So maybe, instead of "create table conturi ...", create a view that derives the current balance.
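A minimal sketch of such a view, reusing Littlefoot's table definitions from above and assuming the sold column in conturi now holds only the opening balance (the view and column names here are illustrative):
create or replace view v_sold_curent as
select c.nrcont
     , c.sold
       - nvl( (select sum(m.valoare)
                 from miscari m
                 join carti_de_credit r on r.nr_card = m.nrcard
                where r.nrcont = c.nrcont), 0 ) as sold_curent
  from conturi c;
Every query of v_sold_curent then reflects the latest transactions, with no trigger at all.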
How do I create an Oracle table with an auto-increment column such that, whenever an existing value is inserted again, it increments the counter, and otherwise it starts a new count?
For instance, suppose I have columns for phone number and status.
There should be another column named counter on which the auto-increment feature will be present.
Whenever an existing phone number is inserted again the counter must be incremented, and if a new value is inserted then the counter should start with a new initial value for that number.
It depends on how you want to insert the data. If you are going to be inserting many rows at the same time, then try a MERGE statement.
Join on the phone number: if it is found, increment the counter column value; otherwise set the counter to 1.
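A minimal sketch of such a MERGE; the table name phone_counts, its columns and the bind variables are assumptions for illustration:
merge into phone_counts t
using (select :phone_number as phone_number, :status as status from dual) s
   on (t.phone_number = s.phone_number)
when matched then
  update set t.counter = t.counter + 1          -- existing number: bump the counter
when not matched then
  insert (phone_number, status, counter)
  values (s.phone_number, s.status, 1);         -- new number: start the count at 1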
If you are going to be inserting one row at a time then this is best done in the code that performs an insert.
EDIT: I did not think this through. Now that I have, I think it is unnecessary to use a counter column.
If you are going to insert phone numbers multiple times anyway, why don't you simply count each phone number? It doesn't have to be stored.
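A minimal sketch of deriving the count on demand instead, assuming a hypothetical phone_numbers table:
select phone_number, count(*) as counter
  from phone_numbers
 group by phone_number;
-- or, keeping one row per insert while still showing the count:
select phone_number, status,
       count(*) over (partition by phone_number) as counter
  from phone_numbers;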
You can't create a table like that.
You can, however, add your own logic in the place where you INSERT new rows; that is, the logic lives outside the table itself. You can also go the route of a TRIGGER.
Additionally, you may wish to simply have your ID be a generated unique GUID and compute this duplicate counter only when it is needed, using ROW_NUMBER() OVER, like EMP_ID in this example from the Oracle website:
SELECT department_id, last_name, employee_id, ROW_NUMBER()
OVER (PARTITION BY department_id ORDER BY employee_id) AS emp_id
FROM employees;
DEPARTMENT_ID LAST_NAME EMPLOYEE_ID EMP_ID
------------- ------------------------- ----------- ----------
10 Whalen 200 1
20 Hartstein 201 1
20 Fay 202 2
30 Raphaely 114 1
30 Khoo 115 2
30 Baida 116 3
30 Tobias 117 4
30 Himuro 118 5
30 Colmenares 119 6
For auto-increment, you can create a sequence as below.
CREATE SEQUENCE name_of_sequence
START WITH 1
INCREMENT BY 1;
For the second part of your question, you can define a trigger that automatically populates the primary key value using the above sequence.
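A minimal sketch of such a trigger; the phone_numbers table and its columns are assumptions for illustration:
create table phone_numbers
( id            number constraint phone_numbers_pk primary key
, phone_number  varchar2(20)
, status        varchar2(20)
);
create or replace trigger trg_bi_phone_numbers
before insert on phone_numbers
for each row
when (new.id is null)
begin
  :new.id := name_of_sequence.nextval;   -- the sequence created above
end;
/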
I'm using Oracle on the database server, from an XP client, using VB6 and ADO. In one transaction, I'm inserting one record into a parent table, which has a trigger and sequence to create a unique record ID; that record ID is then used for the relationship to a child table, for a variable number of inserts into the child table. For performance, this is being sent in one execute command from my client app. For instance (simplified example):
declare Recordid int;
begin
insert into ParentTable (field list) Values (data list);
Select ParentTableSequence.currVal into Recordid from dual;
insert into ChildTable (RecordID, field list) Values (Recordid, data list);
insert into ChildTable (RecordID, field list) Values (Recordid, data list);
... multiple, variable number of additional ChildTable inserts
commit;
end;
This is working fine. My question is: I also need to return to the client the Recordid that was created for the inserts. On SQL Server, I can add something like a select to Scope_Identity() after the commit to return a recordset to the client with the unique id.
But how can I do something similar for Oracle (doesn't have to be a recordset, I just need that long integer value)? I've tried a number of things based on results from searching the 'net, but have failed in finding a solution.
These two lines can be compressed into a single statement:
-- insert into ParentTable (field list) Values (data list);
-- Select ParentTableSequence.currVal into Recordid from dual;
insert into ParentTable (field list) Values (data list)
returning ParentTable.ID into Recordid;
If you want to pass the ID back to the calling program you will need to define your program as a stored procedure or function, returning Recordid as an OUT parameter or a RETURN value respectively.
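For example, a minimal sketch of that kind of procedure; the column names (name on the parent, item on the child) are assumptions, and the parent's ID is taken to come from your existing trigger and sequence:
create or replace procedure add_parent_with_children
( p_name     in  varchar2
, p_item     in  varchar2
, p_recordid out number )
as
begin
  insert into ParentTable (name)
  values (p_name)
  returning id into p_recordid;              -- id populated by the sequence/trigger
  insert into ChildTable (recordid, item)
  values (p_recordid, p_item);
end add_parent_with_children;
/
From ADO you would then bind p_recordid as an output parameter and read it back after execution.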
Edit
MarkL commented:
"This is more of an Oracle PL/SQL question than anything else, I believe."
I confess that I know nothing about ADO, so I don't know whether the following example will work in your case. It involves building some infrastructure which allows us to pass an array of values into a procedure. The following example creates a new department, promotes an existing employee to manage it and assigns two new hires.
SQL> create or replace type new_emp_t as object
2 (ename varchar2(10)
3 , sal number (7,2)
4 , job varchar2(10));
5 /
Type created.
SQL>
SQL> create or replace type new_emp_nt as table of new_emp_t;
2 /
Type created.
SQL>
SQL> create or replace procedure pop_new_dept
2 (p_dname in dept.dname%type
3 , p_loc in dept.loc%type
4 , p_mgr in emp.empno%type
5 , p_staff in new_emp_nt
6 , p_deptno out dept.deptno%type)
7 is
8 l_deptno dept.deptno%type;
9 begin
10 insert into dept
11 (dname, loc)
12 values
13 (p_dname, p_loc)
14 returning deptno into l_deptno;
15 update emp
16 set deptno = l_deptno
17 , job = 'MANAGER'
18 , mgr = 7839
19 where empno = p_mgr;
20 forall i in p_staff.first()..p_staff.last()
21 insert into emp
22 (ename
23 , sal
24 , job
25 , hiredate
26 , mgr
27 , deptno)
28 values
29 (p_staff(i).ename
30 , p_staff(i).sal
31 , p_staff(i).job
32 , sysdate
33 , p_mgr
34 , l_deptno);
35 p_deptno := l_deptno;
36 end pop_new_dept;
37 /
Procedure created.
SQL>
SQL> set serveroutput on
SQL>
SQL> declare
2 dept_staff new_emp_nt;
3 new_dept dept.deptno%type;
4 begin
5 dept_staff := new_emp_nt(new_emp_t('MARKL', 4200, 'DEVELOPER')
6 , new_emp_t('APC', 2300, 'DEVELOPER'));
7 pop_new_dept('IT', 'BRNO', 7844, dept_staff, new_dept);
8 dbms_output.put_line('New DEPTNO = '||new_dept);
9 end;
10 /
New DEPTNO = 70
PL/SQL procedure successfully completed.
SQL>
The primary keys for both DEPT and EMP are assigned through triggers. The FORALL syntax is a very efficient way of inserting records (it also works for UPDATE and DELETE). This could be written as a FUNCTION to return the new DEPTNO instead, but it is generally considered better practice to use a PROCEDURE when inserting, updating or deleting.
That would be my preferred approach but I admit it's not to everybody's taste.
Edit 2
With regard to performance, bulk operations using FORALL will definitely perform better than a handful of individual inserts. In SQL, set operations are always preferable to record-by-record processing. However, if we are dealing with only a handful of records each time, it can be hard to notice the difference.
Building a PL/SQL collection (what you think of as a temporary table in SQL Server) can be expensive in terms of memory. This is especially true if there are many users running the code, because it comes out of the session-level allocation of memory, not the Shared Global Area. When we're dealing with a large number of records it is better to populate the array in chunks, perhaps using the BULK COLLECT syntax with a LIMIT clause.
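A minimal sketch of that chunked pattern, using the BIG_TABLE from earlier in this thread purely as an example source (the per-row work is a placeholder):
declare
  cursor c_src is
    select id from big_table;
  type id_tab_t is table of big_table.id%type;
  l_ids id_tab_t;
begin
  open c_src;
  loop
    fetch c_src bulk collect into l_ids limit 1000;   -- 1000 rows per chunk
    exit when l_ids.count = 0;
    forall i in 1 .. l_ids.count
      update big_table
         set col1 = col1                              -- placeholder: the real DML goes here
       where id = l_ids(i);
  end loop;
  close c_src;
end;
/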
The Oracle online documentation set is pretty good. The PL/SQL Developer's Guide has a whole chapter on Collections. Find out more.