I have written a stored procedure and it takes a long time when I call it.
I use a temporary table in the SP.
Could that be the reason?
CREATE OR REPLACE PROCEDURE TEST_SP
IS
BEGIN
   INSERT INTO MYTEMP_table (A, B)
      SELECT id     AS CUSTOMER_NO,
             ACC_NO AS ACCOUNT_NO
        FROM myTable;

   UPDATE MYTEMP_table
      SET MYTEMP_table.A = ( SELECT MIN (BRH_DATE)
                               FROM CUSTOMER );

   UPDATE MYTEMP_table
      SET MYTEMP_table.B = ( SELECT MIN (SUBSTR (ENTRY_DATE, 0, 8))
                               FROM INFO );
.......
MYTEMP_table is a temporary table.
This code snippet looks woefully incomplete. It seems odd that you are filling the temp table with one query:
select id, acc_no from myTable
and then overwriting every row of a column with a single value:
UPDATE MYTEMP_table
   SET MYTEMP_table.A = ( SELECT MIN (BRH_DATE)
                            FROM CUSTOMER );
Your post is not clear, but hopefully you are using a global temporary table (memory based) rather than a physical table used for temporary storage.
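For reference, a global temporary table is declared roughly like this (a sketch only; the column datatypes here are placeholders, not taken from the question):
CREATE GLOBAL TEMPORARY TABLE MYTEMP_table (
   A   VARCHAR2(30),
   B   VARCHAR2(30)
) ON COMMIT DELETE ROWS;   -- rows are private to the session and vanish at commit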
Multiple writes to the same rows are a sure-fire way of slowing down the works (much more so in a physical table, but still slow either way). If possible, consider the following:
Use analytic functions or a more complex initial query to get all your writing done up front (see the sketch after this list)...
If you're not comfortable/familiar with running/reading explain plans, try running each SQL statement in a SQL editor manually to assess their individual performance...
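As a sketch of the first suggestion: assuming the final contents really are just the two table-wide minimums repeated for every source row (which is what the two UPDATEs above produce), the whole thing collapses into one INSERT with scalar subqueries, so each row is written only once:
INSERT INTO MYTEMP_table (A, B)
SELECT (SELECT MIN(BRH_DATE) FROM CUSTOMER),              -- computed once for the whole statement
       (SELECT MIN(SUBSTR(ENTRY_DATE, 0, 8)) FROM INFO)   -- computed once for the whole statement
  FROM myTable;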
I want to remove duplicate rows within my database.
I only want them removed if every field in that row matches another row within the same table.
I've researched how to use the Query wizard to find duplicate fields, but I haven't found a way to match the entire row.
Are you able to perform queries?
DELETE table_name
  FROM table_name
  LEFT OUTER JOIN (
       SELECT
           MIN(RowId) AS RowId,
           column_name1,
           column_name2,
           column_name3
       FROM
           table_name
       GROUP BY
           column_name1,
           column_name2,
           column_name3
  ) AS nonDuplicates ON
       table_name.RowId = nonDuplicates.RowId
WHERE
  nonDuplicates.RowId IS NULL
I have come across a strange requirement and really haven't a clue where to begin.
We have an Oracle database table that will be receiving data daily, and a CF application to interface with it.
What they would like is that when a user logs in, x amount of rows from the table are shown and essentially "locked" to that user, so that when another user logs in their x amount of rows are different and no one is working concurrently on the same row.
What I am guessing is a session write to a table claiming the rows, but any thoughts would be more than welcome.
There are a few ways to approach this problem. I will take a nibble here and tell you that one of them is to get into Oracle database procedures and the locking SELECT ... FOR UPDATE clause, which will immerse you in the nuances of sessions and how they work in Oracle; you would call the procedure using cfstoredproc.
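A rough, untested sketch of that approach (the table name mytable, the column id and the batch size of 10 are placeholders, not from the question): a cursor opened FOR UPDATE SKIP LOCKED lets each session fetch and lock a different batch of rows.
DECLARE
   CURSOR c_work IS
      SELECT id
        FROM mytable
         FOR UPDATE SKIP LOCKED;   -- rows already locked by other sessions are skipped
   l_id  mytable.id%TYPE;
BEGIN
   OPEN c_work;
   FOR i IN 1 .. 10 LOOP           -- claim at most 10 rows for this session
      FETCH c_work INTO l_id;
      EXIT WHEN c_work%NOTFOUND;
      DBMS_OUTPUT.put_line('Locked row ' || l_id);
   END LOOP;
   CLOSE c_work;
   -- the locked rows stay locked until this session commits or rolls back
END;
/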
There is another method. I have no idea about your coding environment or restrictions, nor do I know the user/system load considerations, so this is just a suggestion. Add a flag field to the table and make it a bit datatype or int, whatever. You select the rows in a cfquery, then update that list of ids (setting bitIntFlag = 1, or whatever you want to name this new field), which means 'this record is checked out'. You still have the group of records from the first query, which you loop out to the end user to work with, and later an update query sets them free again (bitIntFlag = 0). They are essentially locked: another user selects where bitIntFlag = 0, skipping your locked group, and marks their own selected group (updates them to 1). You can use cftransaction and two cfqueries like this.
<cftransaction action"begin">
<cfquery name="selectLock" datasource="#application.dsn#">
SELECT *
FROM (
SELECT *
FROM mytable
WHERE bitIntFlag = 0
ORDER BY
dbms_random.value
)
WHERE rownum <= 10
</cfquery>
<!---Now run your update--->
<cftry>
<cfquery name="updateLock" datasource="#application.dsn#">
UPDATE
myTable
SET
bitIntFlag = 1
WHERE
primaryKeyIDthing in #ValueList(selectLock.name)#
</cfquery>
<cfcatch type="database">
<cftransaction action="rollback"/>
</cfcatch>
<cftry>
<cftransaction action="commit"/>
</cftransaction>
<cfoutput query="selectLock">
#primaryKeyIDthing#<br>
</cfoutput>
(This code is untested but should get you started if you go down this route.)
When you are done, you run another update with cfquery that sets the flag back to zero to free up the records.
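The release step could be as simple as an update like this sketch (primaryKeyIDthing, bitIntFlag and the id list are placeholders for the rows the user finished with):
UPDATE myTable
   SET bitIntFlag = 0                          -- free the rows again
 WHERE primaryKeyIDthing IN (101, 102, 103);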
Again, this is a simple workaround that may or may not work for you. I don't know what kind of transactional intensity you are dealing with in your environment, but sometimes making things simple can work!
There is a schema, say 'A', in which there is a package called 'B' containing the function below. The function uses 'TIMESTAMP', which gives an error when compiling on 11g. I want to create a public synonym for TIMESTAMP. Can anyone please provide me the script for that?
FUNCTION generate_random_number
Return Number
IS
l_seq_no VARCHA2(6)
l_sys_date CHAR(10)
BEGIN
SELECT LTRIM(TO_CHAR(TIMESTAMP.NEXTVAL,'000000'), ' ')
INTO l_seq_no
from DUAL;
SELECT TO_CHAR(SYSDATE, 'H24:MI:SS')
INTO l_sys_date
from DUAL
TIMESTAMP is a reserved word, so the compiler interprets TIMESTAMP in your code as the datatype (so I guess the error you're getting is something like "nextval must be declared"). So whilst you can create a sequence called TIMESTAMP, it is extremely silly to do so; you should rename the sequence. Failing that, you can create a synonym (public or private) with a different name.
eg:
SQL> create sequence timestamp start with 1;
Sequence created.
SQL>
This sequence can be used in SQL but it cannot be used in PL/SQL (which is what the OP is trying to do); the function won't compile (it fails with a PLS-00302 error). So we must create a synonym for it:
SQL> create synonym t for timestamp;
Synonym created.
then use T in your code.
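For example, the first SELECT in the function would become something like this (a sketch using the synonym created above):
SELECT LTRIM(TO_CHAR(T.NEXTVAL, '000000'), ' ')
  INTO l_seq_no
  FROM DUAL;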
Your code also has numerous other typos: missing semicolons and a mistyped varchar2. Finally, CHAR(10) for the time means it will be blank-padded with two trailing spaces (as the length of the string will be 8 characters).
select *
  from ( select t_tmp_a.*, rownum t_tmp_id
           from ( select t.*, i.counts
                    from table1 t,
                         ( select id, count(id) counts
                             from table2
                            group by id ) i
                   where t.id = i.id
                     and t.kindid in (0, 1, 3)
                   order by t.id desc ) t_tmp_a
          where rownum <= 20 ) t_tmp_b
 where t_tmp_id >= 11;
table1 and table2 each have more than 2 million rows. This query takes 18s to execute, and before it runs we calculate the total count, which needs about another 7s, so the whole thing takes more than 25s. Any ideas how to optimize it?
Pagination is usually a mechanism for displaying a result to a human being. No human being wants to read two million rows of data.
So if this query is indeed presenting rows to a real person then what you need to address is reducing the size of the whole result to something which is human-sized. So you need to apply additional filters in the database and return a focused result set. Not only will your users thank you, so will your network administrator.
On the other hand, if the intended recipient of this data deluge is a computer or other mechanical device then just give it the whole thing. Machines mostly don't care about pages, or if they do (spreadsheets, printers, etc) they have built-in sub-routines to handle pagination for us.
So that leaves us with the problem that your original query takes a long time to execute. Without an explain plan or statistics (how many rows in table1 fit the search criteria? how restrictive are those values for kindid?) it is hard to solve this.
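For reference, the plan can be captured with the standard EXPLAIN PLAN / DBMS_XPLAN pair; a sketch against the inner join of the query above (adjust the statement as needed):
EXPLAIN PLAN FOR
   select t.*, i.counts
     from table1 t,
          ( select id, count(id) counts from table2 group by id ) i
    where t.id = i.id
      and t.kindid in (0, 1, 3);

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);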
"kindid is a type which only have three choose(0,1,3)"
Fnord. If KINDID can only have three choices what is the point of using it in the WHERE clause?
In fact, removing it from the WHERE clause may dramatically improve the performance of your query. Unless you have gathered histograms for that column, Oracle will assume that in (0,1,3) is somehow going to restrict the result set, whereas that is only going to be true if the majority of rows have a NULL in that column. If that is the case, it would be better to use kindid is not null.
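If the three values are unevenly distributed, a histogram on the column is what lets the optimizer see that. A sketch of gathering one (assuming table1 lives in the current schema):
BEGIN
   DBMS_STATS.gather_table_stats(
      ownname    => USER,
      tabname    => 'TABLE1',
      method_opt => 'FOR COLUMNS KINDID SIZE 254');   -- histogram on kindid only
END;
/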
I have finished my first real PL/SQL stored procedure, and it works as expected. I am new to PL/SQL; could you please point out anything wrong or any bad coding?
This code assumes a naming convention; for example, the 't_company' table uses 'companyId' as its primary key, and its type is number.
Thank you very much.
create or replace
package body test_erp AS
procedure init_data is
begin
logMessage('procedure init_data');
SAVEPOINT do_insert;
insert into t_company(companyId, companyName) values(gen_key('t_company'), 'IBM');
COMMIT;
exception
WHEN OTHERS THEN
rollback to do_insert;
logMessage('roll back , due to '|| SQLERRM);
end init_data;
end test_erp;
It will call this function:
create or replace
function gen_key(tblName varchar2)
return number is
l_key number := 1000;
l_tmpStr varchar(2000); -- not good, how to fix it ?
begin
l_tmpStr := substr(tblName, 3, length(tblName));
EXECUTE IMMEDIATE ' SELECT CASE WHEN MAX('||l_tmpStr||'Id) IS NULL THEN 1000 ELSE MAX('||l_tmpStr||'Id)+1 END FROM '|| tblName into l_key;
logmessage('gen primary key '|| tblName ||' '||l_key);
return l_key;
end;
Your gen_key function is rather problematic. Generating keys by doing a MAX(key)+1 is slow and will not work in a multiuser environment. Assuming you have two users, it is relatively easy for both users to see the same MAX(key) and try to insert rows with the same primary key.
Oracle provides sequences in order to efficiently generate primary keys in a multi-user environment. You would be much better served using sequences to generate your keys. Conventionally, you would create one sequence per table, i.e.
CREATE SEQUENCE company_seq;
Your INSERT statement would then be something like
insert into t_company(companyId, companyName) values(company_seq.nextval, 'IBM');
Or you could create a trigger on the table to automatically populate the primary key.
Additionally, while it is fine to catch exceptions in order to log them, you really want to re-raise that exception so that the caller is aware that the INSERT failed.
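A minimal sketch of init_data with the exception re-raised (logMessage and gen_key are the OP's own routines, unchanged):
procedure init_data is
begin
   logMessage('procedure init_data');
   SAVEPOINT do_insert;
   insert into t_company(companyId, companyName)
   values(gen_key('t_company'), 'IBM');
   COMMIT;
exception
   WHEN OTHERS THEN
      rollback to do_insert;
      logMessage('roll back, due to ' || SQLERRM);
      RAISE;   -- re-raise so the caller knows the insert failed
end init_data;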
Using a function like your gen_key is very slow, incorrect in a multi-user database, and very inefficient.
So my advice is to create a SEQUENCE, which is what is generally used for this. Then you should create a TRIGGER that generates the new PK for each INSERT, or add it directly with NEXTVAL.
So your SEQUENCE could look like this:
CREATE SEQUENCE YOUR_COMP_SEQ
MINVALUE 1
MAXVALUE 999999
START WITH 1
INCREMENT BY 1
NOCACHE
;
Then I recommend using the aforementioned TRIGGER:
CREATE OR REPLACE TRIGGER AUTOSET_ID_COMP
BEFORE INSERT ON t_company
FOR EACH ROW
BEGIN
SELECT YOUR_COMP_SEQ.NEXTVAL INTO :NEW.companyId FROM DUAL;
END;
And finally just run this query:
INSERT INTO t_company(companyName) VALUES('SomeValue');
If you don't want to create a TRIGGER, you can do it directly like this:
INSERT INTO t_company(companyId, companyName)
VALUES(YOUR_COMP_SEQ.NEXTVAL, 'SomeValue');
Note: Of course, you can create its own SEQUENCE for every TABLE and then a TRIGGER for each TABLE.
Note 2: Sequences are very good, but there is one issue: if you add, say, 20 rows to the table, the IDs are 1, 2, 3, ... and so on; if you then delete row 15, that ID 15 cannot be used anymore.
Update: the answer and solution were updated after a little discussion with @Ben, thanks.