Can Oracle database transactions help in this scenario?

I am working with an Oracle database (11g Release 2). Imagine multiple connections doing the following simultaneously:
1. Start a transaction
2. Check if a specific value exists in a table of unique values
3. If the value does not exist, insert it
4. Commit the transaction
It seems to me that the only way to prevent conflicts is to block connections from performing the above 4-step sequence while any other connection is currently performing the 4-step sequence.
Can transactions achieve this kind of broad locking/blocking in Oracle?
Thanks in advance for your answers and advice on how to best deal with this scenario.

Add a unique constraint, and implement an exception handler to get the next value and try again.
This assumes you're using PL/SQL.
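A minimal sketch of that approach, assuming a hypothetical table UNIQUE_VALUES with a unique constraint on its VAL column:

DECLARE
  p_val unique_values.val%TYPE := 42;  -- candidate value; hypothetical
BEGIN
  INSERT INTO unique_values (val) VALUES (p_val);
  COMMIT;
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN
    ROLLBACK;  -- another session inserted the same value first;
               -- choose the next candidate and retry here
END;
/

The unique constraint makes the database the arbiter of duplicates, so concurrent sessions never have to serialize the whole 4-step sequence.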
An alternative would be using an Oracle sequence with NOCACHE. Note, though, that even a NOCACHE sequence is not guaranteed to be gap-free (a rollback still consumes the value).
Another option is SELECT * FROM table_name FOR UPDATE, which locks the existing rows and blocks other sessions' writes (and their FOR UPDATE selects) until you commit. Note that in Oracle it does not block plain reads, and it cannot lock a row that does not exist yet.

Related

Is it okay to create indexes on a table while inserting?

Is it okay to create an index on a table while, say, some tasks are creating new rows in the table at the same time? Would there be any locking issues?
Example: a FEEDBACK table, creating an index on (Name, feedbackrule) while inserts are happening simultaneously. Is this bad? If so, why?
I'm assuming Oracle will simply not use this index while the inserts are happening, and will use it afterwards.
Normally, creating an index requires locking the table, so all the DML operations would block; and if there are active transactions on the table when you initiate the index creation, you'd likely get the error "ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired".
If the table is small, this may not be much of an issue - transactions would just be blocked for a few moments. But if it is very large it would be a bad idea to try creating an index while the table is in use.
However, if you are using Enterprise Edition, you can add the ONLINE keyword to your CREATE INDEX statement, which will allow transactions to proceed against the table while the index is building. It may still cause slower performance.
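A minimal sketch, using the table and columns from the question (the index name is made up):

CREATE INDEX feedback_name_rule_idx ON feedback (name, feedbackrule) ONLINE;

Even with ONLINE, the build takes brief locks at the start and at the end, so a very short pause for DML is still possible.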

"Tracking" (debugging) what is happening in a Firebird 2.5 database

I need some advice on how to handle the following:
Firebird 2.5 (I use Delphi XE2).
I have two structurally identical databases.
I have some triggers and stored procedures on the table I copy into.
I copy a few records from a table in one database to the other.
An AFTER INSERT trigger executes a stored procedure that recomputes some other records in the same table.
The databases are identical, but the AFTER INSERT logic works in one database and not in the other.
Both triggers do fire (for testing purposes I raised an exception in them, and the exception was raised).
So I want to find a way to trace what is happening and locate the problem.
How do I monitor/debug this situation?
Also, how do I track deadlocks?
Any advice?
Check whether the non-working trigger is actually active.
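A quick way to check this in Firebird is to query the RDB$TRIGGERS system table; RDB$TRIGGER_INACTIVE = 1 means the trigger is disabled (the table and trigger names below are hypothetical):

SELECT rdb$trigger_name, rdb$trigger_inactive
FROM rdb$triggers
WHERE rdb$relation_name = 'MYTABLE' AND rdb$system_flag = 0;

/* re-enable it if it turns out to be inactive */
ALTER TRIGGER my_after_insert_trigger ACTIVE;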

Sybase ASE remote row insert locking

I'm working on an application that accesses a Sybase ASE 15.0.2 server, where the current code accesses a remote database (via CIS) to insert a row using a proxy table definition (the destination table is a DOL - DRL table; the PK column is defined as identity and is always growing). The current code performs a select to check whether the row already exists, to avoid inserting duplicate data.
Since the remote table also has a PK defined, I understand that the PK verification will be done again prior to committing the row.
I'm planning to remove the select check, since it is effectively repeated by the PK verification, but I'm concerned that when receiving a file with many duplicates, the table may suffer unnecessary contention when the data is committed.
It's not clear to me whether Sybase ASE holds the last row and writes the data before checking for duplicates. Also, if the table is very big, I'm concerned about the time it will spend scanning the whole index to find duplicates.
I've found some documentation for SQL Anywhere, but not for ASE, at the following link:
http://dcx.sybase.com/1200/en/dbusage/insert-how-transact.html
The best I could find is the following explanation:
https://groups.google.com/forum/?fromgroups#!topic/comp.databases.sybase/tHnOqptD7X8
But it doesn't explain in detail how the row is locked (and whether there is any optimization to write it ahead of, or at the same time as, the PK check), nor whether it will waste a full PK lookup when I'm inserting a row whose PK is positively greater than the last committed row.
Thanks
Alex
Unlike SQL Anywhere, ASE has no wait_for_commit option; the primary key constraint is checked during the insert, not at commit time. As I understand your post, the problem is a mass insert from a file that may contain duplicates. The approach I'd suggest is to load the file into a temp table, check for duplicates, remove them, and then insert only the unique rows. Mass inserts are a lot faster, though they still check for primary key violations, and this way there is no rollback cost. An insert statement is always all or nothing: if even one row is a duplicate, the entire insert statement fails. Checking before inserting is more of an error-free approach than relying on the constraint for verification, because the constraint path is going to fail and the rollback is, again, costly.
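A sketch of that approach, with hypothetical table and column names; the #stage temp table would be bulk-loaded from the file (e.g. with bcp):

insert into target_table (pk_col, payload)
select s.pk_col, min(s.payload)   -- min() collapses duplicates within the file
from #stage s
where not exists (select 1
                  from target_table t
                  where t.pk_col = s.pk_col)   -- skip rows already in the target
group by s.pk_col

This keeps the insert all-or-nothing while guaranteeing it carries no duplicates, so the PK constraint never has to reject anything and no rollback cost is incurred.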
Thanks Mike
The link does have a very quick explanation of the insert from the CIS perspective. It's a variable to keep an eye on, given that CIS may become a significant time consumer if it performs data and syntax checking that is done again when CIS forwards the insert statement to the target server. I was afraid that CIS could have some influence beyond the network traffic/time on the locking/PK checking.
Raju
I do agree that the PK duplication can be avoided by checking whether the row already exists with a select, done in a batch, but I'm currently looking for a stopgap solution, and that may be to perform the insert command in batches of about 50 rows and leave the duplicate-key check to the PK.
Hopefully the PK check will then be done over a join of the 50 newly inserted rows, and thus avoid traversing the index for each single row...
I'll try to test this and comment back.
Alex

Connect to a Vertica database using ODBC in ASP.NET and return a dataset

I have inserted some rows into a Vertica database using the INSERT command in a terminal. The rows show up when I read them back with a SELECT there, but I am not able to see them when I connect to the database using an ODBC connection. I am, however, able to find the rows after I restart Vertica. Please help me solve this problem.
Did you COMMIT; after you inserted the rows? It's a simple thing, but one that I've overlooked myself many times in the past.
To elaborate a bit beyond Bobby W's response.
When you perform an insert, the data is visible to your current session. This lets a user perform operations on 'temporary' data without affecting or corrupting what other people are doing; it is session-based data. That is why you can insert and see the data, but are unable to see it when connecting from a second source.
To 'commit' the data to the database you need to issue the COMMIT; statement as Bobby W mentioned.
Failing to issue COMMIT; is something I've also overlooked more than a few times.
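A minimal illustration with a hypothetical table: until the COMMIT runs, only the inserting session sees the row.

INSERT INTO my_table (id, val) VALUES (1, 'first row');
COMMIT;   -- without this, the ODBC session will not see the row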
To clarify, you can see the rows after you restart? Are you connecting to the database as the same user from ODBC and vsql?
By default Vertica's isolation level is READ COMMITTED, which means other sessions read only data that is COMMITTED. If you've inserted but not committed, then at this level other sessions cannot read the data you've inserted.

What is the fastest way to insert data into an Oracle table?

I am writing a data conversion in PL/SQL that processes data and loads it into a table. According to the PL/SQL Profiler, one of the slowest parts of the conversion is the actual insert into the target table. The table has a single index.
To prepare the data for load, I populate a variable using the rowtype of the table, then insert it into the table like this:
insert into mytable values r_myRow;
It seems that I could gain performance by doing the following:
Turn logging off during the insert
Insert multiple records at once
Are these methods advisable? If so, what is the syntax?
It's much better to insert a few hundred rows at a time, using PL/SQL collections and FORALL to bind into the insert statement.
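A sketch of that pattern, reusing the table and row variable from the question:

DECLARE
  TYPE t_rows IS TABLE OF mytable%ROWTYPE;
  l_rows t_rows := t_rows();
BEGIN
  -- ... populate l_rows with a few hundred prepared rows ...
  FORALL i IN 1 .. l_rows.COUNT
    INSERT INTO mytable VALUES l_rows(i);  -- one bind round trip for the whole batch
  COMMIT;
END;
/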
Also be careful with how you construct the PL/SQL tables. If at all possible, prefer to instead do all your transforms directly in SQL using "INSERT INTO t1 SELECT ..." as doing row-by-row operations in PL/SQL will still be slower than SQL.
In either case, you can also use direct-path inserts by using INSERT /*+APPEND*/, which basically bypasses the DB cache and directly allocates and writes new blocks to data files. This can also reduce the amount of logging, depending on how you use it. This also has some implications, so please read the fine manual first.
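For example (a sketch; staging_table is a made-up source):

INSERT /*+ APPEND */ INTO mytable
SELECT * FROM staging_table;
COMMIT;  -- a direct-path insert must be committed before the same
         -- session can query the table again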
Finally, if you are truncating and rebuilding the table it may be worthwhile to first drop (or mark unusable) and later rebuild indexes.
Regular insert statements are the slowest way to get data into a table and are not meant for bulk inserts. The following article covers a lot of different techniques for improving performance: http://www.dba-oracle.com/oracle_tips_data_load.htm
Drop the index, then insert the rows, then re-create the index.
If dropping the index doesn't speed things up enough, consider Oracle SQL*Loader:
http://www.oracle.com/technology/products/database/utilities/htdocs/sql_loader_overview.html
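A minimal SQL*Loader control file sketch (the file and column names are hypothetical):

LOAD DATA
INFILE 'mydata.csv'
APPEND INTO TABLE mytable
FIELDS TERMINATED BY ','
(col1, col2, col3)

It would be run with something like: sqlldr userid=scott control=mytable.ctl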
Suppose you have columns eid, ename, sal, and job. First create the table:
SQL> create table tablename (eid number, ename varchar2(20), sal number, job char(10));
Now insert data using SQL*Plus substitution variables:
SQL> insert into tablename values (&eid, '&ename', &sal, '&job');
Check this link:
http://www.dba-oracle.com/t_optimize_insert_sql_performance.htm
The main points to consider for your case:
Use the APPEND hint, as this will append directly into the table above the high-water mark instead of using the freelists. If you can afford to turn off logging, use the APPEND hint together with NOLOGGING.
Use a bulk insert instead of iterating in PL/SQL.
Use SQL*Loader to load the data directly into the table if you are getting the data from a file feed.
Here are my recommendations on fast inserts (a sketch of the trigger and index steps follows this list).
Triggers - Disable any triggers associated with the table; re-enable them after the inserts are complete.
Index - Drop the index and re-create it after your inserts are complete.
Stale stats - Re-analyze table and index stats afterwards.
Index de-fragmentation - Rebuild the index if needed.
No logging - Insert using INSERT /*+ APPEND */ with NOLOGGING (Oracle only). This is a risky approach: no redo is generated for the operation, so it cannot be recovered from an earlier backup - take a backup of the table before you start, and don't try this on live tables. Check whether your database has a similar option.
Parallel insert - Running the insert in parallel will get the job done faster.
Bulk insert - Use bulk (array) inserts rather than row-by-row statements.
Constraints - Not much overhead during inserts, but still a good idea to check if it is still slow even after step 1.
You can learn more at http://www.dbarepublic.com/2014/04/slow-insert.html
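A sketch of the trigger and index steps above; the table, index, and trigger names are made up:

ALTER TABLE mytable DISABLE ALL TRIGGERS;
ALTER INDEX mytable_idx UNUSABLE;    -- or drop it and re-create it later

-- ... run the bulk inserts here ...

ALTER INDEX mytable_idx REBUILD;
ALTER TABLE mytable ENABLE ALL TRIGGERS;
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MYTABLE');  -- refresh the stale stats

Note that with the index marked UNUSABLE, the inserts only proceed if SKIP_UNUSABLE_INDEXES is TRUE (the default in 11g).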
Maybe one of your best options is actually to avoid Oracle's own utilities as much as possible.
I've been baffled by this myself, but very often a Java process can outperform many of Oracle's utilities, which either use OCI (read: SQL*Plus) or take up so much of your time to get right (read: SQL*Loader).
This doesn't prevent you from using specific hints either (like /*+ APPEND */).
I've been pleasantly surprised each time I've turned to that kind of solution.
Cheers,
Rollo
