I have a global temporary table which is set as ON COMMIT DELETE ROWS. How does it behave under concurrency? I mean, what happens if another session wants to use that global temporary table? The answer is probably not "they share the same data".
Now, if my guess is correct :), is the table locked until the first connection commits, or does the DBMS create a global temp table for each connection (something like an instance of the table)?
From the documentation:
The data in a temporary table is visible only to the session that inserts the data into the table.
Each session has its own logically independent copy of the temporary table.
Since you cannot see other sessions' data, and since Oracle deals with locks at the row level, you cannot be blocked by other sessions' DML. Concurrent DML (INSERT, DELETE, UPDATE) won't affect other sessions.
Only DDL will need a lock on the table (e.g. ALTER TABLE ...).
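For illustration, a minimal sketch of the behaviour (table and column names are invented here):

-- The definition is shared by all sessions; the data is not.
CREATE GLOBAL TEMPORARY TABLE gtt_demo (
  id  NUMBER,
  val VARCHAR2(30)
) ON COMMIT DELETE ROWS;

-- Session 1:
INSERT INTO gtt_demo VALUES (1, 'session one');
SELECT COUNT(*) FROM gtt_demo;  -- returns 1

-- Session 2, at the same time, no blocking:
INSERT INTO gtt_demo VALUES (2, 'session two');
SELECT COUNT(*) FROM gtt_demo;  -- returns 1: only its own row

-- When session 1 commits, its rows vanish; session 2's row is untouched.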
Related
It is a well-known fact that in an Oracle database it is not possible to make a transaction out of multiple DDL statements.
However, is there any way to lock a specific set of database objects within the current connection so that after a DDL query is executed, all locks are held until they are explicitly released?
The obvious solution of this kind doesn't work, because executing a DDL statement automatically commits the transaction, and with it the locks are released:
LOCK TABLE x ....;
LOCK TABLE y ....;
ALTER TABLE x ....; -- Does not work properly since table locks are released here
ALTER TABLE y ....;
ALTER TABLE x ....;
COMMIT;
The DBMS_LOCK option doesn't work either, because it provides only an advisory lock: a concurrent thread would have to respect this lock, or at least be aware of its existence.
Moreover, we cannot control which statements concurrent threads/sessions execute. We can only issue queries in the current session, and it must be guaranteed that no other session runs statements against tables X and Y until the current session has finished.
Are there any ideas how this can be implemented?
PS: Please don't mention the high-level task or XY problem. There is no high-level task. The question is posed exactly as it is.
A bit of a joke (breaks all dependent PL/SQL), but... ;)
ALTER TABLE x RENAME TO x__my_precious;
ALTER TABLE y RENAME TO y__my_precious;
ALTER TABLE x__my_precious ...;
ALTER TABLE y__my_precious ...;
ALTER TABLE x__my_precious RENAME TO x;
ALTER TABLE y__my_precious RENAME TO y;
I'm pretty sure what you're trying to do isn't possible with Oracle's native transaction control. DDL always ends a transaction, so no lock on that object is going to survive it. Even if you immediately attempt to lock the object again after the DDL, another waiting session could slip in and obtain the lock before you do.
You can, however, serialize access to the table by utilizing another dummy table or row in a dummy table, assuming you control the code of any process wishing to access the table. If this is the case, then before accessing the table, attempt to lock the dummy table or a row in it first, and only if it succeeds continue with accessing the main table. Then the process that does DDL can take out that same lock (preventing other processes from proceeding), then do the DDL in a subroutine (named PL/SQL block) with PRAGMA AUTONOMOUS_TRANSACTION. That way the DDL ends the autonomous transaction rather than the main one, which still holds the lock on the dummy table.
You have to use a dummy table because if you tried to lock the same table you want to modify, you'd deadlock yourself. Of course, this only works if you can make all other processes perform the lock-the-dummy-table safety check before they proceed.
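A rough sketch of that approach, assuming you control all accessing code (ddl_guard and do_ddl_autonomous are names invented for this example):

CREATE TABLE ddl_guard (dummy CHAR(1));  -- the dummy serialization table

-- The DDL runs in an autonomous transaction, so its implicit commit
-- ends only that transaction; the main transaction keeps its lock.
CREATE OR REPLACE PROCEDURE do_ddl_autonomous(p_stmt VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  EXECUTE IMMEDIATE p_stmt;
END;
/

-- Session doing DDL (cooperating sessions also lock ddl_guard first):
LOCK TABLE ddl_guard IN EXCLUSIVE MODE;
BEGIN
  do_ddl_autonomous('ALTER TABLE x ADD (c NUMBER)');
  do_ddl_autonomous('ALTER TABLE y ADD (c NUMBER)');
END;
/
COMMIT;  -- only now is the guard lock released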
Lastly, although what I said above should work, it is likely that you're trying to do something you shouldn't do. DDL against hot objects isn't a good idea. Whatever you're trying to do, there is probably a better way than modifying objects on the fly like this. Even if you are able to keep others locked out, you are likely to cause object reference errors, package invalidations, SQL cursor invalidations, etc. It can be a real mess.
What is the difference between a row lock and a table lock in an Oracle database?
Will a FOR loop with an UPDATE statement trigger a table lock?
Any DML statement on a table is going to acquire a table lock. But it is terribly unlikely that this table lock will affect another session in a way that limits concurrency. When your session updates rows, there will be a row exclusive table lock, which stops another session from doing DDL on the table (say, adding or removing a column) while there are active, uncommitted transactions involving the table. But presumably you're not generally trying to modify the structure of the table at the same time that you're updating its rows (or you understand that when you deploy those DDL changes you'll block other sessions for a short period of time, and you pick your deployment times accordingly).
The specific rows that you are updating will be locked in order to prevent another session from modifying those rows until your transaction either commits or rolls back. Those row level locks are generally the locks that cause performance and scalability issues. Ideally, your code would be structured to hold the locks for as little time as possible (updating data in sets is much faster than doing row-by-row updates) and to minimize the probability that two sessions will try to update the same row simultaneously.
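As a hypothetical illustration of the set-based point (orders, id and status are invented names):

-- Row-by-row: each iteration takes a row lock that is then held until
-- the final COMMIT, and the context switching makes the loop slow.
BEGIN
  FOR r IN (SELECT id FROM orders WHERE status = 'OLD') LOOP
    UPDATE orders SET status = 'ARCHIVED' WHERE id = r.id;
  END LOOP;
  COMMIT;
END;
/

-- Set-based: the same row locks, but acquired and released by one
-- short statement, so the window for contention is far smaller.
UPDATE orders SET status = 'ARCHIVED' WHERE status = 'OLD';
COMMIT;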
I am using Oracle GTT tables with the ON COMMIT DELETE ROWS option.
Generally a GTT's data is session-specific: one session cannot see another session's GTT data.
But is there any way to access/read the GTT data that belongs to another session? Is there some kind of global session?
In plain words, NO.
The definition of a temporary table is visible to all sessions, but
the data in a temporary table is visible only to the session that
inserts the data into the table.
Just think: if the data were visible to other sessions, the purpose of a GTT would be defeated.
I came across creating a temporary table in Oracle, but could not understand its best use.
Can someone help me understand the features and benefits of using a temporary table in Oracle (create temporary table temp_table) over an ordinary table (create table temp_table)?
From the concepts guide:
A temporary table definition persists in the same way as a permanent
table definition, but the data exists only for the duration of a
transaction or session. Temporary tables are useful in applications
where a result set must be held temporarily, perhaps because the
result is constructed by running multiple operations.
And:
Data in a temporary table is private to the session, which means that
each session can only see and modify its own data.
So one aspect is that the data is private to your session, which is also true of uncommitted data in a permanent table; but with a temporary table the data can persist across a commit and yet stay private (depending on the ON COMMIT clause used at creation).
Another aspect is that they use temporary segments, which means you generate much less redo and undo overhead using a temporary table than you would if you put the same data temporarily into a permanent table, optionally updated it, and then removed it when you'd finished with it. You also avoid contention and locking issues if more than one session needs its own version of the temporary data.
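For reference, a minimal sketch of the two variants of the ON COMMIT clause (table names invented):

-- Rows survive a commit but disappear at the end of the session:
CREATE GLOBAL TEMPORARY TABLE staging_session (
  id NUMBER, payload VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;

-- Rows disappear at the end of each transaction:
CREATE GLOBAL TEMPORARY TABLE staging_txn (
  id NUMBER, payload VARCHAR2(100)
) ON COMMIT DELETE ROWS;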
Given below are some points on why and when we should use a temporary table:
1) Temporary tables are created for storing data in a tabular form, for easy retrieval when needed, within that particular session.
2) They also serve a security purpose by keeping the data available only to that particular session.
3) When code grows long and a lot of cursors are opened, it is better to put the data in a temporary table so that it can be fetched easily when needed.
I need to perform a query 2.5 million times. This query generates some rows which I need to AVG(column) and then use this AVG to filter the table from all values below average. I then need to INSERT these filtered results into a table.
The only way to do such a thing with reasonable efficiency seems to be by creating a TEMPORARY TABLE for each query-postmaster python-thread. I am just hoping these TEMPORARY TABLEs will not be persisted to the hard drive (at all) and will remain in memory (RAM), unless they run out of working memory, of course.
I would like to know if a TEMPORARY TABLE will incur disk writes (which would interfere with the INSERTs, i.e. slow the whole process down).
Please note that, in Postgres, the default behaviour for temporary tables is that they are not automatically dropped at commit, and their data persists across commits (ON COMMIT PRESERVE ROWS). See ON COMMIT.
Temporary tables are, however, dropped at the end of a database session:
Temporary tables are automatically dropped at the end of a session, or
optionally at the end of the current transaction.
There are multiple considerations you have to take into account:
If you want a temporary table to be dropped automatically at the end of the transaction, create it with the CREATE TEMPORARY TABLE ... ON COMMIT DROP syntax.
In the presence of connection pooling, a database session may span multiple client sessions; to avoid clashes in CREATE, you should drop your temporary tables, either prior to returning a connection to the pool (e.g. by doing everything inside a transaction and using the ON COMMIT DROP creation syntax), or on an as-needed basis (by preceding any CREATE TEMPORARY TABLE statement with a corresponding DROP TABLE IF EXISTS, which has the advantage of also working outside transactions, e.g. if the connection is used in auto-commit mode). A sketch of both patterns follows this list.
While the temporary table is in use, how much of it will fit in memory before overflowing on to disk? See the temp_buffers option in postgresql.conf
Anything else I should worry about when working often with temp tables? A vacuum is recommended after you have DROPped temporary tables, to clean up any dead tuples from the catalog. Postgres will vacuum periodically for you when using the default settings (autovacuum).
Also, unrelated to your question (but possibly related to your project): keep in mind that, if you have to run queries against a temp table after you have populated it, then it is a good idea to create appropriate indices and issue an ANALYZE on the temp table in question after you're done inserting into it. By default, the cost-based optimizer will assume that a newly created temp table has ~1000 rows, and this may result in poor performance should the temp table actually contain millions of rows.
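A minimal sketch of the patterns above (tmp_filtered is an invented name):

-- Transaction-scoped: the table is dropped automatically at COMMIT.
BEGIN;
CREATE TEMPORARY TABLE tmp_filtered (val numeric) ON COMMIT DROP;
INSERT INTO tmp_filtered VALUES (1.5), (2.5);
COMMIT;  -- tmp_filtered no longer exists

-- Pool-safe, auto-commit-friendly variant:
DROP TABLE IF EXISTS tmp_filtered;
CREATE TEMPORARY TABLE tmp_filtered (val numeric);
INSERT INTO tmp_filtered VALUES (1.5), (2.5);
ANALYZE tmp_filtered;  -- give the planner real row counts before querying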
Temporary tables provide only one guarantee: they are dropped at the end of the session. For a small table you'll probably have most of your data in the backing store. For a large table I guarantee that data will be flushed to disk periodically as the database engine needs more working space for other requests.
EDIT:
If you're absolutely in need of RAM-only temporary tables, you can create a tablespace for your database on a RAM disk (/dev/shm works). This reduces the amount of disk IO, but beware that it is currently not possible to do this without a physical disk write; the DB engine will flush the table list to stable storage when you create the temporary table.
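If you want to try that, a sketch under the assumption that /dev/shm/pg_ram exists, is empty, and is owned by the postgres OS user (ramspace is an invented name):

-- As a superuser:
CREATE TABLESPACE ramspace LOCATION '/dev/shm/pg_ram';

-- Then, per session, route temporary objects there:
SET temp_tablespaces = 'ramspace';
CREATE TEMPORARY TABLE scratch (id int, v text);

Remember that anything in /dev/shm is lost on reboot, which is acceptable for temporary tables but rules this tablespace out for permanent data.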