I have a very simple table that has an ID (generated by a sequence) and a NAME. I inserted a couple of rows, which got cached, but after a while I wanted to remove them because I wanted to redo my table, so I issued a couple of DELETE statements to remove all the records (I don't have the privileges to do a TRUNCATE).
After deleting the old rows, I again inserted a couple of other records, but I didn't bother resetting the sequence.
In PHP, when I SELECT everything from that table, I still get those old deleted rows. But in PL/SQL, when I SELECT from that table, it only shows me the new records.
Is the problematic cache on the PHP side or the Oracle side? If it's on the Oracle side, how do I clear it out?
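For reference, the flow described above amounts to something like the following (the table, sequence, and column names here are made up for illustration); note that other sessions will only see the DELETEs and the new INSERTs once they have been committed.
delete from my_table;          -- remove the old rows
commit;                        -- other sessions (e.g. the PHP one) only see the deletes after this

insert into my_table (id, name) values (my_seq.nextval, 'new row 1');
insert into my_table (id, name) values (my_seq.nextval, 'new row 2');
commit;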
Thanks!
Forgive me for asking a silly question.
Would a temporary table be dropped automatically in Oracle (12c)?
Yesterday I executed the following DDL to create a temporary table:
create global temporary table my_1st_t_table
on commit preserve rows
as
select *
from other_table
where selected_col = 'T';
After that I executed the following statements:
commit;
select count(*) from my_1st_t_table;
Yesterday, the last select statement showed a count of 2000 rows.
After that I disconnected my VPN and also switched off my client laptop.
Today, after restarting my computer and reconnecting to the VPN, I reran the last select statement.
It returned a count of 0. So the table was still there, but all of its rows were deleted after my session ended.
However, may I ask when will my temporary table be dropped?
Thanks in advance!
A temporary table in Oracle is quite different from a temp table in other database platforms such as MS SQL Server, and the "temporary" nomenclature invariably leads to confusion.
In Oracle, a temporary table is a permanent object, just like other tables, and does not get "dropped". However, the rows in the table only exist within the context of the session that inserted them. Once the session is terminated, assuming the session did not already delete the rows, Oracle deletes the rows in the table for that session.
So, bottom line: the data is temporary, but the table structure is permanent until the table is explicitly dropped.
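A minimal sketch of that behaviour (the table and column names are invented for the example):
create global temporary table demo_gtt (id number) on commit preserve rows;

insert into demo_gtt values (1);
commit;
select count(*) from demo_gtt;    -- 1 in this session

-- disconnect, reconnect, and run it again:
select count(*) from demo_gtt;    -- 0: the table still exists, but its rows are gone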
I am using the Oracle SQL Developer client on Windows to connect to an Oracle database.
One ordinary day I got an email saying that my machine, using SQL Developer, had deleted ~400 rows from a table with ~700 rows (not all of them).
Naturally I started investigating how and why this happened.
I examined all of the queries executed from my client (SQL History) and NONE of them (I am 100% sure about this) deletes anything from the table.
Also there are no triggers or any foreign keys related to this.
In the server logs we saw the exact queries: there was one for each row, and they were very specific (obviously generated). The only way for this to happen is to manually delete rows from the UI of the client. All 400+ queries were executed in the same second.
But to do so you would need to open the table, select 400 rows, and click delete.
I always use SQL queries to do things and would never have used the client to delete rows (at least knowingly).
The other thing is that if I had somehow deleted the rows through the client, I would have deleted all of them or just one of them, not this specific number of rows.
My question: has anyone had a similar experience?
Can anyone guess how you could delete specific rows with some kind of shortcut or something like that?
Thanks.
I am inserting a list of rows using the tabledata().insertAll method of the bigquery object. After the execution, the response shows no errors. However, my tables still end up with no data written.
Could it be a permissions problem? If so, why are no errors returned?
Thanks
This can happen if you do the insert right after deleting and re-creating the table.
The streaming buffer of a deleted table is not deleted right at the time that table is deleted, which can cause new inserts to be delivered to this old streaming buffer.
From BigQuery documentation:
deleting and/or recreating a table may create a period of time where streaming inserts are effectively delivered to the old table and will not be present in the newly created table.
And in this case, no errors would be returned.
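One way to sidestep the problem, if you control the table lifecycle, is to avoid re-creating the table under the same name at all. A rough sketch in Standard SQL (the dataset, table, and column names are made up):
CREATE TABLE mydataset.events_v2 (
  id   INT64,
  name STRING
);
-- Stream into mydataset.events_v2 instead of a re-created mydataset.events,
-- then switch readers over and drop the old table once its streaming buffer
-- has drained.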
References:
https://cloud.google.com/bigquery/troubleshooting-errors#metadata-errors-for-streaming-inserts
https://github.com/GoogleCloudPlatform/google-cloud-php/issues/871#issuecomment-361339800
https://cloud.google.com/bigquery/streaming-data-into-bigquery
I am using JdbcTemplate and an Oracle stored procedure. In the Oracle stored procedure I have a select query with an IN clause like 'IN (SELECT ID FROM GLOBAL_TEMP_TABLE)'.
And the definition of the temp table is ON COMMIT PRESERVE ROWS.
However, when I call the stored procedure from Java it gives me more records than I expected; it seems the temp table is keeping data from a previous session. Need your help.
Without looking at any code, it is hard to tell.
Yet, the symptoms you describe might simply be caused by the fact that you are still accessing your data from the same session.
From Oracle-Base: Global Temporary Tables (GTT):
The ON COMMIT DELETE ROWS clause indicates that the data should be deleted at the end of the transaction.
The ON COMMIT PRESERVE ROWS clause indicates that rows should be preserved until the end of the session.
That is, in your case, you need to close the session to clear the data.
You cannot access data from a previous or other session when you select rows from a global temporary table.
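For illustration, the two variants quoted above would be declared like this (the table names are invented):
create global temporary table my_gtt_tx (id number)
on commit delete rows;        -- rows are removed at the end of each transaction

create global temporary table my_gtt_session (id number)
on commit preserve rows;      -- rows stay until the session ends (or until you delete them yourself)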
There are 2 options:
Your session is not new
It's not a temporary table
Keep in mind if you use ON COMMIT PRESERVE ROWS you have to delete the rows yourself. The data is kept until the session ends.
To find out if your session is still the same, use this query:
select sid, serial#, logon_time from v$session
and write it to a log file.
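If you keep ON COMMIT PRESERVE ROWS, you can also clear your own session's copy of the data explicitly before (or after) each use; a minimal sketch, using the table name from the question:
DELETE FROM GLOBAL_TEMP_TABLE;
-- or, since this is a global temporary table, a truncate that only removes
-- the current session's rows:
TRUNCATE TABLE GLOBAL_TEMP_TABLE;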
I'm working on an application which accesses a Sybase ASE 15.0.2 server, where the current code accesses a remote database (via CIS) to insert a row using a proxy table definition (the destination table is a DOL - DRL table; the PK column is defined as identity and is always growing). The current code performs a select to check whether the row already exists, to avoid inserting duplicate data.
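For context, the existing check-then-insert pattern presumably boils down to something like this (the table, column, and variable names here are invented):
declare @key int, @payload varchar(40)
select @key = 101, @payload = 'example row'     -- values supplied from the incoming file

if not exists (select 1 from remote_proxy_table where key_col = @key)
begin
    insert into remote_proxy_table (key_col, payload)
    values (@key, @payload)
end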
Since the remote table also has a PK defined, I understand that the PK verification will be done again prior to committing the row.
I'm planning to remove the select check, since it is effectively repeated by the PK verification, but I'm concerned that when receiving a file with many duplicates, the table may suffer unnecessary contention when the data is committed.
It's not clear to me whether Sybase ASE holds the row and writes the data before checking for the duplicate. Also, if the table is very big, I'm concerned about the time it will spend searching the whole index for duplicates.
I've found some documentation for SQL Anywhere, but not for ASE, at the following link:
http://dcx.sybase.com/1200/en/dbusage/insert-how-transact.html
The best I could find is the following explanation:
https://groups.google.com/forum/?fromgroups#!topic/comp.databases.sybase/tHnOqptD7X8
But it doesn't explain in detail how the row is locked (and whether there is any kind of optimization to write it ahead of, or at the same time as, the PK check), and also whether it will waste a full PK lookup if I am inserting a row whose PK is definitely greater than the last row committed.
Thanks
Alex
Unlike SQL Anywhere, there is no wait_for_commit option to set in ASE. The primary key constraint is checked during the insert, not at commit time. As I understand the problem from your post, you have a mass insert from a file that may contain duplicates; my suggestion is to load the file into a temp table, check for duplicates, remove them, and then insert only the unique rows. A mass insert is a lot faster, even though it still checks for primary key violations, and once the duplicates have been removed there is no rollback cost. The insert statement is always all or nothing: if even one row is a duplicate, the entire insert statement will fail. Checking before the insert is more of an error-free approach than relying on the constraint for verification, because the constraint violation will make the statement fail and the rollback will again be costly.
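A rough sketch of that staging approach in ASE (the table and column names are invented; the real key column(s) would come from the target table's definition):
select key_col, payload into #staging from target_table where 1 = 2   -- empty work table for the load
-- ...bcp / bulk insert the incoming file into #staging here...

delete from #staging
where key_col in (select key_col from target_table)    -- remove rows already present in the target

insert into target_table (key_col, payload)
select key_col, payload from #staging                   -- single all-or-nothing insert of the unique rows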
Thanks Mike
The link does have a very quick explanation of the insert from the CIS perspective. It's a variable to keep an eye on, given that CIS may become a significant time consumer if the data and syntax checking it performs is done again when CIS forwards the insert statement to the target server. I was afraid that CIS could have some influence, beyond the network traffic/time, over the locking/PK checking.
Raju
I do agree that avoiding PK duplication by checking whether the row already exists with a select, and doing it in a batch, is the better approach, but I'm currently looking for a stop-gap solution, and that may be to perform the insert commands in batches of about 50 rows and leave the duplicate key check to the PK.
Hopefully the PK check will then be done over a join of the 50 newly inserted rows, and thus avoid traversing the index for each single row...
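To make the batching idea concrete, here is a rough sketch (the table and column names are invented; since ASE 15 has no multi-row VALUES syntax, the batch is built with UNION ALL):
insert into remote_proxy_table (key_col, payload)
select 101, 'row 1' union all
select 102, 'row 2' union all
select 103, 'row 3'        -- ...and so on, up to roughly 50 rows per statement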
I'll try to test this and comment back.
Alex