What happens when a shrink space process is interrupted partway through?
E.g.
I have a loop over 10 tables, and each iteration shrinks the space of one table. While the process is working on the 5th table, it is interrupted by the user and the operation is cancelled. What happens to the previous 4 tables and to the 5th table? Will their space be shrunk, or do we need to re-run the process?
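A minimal sketch of the kind of loop described, assuming ASSM tablespaces and that row movement is already enabled on each table (the table names are hypothetical):

BEGIN
  FOR t IN (SELECT table_name
              FROM user_tables
             WHERE table_name IN ('TAB1', 'TAB2', 'TAB3'))  -- hypothetical names
  LOOP
    -- each ALTER TABLE is an independent DDL statement
    EXECUTE IMMEDIATE 'ALTER TABLE ' || t.table_name || ' SHRINK SPACE';
  END LOOP;
END;
/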
Related
Need to drop 3 columns from a table having about 3 million rows. It is taking about 3 hours to drop the 3 columns. I'm thinking of using CHECKPOINT but am not sure what CHECKPOINT number I need to use. Also, is it safe to use the CHECKPOINT option?
We are on Oracle 12.2
So far, I've tried this:
ALTER TABLE <table1> SET UNUSED (<col1>, <col2>, <col3>);
ALTER TABLE <table1> DROP UNUSED COLUMNS;
Is your goal to improve performance, or to conserve UNDO space used?
CHECKPOINT will not improve performance. Your command must alter all 3 million rows of data, regardless. CHECKPOINT will limit the amount of UNDO rows that exist at any one time, but it won't limit the total number of UNDO rows that need to be created over the course of the transaction. If anything, checkpointing - which will clear UNDO records and write more to REDO - will introduce even more disk I/O operations to your transaction and slow it down further.
CHECKPOINT is really only useful if you have limited disk capacity for your UNDO tablespace, in which case the number of rows should depend on the amount of UNDO space used per row and the total amount of space you can allow for the transaction. That may take experimentation to determine - start high and work down until the transaction completes - you want as few checkpoints as possible while staying within your UNDO storage threshold.
Also, per the documentation (https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/ALTER-TABLE.html#GUID-552E7373-BF93-477D-9DA3-B2C9386F2877) anything that goes wrong after a checkpoint has been applied and before the transaction is complete may leave the table in an unstable/unusable state. Therefore it is not entirely safe. Use with caution, and be prepared to restore from backup if things go wrong.
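For reference, a sketch of the documented syntax (the table name and checkpoint interval here are placeholders); if the operation fails after a checkpoint has been applied, the documentation says the drop can be finished with DROP COLUMNS CONTINUE:

ALTER TABLE orders DROP UNUSED COLUMNS CHECKPOINT 25000;

-- if the drop was interrupted after a checkpoint, resume it with:
ALTER TABLE orders DROP COLUMNS CONTINUE;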
If this is a one-off operation, just re-create the table without the columns.
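A rough sketch of that approach (names are hypothetical); note that indexes, constraints, grants and triggers all have to be re-created on the new table:

CREATE TABLE table1_new AS
  SELECT col_keep1, col_keep2, col_keep3   -- every column except the 3 being dropped
    FROM table1;

DROP TABLE table1;
RENAME table1_new TO table1;
-- re-create indexes, constraints, grants and triggers here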
I've found a gap of 18 numbers in my Oracle sequence that appeared during the holidays, and I can find no trace of this crime in the log files. Can I somehow see when the sequence was updated, or is there any other way to trace the disappearance of the sequence numbers?
As the other answers/comments state, sequences are never guaranteed to be gap free. Gaps can happen in many ways.
One of the things about sequences is that they default to CACHE 20. That means that when NEXTVAL is called, the sequence in the database is actually increased by 20, the value is returned, and the next 19 values are just returned from memory (the shared pool).
If the sequence is aged out of the shared pool, or the shared pool is flushed, or the database is restarted, the values "left over" in the sequence memory cache are lost, as the sequence next time picks up the value from the database, which is 20 higher than last time.
As you state it happened during the holidays, a likely cause is that your sequence had used 2 values from memory, the DBA then did maintenance over the holidays that caused the shared pool to flush, and the remaining 18 values in the sequence memory cache are gone.
That is normal behaviour and the reason why sequences work fast. You can get "closer" to gap-free with NOCACHE NOORDER, but it will cost performance and still never be quite gap-free.
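You can see how the sequence is configured, and change its caching, with something like the following (the sequence name is a placeholder):

SELECT sequence_name, cache_size
  FROM user_sequences
 WHERE sequence_name = 'MY_SEQ';

-- trades performance for fewer (but still not zero) gaps
ALTER SEQUENCE my_seq NOCACHE;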
A sequence is not guaranteed to produce a gap free sequence of numbers. It is used to produce a series of (unique) numbers that is highly scalable in a multi-user environment.
If your application requires that some numeric identifier on a table is sequential without gaps then you will need another way of generating those identifiers.
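Purely as an illustration, a sketch of one such alternative, using a hypothetical single-row counter table; the row lock taken by the UPDATE makes the numbering gap-free but serializes all inserts, so it is much slower than a sequence:

-- one-time setup (hypothetical names)
CREATE TABLE invoice_counter (last_value NUMBER NOT NULL);
INSERT INTO invoice_counter VALUES (0);

DECLARE
  v_id invoice_counter.last_value%TYPE;
BEGIN
  -- the UPDATE locks the counter row until this transaction ends
  UPDATE invoice_counter
     SET last_value = last_value + 1
   RETURNING last_value INTO v_id;

  INSERT INTO invoices (invoice_no, created_on)   -- hypothetical target table
  VALUES (v_id, SYSDATE);

  COMMIT;  -- counter increment and insert succeed or fail together
END;
/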
Oracle sequences are NOORDER by default; please refer to the Oracle documentation https://docs.oracle.com/cd/B13789_01/server.101/b10759/statements_6014.htm . With NOORDER, values are not guaranteed to be handed out in order. You can check whether your sequence was created with ORDER; if it was, then the gap has some other cause.
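To check whether a sequence was created with ORDER (the sequence name is a placeholder):

SELECT sequence_name, order_flag
  FROM user_sequences
 WHERE sequence_name = 'MY_SEQ';
-- ORDER_FLAG = 'N' means the sequence is NOORDER (the default)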
As part of maintenance, we remove a considerable number of records from one table and want to make that space available to other tables. The thing is, when I check the size of the table it still shows the original size from before the delete. What am I missing here?
To check the size, I'm looking at the dba_segments.
There are some concepts you need to know about tables before you can really understand space usage. I will try and give you the 5 second tour ...
A table is made up of extents; these can vary in size, but let's just assume they are 1MB chunks here.
When you create a table, normally 1 extent is added to it, so even though the table has no rows, it will occupy 1MB.
When you insert rows, Oracle maintains an internal pointer for the table known as the high water mark (HWM). Below the HWM there are formatted data blocks, and above it there are unformatted blocks. It starts at zero.
If you take a new table as an example and start inserting rows, the HWM moves up, giving more and more of that first extent over to the table. When the extent is all used up, another one is allocated and the process repeats.
Let's say you fill three 1MB extents and then delete all the rows - the table is empty, but Oracle does not move the HWM back down, or free those used extents. So the table will take 3MB on disk even though it is empty, but there is 3MB of free space to be reused within it. New inserts will go into that space below the HWM until it is filled up again, before the HWM is moved again.
To recover the space, if your table is using ASSM, you can use the command:
alter table t1 shrink space;
If you are not using ASSM, then you need to think about a table reorg to recover the space.
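Note that SHRINK SPACE requires row movement to be enabled on the table first, so a minimal sequence (reusing t1 from above) would look something like:

ALTER TABLE t1 ENABLE ROW MOVEMENT;
ALTER TABLE t1 SHRINK SPACE CASCADE;   -- CASCADE also shrinks dependent indexes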
If you want to "reclaim" the space, the easiest method is:
ALTER TABLE table MOVE TABLESPACE different_tablespace;
ALTER TABLE table MOVE TABLESPACE original_tablespace;
Providing you have:
Some downtime in which to do it
A second tablespace with enough space to transfer the table into.
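Keep in mind that moving a table marks its indexes UNUSABLE, so a sketch of the full round trip (all names are placeholders) would be:

ALTER TABLE my_table MOVE TABLESPACE different_tablespace;
ALTER TABLE my_table MOVE TABLESPACE original_tablespace;

-- every index on the table must be rebuilt after the move
ALTER INDEX my_table_pk REBUILD;
ALTER INDEX my_table_ix1 REBUILD;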
Take a look at this site about the table size after deleting rows.
Space is effectively reused when you delete. Your database will not show any new free space in dba_free_space -- it will have more blocks on freelists and more empty holes in index structures.
SELECT SUM(BYTES)
FROM DBA_SEGMENTS
WHERE SEGMENT_NAME LIKE 'YOUR TABLE NAME%';
You will get the right answer.
Given: SQL Server 2008 R2. Quite some speedy data discs. Log discs lagging.
Required: LOTS LOTS LOTS of inserts. Like 10,000 to 30,000 rows per second into a simple table with two indices. Inserts have an intrinsic order and will not repeat; as such, the order of inserts need not be maintained in the short term (i.e. multiple parallel inserts are ok).
So far: accumulating data into a queue. Regularly (async threadpool) emptying up to 1024 entries into a work item that gets queued. Threadpool (custom class) has 32 possible threads. Opens 32 connections.
Problem: performance is off by a factor of 300.... only about 100 to 150 rows are inserted per second. Log wait time is up to 40% - 45% of processing time (ms per second) in sql server. Server cpu load is low (4% to 5% or so).
Not usable: bulk insert. The data must be written to disc as close to real time as possible. This is pretty much an archival process of data running through the system, but there are queries which need access to the data regularly. I could try dumping them to disc and using bulk upload 1-2 times per second.... will give this a try.
Anyone have a smart idea? My next step is moving the log to a fast disc set (128gb modern ssd) to see what happens then. A significant performance boost will probably make things behave quite differently. But even then.... the question is whether / what is feasible.
So, please fire on the smart ideas.
Ok, answering myself. Going to give SqlBulkCopy a try, batching up to 65536 entries and flushing them out every second in an async fashion. Will report on the gains.
I'm going through the exact same issue here, so I'll go through the steps I'm taking to improve my performance.
Separate the log and the dbf file onto different spindle sets
Use basic recovery
you didn't mention any indexing requirements other than the fact that the order of inserts isn't important - in this case clustered indexes on anything other than an identity column shouldn't be used.
start your scaling of concurrency again from 1 and stop when your performance flattens out; anything over this will likely hurt performance.
rather than dropping to disk to bcp, and as you are using SQL Server 2008, consider inserting multiple rows at a time; this statement inserts three rows in a single sql call
INSERT INTO table VALUES ( 1,2,3 ), ( 4,5,6 ), ( 7,8,9 )
I was topping out at ~500 distinct inserts per second from a single thread. After ruling out the network and CPU (0 on both client and server), I assumed that disk io on the server was to blame, however inserting in batches of three got me 1500 inserts per second which rules out disk io.
It's clear that the MS client library has an upper limit baked into it (and a dive into reflector shows some hairy async completion code).
Batching in this way, waiting for x events to be received before calling insert, has me now inserting at ~2700 inserts per second from a single thread which appears to be the upper limit for my configuration.
Note: if you don't have a constant stream of events arriving at all times, you might consider adding a timer that flushes your inserts after a certain period (so that you see the last event of the day!)
Some suggestions for increasing insert performance:
Increase ADO.NET BatchSize
Choose the target table's clustered index wisely, so that inserts won't lead to clustered index node splits (e.g. autoinc column)
Insert into a temporary heap table first, then issue one big "insert-by-select" statement to push all that staging table data into the actual target table (see the sketch after this list)
Apply SqlBulkCopy
Choose "Bulk Logged" recovery model instad of "Full" recovery model
Place a table lock before inserting (if your business scenario allows for it)
Taken from Tips For Lightning-Fast Insert Performance On SqlServer
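A rough T-SQL sketch of the staging-table idea from the list above (table and column names are hypothetical): the heap has no indexes, so the hot-path inserts stay cheap, and the periodic INSERT...SELECT with TABLOCK can qualify for minimal logging under the bulk-logged model:

-- unindexed staging heap; the hot-path inserts go here
CREATE TABLE dbo.StagingEvents (EventId BIGINT NOT NULL, Payload VARCHAR(200) NULL);

-- periodically push the staged rows into the real, indexed table in one
-- set-based statement, then empty the staging table
BEGIN TRAN;
INSERT INTO dbo.TargetEvents WITH (TABLOCK) (EventId, Payload)
SELECT EventId, Payload FROM dbo.StagingEvents;
TRUNCATE TABLE dbo.StagingEvents;
COMMIT;
-- in production you would switch between two staging tables so that rows
-- arriving during the copy are not lost by the TRUNCATE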
Suppose an Oracle instance has to be recovered after a disaster. Do sequences get reset to the initial state, or the last saved state, or are cached values preserved?
Thank you very much. :-)
The sequence values are stored in the SYSTEM.SEQ$ table (I think), and a cache of the next values to be used is maintained in memory, with the size of that cache depending on the CACHE value for the sequence.
When the cache is exhausted the SEQ$ table is updated to a new value (in a non-consistent manner -- i.e. without the user session's transaction control applying) and the next, say, 100 values (if CACHE=100) are served from memory.
Let's suppose that you're using a sequence with a cache size of 20. When you select a certain value from the sequence, say 1400, the SEQ$ table is updated to a value of 1420. Even if you roll back your transaction, SEQ$ still has that value until the next 20 sequence values have been used, at which time SEQ$ gets updated to 1440. If you have then just used value 1423 and an instance crash occurs, then when the system restarts the next value to be read from the sequence will be 1440.
So, yes the integrity of the sequence will be preserved and numbers will not be "reissued". Note that the same applies to a graceful shutdown -- when you restart you will get a new value of 1440 in the above example. Sequences are not guaranteed to be gap free in practice for this reason (also because using a value and then rolling back does not restore that value to the cache).
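You can see the on-disk value (the point the sequence would resume from after an instance failure) with something like this; the sequence name is a placeholder:

SELECT sequence_name, last_number
  FROM user_sequences
 WHERE sequence_name = 'MY_SEQ';
-- LAST_NUMBER is the value already recorded in SEQ$, i.e. the value the
-- sequence will continue from if the cached values are lost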
Not that I have any experience with this, but I very much assume that a recovery to a consistent system change number state would also return the sequence to the last saved state. Anything else would be fairly useless in terms of recovery.
As for cached values, those are (can be) lost even when the instance shuts down in an orderly manner (*): Instances cache a number of sequence values in memory (SGA) instead of going to the database every time. Unused sequence values that the instance has reserved can "disappear", leaving you with gaps in the sequence.
(*) 8i documentation mentions that this can happen with parallel instances (RAC), in which case the sequence may not even be strictly ascending (but still unique), 10g docs say that it happens in case of an instance failure.