How does ROWID fetch records fast? - Oracle

Suppose I run a SQL query and the DB makes use of index structures to arrive at a ROWID (assuming an INDEX SCAN, like in Oracle), and now the DB wants to use it to fetch the actual records.
So how does the ROWID help in fast access of the record? I assume the ROWID must somehow be mapped to the internal record storage.
I understand an index is basically a combination of a B-tree and a doubly linked list. But how are the actual records stored such that a ROWID fetches them fast?

A ROWID is simply a 10 byte physical row address that contains the relative file number, the block number within that file, and the row number within that block. See this succinct explanation:
Oracle FAQs - ROWID
With that information, Oracle can make an I/O read request of a single block by calculating the block offset byte position in the file and the length of the block. Then it can use the block's internal row map to jump directly to the byte offset within the block of the desired row. It doesn't have to scan through anything.
You can pull a human-readable representation of these three components by using this query against any (heap) table, any row:
SELECT ROWIDTOCHAR(rowid) row_id,
       dbms_rowid.rowid_relative_fno(rowid) fno,
       dbms_rowid.rowid_block_number(rowid) blockno,
       dbms_rowid.rowid_row_number(rowid) rowno
  FROM [yourtable]
 WHERE ROWNUM = 1 -- pick any row
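For illustration, the string form can be fed straight back in to fetch that row with a single-block access (the ROWID literal below is just an invented example value, not a real address):
SELECT *
  FROM [yourtable]
 WHERE rowid = CHARTOROWID('AAAR3sAAEAAAACXAAA');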
The fast retrieval is also often aided by the fact that single block reads are frequently bypassed altogether because the block is already in the buffer cache. Or, if it is not in Oracle's buffer cache, a single block read from many cooked filesystems (unless disabled by the setting of filesystemio_options) may be cached at the OS level and never go to storage. And if you use a storage appliance, it probably has its own caching mechanism. All of these caching mechanisms tend to give preference to small reads over large ones, so single block reads from Oracle are likely to avoid hitting magnetic disk altogether, more so than the multiblock reads associated with table scans.
But be careful - just because ROWID is the fastest way to retrieve a single row does not mean it is the fastest way to retrieve many rows. Because of the overhead of each read call, many single-block calls accumulate a lot of wasted overhead. When pulling large numbers of rows it is frequently more efficient to do a full table scan, especially when Oracle uses direct path reads to do so, than to use ROWIDs either manually or via indexes.

Data cleanup in Oracle DB is taking long time for 300 billion records

Problem statement:
There is an address table in Oracle which has relationships with multiple tables such as subscriber, member, etc.
The current design is such that whenever there is a change in an associated table, the record version is incremented throughout all tables.
So a new record is added to the address table even if the same address is already present, resulting in a large number of duplicate copies.
We need to identify and remove the duplicate records, and update the foreign keys in the associated tables, while making sure this doesn't impact the running application.
Tried solution:
We have written a script for the cleanup logic, where a unique hash is generated for every address. If the calculated hash is already present, the address is a duplicate; we then merge it into a single address record and update the foreign keys in the associated tables.
But the problem is that there are around 300 billion records in the address table, so this cleanup process is taking a lot of time and will take several days to complete.
We have tried adding an index on the hash column, but the process is still slow.
We have also updated the insertion/query logic to use addresses as per the new structure (using the hash, and without the version), in order to handle incoming requests in production.
We are planning to do the processing in chunks, but it will be a very long and ongoing activity.
Questions:
Would like to know if any further improvement can be made to the above approach.
Will distributed processing help here? (maybe using Hadoop Spark/Hive/MR etc.)
Is there any sort of tool that can be used here?
Suggestion 1
Use Oracle's built-in parallel DELETE
delete /*+ parallel(t 8) */ mytable t where ...
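Note that the hint alone won't make the delete itself run in parallel - parallel DML must first be enabled in the session. A minimal sketch; the predicate and the address_dups helper table are hypothetical:
alter session enable parallel dml;
delete /*+ parallel(t 8) */ from mytable t
 where t.id in (select dup_id from address_dups);  -- hypothetical list of duplicate ids
commit;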
Suggestion 2
Use distributed processing (Hadoop Spark/Hive) - watch out for potential contention on indexes or table blocks. It is recommended to have each process work on a logically isolated subset, e.g.
process 1 - delete mytable t where id between 1000 and 1999
process 2 - delete mytable t where id between 2000 and 2999
...
Suggestion 3
If more than ~30% of the table needs to be deleted, the fastest way is to create an empty table, copy all the required rows into it, drop the original table, rename the new one, and recreate all indexes and constraints. Of course this requires downtime, and it depends heavily on the number of indexes - the more you have, the longer it will take.
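A rough outline of that copy-and-swap approach, assuming one surviving row per address hash; all names and the dedup rule here are placeholders:
create table address_clean nologging parallel 8 as
select /*+ parallel(a 8) */ *
  from address a
 where a.rowid in (select min(rowid) from address group by address_hash);
-- then, in a maintenance window: drop (or rename away) the original table,
-- rename address_clean to address, and recreate indexes, constraints and grants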
P.S. There are no "magic" tools to do this. In the end they all run the same SQL commands that you can.
It's also possible to use Oracle's MERGE statement to insert the data if you stick to plain SQL.

Is Clickhouse Buffer Table appropriate for realtime ingestion of many small inserts?

I am writing an application that plots financial data and interacts with a realtime feed of such data. Due to the nature of the task, live market data may be received very frequently in one-trade-at-a-time fashion. I am using the database locally and I am the only user. There is only one program (my middleware) that will be inserting data to the db. My primary concern is latency - I want to minimize it as much as possible. For that reason, I would like to avoid having a queue (in a sense, I want the Buffer Table to fulfill that role). A lot of the analytics Clickhouse calculates for me are expected to be realtime (as much as possible) as well. I have three questions:
Clarify some limitations/caveats from the Buffer Table documentation
Clarify how querying works (regular queries + materialized views)
What happens when I query the db when data is being flushed
Question 1) Clarify some limitations/caveats from the Buffer Table documentation
Based on the Clickhouse documentation, I understand that many small INSERTs are sub-optimal, to say the least. While researching the topic I found that the Buffer engine [1] could be used as a solution. It made sense to me; however, when I read the Buffer documentation I found some caveats:
Note that it does not make sense to insert data one row at a time, even for Buffer tables. This will only produce a speed of a few thousand rows per second, while inserting larger blocks of data can produce over a million rows per second (see the section “Performance”).
A few thousand rows per second is absolutely fine for me, however I am concerned about other performance considerations - if I do commit data to the buffer table one row at a time, should I expect spikes in CPU/memory? If I understand correctly, committing one row at a time to a MergeTree table would cause a lot of additional work for the merging job, but it should not be a problem if Buffer Table is used, correct?
If the server is restarted abnormally, the data in the buffer is lost.
I understand that this refers to things like a power outage or the computer crashing. If I shut down the computer normally or stop the ClickHouse server normally, can I expect the buffer to flush data to the target table?
Question 2) Clarify how querying works (regular queries + materialized views)
When reading from a Buffer table, data is processed both from the buffer and from the destination table (if there is one).
Note that the Buffer tables does not support an index. In other words, data in the buffer is fully scanned, which might be slow for large buffers. (For data in a subordinate table, the index that it supports will be used.)
Does that mean I can use queries against the target table and expect Buffer Table data to be included automatically? Or is it the other way around - I query the buffer table and the target table is included in the background? If either is true (and I don't need to aggregate both tables manually), does that also mean Materialized Views would be populated? Which table should trigger the materialized view - the on-disk table or the buffer table? Or both, in some way?
I rely on Materialized Views a lot and need them updated in realtime (or as close as possible). What would be the best strategy to accomplish that goal?
Question 3) What happens when I query the db when data is being flushed?
My two main concerns here are with regards to:
Running a query at the exact time flushing occurs - is there a risk of duplicated records or omitted records?
At which point are Materialized Views of the target table populated (I suppose it depends on whether it's the target table or the buffer table that triggers the MV)? Is flushing the buffer important in how I structure the MV?
Thank you for your time.
[1] https://clickhouse.tech/docs/en/engines/table-engines/special/buffer/
A few thousand rows per second is absolutely fine for me, however I am
concerned about other performance considerations - if I do commit data
to the buffer table one row at a time, should I expect spikes in
CPU/memory?
No, the Buffer table engine doesn't produce CPU/memory spikes.
If I understand correctly, committing one row at a time to
a MergeTree table would cause a lot of additional work for the merging
job, but it should not be a problem if Buffer Table is used, correct?
The Buffer table engine works as an in-memory buffer that periodically flushes batches of rows to the underlying *MergeTree table; the Buffer table's parameters define the size and frequency of the flushes.
If I shut down the computer normally or stop the ClickHouse server normally, can I expect the buffer to flush data to the target table?
Yes, when the server stops normally, Buffer tables will flush their data.
I query the buffer table and the target table is included in the background?
Yes, that is the right behavior: when you SELECT from a Buffer table, the SELECT also passes through to the underlying *MergeTree table, and already-flushed data is read from the *MergeTree table.
does that also mean Materialized Views would be populated?
It is not clear whether you CREATE MATERIALIZED VIEW as a trigger FROM the *MergeTree table or FROM the Buffer table, and which table engine you use for the TO table clause.
I would suggest creating the MATERIALIZED VIEW as a trigger FROM the underlying MergeTree table.
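For what it's worth, a minimal sketch of that layout; the table and column names and the Buffer thresholds are invented here, so adjust them to your feed:
-- on-disk table
CREATE TABLE trades
(
    ts     DateTime,
    symbol String,
    price  Float64,
    qty    UInt32
)
ENGINE = MergeTree
ORDER BY (symbol, ts);

-- in-memory front table the middleware inserts into
-- Buffer(db, table, num_layers, min_time, max_time, min_rows, max_rows, min_bytes, max_bytes)
CREATE TABLE trades_buffer AS trades
ENGINE = Buffer(currentDatabase(), 'trades', 1, 1, 5, 1000, 100000, 1000000, 10000000);

-- materialized view triggered FROM the MergeTree table, so it is populated as the buffer flushes
CREATE MATERIALIZED VIEW trades_minute_mv
ENGINE = AggregatingMergeTree() ORDER BY (symbol, minute)
AS SELECT
    symbol,
    toStartOfMinute(ts) AS minute,
    minState(price)     AS low,
    maxState(price)     AS high,
    sumState(qty)       AS volume
FROM trades
GROUP BY symbol, minute;
Reading the *State columns later requires the corresponding -Merge combinators (minMerge, maxMerge, sumMerge).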

Sybase TEXT vs Oracle CLOB performance

We're in the process of converting our database from Sybase to Oracle and we've hit a performance problem. In Sybase, we had a TEXT field and replaced it with a CLOB in Oracle.
This is how we accessed the data in our java code:
while (rs.next()) {
    String clobValue = rs.getString(1); // This takes 176ms in Oracle!
    .
    .
}
The database is across the country, but still, we didn't have any performance problems with Sybase and its retrieval of TEXT data.
Is there something we can do to increase this performance?
By default, LOBs are not fetched along with the table data and it takes an extra round-trip to the database to fetch them in getString.
If you are using Oracle's .NET provider, you may set InitialLOBFetchSize in the data reader settings to a value large enough to accommodate your large objects in memory so they could be fetched in all their entirety along with the other data.
Some other options:
Are the LOB columns being stored in-line (in the data row) or out-of-line (in a separate place)? If the LOB columns tend to be small (under 4k in size), you can use the ENABLE STORAGE IN ROW clause to tell Oracle to store the data in-line where possible.
If your LOBs are larger and frequently used, are they being stored in the buffer cache? The default in 10g is that LOBs are NOCACHE, meaning each I/O operation against them involves a direct read against the datafiles - a synchronous disk event, which can be slow. A database trace would reveal significant waits on direct path read / direct path write events.
This chapter of the Oracle Application Developer's Guide - Large Objects would be valuable reading.
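If the migrated CLOB values are mostly small and read frequently, both options above can be applied to an existing table by rebuilding its LOB storage; a hedged sketch with made-up table, column, and index names:
ALTER TABLE documents MOVE LOB (doc_text) STORE AS (ENABLE STORAGE IN ROW CACHE);
-- the move rewrites the table, so any indexes on it must be rebuilt afterwards
ALTER INDEX documents_pk REBUILD;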
We decided to take a different approach which will allow us to ignore clob performance.
Our current code (I didn't write it!) queries a table in the database and retrieves all of the information in the table, including the CLOBs, even though it wasn't really necessary to retrieve them all of the time. Instead, we created another field with the first 4k characters in a VARCHAR and query that instead. Then, when we need the full CLOB, we query it on an individual basis, rather than all CLOBs for all records.
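A sketch of that prefix-column approach with invented names; note that in a SQL statement the VARCHAR2 returned by DBMS_LOB.SUBSTR is capped at 4000 bytes, which can be fewer than 4000 characters in a multi-byte character set:
ALTER TABLE documents ADD (doc_prefix VARCHAR2(4000));
UPDATE documents SET doc_prefix = DBMS_LOB.SUBSTR(doc_text, 4000, 1);
-- routine queries read doc_prefix; the full CLOB is fetched one row at a time only when needed
SELECT doc_text FROM documents WHERE id = :id;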

How does oracle db writer decide whether or not to do multiblock / sequential writes

We have a test system which matches our production system like for like. 6 months ago we did some testing on new hardware, and found the performance limit of our system.
However, now we are re-doing the testing with a view to adding further hardware, and we have found the system doesn't perform as it used to.
The reason for this is that on one specific volume we are now doing random I/O where it used to be sequential. Further to this, it has turned out that the activity on this volume by Oracle, which is 100% writes, is actually in 8k blocks, whereas before it was in writes of up to 128k.
So something has caused the Oracle db writer to stop batching up its writes.
We've extensively checked our config, and cannot see any difference between our test and production systems. We've also opened a call with Oracle, but at this stage information is slow in coming.
So, ultimately, these are two related questions:
Can you rely on Oracle multiblock writes? Is that a safe thing to engineer/tune your system for?
Why would Oracle change its behaviour?
We're not at this stage necessarily blaming Oracle - it may well be reacting to something in the environment - but what?
The OS/arch is Solaris/SPARC.
Oh, I forgot to mention: the insert table has no indexes and only a couple of foreign keys - it's designed as a bucket for as fast an insert as possible. It's also partitioned on the key field.
Thanks for any tips!
More description of the workload would allow some hypotheses.
If you are updating random blocks, then the DBWR process(es) are going to have little choice but to do single-block writes. Indexes especially are likely to have writes all over the place. If you have an index of character values and need to insert a new 'M' record where there isn't room, it will get a new block for the index and split the current block. You'll have some of those 'M' records in the original block, and some in the new block (which will be the last [used] block in the last extent).
I suspect you are most likely to get multi-block writes when bulk inserting into tables, as new blocks will be allocated and written to. Potentially, you initially had (say) 1GB of extents allocated and were writing into that space. Now you might have reached the limit of that and be creating new extents (say 50 MB), which it may be getting from scattered file locations (e.g. space freed by other tables that have been dropped).
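One way to test that hypothesis is to look at how the insert table's extents are laid out now versus six months ago; a hedged sketch (the owner and segment names are placeholders):
SELECT tablespace_name, file_id,
       COUNT(*) extents, MIN(bytes) smallest, MAX(bytes) largest
  FROM dba_extents
 WHERE owner = 'APP'
   AND segment_name = 'INSERT_TABLE'
 GROUP BY tablespace_name, file_id
 ORDER BY tablespace_name, file_id;
-- many small extents scattered across files would line up with random 8k writes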

Is it reasonable to use small blobs in Oracle?

In Oracle, LONG RAW and VARCHAR2 have a max length of 4kb, but I need to store objects of 8kb and 16kb, so I'm wondering what's a good solution. I know I could use a BLOB, but a BLOB has variable length and is basically an extra file behind the scenes if I'm correct - a feature and a price I'm not interested in paying for my objects.
Are there any other solutions or datatypes that are more suited to this kind of need?
Thanks
A BLOB is not a file behind the scenes. It is stored in the database. Why does it matter that it has variable length? You can just use a BLOB column (or a CLOB if your data is text) and it gets stored in its own segment.
You should use a BLOB.
A BLOB is not stored as an extra file; it's stored in blocks in one of your datafiles (just like other data). If the BLOB becomes too large for a single block (which may not happen in your case) then it continues into further blocks.
If your BLOB data is really small, you can get Oracle to store it inline with other data in your row (like a varchar2).
Internally, Oracle is doing something similar to what Pax suggested. The chunks are as big as a DB block minus some overhead. If you try to re-invent Oracle features on top of Oracle, it's only going to be slower than the native feature.
You will also have to re-implement a whole heap of functionality that is already provided in DBMS_LOB (length, comparisons, etc.).
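As a rough illustration of the inline option mentioned above (names invented; 8kb/16kb values exceed the in-row limit of roughly 4000 bytes and will still go to the LOB segment, but CACHE keeps them in the buffer cache):
CREATE TABLE my_objects (
  id      NUMBER PRIMARY KEY,
  payload BLOB
)
LOB (payload) STORE AS (ENABLE STORAGE IN ROW CACHE);

-- confirm how Oracle is storing the LOB
SELECT column_name, in_row, cache FROM user_lobs WHERE table_name = 'MY_OBJECTS';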
Why don't you segment the binary data and store it in 4K chunks? You could either have four different columns for these chunks (plus a length column for rebuilding them into your big structure), or, the more normalized way, another table with the chunks in it, tied back to the original table record.
This would provide for expansion should you need it in the future.
For example:
Primary table:
-- normal columns --
chunk_id integer
chunk_last_len integer
Chunk table:
chunk_id integer
chunk_sequence integer
chunk varchar2(whatever)
primary key (chunk_id,chunk_sequence)
Of course, you may find that your DBMS does exactly that sort of behavior under the covers for BLOBs and it may be more efficient to let Oracle handle it, relieving you of the need to manually reconstruct your data from individual chunks. I'd measure the performance of each to figure out the best approach.
Don't store binary data in VARCHAR2 columns unless you are willing to encode it (Base64 or similar). Character set issues might corrupt your data otherwise!
Try the following statement to see the effect:
select * from (select rownum-1 original, ascii(chr(rownum-1)) data from user_tab_columns where rownum<=256) where original<>data;
VARCHAR2 is of variable length just as well. If you need to store binary data of anything bigger than a small size in your database, you'll have to look in the BLOB's direction. Another solution is of course storing the binary data somewhere on the file system, and storing the path to the file as a VARCHAR in the db.
