Oracle 10g small Blob or Clob not being stored inline? - performance

According to the documents I've read, the default storage for a CLOB or BLOB is inline, which means that if it is less than approx 4k in size then it will be held in the table.
But when I test this on a dummy table in Oracle (10.2.0.1.0) the performance and response from Oracle Monitor (by Allround Automations) suggest that it is being held outwith the table.
Here's my test scenario ...
create table clobtest ( x int primary key, y clob, z varchar(100) )
;
insert into clobtest
select object_id, object_name, object_name
from all_objects where rownum < 10001
;
select COLUMN_NAME, IN_ROW
from user_lobs
where table_name = 'CLOBTEST'
;
This shows: Y YES (suggesting that Oracle will store the clob in the row)
select x, y from CLOBTEST where ROWNUM < 1001 -- 8.49 seconds
select x, z from CLOBTEST where ROWNUM < 1001 -- 0.298 seconds
So in this case, the CLOB values will have a maximum length of 30 characters, so should always be inline. If I run Oracle Monitor, it shows a LOB.Length followed by a LOB.Read() for each row returned, again suggesting that the clob values are held outwith the table.
I also tried creating the table like this
create table clobtest
( x int primary key, y clob, z varchar(100) )
LOB (y) STORE AS (ENABLE STORAGE IN ROW)
but got exactly the same results.
Does anyone have any suggestions how I can force (persuade, encourage) Oracle to store the clob value in-line in the table? (I'm hoping to achieve similar response times to reading the varchar2 column z)
UPDATE: If I run this SQL
select COLUMN_NAME, IN_ROW, l.SEGMENT_NAME, SEGMENT_TYPE, BYTES, BLOCKS, EXTENTS
from user_lobs l
JOIN USER_SEGMENTS s
on (l.segment_name = s.segment_name)
where table_name = 'CLOBTEST'
then I get the following results ...
Y YES SYS_LOB0000398621C00002$$ LOBSEGMENT 65536 8 1

The behavior of Oracle LOBs is as follows.
A LOB is stored inline when:
(
The size is less than or equal to 3964 bytes
AND
ENABLE STORAGE IN ROW has been defined in the LOB storage clause
) OR (
The value is NULL
)
A LOB is stored out-of-row when:
(
The value is not NULL
) AND (
Its size is greater than 3964 bytes
OR
DISABLE STORAGE IN ROW has been defined in the LOB storage clause
)
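To check how many rows actually exceed that threshold, something along these lines can be used (a sketch against the clobtest table from the question; DBMS_LOB.GETLENGTH returns characters for a CLOB, so it only approximates the byte size):
-- rows whose CLOB is too large to be stored inline (approximate check)
select count(*) as too_big_for_inline
from clobtest
where dbms_lob.getlength(y) > 3964;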
Now this is not the only issue which may impact performance.
If the LOBs end up not being stored inline, Oracle's default behavior is not to cache them (only inline LOBs are cached in the buffer cache with the other fields of the row). To tell Oracle to also cache non-inlined LOBs, the CACHE option should be used when the LOB is defined.
The default behavior is ENABLE STORAGE IN ROW and NOCACHE, which means small LOBs will be inlined while large LOBs will not (and will not be cached).
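For example, a minimal sketch of the CACHE option using the clobtest table from the question:
-- at creation time
create table clobtest ( x int primary key, y clob, z varchar(100) )
LOB (y) STORE AS (ENABLE STORAGE IN ROW CACHE);
-- or on an existing table
alter table clobtest modify lob (y) (cache);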
Finally, there is also a performance issue at the communication protocol level. Typical Oracle clients will perform 2 additional roundtrips per LOB to fetch them:
- one to retrieve the size of the LOB and allocate memory accordingly
- one to fetch the data itself (provided the LOB is small)
These extra roundtrips are performed even if an array interface is used to retrieve the results. If you retrieve 1000 rows and your array size is large enough, you will pay for 1 roundtrip to retrieve the rows, and 2000 roundtrips to retrieve the content of the LOBs.
Please note this does not depend on whether the LOB is stored inline or not; they are completely different problems.
To optimize at the protocol level, Oracle has provided a new OCI verb to fetch several LOBs in one roundtrip (OCILobArrayRead). I don't know if something similar exists with JDBC.
Another option is to bind the LOB on client side as if it was a big RAW/VARCHAR2. This only works if a maximum size of the LOB can be defined (since the maximum size must be provided at bind time). This trick avoids the extra roundtrips: the LOBs are just processed like RAW or VARCHAR2. We use it a lot in our LOB-intensive applications.
Once the number of roundtrips has been optimized, the packet size (SDU) can be resized in the net configuration to better fit the situation (i.e. a limited number of large roundtrips). It tends to reduce the "SQL*Net more data to client" and "SQL*Net more data from client" wait events.
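As a rough sketch (verify parameter names and limits against your Net Services documentation; 32767 is, as far as I recall, the maximum SDU in this release, and host and service names below are placeholders), the SDU can be raised globally in sqlnet.ora or per connect descriptor:
# sqlnet.ora (client and server)
DEFAULT_SDU_SIZE = 32767
# or per connect descriptor in tnsnames.ora
ORCL =
  (DESCRIPTION =
    (SDU = 32767)
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )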

If you're "hoping to achieve similar response times to reading the varchar2 column z", then you'll be disappointed in most cases.
If you're using a CLOB I suppose you need to store more than 4,000 bytes, right? Then if you need to read more bytes that's going to take longer.
BUT if you have a case where yes, you use a CLOB, but you're interested (in some instances) only in the first 4,000 bytes of the column (or less), then you have a chance of getting similar performance.
It looks like Oracle can optimize the retrieval if you use something like DBMS_LOB.SUBSTR and the ENABLE STORAGE IN ROW CACHE clause with your table. Example:
CREATE TABLE clobtest (x INT PRIMARY KEY, y CLOB)
LOB (y) STORE AS (ENABLE STORAGE IN ROW CACHE);
INSERT INTO clobtest VALUES (0, RPAD('a', 4000, 'a'));
UPDATE clobtest SET y = y || y || y;
INSERT INTO clobtest SELECT rownum, y FROM all_objects, clobtest WHERE rownum < 1000;
CREATE TABLE clobtest2 (x INT PRIMARY KEY, z VARCHAR2(4000));
INSERT INTO clobtest2 VALUES (0, RPAD('a', 4000, 'a'));
INSERT INTO clobtest2 SELECT rownum, z FROM all_objects, clobtest2 WHERE rownum < 1000;
COMMIT;
In my tests on 10.2.0.4 and 8K block, these two queries give very similar performance:
SELECT x, DBMS_LOB.SUBSTR(y, 4000) FROM clobtest;
SELECT x, z FROM clobtest2;
Sample from SQL*Plus (I ran the queries multiple times to remove physical IO's):
SQL> SET AUTOTRACE TRACEONLY STATISTICS
SQL> SET TIMING ON
SQL>
SQL> SELECT x, y FROM clobtest;
1000 rows selected.
Elapsed: 00:00:02.96
Statistics
------------------------------------------------------
0 recursive calls
0 db block gets
3008 consistent gets
0 physical reads
0 redo size
559241 bytes sent via SQL*Net to client
180350 bytes received via SQL*Net from client
2002 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1000 rows processed
SQL> SELECT x, DBMS_LOB.SUBSTR(y, 4000) FROM clobtest;
1000 rows selected.
Elapsed: 00:00:00.32
Statistics
------------------------------------------------------
0 recursive calls
0 db block gets
2082 consistent gets
0 physical reads
0 redo size
18993 bytes sent via SQL*Net to client
1076 bytes received via SQL*Net from client
68 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1000 rows processed
SQL> SELECT x, z FROM clobtest2;
1000 rows selected.
Elapsed: 00:00:00.18
Statistics
------------------------------------------------------
0 recursive calls
0 db block gets
1005 consistent gets
0 physical reads
0 redo size
18971 bytes sent via SQL*Net to client
1076 bytes received via SQL*Net from client
68 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1000 rows processed
As you can see, consistent gets are quite a bit higher, but SQL*Net roundtrips and bytes are nearly identical in the last two queries, and that apparently makes a big difference in execution time!
One warning though: the difference in consistent gets might become a more likely performance issue if you have large result sets, as you won't be able to keep everything in buffer cache and you'll end up with very expensive physical reads...
Good luck!
Cheers

Indeed, it is stored within the row. You are likely dealing with the simple overhead of using a LOB instead of a varchar. Nothing is free. The DB probably doesn't know ahead of time where to find the row, so it probably still "follows a pointer" and does extra work just in case the LOB is big. If you can get by with a varchar, you should. Even old hacks like 2 varchars to deal with 8000 characters might solve your business case with higher performance.
LOBs are slow, difficult to query, etc. On the positive side, they can hold up to 4 GB.
What would be interesting to try is to shove something just over 4000 bytes into that clob, and see what the performance looks like. Maybe it is about the same speed? This would tell you that it's overhead slowing you down.
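A small sketch of that experiment against the clobtest table from the question (a plain RPAD in SQL is capped at 4000 characters, hence the second step to push the values past the inline limit):
-- grow every test row to just over the ~4000-byte inline limit (roughly 42 MB in total)
update clobtest set y = rpad('x', 4000, 'x');
update clobtest set y = y || rpad('x', 200, 'x');
commit;
-- then re-run:  select x, y from clobtest where rownum < 1001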
Warning: at some point, network traffic to your PC slows you down on these kinds of tests.
Minimize this by wrapping the query in a count; this isolates the work to the server:
select count(*) from (select x,y from clobtest where rownum<1001)
You can achieve similar effects with "set autot trace", but there will be tracing overhead too.

There are two indirections when it comes to CLOBs and BLOBs:
The LOB value might be stored in a different database segment than the rest of the row.
When you query the row, only the non-LOB fields are contained in the result set, and accessing the LOB fields requires one or more additional round trips between the client and the server (per row!).
I don't quite know how you measure the execution time and I've never used Oracle Monitor, but you might primarily be affected by the second indirection. Depending on the client software you use, it is possible to reduce the round trips. E.g. when you use ODP.NET, the parameter is called InitialLobFetchSize.
Update:
One way to tell which of the two indirections is relevant is to run your LOB query with 1000 rows twice. If the time drops significantly from the first to the second run, it's indirection 1: on the second run the caching pays off and access to the separate database segment isn't very relevant anymore. If the time stays about the same, it's the second indirection, namely the round trips between the client and the server, which cannot improve between two runs.
The time of more than 8 seconds for 1000 rows in a very simple query indicates it's indirection 2, because 8 seconds for 1000 rows can't really be explained by disk access unless your data is very scattered and your disk system is under heavy load.
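A minimal SQL*Plus sketch of that two-run test, reusing the clobtest table from the question:
SET TIMING ON
-- first run: may include physical reads if the LOB data is not yet cached
select x, y from clobtest where rownum < 1001;
-- second run: the blocks are now cached; if the elapsed time barely improves,
-- the cost is in the per-row client/server round trips (indirection 2)
select x, y from clobtest where rownum < 1001;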

This is the key information (how to read a LOB without extra roundtrips), which I don't think is available in Oracle's documentation:
Another option is to bind the LOB on client side as if it was a big RAW/VARCHAR2. This only works if a maximum size of the LOB can be defined (since the maximum size must be provided at bind time). This trick avoids the extra roundtrips: the LOBs are just processed like RAW or VARCHAR2. We use it a lot in our LOB-intensive applications.
I had a problem loading a simple table (a few GB) with one BLOB column (14 KB per row, thousands of rows), and I was investigating it for a long time. I tried a lot of LOB storage tunings (DB_BLOCK_SIZE for a new tablespace, the CHUNK LOB storage specification), sqlnet.ora settings and client prefetching attributes, but this (treating the BLOB as a LONG RAW with OCCI ResultSet->setDataBuffer on the client side) was the most important thing: it persuades Oracle to send the BLOB column immediately, instead of sending a LOB locator first and then loading each LOB separately based on that locator.
Now I can get even ~ 500Mb/s throughput (with columns < 3964B).
Our 14 KB blob will be split into multiple columns, so it'll be stored in-row to get almost sequential reads from the HDD. With one 14 KB blob (one column) I get ~150 Mbit/s because of non-sequential reads (iostat shows a low number of merged read requests).
NOTE: don't forget to also set the LOB prefetch size/length:
err = OCIAttrSet(session, (ub4) OCI_HTYPE_SESSION, (void *) &default_lobprefetch_size, 0, (ub4) OCI_ATTR_DEFAULT_LOBPREFETCH_SIZE, errhp);
But I don't know how it is possible to achieve the same fetch throughput with an ODBC connector; I tried without any success.

Related

Why is Oracle RESULT_CACHE not reducing the number of LAST_CR_BUFFER gets?

I'm doing some tests with the Oracle result_cache and came across something that looks strange (to me, anyway).
I've created the following table:
create table CACHE_TEST(
COL1 int,
COL2 int
);
And inserted some dummy data into it:
insert into CACHE_TEST select * from(
select level as COL1, level*3 as COL2 from DUAL
connect by level <= 100
);
Now I run an Autotrace on the following query:
select * from CACHE_TEST;
As expected, it shows a normal full table scan. I then run an Autotrace on the following query several times, and expect it to be using the result cache:
select /*+ RESULT_CACHE */ * from CACHE_TEST;
The Autotrace shows that it is indeed using the cache, but the number of buffer gets and cost is exactly the same as the first query.
Interestingly, if I do some kind of aggregate, eg:
select /*+ RESULT_CACHE */ AVG(COL1) FROM CACHE_TEST;
It reduces the buffer gets to zero, but the cost is still the same.
Can anyone explain why:
The result cache doesn't seem to reduce the number of buffer gets unless you do an aggregate?
Even when it does use the result cache the cost is still reportedly the same (even though I see a marked performance increase)?
If you're selecting all the data from the table, why would you expect it to require fewer reads to read all that data from the table (which is likely completely in the buffer cache) rather than from the result cache? Either way, you're reading the same number of blocks and getting the same amount of data back.
The result cache is really helpful when your query is doing some sort of expensive calculation. That could be an aggregate-- reading a single cached value is obviously more efficient than reading every row from a table. Or it could be a query that does a lot of work to figure out which subset of the table to read (joining through some other tables, for example).

Oracle: Coercing VARCHAR2 and CLOB to the same type without truncation

In an app that supports MS SQL Server, MySQL, and Oracle, there's a table with the following relevant columns (types shown here are for Oracle):
ShortText VARCHAR2(1700) indexed
LongText CLOB
The app stores values 850 characters or less in ShortText, and longer ones in LongText. I need to create a view that returns that data, whichever column it's in. This works for SQL Server and MySQL:
SELECT
CASE
WHEN ShortText IS NOT NULL THEN ShortText
ELSE LongText
END AS TheValue
FROM MyTable
However, on Oracle, it generates this error:
ORA-00932: inconsistent datatypes: expected CHAR got CLOB
...meaning that Oracle won't implicitly convert the two columns to the same type, so the query has to do it explicitly. I don't want data to get truncated, so the type used has to be able to hold as much data as a CLOB, which as I understand it (I'm not an Oracle expert) means CLOB and only CLOB; no other choices are available.
This works on Oracle:
SELECT
CASE
WHEN ShortText IS NOT NULL THEN TO_CLOB(ShortText)
ELSE LongText
END AS TheValue
FROM MyTable
However, performance is amazingly awful. A query that returns LongText directly took 70-80 ms for about 9k rows, but the above construct took between 30 and 60 seconds, unacceptable.
So:
Are there any other Oracle types I could coerce both columns to that can hold as much data as a CLOB? Ideally something more text-oriented, like MySQL's LONGTEXT or SQL Server's NTEXT (or even better, NVARCHAR(MAX))?
Any other approaches I should be looking at?
Some specifics, in particular ones requested by @Guido Leenders:
Oracle version: Oracle Database 11g 11.2.0.1.0 64bit Production
Not certain if I was the only user, but the relative times are still striking.
Stats for the small table where I saw the performance I posted earlier:
rowcount: 9,237
varchar column total length: 148,516
clob column total length: 227,020
The TO_CLOB is pretty expensive, so try to avoid it. But I think it should perform reasonably well for 9K rows. The following test case is based upon one of the applications we develop, which has similar data model behaviour:
create table bubs_projecten_sample
( id number
, toelichting varchar2(1700)
, toelichting_l clob
);
begin
for i in 1..10000
loop
insert into bubs_projecten_sample
( id
, toelichting
, toelichting_l
)
values
( i
, case when mod(i, 2) = 0 then 'short' else null end
, case when mod(i, 2) = 0 then rpad('long', i, '*') else null end
)
;
end loop;
commit;
end;
/
Now make sure everything is in the cache and dirty blocks have been written out:
select *
from bubs_projecten_sample;
Test performance:
create table bubs_projecten_flat
as
select id
, to_clob(toelichting) toelichting_any
from bubs_projecten_sample
where toelichting is not null
union all
select id
, toelichting_l
from bubs_projecten_sample
where toelichting_l is not null;
The CREATE TABLE takes less than 1 second on a normal entry-level server, including writing out the data: 17K consistent gets, 4K physical reads. Stored on disk (note the rpad) is 25K for toelichting and 16M for toelichting_l.
Can you further elaborate on the problem?
Please check that large CLOBs are not stored inline. Normally large CLOBs are stored in a separate system-maintained table. Storing large CLOBs inside a table can make going through the table with a Full Table Scan expensive.
Also, I can imagine always populating both columns. You still get the benefit of indexing on the first so many characters. You just need to record in the table, using an indicator column, whether the CLOB or the ShortText column is leading.
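A rough sketch of that idea (the indicator column, its values and the view name are purely illustrative):
-- ShortText stays populated and indexable; the flag records which column is leading
ALTER TABLE MyTable ADD (LongTextUsed CHAR(1) DEFAULT 'N' NOT NULL);
CREATE OR REPLACE VIEW MyTableText AS
SELECT ShortText, LongText, LongTextUsed
FROM MyTable;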
As a side note: I see a difference between 850 and 1700. I would recommend making them equal, but remember to check that you are creating the table using character semantics. That can be done at statement level by using "varchar2(850 char)". Please note that Oracle will actually create a column that fits 850 * 4 bytes (in AL32UTF8 at least, where the "32" stands for "at most 4 bytes per character"). Good luck!

Oracle 10 MERGE performance

I had a very strange performance related problem with MERGE in Oracle 10. In a few words, I have a stored procedure that calculates and stores user rank based on her activity in the system and contains just one MERGE statement:
MERGE INTO user_ranks target USING
([complex query that returns rank_id and user_id])src ON
(src.user_id = target.user_id)
WHEN MATCHED THEN UPDATE SET target.rank_id = src.rank_id
WHEN NOT MATCHED THEN INSERT (target.user_id, target.rank_id)
VALUES (src.user_id, src.rank_id);
-- user_ranks table structure:
CREATE TABLE user_ranks (user_id INT NOT NULL
PRIMARY KEY USING INDEX (CREATE UNIQUE INDEX UQ_uid_uranks ON user_ranks(user_id)),
rank_id INT NOT NULL,
CONSTRAINT FK_uid_uranks FOREIGN KEY (user_id) REFERENCES users(id),
CONSTRAINT FK_rid_uranks FOREIGN KEY(rank_id) REFERENCES ranks(id));
-- no index on rank_id - intentionally, ranks table is a lookup with
-- very few records and no delete/update allowed
The subquery which is used as the source for the MERGE returns at most 1 record (user_id is passed as a parameter to the procedure). It's quite expensive, but its execution time is acceptable (1-1.2 sec). The problem is that the MERGE execution time jumps to more than 40 seconds, and I have no clue why. I tried using the LEADING hint with no success. But if I split the statement into two parts - first run the SELECT subquery and store the result (rank_id) in a variable, and then merge with MERGE ... USING (SELECT user_id, rank_id FROM DUAL) src ... - everything works just fine. From what I read, there are known issues with Oracle's MERGE, but they are mostly related to triggers (no triggers in my case). It also says that MERGE works slower than a combination of INSERT and UPDATE, but I believe the "normal" difference is around 5-10%, not 30 times...
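For reference, the split version that behaves well looks roughly like this (a sketch only; p_user_id and the bracketed subquery stand for the procedure parameter and the real complex query):
DECLARE
  v_rank_id user_ranks.rank_id%TYPE;
BEGIN
  -- step 1: run the expensive subquery once and keep the result
  SELECT rank_id INTO v_rank_id
  FROM ([complex query that returns rank_id and user_id]);

  -- step 2: merge from DUAL so the MERGE source is trivially cheap
  MERGE INTO user_ranks target
  USING (SELECT p_user_id AS user_id, v_rank_id AS rank_id FROM dual) src
  ON (src.user_id = target.user_id)
  WHEN MATCHED THEN UPDATE SET target.rank_id = src.rank_id
  WHEN NOT MATCHED THEN INSERT (target.user_id, target.rank_id)
  VALUES (src.user_id, src.rank_id);
END;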
I'm trying to understand what I did wrong... Thanks for your suggestions.
Update
The execution plan is too long to post here; in short, the subquery cost by itself is 12737, and with the merge it is 76305. Statistics output for the merge:
> Statistics
108 recursive calls
4 db block gets
45630447 consistent gets
24905 physical reads
0 redo size
620 bytes sent via SQL*Net to client
1183 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
subquery alone :
> Statistics
8 recursive calls
0 db block gets
34 consistent gets
0 physical reads
0 redo size
558 bytes sent via SQL*Net to client
234 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
If you've set up SQL*Plus autotrace it would be a matter of seconds to see which part of the actual execution plan has caused the most physical and logical I/O and used the most memory.
Note that the information you get using this method is much more precise than a simple explain plan.
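For example, a minimal sketch (run in the session that issues the statement):
SET AUTOTRACE TRACEONLY EXPLAIN STATISTICS
-- now execute the MERGE, and then the subquery on its own, and compare the
-- reported plans, consistent gets and physical reads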

Oracle 10g : amount of redo generated for the first insert statement in a transaction

I am trying to see the amount of redo generated by different insert statements.
I see that for the first insert in the transaction , the redo size is being shown as zero.
The very next insert generates a redo of 2664 bytes (probably for the last two inserts).
All subsequent inserts generate the expected number of redo.
The database I am using is 10.2.0.4
create table temp (
x int, y char(1000), z date);
Table created.
set autotrace traceonly statistics;
sql> insert into temp values (1, user, sysdate );
1 row created.
Statistics
----------------------------------------------------------
1 recursive calls
5 db block gets
1 consistent gets
0 physical reads
0 redo size
358 bytes sent via SQL*Net to client
319 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed
--Showing only redo size for subsequent inserts...
sql> insert into temp values (2, user, sysdate);
1 row created.
Statistics
------------
2664 redo size
sql> insert into temp values (3, user, sysdate);
1 row created.
Statistics
----------------------------------------------------------
1300 redo size
sql> insert into temp values (4, user, sysdate);
1 row created.
Statistics
----------------------------------------------------------
1368 redo size
Can someone please explain why this happens?
Thanks,
Rajesh.
Reproduced on XE.
If you Google "In Memory Undo" and "Private Redo Threads" there is some discussion which might be relevant, e.g. this:
"For "small" transactions, 10g
generates private redo and doesn't
apply the changes to the blocks until
the commit. However the flag
(x$bh.flag) has bit 3 set to 1 to show
that private redo exists for the
block.
When the commit occurs, the redo is
applied to the block, at which point
the block is marked as dirty, the
private redo is then copied to the
public redo buffer and LGWR is posted
to write the redo to disc. (The
treatment of the related undo blocks
is similar)."
This behaviour is in fact due to the private redo mechanism, as Gary pointed out.
However, the changes are pushed to the public strands after a considerable amount of redo has been generated, not after the commit.
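One way to cross-check this, independent of AUTOTRACE, is to read the session's cumulative redo before and after each insert (a small sketch; it needs access to the V$ views):
select sn.name, ms.value
from v$mystat ms
join v$statname sn on sn.statistic# = ms.statistic#
where sn.name = 'redo size';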
This question has been answered on the Oracle technology forum.
Please read the comments by Jonathan Lewis and Tanel Poder in the following thread.
http://forums.oracle.com/forums/thread.jspa?messageID=3915905&#3915905

Inserts are 4x slower if table has lots of record (400K) vs. if it's empty

(Database: Oracle 10G R2)
It takes 1 minute to insert 100,000 records into a table. But if the table already contains some records (400K), then it takes 4 minutes and 12 seconds; also CPU-wait jumps up and “Free Buffer Waits” become really high (from dbconsole).
Do you know what's happening here? Is this because of frequent table extents? The extent size for these tables is 1,048,576 bytes. I have a feeling the DB is trying to extend the table storage.
I am really confused about this. So any help would be great!
This is the insert statement:
begin
for i in 1 .. 100000 loop
insert into customer
(id, business_name, address1,
address2, city,
zip, state, country, fax,
phone, email
)
values (customer_seq.nextval, dbms_random.string ('A', 20), dbms_random.string ('A', 20),
dbms_random.string ('A', 20), dbms_random.string ('A', 20),
trunc (dbms_random.value (10000, 99999)), 'CA', 'US', '798-779-7987',
'798-779-7987', 'asdfasf#asfasf.com'
);
end loop;
end;
Here is the dstat output (CPU, IO, MEMORY, NET) for:
Empty Table inserts: http://pastebin.com/f40f50dbb
Table with 400K records: http://pastebin.com/f48d8ebc7
Output from v$buffer_pool_statistics
ID: 3
NAME: DEFAULT
BLOCK_SIZE: 8192
SET_MSIZE: 4446
CNUM_REPL: 4446
CNUM_WRITE: 0
CNUM_SET: 4446
BUF_GOT: 1407656
SUM_WRITE: 1244533
SUM_SCAN: 0
FREE_BUFFER_WAIT: 93314
WRITE_COMPLETE_WAIT: 832
BUFFER_BUSY_WAIT: 788
FREE_BUFFER_INSPECTED: 2141883
DIRTY_BUFFERS_INSPECTED: 1030570
DB_BLOCK_CHANGE: 44445969
DB_BLOCK_GETS: 44866836
CONSISTENT_GETS: 8195371
PHYSICAL_READS: 930646
PHYSICAL_WRITES: 1244533
UPDATE
I dropped the indexes on this table and performance improved drastically, even when inserting 100K rows into a table with 600K records (which took 47 seconds with no CPU wait - see dstat output http://pastebin.com/fbaccb10).
Not sure if this is the same in Oracle, but in SQL Server the first thing I'd check is how many indexes you have on the table. If it's a lot the DB has to do a lot of work reindexing the table as records are inserted. It's more difficult to reindex 500k rows than 100k.
The indices are some form of tree, which means the time to insert a record is going to be O(log n), where n is the size of the tree (≈ number of rows for the standard unique index).
The fastest way to insert them is going to be dropping/disabling the index during the insert and recreating it after, as you've already found.
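A sketch of that approach for the customer table from the question (the index name and column are assumptions, and this is not an option for indexes that back primary key or unique constraints):
drop index customer_email_idx;
-- run the bulk insert loop here ...
create index customer_email_idx on customer (email);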
Even with indexes, 4 minutes to insert 100,000 records seems like a problem to me.
If this database has I/O problems, you haven't fixed them and they will appear again. I would recommend that you identify the root cause.
If you post the index DDL, I'll time it for a comparison.
I added indexes on id and business_name. Doing 10 iterations in a loop, the average time per 100,000 rows was 25 seconds. This was on my home PC/server all running on a single disk.
Another trick to improve performance is to turn on, or raise, the cache on your sequence (customer_seq). This allows Oracle to hand out sequence numbers from memory instead of updating the data dictionary for every insert.
Be careful with this one though: in some situations it will cause your sequence to have gaps between values.
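For example (1000 is just an illustrative cache size):
alter sequence customer_seq cache 1000;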
More information here:
Oracle/PLSQL: Sequences (Autonumber)
Sorted inserts always take longer the more entries there are in the table.
You don't say which columns are indexed. If you had indexes on fax, phone or email, you would have had a LOT of duplicates (ie every row).
Oracle 'pretends' to have non-unique indexes. In reality every index entry is unique with the rowid of the actual table row being the deciding factor. The rowid is made up of the file/block/record.
It is possible that, once you hit a certain number of records, the new rows were getting rowids which meant they had to be fitted into the middle of existing index blocks, with a lot of index re-writing going on.
If you supply full table and index creation statements, others would be able to reproduce the experience which would have allowed for more evidence based responses.
I think it has to do with extending the internal structure of the file, as well as maintaining database indexes for the added information. I believe the database arranges the data in a non-linear fashion that helps speed up data retrieval on selects.
