Sybase TEXT vs Oracle CLOB performance

We're in the process of converting our database from Sybase to Oracle and we've hit a performance problem. In Sybase, we had a TEXT field and replaced it with a CLOB in Oracle.
This is how we accessed the data in our java code:
while (rs.next()) {
    String clobValue = rs.getString(1); // This takes 176ms in Oracle!
    // ...
}
The database is across the country, but still, we didn't have any performance problems with Sybase and its retrieval of TEXT data.
Is there something we can do to increase this performance?

By default, LOBs are not fetched along with the rest of the row data, so each call to getString triggers an extra round trip to the database to fetch the LOB contents.
If you are using Oracle's .NET provider, you can set InitialLOBFetchSize on the data reader to a value large enough to hold your large objects in memory, so they are fetched in their entirety along with the other data.
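For the Java code in the question, the Oracle JDBC driver exposes a similar knob as a connection property (it also appears further down this page as defaultLobPrefetchSize). A minimal, hedged sketch, assuming a placeholder URL, credentials and table/column names that are not from the original post:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class LobPrefetchExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "scott");        // placeholder credentials
        props.setProperty("password", "tiger");
        // Ask the driver to prefetch up to ~32 KB of each LOB together with the row,
        // avoiding a separate round trip per LOB (recognized by the 11g+ thin driver).
        props.setProperty("oracle.jdbc.defaultLobPrefetchSize", "32768");

        try (Connection con = DriverManager.getConnection("jdbc:oracle:thin:@//dbhost:1521/ORCL", props);
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT myClob FROM myTable")) { // hypothetical names
            while (rs.next()) {
                String clobValue = rs.getString(1); // no extra round trip if the CLOB fits in the prefetch size
            }
        }
    }
}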

Some other options:
Are the LOB columns being stored in-line (in the data row) or out-of-line (in a separate place)? If the LOB columns tend to be small (under 4k in size), you can use the ENABLE STORAGE IN ROW clause to tell Oracle to store the data in-line where possible.
If your LOBs are larger and frequently used, are they being stored in the buffer cache? The default in 10g is that LOBs are NOCACHE, meaning each I/O operation against them involves a direct read from the data files, a synchronous disk event, which can be slow. A database trace would reveal significant waits on direct path read / direct path write events.
This chapter of the Oracle Application Developer's Guide - Large Objects would be valuable reading.
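A minimal sketch of those two storage clauses, issued through JDBC against a hypothetical table (the names docs/body are placeholders, not the poster's schema):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ClobStorageExample {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
             Statement stmt = con.createStatement()) {
            // Keep CLOBs under ~4k in the row itself and let them use the buffer cache.
            stmt.execute("CREATE TABLE docs (id NUMBER PRIMARY KEY, body CLOB) "
                       + "LOB (body) STORE AS (ENABLE STORAGE IN ROW CACHE)");
            // For an existing table, the LOB segment can be rebuilt with new storage options:
            // ALTER TABLE docs MOVE LOB (body) STORE AS (ENABLE STORAGE IN ROW CACHE)
        }
    }
}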

We decided to take a different approach that lets us sidestep CLOB performance altogether.
Our current code (I didn't write it!) queries a table in the database and retrieves all of the information in it, including the CLOBs, even though we don't actually need them all of the time. Instead, we created another column holding the first 4k characters in a VARCHAR and query that instead. Then, when we need the full CLOB, we query it on an individual basis rather than fetching all CLOBs for all records.
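For illustration, a hedged sketch of how such a preview column could be added and populated with DBMS_LOB.SUBSTR; the table and column names (my_table, big_clob, clob_preview) are made up for the example:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ClobPreviewColumn {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
             Statement stmt = con.createStatement()) {
            // Add a preview column and fill it with the first 4000 characters of the CLOB.
            stmt.execute("ALTER TABLE my_table ADD (clob_preview VARCHAR2(4000))");
            stmt.execute("UPDATE my_table SET clob_preview = DBMS_LOB.SUBSTR(big_clob, 4000, 1)");
        }
    }
}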

Related

Data Injected to SQL DB too large in comparison to Oracle Source

Hello, I am using Azure Data Factory to inject data from Oracle into SQL DB; the data is extracted in CSV format. The problem is that at the source I am reading about 10 GB of data, but once it is injected into Azure SQL DB the data size becomes 40 GB.
Is that normal? And is there a way to lower the destination data size?
Thank you
Try setting the table compression level. If the table has many numeric columns, SQL Server's ROW compression is similar to Oracle's default storage format for numbers and should produce a storage size similar to Oracle's.
PAGE compression adds further compression strategies and may give you roughly 3x compression.
And if the table has many millions of rows, use a clustered columnstore table, which may provide roughly 10x compression.
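If it helps, here is a hedged JDBC sketch of those three options against a hypothetical table dbo.MyTable (apply only whichever one fits; a clustered columnstore index, for instance, replaces any existing clustered index):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SqlServerCompressionExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sqlserver://myserver.database.windows.net;databaseName=mydb;user=me;password=secret";
        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement()) {
            // Option 1: ROW compression (comparable to Oracle's number storage).
            stmt.execute("ALTER TABLE dbo.MyTable REBUILD WITH (DATA_COMPRESSION = ROW)");
            // Option 2: PAGE compression (adds prefix/dictionary compression, ~3x).
            stmt.execute("ALTER TABLE dbo.MyTable REBUILD WITH (DATA_COMPRESSION = PAGE)");
            // Option 3: clustered columnstore for very large tables (~10x).
            stmt.execute("CREATE CLUSTERED COLUMNSTORE INDEX cci_MyTable ON dbo.MyTable");
        }
    }
}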

Delphi: ClientDataSet is not working with big tables in Oracle

We have a TDBGrid that is connected to a TClientDataSet via a TDataSetProvider in Delphi 7 with an Oracle database.
It works fine for showing the content of small tables, but the program hangs when you try to open a table with many rows (for example 2 million rows), because TClientDataSet tries to load the whole table into memory.
I tried setting "FetchOnDemand" to True for our TClientDataSet and "poFetchDetailsOnDemand" to True in the Options of the TDataSetProvider, but it does not help solve the problem. Any ideas?
Update:
My solution is:
TClientDataSet.FetchOnDemand = True
TDataSetProvider.Options.poFetchDetailsOnDemand = True
TClientDataSet.PacketRecords = 500
I managed to solve the problem by setting the "PacketRecords" property of TCustomClientDataSet. This property indicates the number or type of records in a single data packet. PacketRecords defaults to -1, meaning that a single packet should contain all records in the dataset, but I changed it to 500 rows.
When working with an RDBMS, and especially with large datasets, trying to access a whole table is exactly what you shouldn't do. That's a typical newbie mistake, or a habit carried over from old file-based desktop database engines.
When working with an RDBMS, you should load only the rows you're interested in, display/modify/update/insert them, and send the changes back to the database. That means a SELECT with a proper WHERE clause, and also an ORDER BY - remember that row ordering is never guaranteed when you issue a SELECT without an ORDER BY; a database engine is free to retrieve rows in whatever order it sees fit for a given query.
If you have to perform bulk changes, do them in SQL and have them processed on the server; don't load a whole table client side, modify it, and send the changes back row by row.
Loading large datasets client side can fail for several reasons: lack of memory (especially in 32-bit applications), memory fragmentation, and so on. You will probably flood the network with data you don't need, force the database to perform a full scan, and maybe flood the database cache as well.
Therefore, client datasets are not designed to handle millions or billions of rows. They are designed to cache the rows you need client side, and then apply the changes back to the remote data. You need to change your application logic.

Slow Performance on Sql Express after inserting big chunks of data

We have noticed that our queries run slower on databases that had big chunks of data added (bulk insert) compared with databases that had the data added on a record-by-record basis, but with similar amounts of data.
We use Sql 2005 Express and we tried reindexing all indexes without any better results.
Do you know of some kind of structural problem on the database that can be caused by inserting data in big chunks instead of one by one?
Thanks
One tip I've seen is to turn off Auto-create stats and Auto-update stats before doing the bulk insert:
ALTER DATABASE databasename SET AUTO_CREATE_STATISTICS OFF WITH NO_WAIT
ALTER DATABASE databasename SET AUTO_UPDATE_STATISTICS OFF WITH NO_WAIT
Afterwards, manually create statistics using one of two methods:
--generate statistics quickly using a sample of data from the table
exec sp_createstats
or
--generate statistics using a full scan of the table
exec sp_createstats @fullscan = 'fullscan'
You should probably also turn Auto-create and Auto-update stats back on when you're done.
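Putting the steps together, a hedged JDBC sketch of the whole sequence, with placeholder database and connection details:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BulkInsertStatsExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sqlserver://localhost;databaseName=mydb;user=me;password=secret";
        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement()) {
            // 1. Switch automatic statistics off before the bulk load.
            stmt.execute("ALTER DATABASE mydb SET AUTO_CREATE_STATISTICS OFF WITH NO_WAIT");
            stmt.execute("ALTER DATABASE mydb SET AUTO_UPDATE_STATISTICS OFF WITH NO_WAIT");

            // 2. ... perform the bulk insert here (BULK INSERT, bcp, etc.) ...

            // 3. Rebuild statistics manually, with a full scan for best quality.
            stmt.execute("EXEC sp_createstats @fullscan = 'fullscan'");

            // 4. Turn automatic statistics back on.
            stmt.execute("ALTER DATABASE mydb SET AUTO_CREATE_STATISTICS ON WITH NO_WAIT");
            stmt.execute("ALTER DATABASE mydb SET AUTO_UPDATE_STATISTICS ON WITH NO_WAIT");
        }
    }
}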
Another option is to check and defrag the indexes after a bulk insert. Check out Pinal Dave's blog post.
Probably SQL Server allocated the new disk space in many small chunks. When doing big transactions, it's better to pre-allocate plenty of space in both the data and log files.
That's an interesting question.
I would have guessed that Express and non-Express have the same storage layout, so when you're Googling for other people with similar problems, don't restrict yourself to Googling for problems in the Express version. On the other hand though, bulk insert is a common-place operation and performance is important, so I wouldn't consider it likely that this is a previously-undetected bug.
One obvious question: which is the clustered index? Is the clustered index also the primary key? Is the primary key unassigned when you insert, and therefore initialized by the database? If so then maybe there's a difference (between the two insert methods) in the pattern or sequence of successive values assigned by the database, which affects the way in which the data is clustered, which then affects performance.
Something else: as well as indexes, people say that SQL uses statistics (which it created as a result of running previous queries) to optimize its execution plan. I don't know any details of that, but as well as "reindexing all indexes", check the execution plans of your queries in the two test cases to ensure that the plans are identical (and/or check the associated statistics).

Oracle CLOB performance

I am running queries against an Oracle 10g with JDBC (using the latest drivers and UCP as DataSource) in order to retrieve CLOBs (avg. 20k characters). However the performance seems to be pretty bad: the batch retrieval of 100 LOBs takes 4s in average. The operation is also neither I/O nor CPU nor network bound judging from my observations.
My test setup looks like this:
PoolDataSource dataSource = PoolDataSourceFactory.getPoolDataSource();
dataSource.setConnectionFactoryClassName("...");
dataSource.setConnectionPoolName("...");
dataSource.setURL("...");
dataSource.setUser("...");
dataSource.setPassword("...");
dataSource.setConnectionProperty("defaultRowPrefetch", "1000");
dataSource.setConnectionProperty("defaultLobPrefetchSize", "500000");

final LobHandler handler = new OracleLobHandler();
JdbcTemplate j = new JdbcTemplate(dataSource);

j.query("SELECT bigClob FROM ...",
    new RowCallbackHandler() {
        public void processRow(final ResultSet rs) throws SQLException {
            String result = handler.getClobAsString(rs, "bigClob");
        }
    });
I experimented with the fetch sizes but to no avail. Am I doing something wrong? Is there a way to speed up CLOB retrieval when using JDBC?
The total size of the result set is in the ten thousands - measured over the span of the whole retrieval the initial costs
Is there an ORDER BY in the query? 10K rows is quite a lot if it has to be sorted.
Also, retrieving just the PK is not a fair test versus retrieving the entire CLOB. Oracle stores the table rows with probably many per block, but each of the CLOBs (if they are > 4K) will be stored out of line, each in a series of blocks. Scanning the list of PKs is therefore going to be fast. Also, there is probably an index on the PK, so Oracle can just quickly scan the index blocks and not even access the table.
4 seconds does seem a little high, but it is 2 MB that possibly needs to be read from disk and transported over the network to your Java program. Network could be an issue. If you perform a SQL trace of the session, it will point you at exactly where the time is being spent (disk reads or network).
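If you want to try the SQL trace suggestion from your JDBC session, a hedged sketch using DBMS_SESSION.SESSION_TRACE_ENABLE (10gR2 onward); the table and column names are placeholders, and the trace file has to be picked up on the server and formatted with tkprof:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class TraceClobFetch {
    // Assumes 'con' is the same connection used for the slow CLOB query.
    static void traceClobQuery(Connection con) throws Exception {
        try (Statement stmt = con.createStatement()) {
            // Turn on extended SQL trace (including wait events) for this session only.
            stmt.execute("BEGIN DBMS_SESSION.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE); END;");

            // Run the slow query; names are placeholders.
            try (ResultSet rs = stmt.executeQuery("SELECT bigClob FROM myTable")) {
                while (rs.next()) {
                    rs.getString(1);
                }
            }

            stmt.execute("BEGIN DBMS_SESSION.SESSION_TRACE_DISABLE; END;");
            // The trace file lands in the server's user_dump_dest; format it with tkprof
            // and look for 'direct path read' / 'SQL*Net' waits.
        }
    }
}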
My past experience of using Oracle LOB types to store large data has not been good. They are fine while the data stays under 4k, since it is stored in-line much like a VARCHAR2. Once it grows past 4k you start seeing performance degrade. Things may have improved since I last tried it a couple of years ago, but here is what I found in the past, for your information.
Since clients have to go through the Oracle server to get at LOBs, consider the following:
- LOB data competes for the limited SGA cache with other data types if Oracle decides to cache it. As CLOB data is generally big, it may push other data out of the cache.
- LOB data gets poor disk read performance if Oracle decides not to cache it and instead streams the data to the client.
- Fragmentation is probably something you haven't encountered yet. You will see it if your application deletes LOBs and Oracle tries to reuse the space. I don't know whether Oracle supports online defragmentation of LOB storage (it does for indexes, but that took a long time when we tried it previously).
You mentioned 4s for 100 LOBs of avg 20k, so that's 40ms per LOB. Remember each LOB has to be retrieved via a separate LOB locator (it is not in the result set by default). That is an additional round trip for each LOB, I assume (I am not 100% sure on this since it was a while ago). If that is the case, I assume it will add at least 5ms per round trip, in serial order, right? If so, your performance is already limited first by sequential LOB fetches. You should be able to verify this by tracking the time spent in SQL execution vs LOB content fetching. Or you can verify it by excluding the LOB column, as suggested by the previous answer in this post, which should tell you whether it is LOB related.
Good luck
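A hedged sketch of that verification: time the query with and without the CLOB column on the same connection and compare; the table and column names (myTable, id, bigClob) are placeholders, not from the original post.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class ClobFetchTiming {
    // Compare fetching only the PK against fetching PK + CLOB on the same connection.
    static void compare(Connection con) throws Exception {
        time(con, "SELECT id FROM myTable");          // baseline: no LOB locators involved
        time(con, "SELECT id, bigClob FROM myTable"); // adds one LOB round trip per row (unless prefetched)
    }

    static void time(Connection con, String sql) throws Exception {
        long start = System.nanoTime();
        int rows = 0;
        try (Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                // Materialize every column so LOB contents are actually transferred.
                for (int i = 1; i <= rs.getMetaData().getColumnCount(); i++) {
                    rs.getString(i);
                }
                rows++;
            }
        }
        System.out.printf("%s -> %d rows in %d ms%n", sql, rows, (System.nanoTime() - start) / 1000000);
    }
}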
I had a similar issue and found that the JDBC LOBs were making a network call when accessing the LOBs.
As of the Oracle 11.2 JDBC driver you can use LOB prefetching.
This sped up access by a factor of 10...
statement1.setFetchSize(1000);
if (statement1 instanceof OracleStatement) {
    ((OracleStatement) statement1).setLobPrefetchSize(250000);
}
Thanks for all the helpful suggestions. Despite this being flagged as the answer to the problem, my answer is that there seems to be no good solution. I tried using parallel statements, different storage characteristics, presorted temp tables and other things. The operation does not seem to be bound to any characteristic visible through traces or explain plans. Even query parallelism seems to be sketchy when CLOBs are involved.
Undoubtedly there would be better options to deal with large CLOBs (especially compression) in an 11g environment, but at the moment I am stuck with 10g.
I have now opted for an additional round trip to the database in which I'll preprocess the CLOBs into a size-optimized binary RAW. In previous deployments this has always been a very fast option and will likely be worth the trouble of maintaining an offline computed cache. The cache will be invalidated and updated using a persistent process and AQ until someone comes up with a better idea.

Is it reasonable to use small blobs in Oracle?

In Oracle, LongRaw and Varchar2 have a max length of 4kb, but I need to store objects of 8kb and 16kb, so I'm wondering what a good solution would be. I know I could use a Blob, but a Blob has variable length and, if I'm correct, is basically an extra file behind the scenes, a feature and a price I'm not interested in paying for my objects.
Are there any other solutions or datatypes that are better suited to this kind of need?
Thanks
A blob is not a file behind the scenes. It is stored in the database. Why does it matter that it has variable length? You can just use a blob column (or a clob if your data is text data) and it gets stored in its own segment.
You should use a BLOB.
A BLOB is not stored as an extra file, it's stored as a block in one of your datafiles (just like other data). If the BLOB becomes too large for a single block (which may not happen in your case) then it will continue in another block.
If your BLOB data is really small, you can get Oracle to store it inline with other data in your row (like a varchar2).
Internally, Oracle is doing something similar to what PAX suggested. The chunks are as big as a DB block minus some overhead. If you try and re-invent Oracle features on top of Oracle it's only going to be slower than the native feature.
You will also have to re-implement a whole heap of functionality that is already provided in DBMS_LOB (length, comparisons, etc).
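For example, some of the DBMS_LOB routines can be used straight from SQL without dragging the whole LOB to the client; a hedged sketch with placeholder names (docs, id, body):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DbmsLobExample {
    // Get the length of a stored BLOB without pulling the whole thing to the client.
    static long blobLength(Connection con, long id) throws Exception {
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT DBMS_LOB.GETLENGTH(body) FROM docs WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getLong(1) : -1;
            }
        }
    }
}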
Why don't you segment the binary data and store it in 4K chunks? You could either have four different columns for these chunks (and a length column for rebuilding them into your big structure) or the more normalized way of another table with the chunks in it tied back to the original table record.
This would provide for expansion should you need it in future.
For example:
Primary table:
    -- normal columns --
    chunk_id        integer
    chunk_last_len  integer
Chunk table:
    chunk_id        integer
    chunk_sequence  integer
    chunk           varchar2(whatever)
    primary key (chunk_id, chunk_sequence)
Of course, you may find that your DBMS does exactly that sort of behavior under the covers for BLOBs and it may be more efficient to let Oracle handle it, relieving you of the need to manually reconstruct your data from individual chunks. I'd measure the performance of each to figure out the best approach.
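For what it's worth, a hedged client-side sketch of that chunking idea over JDBC; all names here are placeholders, and for binary data the chunk column would be better declared RAW(2000) than VARCHAR2 (see the character-set warning in the next answer):

import java.sql.Connection;
import java.sql.PreparedStatement;

public class ChunkWriter {
    private static final int CHUNK_SIZE = 2000; // fits a RAW(2000) column

    // Splits 'data' into fixed-size pieces and writes them to the chunk table.
    static void writeChunks(Connection con, long chunkId, byte[] data) throws Exception {
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO chunk_table (chunk_id, chunk_sequence, chunk) VALUES (?, ?, ?)")) {
            int sequence = 0;
            for (int offset = 0; offset < data.length; offset += CHUNK_SIZE) {
                int len = Math.min(CHUNK_SIZE, data.length - offset);
                byte[] piece = new byte[len];
                System.arraycopy(data, offset, piece, 0, len);
                ps.setLong(1, chunkId);
                ps.setInt(2, sequence++);
                ps.setBytes(3, piece);
                ps.addBatch();
            }
            ps.executeBatch();
        }
    }
}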
Don't store binary data in varchar2 columns, unless you are willing to encode them (base64 or similar). Character set issues might corrupt your data otherwise!
Try the following statement to see the effect:
select * from (select rownum-1 original, ascii(chr(rownum-1)) data from user_tab_columns where rownum<=256) where original<>data;
Varchar2 is of variable length just as well. If you need to store binary data of anything bigger than a small size in your database, you'll have to look in the blob's direction. Another solution is of course storing the binary data somewhere on the file system and storing the path to the file as a varchar in the db.
