H2 DB file size increase rule

I've noticed that the file size doesn't increase each time I write a dataset into the DB. Instead it grows only once in a while, by some amount. Is it a constant or percentage-based rule? When does the file size increase? Does the file grow by a larger amount at once if the DB or the file is already bigger?
I'm using embedded H2 version 1.4.197 with the MVStore and encryption.
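One way to investigate this empirically is to watch the file length while inserting. Below is a minimal sketch, not an answer about H2's internal allocation rule: the file name, table, payload and passwords are made up for the example, and the H2 1.4.x driver is assumed to be on the classpath. With CIPHER=AES the JDBC password is the file password followed by a space and the user password.

```java
import java.io.File;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

// Minimal sketch: write batches into an encrypted embedded H2 database and
// print the MVStore file length after each commit to see when it actually grows.
public class H2FileGrowth {
    public static void main(String[] args) throws Exception {
        // CIPHER=AES enables file encryption; the password is "<filePassword> <userPassword>".
        String url = "jdbc:h2:./growth-test;CIPHER=AES";
        File dbFile = new File("growth-test.mv.db");   // MVStore file H2 creates for this URL
        String payload = "x".repeat(500);              // dummy row content

        try (Connection con = DriverManager.getConnection(url, "sa", "filepwd userpwd")) {
            con.setAutoCommit(false);
            try (Statement st = con.createStatement()) {
                st.execute("CREATE TABLE IF NOT EXISTS t(id IDENTITY, data VARCHAR(1000))");
            }
            try (PreparedStatement ps = con.prepareStatement("INSERT INTO t(data) VALUES (?)")) {
                for (int batch = 0; batch < 50; batch++) {
                    for (int i = 0; i < 1000; i++) {
                        ps.setString(1, payload);
                        ps.addBatch();
                    }
                    ps.executeBatch();
                    con.commit();
                    System.out.printf("after batch %d: %d bytes%n", batch, dbFile.length());
                }
            }
        }
    }
}
```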

Related

What is the default space size in JPOS, and how do I increase it?

I am using the JPOS transaction manager for development. I want to know the exact default space size in JPOS and what I should do to increase it.

How to figure out the optimal fetch size for the select query

In JDBC the default fetch size is 10, but I guess that's not the best fetch size when I have a million rows. I understand that a fetch size that is too low reduces performance, but so does one that is too high.
How can I find the optimal size? And does this have an impact on the DB side, does it chew up a lot of memory?
If your rows are large, keep in mind that all the rows you fetch at once will have to be stored in the Java heap in the driver's internal buffers. In 12c, Oracle has VARCHAR(32k) columns; if you have 50 of those and they're full, that's 1,600,000 characters per row. Each character is 2 bytes in Java, so each row can take up to 3.2 MB. If you're fetching rows 100 by 100, you'll need 320 MB of heap to store the data, and that's just for one Statement. So you should only increase the row prefetch size for queries that fetch reasonably small rows (small in data size).
As with (almost) anything, the way to find the optimal size for a particular parameter is to benchmark the workload you're trying to optimize with different values of the parameter. In this case, you'd need to run your code with different fetch size settings, evaluate the results, and pick the optimal setting.
In the vast majority of cases, people pick a fetch size of 100 or 1000 and that turns out to be a reasonably optimal setting. The performance difference among values at that point is generally pretty minimal; you would expect most of the performance difference between runs to be the result of normal random variation rather than changes in the fetch size. If you're trying to get the last iota of performance for a particular workload in a particular configuration, you can certainly do that analysis. For most folks, though, 100 or 1000 is good enough.
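As a rough illustration of that benchmarking approach, here is a minimal JDBC sketch; the connection URL, credentials, table name and candidate sizes are made up for the example, and the timings are only indicative:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal sketch: run the same query with several fetch sizes and compare wall-clock time.
public class FetchSizeBenchmark {
    public static void main(String[] args) throws Exception {
        int[] candidates = {10, 100, 1000, 5000};           // fetch sizes to try
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "scott", "tiger")) {
            for (int fetchSize : candidates) {
                long start = System.nanoTime();
                try (Statement st = con.createStatement()) {
                    st.setFetchSize(fetchSize);              // rows fetched per round trip
                    try (ResultSet rs = st.executeQuery("SELECT * FROM big_table")) {
                        int rows = 0;
                        while (rs.next()) {
                            rows++;                          // just drain the result set
                        }
                        long ms = (System.nanoTime() - start) / 1_000_000;
                        System.out.printf("fetchSize=%d rows=%d time=%dms%n", fetchSize, rows, ms);
                    }
                }
            }
        }
    }
}
```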
The default value of the JDBC fetch size property is driver specific, and for the Oracle driver it is indeed 10.
For some queries the fetch size should be larger, for some smaller.
A good approach is to set a global fetch size for the whole project and override it for individual queries where it should be bigger.
Look at this article:
http://makejavafaster.blogspot.com/2015/06/jdbc-fetch-size-performance.html
It describes how to set the fetch size globally and override it for carefully selected queries using different approaches: Hibernate, JPA, Spring JDBC templates, or the core JDBC API, along with a simple benchmark for an Oracle database.
As a rule of thumb you can:
set the fetch size to 50-100 as the global setting
set the fetch size to 100-500 (or even more) for individual queries
The Oracle JDBC driver does have a default row prefetch size of 10. Check out
OracleConnection.getDefaultRowPrefetch in the Oracle JDBC Javadoc.
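To make the "global default plus per-query override" idea concrete, here is a minimal sketch using the Oracle thin driver's defaultRowPrefetch connection property; the URL, credentials and table names are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Properties;

// Minimal sketch: set a connection-wide default prefetch via the connection property,
// then raise the fetch size on one specific query that returns many small rows.
public class FetchSizeDefaults {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "scott");
        props.setProperty("password", "tiger");
        props.setProperty("defaultRowPrefetch", "100");     // global default for this connection

        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", props)) {

            // This query inherits the 100-row default.
            try (PreparedStatement ps = con.prepareStatement("SELECT id, name FROM customers");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) { /* process row */ }
            }

            // This query overrides the default because it returns many small rows.
            try (PreparedStatement ps = con.prepareStatement("SELECT id FROM audit_log")) {
                ps.setFetchSize(500);                        // per-query override
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) { /* process row */ }
                }
            }
        }
    }
}
```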
tl;dr
How to figure out the optimal fetch size for the select query
Evaluate some maximal amount of memory (bytesInMemory); 4 MB, 8 MB or 16 MB are good starting points.
Evaluate the maximal size of each column in the query and sum up those sizes (bytesPerRow).
...
Use this formula: fetch_size = bytesInMemory / bytesPerRow
You may adjust the formula result to have predictable values.
Finally, test with different bytesInMemory values and/or different queries to evaluate the results in your application.
The above answer was inspired by the Apache MetaModel project (in the Apache Attic as of this writing). They found an answer for this exact question: they built a class for calculating a fetch size given a maximal memory amount. The class is based on an Oracle whitepaper explaining how Oracle JDBC drivers manage memory.
Basically, the class is constructed with a maximal memory amount (bytesInMemory). Later, it is asked for a fetch size for a Query (an Apache MetaModel class). The Query class helps find the number of bytes (bytesPerRow) a typical query result row would have. The fetch size is then calculated with the formula below:
fetch_size = bytesInMemory / bytesPerRow
The fetch size is also adjusted to stay in the range [1, 25000]. Other adjustments are made during the calculation of bytesPerRow, but that is too much detail for here.
The class is named FetchSizeCalculator; the full source code is available in the MetaModel project.
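As a rough sketch of that idea (not the actual MetaModel FetchSizeCalculator, and with the column-size estimation reduced to a simple list of estimates), the calculation could look like this:

```java
import java.util.List;

// Simplified sketch of a fetch-size calculator: divide a memory budget by an
// estimated row size and clamp the result to a sane range, as described above.
public class SimpleFetchSizeCalculator {
    private static final int MIN_FETCH_SIZE = 1;
    private static final int MAX_FETCH_SIZE = 25_000;

    private final long bytesInMemory;        // memory budget, e.g. 8 * 1024 * 1024

    public SimpleFetchSizeCalculator(long bytesInMemory) {
        this.bytesInMemory = bytesInMemory;
    }

    /** columnMaxBytes: estimated maximal byte size of each selected column. */
    public int getFetchSize(List<Integer> columnMaxBytes) {
        long bytesPerRow = columnMaxBytes.stream().mapToLong(Integer::longValue).sum();
        if (bytesPerRow <= 0) {
            return MAX_FETCH_SIZE;           // nothing to base the estimate on
        }
        long fetchSize = bytesInMemory / bytesPerRow;
        return (int) Math.max(MIN_FETCH_SIZE, Math.min(MAX_FETCH_SIZE, fetchSize));
    }

    public static void main(String[] args) {
        // Example: 8 MB budget, three columns estimated at 4000, 200 and 8 bytes per row.
        SimpleFetchSizeCalculator calc = new SimpleFetchSizeCalculator(8 * 1024 * 1024);
        System.out.println(calc.getFetchSize(List.of(4000, 200, 8))); // prints 1993
    }
}
```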

How the buffer cache works in an Oracle database

My question is: in an Oracle database, if there is a 5 GB table and the SGA size is 10 GB, then when I select from the 5 GB table it will fit into the 10 GB SGA.
But if my table is more than 10 GB and my SGA size is 5 GB, how do select queries work? Do they return all the rows of the 10 GB table, and how does the buffer cache work?
If we have a table which is larger than the SGA, then we will have to resize the SGA to the required size, and then the problem is solved.
We can define the size of the SGA and PGA while creating the database.
How the buffer cache works:
When we request data from disk, Oracle reads at minimum one block. Even if we request only one row, many rows in the same table are likely to be retrieved, since they lie in the same block. The same goes for columns.
A block in the buffer cache can be in one of three states:
Free: Currently not used.
Pinned: Currently being accessed.
Dirty: Block has been modified but currently not been written to disk.
A block write is triggered when one of the following happens:
The database is issued a shutdown command.
A full or partial checkpoint occurs.
A recovery time threshold, which is again set by us, is reached.
A free block is needed and none are found after a given amount of time (an LRU algorithm is used here).
Certain Data Definition Language commands(DDL).
Every three seconds. There are many other reasons; the algorithm is complex and can change with each release of Oracle.

How to increase row size in SQL Server CE

While inserting rows into SQL Server CE on WP7 I'm getting a SqlCeException that says
The table definition or the row size exceeds the maximum row size of 8060 bytes.
What is the way to increase the row size?
SQL Server has a hard, fixed system limit of 8 KB for its page size, which is where the 8060-byte row limit comes from.
There is NO WAY to increase that.
The solution is to re-architect your design to work around this limit.

Is there a max file size hard limit for a .csv file?

Somehow one of our batches failed last night; the file that would have been generated is 500 MB with around 500,000 records (rows) in it.
Just wondering, is there any hard limit on the file size of a .csv file?
I understand that the Excel application has a 65k-row hard limit, but that's only for opening the file.
The maximum file size of any file on a filesystem is determined by the filesystem itself, not by the file type or filename suffix. So the answer is no, there is no CSV-specific limit.
But, as you said, the application you are using to process the file might have limitations.
Depending on the way you are handling the CSV file, you might run into memory and/or CPU bottlenecks with large files.
Also, on 32-bit Linux systems there can be a 2 GB file size limit, which will cap your maximum usable CSV size regardless of CPU power and memory. That said, CSV files that big are a good sign that you should consider a more efficient and robust solution for handling your data, such as a database system.
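To avoid the memory bottleneck mentioned above when a CSV file grows to hundreds of megabytes, one option is to stream the file line by line instead of loading it all at once. A minimal sketch, with the file name and per-row handling as placeholders and naive comma splitting (quoted fields would need a real CSV parser):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

// Minimal sketch: process a large CSV file line by line so memory use stays
// roughly constant regardless of the file size.
public class LargeCsvReader {
    public static void main(String[] args) throws IOException {
        Path csv = Path.of("batch-output.csv");       // placeholder file name
        long rows;
        try (Stream<String> lines = Files.lines(csv)) {
            rows = lines
                    .skip(1)                          // skip the header row
                    .filter(line -> !line.isBlank())  // ignore empty lines
                    .map(line -> line.split(",", -1)) // naive split; not quote-aware
                    .count();                         // replace with real per-row processing
        }
        System.out.println("processed " + rows + " rows");
    }
}
```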
