HSQLDB: how to insert 1 million records

I am developing a GWT application.
To test my DataGrid, I created a button that calls my server.
Each click should insert 1 million records into the database.
I created an alias:
CREATE FUNCTION PUBLIC.GENERATENAME() RETURNS VARCHAR(32768)
  SPECIFIC GENERATENAME_10073 LANGUAGE JAVA NOT DETERMINISTIC NO SQL
  CALLED ON NULL INPUT
  EXTERNAL NAME 'CLASSPATH:com.package.sql.Helper.generateName'
And created a stored procedure:
CREATE PROCEDURE PUBLIC.GENERATE()
  SPECIFIC GENERATE_10073 LANGUAGE SQL NOT DETERMINISTIC
  MODIFIES SQL DATA NEW SAVEPOINT LEVEL
BEGIN ATOMIC
  DECLARE VAL_P BIGINT;
  TRUNCATE TABLE PUBLIC.CONTACT;
  SET VAL_P = 1;
  LOOP_LABEL: WHILE VAL_P <= 1000 DO
    INSERT INTO PUBLIC.CONTACT VALUES VAL_P, PUBLIC.GENERATENAME(), PUBLIC.GENERATENAME();
    SET VAL_P = VAL_P + 1;
  END WHILE LOOP_LABEL;
END
My table is a simple one:
CREATE MEMORY TABLE PUBLIC.CONTACT(
  CONTACT_ID BIGINT GENERATED BY DEFAULT AS IDENTITY(START WITH 1) NOT NULL PRIMARY KEY,
  FIRST_NAME VARCHAR(10) NOT NULL,
  SECOND_NAME VARCHAR(10) NOT NULL
)
I tested, and it appears I can't insert 1M rows at once. Or can I?
What is the best way to insert such a huge amount of data?
I am using HSQLDB version 2.2.4

Because you use CREATE MEMORY TABLE, all the data is stored in memory. You may have to increase your Java heap allocation to store the data.
With file databases, you can use CREATE CACHED TABLE to reduce memory use.
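As a sketch of that suggestion (same columns as the table in the question; the heap size is an illustrative value, not a recommendation): a CACHED table keeps most rows on disk rather than in the Java heap, so it scales to row counts a MEMORY table cannot hold.

```sql
-- Sketch: CACHED tables store rows on disk and keep only part of the
-- data in memory, unlike MEMORY tables, which hold everything in the heap.
CREATE CACHED TABLE PUBLIC.CONTACT(
  CONTACT_ID BIGINT GENERATED BY DEFAULT AS IDENTITY(START WITH 1) NOT NULL PRIMARY KEY,
  FIRST_NAME VARCHAR(10) NOT NULL,
  SECOND_NAME VARCHAR(10) NOT NULL
);
-- If you keep the MEMORY table instead, the heap must be raised on the JVM
-- running HSQLDB, e.g. (illustrative value): java -Xmx2g -cp hsqldb.jar ...
```

This requires a file database; CACHED has no effect in an all-in-memory (mem:) database.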

Related

After adding an index, global temporary table data is not fetched

Need some help to identify the reason for the below issue.
I have created a global temporary table as below:
Create global temporary table glo_temp_table
(
row_no NUMBER not null,
resource_id VARCHAR2(40),
company_id VARCHAR2(20)
);
This table's data gets inserted at runtime by one function, and another function later fetches it using a cursor. This works fine without any issue. The problem starts when I add the index below (to be clear, this is not done at runtime):
CREATE INDEX SS ON glo_temp_table (resource_id);
Now no data gets fetched by the cursor. Is there a specific reason for this behavior? How can I create such an index so that it works properly?
The Oracle DB version is 12c Release 12.1.0.1.0.
The table has only the constraint below:
alter table glo_temp_table
add constraint glo_temp_table_PK primary key (ROW_NO);
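One detail worth checking, not stated in the question: a global temporary table declared without an ON COMMIT clause defaults to ON COMMIT DELETE ROWS, meaning any commit in the session empties it. A sketch with the clause made explicit:

```sql
-- Default is ON COMMIT DELETE ROWS; PRESERVE ROWS keeps the rows
-- until the session ends rather than until the next commit.
CREATE GLOBAL TEMPORARY TABLE glo_temp_table (
  row_no      NUMBER NOT NULL,
  resource_id VARCHAR2(40),
  company_id  VARCHAR2(20)
) ON COMMIT PRESERVE ROWS;
```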

Oracle sql does not release space from temp when executing finishes

There is a table (let's say TKubra) which has 2,255,478 records in it.
And there is a query like:
select *
from kubra.tkubra
where ckubra is null
order by c1kubra asc;
ckubra has no NULL records: about 3 thousand rows contain IDs, and the rest contain empty space characters.
ckubra has an index, but when the statement executes it does a full table scan with a cost of 258,794.
The query returns no rows, as expected.
When the statement executes, it consumes temporary tablespace and does not release the space after it finishes.
What causes this?
This is the query and the results for the temporary tablespace usage:
Oracle does not store information about NULL values in normal (B-tree) indexes. Thus, when you query with a condition like WHERE CKUBRA IS NULL, the database engine has to perform a full table scan to generate the answer.
However, bitmap indexes do store NULL values, so if you want to be able to use an index to find NULL values you can create a bitmap index on the appropriate fields, as in:
CREATE BITMAP INDEX KUBRA.TKUBRA_2
ON KUBRA.TKUBRA(CKUBRA);
Once you've created an index, remember to gather statistics on the table:
BEGIN
DBMS_STATS.GATHER_TABLE_STATS('KUBRA', 'TKUBRA');
END;
That may allow the database to use an index to find NULL values - but be aware that bitmap indexes are intended for low-update applications, such as data warehouses, and using one on a transactional table (one which is frequently updated) may lead to performance problems.
Still, it's something to play with - and you can always drop it later.
Best of luck.
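A further workaround, not from the answer above but commonly used: a B-tree index omits only entries whose key columns are all NULL, so appending a constant to the key forces rows with a NULL CKUBRA to be indexed (the index name here is made up):

```sql
-- Sketch: the key (NULL, 0) is not entirely NULL, so it is stored
-- in the B-tree, letting WHERE CKUBRA IS NULL use an index scan.
CREATE INDEX KUBRA.TKUBRA_NULLIDX ON KUBRA.TKUBRA (CKUBRA, 0);
```

Unlike a bitmap index, this stays safe on a frequently updated table.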
Temporary tablespace is not released until all the rows are returned, or the cursor is closed, or the session is closed.
Are you sure all those 19 sessions are truly done executing the query? It looks like it's returning a lot of data, which implies it may take a while for the application to retrieve all the rows.
If you run the query in an IDE like SQL Developer, it will normally fetch only the top N rows. The IDE may make it look as though the query has finished, but if more rows remain to be fetched, it is not truly finished yet.
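One way to check whether those sessions still hold temp space (a diagnostic sketch using the standard Oracle dictionary views):

```sql
-- Sketch: list sessions that still hold temporary tablespace segments.
-- V$SORT_USAGE joins to V$SESSION via the session address.
SELECT s.sid, s.status, u.tablespace, u.blocks
FROM   v$session s
JOIN   v$sort_usage u ON u.session_addr = s.saddr;
```

Sessions that appear here have open cursors or unfetched result sets still pinning temp segments.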

Kudu table column containing created timestamp

We are trying to create a kudu table that should contain a column holding the timestamp when the records are getting inserted.
We tried the below :
create table clcs.table_a (
store_nbr string,
load_dttm timestamp default now(),
primary key ( store_nbr)
)
But the load_dttm timestamp is always the time when the table got created and NOT the time when records are getting inserted.
Any directions would be highly appreciated. Thanks in advance!
You are thinking of Kudu as a database, which it is not. It is a storage layer. Drop the default from your Kudu DDL and instead use whatever function call is available in the SQL engine performing the insert, such as now() or current_timestamp() in Impala, or CURRENT_TIMESTAMP in Hive. Take note of whether the function call is deterministic (repeatable for the lifetime of the INSERT statement) or not, depending on whether you want one timestamp for the whole insert or a fresh value per row.
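A sketch of that approach (Impala syntax assumed; 'S001' is a made-up store number):

```sql
-- Sketch: supply the load timestamp at insert time instead of
-- relying on a column default evaluated at table creation.
INSERT INTO clcs.table_a (store_nbr, load_dttm)
VALUES ('S001', now());
```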

Populating Tables into Oracle in-memory segments

I am trying to load tables into the Oracle in-memory column store. I enabled the tables for INMEMORY using the SQL*Plus command ALTER TABLE table_name INMEMORY. The table also contains data, i.e. the table is populated. But when I run SELECT v.owner, v.segment_name name, v.populate_status status FROM v$im_segments v;, it shows no rows selected.
What can be the problem?
Have you considered this?
https://docs.oracle.com/database/121/CNCPT/memory.htm#GUID-DF723C06-62FE-4E5A-8BE0-0703695A7886
Population of the IM Column Store in Response to Queries
Setting the INMEMORY attribute on an object means that this object is a candidate for population in the IM column store, not that the database immediately populates the object in memory.
By default (INMEMORY PRIORITY is set to NONE), the database delays population of a table in the IM column store until the database considers it useful. When the INMEMORY attribute is set for an object, the database may choose not to materialize all columns when the database determines that the memory is better used elsewhere. Also, the IM column store may populate a subset of columns from a table.
You probably need to run a SELECT against the table first to trigger population.
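Two ways to make population happen, as a sketch (table name as in the question):

```sql
-- Sketch: raise the priority so Oracle populates the segment eagerly
-- instead of waiting until a query touches the table.
ALTER TABLE table_name INMEMORY PRIORITY CRITICAL;

-- Or, with the default PRIORITY NONE, trigger population with a full scan:
SELECT /*+ FULL(t) */ COUNT(*) FROM table_name t;
```

After either, v$im_segments should show the segment with its populate status.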

Efficiency of storing an Oracle User Defined Type in an index organized table

Are there known issues with storing user defined types within index organized tables in Oracle 10G?
CREATE OR REPLACE TYPE MyList AS VARRAY(256) OF NUMBER(8,0);
CREATE TABLE myTable (
id NUMBER(10,0) NOT NULL,
my_list MyList NOT NULL,
CONSTRAINT pk_myTable_id PRIMARY KEY(id))
ORGANIZATION INDEX NOLOGGING;
With this type and table setup, I loaded ~2.4M records via INSERT /*+ APPEND */, and it took 20 GB of space, at which point I ran out of disk. Given the size of the data types, this seemed like a lot of storage for what was being stored. I then changed the table to a regular table (not IOT) and stored 6M+ records in about 7 GB; adding the PK index took an additional 512 MB.
I've used IOT many times in the past, but not with a user defined type.
Why is it that the storage requirements when using a UDT and IOT are so high?
AFAICR, Oracle always stores VARRAYs out of row in IOTs.
I'll now try to find the references in the docs to confirm this.
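In the meantime, one way to check (assuming the VARRAY column was indeed converted to out-of-line LOB storage) is the data dictionary:

```sql
-- Sketch: if the VARRAY is stored as a LOB, it appears in USER_LOBS;
-- IN_ROW = 'NO' means the data lives outside the index-organized rows.
SELECT table_name, column_name, segment_name, in_row
FROM   user_lobs
WHERE  table_name = 'MYTABLE';
```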