Populating Tables into Oracle In-Memory Segments

I am trying to load tables into the Oracle In-Memory column store. I have enabled the tables for INMEMORY using the SQL*Plus command ALTER TABLE table_name INMEMORY. The table also contains data, i.e. the table is populated. But when I run SELECT v.owner, v.segment_name name, v.populate_status status FROM v$im_segments v;, it shows no rows selected.
What could be the problem?

Have you considered this?
https://docs.oracle.com/database/121/CNCPT/memory.htm#GUID-DF723C06-62FE-4E5A-8BE0-0703695A7886
Population of the IM Column Store in Response to Queries
Setting the INMEMORY attribute on an object means that this object is a candidate for population in the IM column store, not that the database immediately populates the object in memory.
By default (INMEMORY PRIORITY is set to NONE), the database delays population of a table in the IM column store until the database considers it useful. When the INMEMORY attribute is set for an object, the database may choose not to materialize all columns when the database determines that the memory is better used elsewhere. Also, the IM column store may populate a subset of columns from a table.
You probably need to run a select against the table first.
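As a minimal sketch (the table name mytable is hypothetical), these are the two usual ways to get the segment populated, plus the check query:
alter table mytable inmemory priority critical;  -- populate eagerly, without waiting for a query
-- or keep the default priority none and touch the table with a full scan:
select /*+ FULL(t) */ count(*) from mytable t;
-- population is asynchronous; poll until bytes_not_populated reaches 0:
select owner, segment_name, populate_status, bytes_not_populated from v$im_segments;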

Related

Move Range Interval partition data from one table to history table in other database

We have a primary table that is range partitioned by date with a 1-month interval. It is also list sub-partitioned on 4 distinct values, so essentially each one-month partition has 4 sub-partitions.
Database: Oracle 19c
I need advice on how to effectively move the partition/sub-partition data from active schema to historical schema in another database.
Also, there are about 30 tables that are reference partitioned on the primary table, for which the data needs to be moved as well. Overall I'm looking to move about 2500 subpartitions.
I'm not sure if an exchange partition would be the right approach in this scenario?
TIA
You could use exchange to get the data rapidly out of your active table, but you would still then need to send that table over the wire to the remote history database to load it in.
In which case, using "exchange" is probably just adding more steps to the process for little gain. (There are still potential uses here depending on how you want to handle indexing etc.)
But simplest is perhaps just transferring the data over, assuming a common structure between the two tables, i.e.
insert /*+ APPEND */ into history_table@remote_db
select * from active_table partition ( myparname )
I can't remember if partition naming syntax is supported over a db link, but if not, the appropriate date predicates will do the same trick. Then just follow up with:
alter table active_table truncate partition myparname;
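If you do want to try the exchange route first, a hedged sketch might look like the following (staging_table and mysubpar are hypothetical names, and the structures must match):
alter table active_table exchange subpartition mysubpar with table staging_table;  -- metadata-only swap
insert into history_table@remote_db select * from staging_table;  -- push the rows over the db link
commit;
truncate table staging_table;  -- reclaim the staging segment for the next subpartition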

Create a sequence not related to any primary key in Oracle SQL Developer Data Modeler

I need to create a sequence whose value is going to be read by a call to .NEXTVAL in PL/SQL code and saved in more than one record of a specific table column, so my design doesn't require defining a PK on the aforementioned column.
I cannot find out how to edit the sequence tab in Oracle Data Modeler (I'm on version 4.1.1) when the PK checkbox is not selected (all the sequence related fields are disabled).
Any idea?
In the relational model, choose your DB, and within that you will find Sequence as an item to create. You can also create other types of objects here.
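The DDL the modeler generates is just a standalone sequence. A minimal sketch of the usage described in the question (batch_seq, orders and batch_id are hypothetical names):
create sequence batch_seq start with 1 increment by 1;

declare
  v_batch_id number;
begin
  v_batch_id := batch_seq.nextval;  -- read the value once...
  update orders
  set batch_id = v_batch_id         -- ...and stamp it on many rows
  where batch_id is null;
end;
/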

Delphi XE7 TFDTable View RowID error

I'm converting a Delphi 5 / BDE application to Delphi XE7 / FireDAC. One of my forms has a TFDTable component that points to an Oracle view containing a group by clause in its create statement.
This used to work fine in the BDE application, but with FireDAC I'm getting this error.
ORA-01446: cannot select ROWID from, or sample, a view with DISTINCT,
GROUP BY, etc.
I understand the error I'm getting from Oracle, but I'm not selecting ROWID, FireDAC is! Is there a property in the TFDTable that I can set to prevent it from adding ROWID to the query? If not, how am I supposed to use this view?
FireDAC fetches ROWID because it tries to identify tuples in the resultset for possible updates. To stop that, just enable the ReadOnly option, which properly makes the grouped view's resultset read-only (properly, because individual tuples cannot be identified for updating once they are grouped in a resultset).
The SQL command is generated in the TFDPhysCommandGenerator.GenerateSelectTable method, if you want to know the source of this behaviour. A generic unique tuple identifier (which is ROWID for Oracle) is appended to the select list according to the ReadOnly property setting.
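In code, that is a one-line change; a minimal Object Pascal sketch, assuming a TFDTable named FDTable1 bound to the grouped view:
FDTable1.Close;
FDTable1.UpdateOptions.ReadOnly := True;  // FireDAC stops appending ROWID to the generated SELECT
FDTable1.Open;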
Include fiMeta in FetchOptions.Items.
TFDQuery, TFDTable, TFDMemTable, and TFDCommand automatically retrieve
the unique identifying columns (mkPrimaryKeyFields) for the main
(first) table in the SELECT ... FROM ... statements, when fiMeta is
included in FetchOptions.Items. Note:
mkPrimaryKeyFields querying may be time consuming;
the application may need to explicitly specify unique identifying columns, when FireDAC fails to determine them correctly.
To explicitly specify columns, exclude fiMeta from FetchOptions.Items,
and use one of the following options:
set UpdateOptions.KeyFields to a ';' separated list of column names;
include pfInKey into the corresponding TField.ProviderFlags property.
When the application creates persistent fields, then initially TField.ProviderFlags will be set correctly. After that, automatic field setup will not happen when the DB structure or query is changed; you should manually update ProviderFlags to adjust the column list. Also, if the primary key consists of several fields, then all of them must be included in the persistent fields.
Row Identifying Columns
Alternatively, a row identifying column may be included in the SELECT list. When FireDAC finds such a column, it will not retrieve mkPrimaryKeyFields metadata and will use this column instead. The supported DBMSs are the following:
DBMS Row identifying column
Firebird DB_KEY
Informix ROWID
Interbase DB_KEY / RDB$DB_KEY
Oracle ROWID
PostgreSQL OID. The table must be created with OIDs.
SQLite ROWID
Source : http://docwiki.embarcadero.com/RADStudio/XE8/en/Unique_Identifying_Fields_%28FireDAC%29
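Putting the quoted options together, a hedged Object Pascal sketch of the explicit route (FDTable1 and the column name GROUP_KEY are hypothetical):
// Skip metadata querying and name the identifying column(s) yourself:
FDTable1.FetchOptions.Items := FDTable1.FetchOptions.Items - [fiMeta];
FDTable1.UpdateOptions.KeyFields := 'GROUP_KEY';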

Update Index Organized Tables using multiple UPDATE queries (temporary duplicates)

I need to update the primary key of a large Index Organized Table (20 million rows) on Oracle 11g.
Is it possible to do this using multiple UPDATE queries, i.e. many smaller UPDATEs of, say, 100,000 rows at a time? The problem is that one of these UPDATE batches could temporarily produce a duplicate primary key value (there would be no duplicates after all the UPDATEs have completed).
So I guess I'm asking: is it somehow possible to temporarily disable the primary key constraint (which is nevertheless required for an IOT!) or temporarily alter the table in some other way? I can have exclusive and offline access to this table.
The only solution I can see is to create a new table and when complete, drop the original table and rename the new table to the original table name.
Am I missing another possibility?
You can't disable / drop the primary key constraint from an IOT, since it is a unique index by definition.
When I need to change an IOT like this, I do a CTAS (create table as select) into a new plain heap table, do my maintenance there, and then CTAS back into a new IOT.
Something like:
create table t_temp as select * from t_iot;
-- do maintenance (e.g. rewrite the key values) on the heap copy
-- note: the primary key must be declared for organization index to work
create table t_new_iot (id primary key, val)
  organization index
  as select id, val from t_temp;
If, however, you need to simply add or join a new field to the existing key, you can do this in one step by creating the new IOT structure, then populating directly from the old IOT with a query.
Unfortunately, this is one of the downsides to IOTs.
I would recommend the following method:
1. Create a new IOT table, partitioned by system with a single partition, with exactly the same structure as the current one.
2. Lock the current IOT table to prevent any DML.
3. Insert into the new table as select from the current table, changing the PK values in the select. This step could be repeated several times if needed; in that case it's better to do it in another session to keep the lock on the original table.
4. Exchange the partition of the new table with the original table.
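An untested sketch of steps 2-4, following that outline (t_iot, t_new, p1 and the column names are hypothetical; note that system-partitioned tables require the partition to be named in the insert):
lock table t_iot in exclusive mode;

insert into t_new partition (p1)
select pk_col + 1000000, other_col from t_iot;  -- the rewritten PK values
commit;

alter table t_new exchange partition p1 with table t_iot;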

Adding column with default value

I have a table A (3 columns) in production which has around 10 million records. I want to add one more column to that table, with a default value of 1. Is it going to impact production DB performance if I add a column with a default value of 1 (or something else)? What would be the best approach to avoid any kind of performance impact on the DB? Your thoughts are much appreciated!
In Oracle 11g the process of adding a new column with a default value has been considerably optimized. If a newly added column is specified as NOT NULL, its default value is maintained in the data dictionary, and it is no longer required for the default value to be stored in all records of the table, so it is no longer necessary to update each record with the default value. This optimization considerably reduces the amount of time the table is exclusively locked during the operation.
alter table <tab_name> add (<col_name> <data_type> default <def_val> not null);
Moreover, a column with a default value added that way will not consume space until you deliberately start to update that column or insert a record with a non-default value for it. So the operation of adding a new column with a default value and a NOT NULL constraint completes pretty quickly.
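A minimal sketch of that fast path on the table from the question (the column name flag is hypothetical):
alter table a add (flag number default 1 not null);
-- existing rows see the dictionary default immediately; no bulk update of 10 million rows happens:
select flag, count(*) from a group by flag;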
I think it is better that you first create a backup table with this syntax:
create table BackUpTable as select * from YourTable;
alter table BackUpTable add (newColumn number(5,0) default 1);
