Alter large table on Aurora Serverless MySQL

We use AWS Aurora Serverless MySQL v2.07.1 and would like to alter one of our tables to add a column.
The issue is that the table contains more than 500 million rows, so we get a timeout error (~45 seconds) when we try to alter it:
ALTER TABLE studentAidProgram ADD details TEXT
We don't have an index that would let us sort the entire table and migrate it into a new table with the new schema, and the timeout also prevents us from adding such an index.
Does anyone have an idea or tool for how we can alter this large table on Aurora Serverless?
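If the ~45 s limit comes from the RDS Data API or query editor rather than from MySQL itself, its continueAfterTimeout option lets a DDL statement keep running server-side after the call returns. Otherwise, the usual workaround is an online schema change: build the new table alongside the old one and copy rows in bounded batches so that no single statement runs long (Percona's pt-online-schema-change and gh-ost automate this pattern, including capturing concurrent writes). A minimal manual sketch, assuming a hypothetical integer primary key id and that writes are paused for the final swap:
-- create an empty copy of the table with the new column
CREATE TABLE studentAidProgram_new LIKE studentAidProgram;
ALTER TABLE studentAidProgram_new ADD details TEXT;
-- backfill in bounded batches; repeat with the next id range until done
INSERT INTO studentAidProgram_new
SELECT s.*, NULL
FROM studentAidProgram s
WHERE s.id >= 1 AND s.id < 1000001;
-- atomically swap the tables once the copy is complete
RENAME TABLE studentAidProgram TO studentAidProgram_old,
             studentAidProgram_new TO studentAidProgram;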

Related

Exchange Partition giving "Table or index is not partitioned. Invalid syntax"

I have two schemas, TBCAM and AR_TBCAM. The TBCAM schema has a table called BKP_COST_EVENT, which I partitioned, and I moved the data of partition P2016 into a plain (non-partitioned) table in the AR_TBCAM schema called BKP_COST_EVENT_P2016. The data moved to the AR_TBCAM schema via this query:
ALTER TABLE BKP_COST_EVENT EXCHANGE PARTITION P2016 WITH TABLE AR_TBCAM.BKP_COST_EVENT_P2016 INCLUDING INDEXES WITHOUT VALIDATION;
Now I want to bring the partition data back into the original table BKP_COST_EVENT.
But when I run this query while connected to AR_TBCAM:
ALTER TABLE BKP_COST_EVENT_P2016 EXCHANGE PARTITION P2016 WITH TABLE TBCAM.BKP_COST_EVENT INCLUDING INDEXES WITHOUT VALIDATION;
it gives this error:
Error starting at line : 1 in command -
ALTER TABLE BKP_COST_EVENT_P2016 EXCHANGE PARTITION P2016 WITH TABLE TBCAM.BKP_COST_EVENT INCLUDING INDEXES WITHOUT VALIDATION
Error report -
ORA-14501: object is not partitioned
14501. 00000 - "object is not partitioned"
*Cause: Table or index is not partitioned. Invalid syntax.
*Action: Retry the command with correct syntax.
Can anyone suggest what I am doing wrong, or how to bring the data back into my TBCAM schema table BKP_COST_EVENT?
I have not dropped partition P2016 in the original BKP_COST_EVENT.
In EXCHANGE PARTITION syntax, the first table must be the partitioned one and the second the non-partitioned one.
So your first command was correct, but the second command is wrong.
Since you are bringing the data back into the same table's partition, just run the same command again:
ALTER TABLE BKP_COST_EVENT EXCHANGE PARTITION P2016 WITH TABLE AR_TBCAM.BKP_COST_EVENT_P2016 INCLUDING INDEXES WITHOUT VALIDATION;
Also, if there are no indexes to be moved, it's better not to use the INCLUDING INDEXES clause.
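For reference, the general shape of the statement (table and partition names here are placeholders; only the order matters):
-- the partitioned table always comes first, the plain table second
ALTER TABLE partitioned_table
  EXCHANGE PARTITION partition_name
  WITH TABLE plain_table       -- may be schema-qualified, e.g. AR_TBCAM.plain_table
  INCLUDING INDEXES            -- optional; omit if there are no indexes to move
  WITHOUT VALIDATION;          -- optional; skips checking rows against partition bounds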

Postgres - Can a trigger re-order rows in a table, like a clustered index?

I am currently working on migrating a SQL Server database to Postgres. I found that Postgres can cluster a table based on an index, which is similar to clustered indexes in Microsoft SQL Server. However, per the Postgres documentation, the clustering is performed only once: we have to explicitly run the CLUSTER command on the table periodically if we want new insert and update operations to be reflected in the ordering.
So I am thinking about adding a trigger that issues the CLUSTER command on that particular table, based on an index, so that the result would be similar to a clustered index in MS SQL.
Does anyone know whether there are any problems with triggering a re-cluster of the table for each and every update/insert operation?
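For what it's worth, CLUSTER takes an ACCESS EXCLUSIVE lock and rewrites the whole table, so issuing it per insert/update would serialize all writes and rewrite the table constantly, and it is likely to fail outright when fired against the very table the trigger is modifying. A more common pattern is to schedule periodic re-clustering instead, for example with the pg_cron extension (a sketch; pg_cron must be installed, and the table/index names are placeholders):
-- schedule a nightly CLUSTER instead of clustering on every write
CREATE EXTENSION IF NOT EXISTS pg_cron;
SELECT cron.schedule(
  'nightly-recluster',                  -- job name
  '0 3 * * *',                          -- every day at 03:00
  'CLUSTER my_table USING my_table_idx'
);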

Populating Tables into Oracle in-memory segments

I am trying to load tables into the Oracle In-Memory column store. I enabled the tables for INMEMORY using the SQL*Plus command ALTER TABLE table_name INMEMORY. The table is also populated with data. But when I run SELECT v.owner, v.segment_name name, v.populate_status status FROM v$im_segments v; it shows no rows selected.
What could be the problem?
Have you considered this?
https://docs.oracle.com/database/121/CNCPT/memory.htm#GUID-DF723C06-62FE-4E5A-8BE0-0703695A7886
Population of the IM Column Store in Response to Queries
Setting the INMEMORY attribute on an object means that this object is a candidate for population in the IM column store, not that the database immediately populates the object in memory.
By default (INMEMORY PRIORITY is set to NONE), the database delays population of a table in the IM column store until the database considers it useful. When the INMEMORY attribute is set for an object, the database may choose not to materialize all columns when the database determines that the memory is better used elsewhere. Also, the IM column store may populate a subset of columns from a table.
You probably need to run a select against the data first.
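A sketch of the two usual options (the table name is a placeholder): raise the population priority so the database loads the segment eagerly, or force on-demand population by full-scanning the table.
-- eager population: the database loads the segment without waiting for a query
ALTER TABLE table_name INMEMORY PRIORITY CRITICAL;
-- or trigger on-demand population with a full scan, then re-check
SELECT /*+ FULL(t) */ COUNT(*) FROM table_name t;
SELECT owner, segment_name, populate_status FROM v$im_segments;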

Oracle - Is it necessary to gather table stats for a new table with a new index?

I just created a new table on 11gR2 and loaded it with data, with no indexes. After the load completed, I created several indexes on the new table, including the primary key constraint.
CREATE TABLE xxxx (col1 varchar2(20), ...., coln varchar2(10));
INSERT INTO xxxx SELECT * FROM another_table;
ALTER TABLE xxxx ADD CONSTRAINT xxxc PRIMARY KEY(col_list);
CREATE INDEX xxxx_idx1 ON xxxx (col3,col4);
At this point, do I still need to run DBMS_STATS.GATHER_TABLE_STATS(v_owner, 'XXXX') to gather table stats?
If yes, why? Oracle says in the docs that "Oracle Database now automatically collects statistics during index creation and rebuild".
I don't want to wait for the overnight automatic stats gathering, because I need to report the actual size of the table and its indexes immediately after the above operations. I think running DBMS_STATS.GATHER_TABLE_STATS may give me more accurate usage data. I could be wrong though.
Thanks in advance,
In Oracle 11gR2 you still need to gather table statistics. I guess you read the documentation for Oracle 12c, which automatically collects statistics, but only for direct-path inserts, which is not your case; your insert is conventional. Also, if you gather statistics (with default options) for a brand-new table that hasn't been used in queries, no histograms will be generated.
Index statistics are gathered when an index is built, so there is no need to gather them explicitly. When you later gather table statistics, use the DBMS_STATS.GATHER_TABLE_STATS option cascade => false so that index statistics aren't gathered twice.
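A sketch of the call, with a placeholder owner name:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'MY_OWNER',
    tabname => 'XXXX',
    cascade => FALSE   -- index stats were already gathered when the indexes were built
  );
END;
/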
You can simply check the statistics using
SELECT * FROM ALL_TAB_COL_STATISTICS WHERE TABLE_NAME = 'XXXX';

Create table as select using dblink doesn't move all the rows

We are moving from an Amazon EC2 database to an Amazon RDS database. Most of the tables are small and could be moved using SQL Developer copy commands, but a couple are bigger (3M+ records). To speed those up, I created database links between the two systems, and they work fine. I then ran the following:
create table schema.tablename as select * from schema.tablename@ec2db;
ec2db is the old database. The table there contains 3,503,064 records, but the new database table contains only 3,454,685 records. No errors were generated during the create table statement, and this is repeatable (i.e., if I drop the table and run it again, it loads the same number of records).
Any ideas why this would happen? Why would the contents of the table when I do a select * over the link not be the same as the contents of the same table copied via create table as select?
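A quick sanity check (names as in the question): compare the counts on both sides in the same session. Note that a CREATE TABLE ... AS SELECT over a database link reads the remote table as of a single consistent point in time, so rows committed on the source after the statement starts will not be copied; if the source is still taking writes, the counts can legitimately differ.
SELECT COUNT(*) FROM schema.tablename@ec2db;  -- remote, via the database link
SELECT COUNT(*) FROM schema.tablename;        -- local copy just created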
