I have a PostgreSQL database with 1 TB of data that I want to migrate to Oracle. I can handle the migration via AWS DMS (Database Migration Service). After the migration, the Oracle indexes are marked UNUSABLE, and when I try to rebuild them I get an error because the data is too big. Do you have any suggestions?
I found out why the index state changes from VALID to UNUSABLE: http://www.dba-oracle.com/t_indexes_invalid_unusable.htm
And I solved it in AWS DMS: under DMS Endpoints → Target (Oracle) → Modify → Endpoint settings, set the key useDirectPathFullLoad to false. For more detail: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Oracle.html
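If only a handful of indexes end up in this state, an alternative I considered (not the fix above; the schema and index names here are made up) is to list the UNUSABLE indexes from the data dictionary and rebuild them one at a time, ONLINE and in PARALLEL to cope with the volume:
-- List the indexes DMS left UNUSABLE (MYSCHEMA is a placeholder)
SELECT owner, index_name, status
FROM dba_indexes
WHERE status = 'UNUSABLE'
AND owner = 'MYSCHEMA';
-- Rebuild one index at a time; ONLINE keeps the table available, PARALLEL speeds up large rebuilds
ALTER INDEX myschema.my_big_index REBUILD ONLINE PARALLEL 8 NOLOGGING;
ALTER INDEX myschema.my_big_index NOPARALLEL;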
I've been researching and looking for ideas, but the only thing close to a solution I've found is a case where someone used PySpark to convert an Oracle table into HDFS and then load it from HDFS into Cassandra. I was hoping there was another, clearer solution to this data migration.
The title suggests that it is Cassandra > Oracle. The message text says Oracle > HDFS > Cassandra (i.e. the opposite direction). What exactly are you trying to do?
Suppose it is the title that is correct. If there's no tool that would do the migration for you then, from my (developer's) point of view, creating a database link in my Oracle schema that points to Cassandra might be a good option. Then I'd just write some SQL to migrate the data I need. Here's how: Access Cassandra Data as a Remote Oracle Database.
In short:
connect to Cassandra as an ODBC data source
set the connection properties for compatibility with Oracle
configure the ODBC gateway, Oracle Net, and the Oracle database
write the queries (a rough sketch of this last step follows below)
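Once the gateway is configured, the migration SQL itself might look roughly like this (the link name, DSN, keyspace, and table names are made up for illustration):
-- Database link through the ODBC gateway; 'CASSANDRA_DSN' must match the gateway SID
-- configured in listener.ora / tnsnames.ora
CREATE DATABASE LINK cassandra_link
  CONNECT TO "cassandra_user" IDENTIFIED BY "cassandra_password"
  USING 'CASSANDRA_DSN';
-- Copy a table over the link into Oracle
CREATE TABLE my_schema.customers AS
  SELECT * FROM "my_keyspace"."customers"@cassandra_link;
-- Or append into an existing table
INSERT /*+ APPEND */ INTO my_schema.customers
  SELECT * FROM "my_keyspace"."customers"@cassandra_link;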
Years ago I wrote an app to capture data into H2 datafiles for easy transport and archival purposes. The application was written with H2 1.4.192.
Recently, I have been revisiting some load code related to that application, and I have found that there are substantial gains to be had in some of the things I am doing when using H2 1.4.200.
I would like to be able to load the data I previously saved into the other databases, but some of my tables used a precision/scale specification that is now invalid. Here is an example:
CREATE CACHED TABLE PUBLIC.MY_TABLE(MY_COLUMN DATETIME(23,3) SELECTIVITY 5)
H2 databases created with 1.4.192 that contain tables like this will not load in 1.4.200;
they fail with the following error:
Scale($"23") must not be bigger than precision({1}); SQL statement:
CREATE CACHED TABLE PUBLIC.MY_TABLE(MY_COLUMN DATETIME(23,3) SELECTIVITY 5) [90051-200] 90051/90051 (Help)
My question is: how can I go about correcting the invalid table schema? My application connects to an H2 database and then loads the data it contains into another database. Ideally I'd like the application to detect this situation and repair it automatically, so it can simply use the older data files. But with H2 1.4.200 I get the error right up front, upon connection.
Is there a secret/special mode that will allow me to connect 1.4.200 to the database so I can repair its schema? I hope so.
Outside of that, it seems like my only option is to use separate classloaders for different versions of H2, with the remedial operations happening in one classloader and the load operations in another. Either that, or start another JVM instance to do the remedial operations.
I wanted to check for options before I did a bunch of work.
This problem is similar to this reported issue, but there were no specifics on how he performed his resolution.
This data type is not valid and was never supported by H2, but old versions of H2, due to a bug, somehow accepted it.
You need to export your database to a script with 1.4.192 Beta using
SCRIPT TO 'source.sql'
You need to use the original database file, because if you have already opened a file from 1.4.192 Beta with 1.4.200, it may have been corrupted by it; such an automatic upgrade is not supported.
You need to replace DATETIME(23,3) with TIMESTAMP(3), or whatever you need, using a text editor. If the exported SQL is too large for regular text editors, you can use a stream editor such as sed:
sed 's/DATETIME(23,3)/TIMESTAMP(3)/g' source.sql > fixed.sql
Now you can create a new database with 1.4.200 and import the edited script into it:
RUNSCRIPT FROM 'fixed.sql'
The question sums it up basically.
I have an external (not in AWS) MySQL 5.7 database that is 40 GB (it has one table of 16 GB). This database will be imported into RDS MySQL by another team.
I will then need to have a copy of it in my microservice, but not the whole DB, just some tables (among them, the 16 GB one).
The optimal solution would be a "read replica" of just some columns of various tables.
It will need to be in constant sync with the "Master" table.
I would also like to use Aurora instead of MySQL (reason: speed & cost).
I googled for information but couldn't find anything helpful besides the AWS Database Migration Service (DMS).
Does anyone here have experience with this? What would you suggest?
Yes, you can definitely use the DMS service. You can use table-mapping (selection and transformation) rules and specify the database name and table names, which allows selective migration.
Note: binlog_format has to be set to ROW on any RDS source used for change data capture. The source should also have automated backups enabled.
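A quick way to verify those prerequisites on the MySQL source (standard MySQL and RDS statements; adjust the retention value to your needs):
-- DMS change data capture requires row-based binary logging
SHOW VARIABLES LIKE 'binlog_format';
SHOW VARIABLES LIKE 'log_bin';
-- On RDS MySQL, keep binlogs around long enough for DMS to read them
CALL mysql.rds_set_configuration('binlog retention hours', 24);
CALL mysql.rds_show_configuration;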
I am new to the AWS DMS service. The plan is to migrate an on-premises Oracle database to Redshift. Before going into the production environment, I am currently trying out a test Oracle RDS instance in AWS, containing a small subset of the actual database, as the source. So far I have been successful with the bulk load and incremental migration from RDS to Redshift.
When it comes to the on-prem Oracle database, particularly for the incremental load:
1) As per the document http://docs.aws.amazon.com/dms/latest/sbs/CHAP_On-PremOracle2Aurora.Steps.ConfigureOracle.html, supplemental logging needs to be enabled on the on-prem database. The plan is to use the following two commands:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
The production database has multiple logging locations. Are there any other log settings, beyond the two above, that I should be looking into so that DMS picks up multiple log locations?
2) In the same link, point 4 says 'Create or configure a database account to be used by AWS DMS.'
Where should I create this user: in the on-prem Oracle database or in AWS?
How do I configure DMS to use this user?
You need to read this documentation:
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html
For your second question: you need to create a user in the Oracle source database; the section 'Working with a Self-Managed Oracle Database as a Source for AWS DMS' lists all of the grants you need to give.
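You then put that account's username and password on the DMS source endpoint. As a rough sketch only (the user name is made up, and this is just a subset of the grants; the linked section has the authoritative list):
-- Run on the Oracle source database
CREATE USER dms_user IDENTIFIED BY a_strong_password;
-- A few of the grants the DMS documentation calls for
GRANT CREATE SESSION TO dms_user;
GRANT SELECT ANY TRANSACTION TO dms_user;
GRANT SELECT ANY TABLE TO dms_user;
GRANT SELECT ON V_$ARCHIVED_LOG TO dms_user;
GRANT SELECT ON V_$LOG TO dms_user;
GRANT SELECT ON V_$LOGFILE TO dms_user;
GRANT SELECT ON V_$DATABASE TO dms_user;
GRANT EXECUTE ON DBMS_LOGMNR TO dms_user;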
For your first question: if you look at the SQL Server documentation,
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SQLServer.html
it specifies the limitation 'SQL Server backup to multiple disks isn't supported. If the backup is defined to write the database backup to multiple files over different disks, AWS DMS can't read the data and the AWS DMS task fails.'
I can't see a similar stipulation in the Oracle documentation (the first link), so I would hazard a guess that, in the case of Oracle, DMS is able to determine and cope with multiple logging locations from configuration values inside the database.
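If you want to check the log configuration yourself before running the task (these queries are just my own suggestion, not something the DMS docs require), the data dictionary shows the redo log members, the supplemental logging flags, and the archive destinations:
-- Redo log groups and their file locations
SELECT group#, member FROM v$logfile ORDER BY group#;
-- Database-level supplemental logging flags
SELECT supplemental_log_data_min, supplemental_log_data_pk FROM v$database;
-- Where archived logs are being written
SELECT dest_id, destination, status FROM v$archive_dest WHERE status = 'VALID';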
Does MonetDB support online schema changes, for example adding or changing a column on the fly while the tables are loaded in memory? Many in-memory databases have to be restarted for schema changes to take effect, so I was wondering whether MonetDB takes care of this issue.
Yes. MonetDB supports SQL-99, which includes DDL statements; they are immediately reflected in the schema.
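For example (the table and column names are made up), statements like these can be run against a running MonetDB instance without a restart, and the new definition is visible to subsequent queries:
-- Add and drop columns on the fly
ALTER TABLE orders ADD COLUMN shipped_at TIMESTAMP;
ALTER TABLE orders DROP COLUMN legacy_flag;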