I am using Oracle Database on AWS RDS (release 19). I am trying to extract some information about the instance using SQL statements only. I can reliably query the vCPU count and also get the relevant SGA details.
How can I determine RDS instance information such as the instance class (for example "db.m5.xlarge"), and perhaps the region and standby details (if any), with SQL queries only? I want to see whether this can be done exclusively using SQL from within the database. The key piece of information I need is the instance class.
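For reference, below is roughly what I can already query from inside the database for the vCPU and SGA figures, plus the views I would check for standby details, all using standard V$ views (which, as far as I know, the RDS master user can read). The instance class itself does not appear to be exposed in any dictionary view, so at best I could infer it by mapping the vCPU and memory figures onto the published instance classes:
-- vCPU count as seen by the database
SELECT stat_name, value FROM v$osstat WHERE stat_name IN ('NUM_CPUS', 'NUM_CPU_CORES');
SELECT value FROM v$parameter WHERE name = 'cpu_count';
-- SGA sizing
SELECT name, value FROM v$sga;
-- database name, unique name and role (shows whether this is a primary or a standby)
SELECT name, db_unique_name, database_role FROM v$database;
-- redo transport destinations; extra entries suggest a standby/replica is configured
SELECT dest_name, status, db_unique_name FROM v$archive_dest_status WHERE status <> 'INACTIVE';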
Let me know if I can provide any additional details or clarifications.
Thanks.
Related
I have a couple of questions about the AWS RDS service for migrating an Oracle 19c server on RHEL that hosts 3 standalone instances and databases:
Can an RDS instance support multiple standalone Oracle instances/databases, or only one instance?
If I have an existing RDS instance running in AWS, can I migrate another on-premises Oracle database into that RDS database as another Oracle schema?
Have not tried it yet.
RDS for Oracle limits each instance to a single database. However, you can have multiple schemas in that one database.
An account can have up to 40 BYOL (bring-your-own-license) Oracle RDS instances, or up to 10 license-included instances. You can also increase these limits by contacting AWS Support.
See here for more details.
Each RDS instance holds one database. However, you can set up multiple RDS instances if you want to.
For migrations, please have a look at the Database Migration Service (DMS). Regarding your schema question in particular, please check the Schema Conversion Tool (SCT), which is used together with DMS.
Sorry I can't be more specific, as the questions are rather vague themselves.
Is there a way to reverse engineer a CLI command for an existing Oracle RDS database?
I already have an existing Oracle RDS database and want to create another one with exactly the same parameters (except the database name).
Instead of using the GUI, I want to use the CLI ("aws rds create-db-instance").
You could use describe-db-instances to get detailed information about your existing DB instance. Then, based on the information obtained, you could create the new DB instance using create-db-instance.
If this is something that you will be doing often (i.e. creating the same instances), it would be better to look into CloudFormation. This would allow you to provision reproducible DB instances across different regions and accounts.
I am new to the AWS DMS service. The plan is to migrate an on-premises Oracle database to Redshift. Before going to the production environment, I am currently trying out a test Oracle RDS instance in AWS, containing a small subset of the actual database, as the source. So far the bulk load and incremental migration from RDS to Redshift have been successful.
When it comes to the on-premises Oracle database, particularly for the incremental load:
1) As per the document http://docs.aws.amazon.com/dms/latest/sbs/CHAP_On-PremOracle2Aurora.Steps.ConfigureOracle.html, the on-premises database needs to have supplemental logging enabled. The plan is to use the following two commands.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
The production database has multiple logging locations. Are there any log settings, other than the two commands above, that I should be looking into so that DMS picks up the multiple log locations?
2) In the same link, point 4 says 'Create or configure a database account to be used by AWS DMS.'
Where should I create this user: in the on-premises Oracle database or in AWS?
How do I configure DMS to use this user?
You need to read this documentation:
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html
For your second question: you need to create a user in the Oracle source database; the section 'Working with a Self-Managed Oracle Database as a Source for AWS DMS' tells you all of the grants you need to give.
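As a rough illustration only (the user name is made up here, and this is not the complete grant list; the documentation above is the authoritative reference), the setup on the source database looks something like this:
CREATE USER dms_user IDENTIFIED BY "a_strong_password";
GRANT CREATE SESSION TO dms_user;
GRANT SELECT ANY TRANSACTION TO dms_user;
-- a few of the V$ views DMS reads for change capture; see the docs for the full set
GRANT SELECT ON V_$ARCHIVED_LOG TO dms_user;
GRANT SELECT ON V_$LOG TO dms_user;
GRANT SELECT ON V_$LOGFILE TO dms_user;
GRANT SELECT ON V_$DATABASE TO dms_user;
GRANT SELECT ON V_$TRANSACTION TO dms_user;
You then point DMS at this account by entering its credentials in the source endpoint you define for the replication task.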
For your first question, if you look at the SQL Server documentation:
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SQLServer.html
it specifies the limitation: 'SQL Server backup to multiple disks isn't supported. If the backup is defined to write the database backup to multiple files over different disks, AWS DMS can't read the data and the AWS DMS task fails.'
I can't see a similar stipulation in the Oracle documentation (the first link), so I would hazard a guess that DMS is able, in the case of Oracle, to determine and cope with multiple logging locations from the configuration values inside the database.
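If you want to sanity check what DMS will see, you can confirm the supplemental logging status and list the logging locations from inside the source database; a quick sketch using the standard dictionary views:
-- supplemental logging status after running the two ALTER DATABASE commands
SELECT supplemental_log_data_min, supplemental_log_data_pk FROM v$database;
-- online redo log members, one row per file location
SELECT group#, member FROM v$logfile ORDER BY group#;
-- archive log destinations currently configured
SELECT dest_id, destination, status FROM v$archive_dest WHERE destination IS NOT NULL;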
Current Setup:
SQL Server OLTP database
AWS Redshift OLAP database, updated from the OLTP database via SSIS every 20 minutes
Our customers only have access to the OLAP database
Requirement:
One customer requires some additional tables to be created and populated on a schedule; this can be done by aggregating the data already in AWS Redshift.
Challenge:
This is only for one customer, so I cannot leverage the core process for populating AWS Redshift; the process must be independent and is to be handed over to the customer, who does not use SSIS and doesn't wish to start. I was considering using Data Pipeline, but this is not yet available in the market in which the customer resides.
Question:
What is my alternative? I am aware of numerous partners who offer ETL-like solutions, but this seems over the top; ultimately all I want to do is execute a series of SQL statements on a schedule with some form of error handling/alerting. The preference of both customer and management is not to use a bespoke app to do this, hence the intended use of Data Pipeline.
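To give a sense of scale, each scheduled run is essentially a handful of statements like the one below (schema, table and column names are purely illustrative):
-- rebuild the customer's aggregate table from data already in Redshift
BEGIN;
DELETE FROM customer_schema.daily_sales_summary;
INSERT INTO customer_schema.daily_sales_summary (sale_date, product_id, total_qty, total_value)
SELECT TRUNC(sale_timestamp), product_id, SUM(quantity), SUM(quantity * unit_price)
FROM olap.sales
GROUP BY TRUNC(sale_timestamp), product_id;
COMMIT;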
For exporting data from AWS Redshift to another data store using Data Pipeline, you can follow a template similar to https://github.com/awslabs/data-pipeline-samples/tree/master/samples/RedshiftToRDS, which transfers data from Redshift to RDS. But instead of using RDSDatabase as the sink, you could add a JdbcDatabase (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-jdbcdatabase.html). The template https://github.com/awslabs/data-pipeline-samples/blob/master/samples/oracle-backup/definition.json provides more detail on how to use the JdbcDatabase.
There are many such templates available in https://github.com/awslabs/data-pipeline-samples/tree/master/samples to use as a reference.
I do exactly the same thing as you, but I use the Lambda service to perform my ETL. One drawback of Lambda is that it can run for a maximum of 5 minutes (initially it was 1 minute).
So for ETL jobs longer than 5 minutes, I am planning to set up a PHP server in AWS from which I can run my SQL queries, scheduled at any time with the help of cron.
We have two divisions in our company: one uses E1 on Oracle 11g, the other uses SAP on Oracle 11g.
We also have a SQL Server system that we use to warehouse information from both systems once a night, which our report server runs against.
The question I have is: for pooled tables in SAP, such as A016, how would I get that information out of SAP?
Currently we have SSIS packages set up with a linked server to the two Oracle servers, which pull the data we need; I just don't have the SAP knowledge to find the pooled tables.
If I can't pull the pooled tables because they don't physically exist, is there a tool I can use in SAP to find out which tables the pooled table is getting its information from? That way I can rebuild that table in SQL using an OPENQUERY and some joins.
Thanks
You have to access those tables using the application server. They can't be accessed directly from the database.
You'll probably want to write an ABAP program to extract the data you need and go from there.