I have an integration between Oracle Planning and Oracle ERP Cloud. I am attempting to run it via the Data Management page. When I execute a Data Load Rule or attempt to change the POV period, I cannot see all of my mapped periods; it only shows one period.
When I check my Period Mapping, under Global I have all my periods mapped, but under Application only the one period (the one that shows) is mapped. As a test I added a new period there, and it now shows up when I try to execute the rule.
How do I modify my rule (or Data Management as a whole) to use the Global mapping for my periods?
I am trying to insert data from a SQL table into an Oracle table using the Copy Data activity in Data Factory. On the first try it runs fine, but on the second try it throws an error saying that an index on the target table (Oracle) has been corrupted.
Searching various forums, I found that the Copy Data activity apparently sends the insert statement in the following form: INSERT /*+ SYS_DL_CURSOR */ INTO
Any idea how to fix this?
Thank you very much for the help.
As per the error, the index is not corrupted; it was used twice. Possibly the operation did not run according to its schedule and two runs executed in parallel.
The Copy activity is executed on an integration runtime. You can use different types of integration runtimes for different data copy scenarios:
When you're copying data between two data stores that are publicly accessible through the internet from any IP, you can use the Azure integration runtime for the copy activity. This integration runtime is secure, reliable, scalable, and globally available.
When you're copying data to and from data stores that are located on-premises or in a network with access control (for example, an Azure virtual network), you need to set up a self-hosted integration runtime.
Use whichever of the two integration runtimes described above fits your scenario, and the error should be resolved.
See the support document for details: https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-overview
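Separately, if an index on the Oracle side really has been left in an UNUSABLE state by an interrupted direct-path load (the SYS_DL_CURSOR hint indicates direct-path inserts), it also has to be rebuilt before the next copy run will succeed. A minimal sketch, using hypothetical table and index names:

-- Check whether any index on the target table was left UNUSABLE
-- (MY_TARGET_TABLE is a hypothetical name; substitute your own)
SELECT index_name, status
FROM   user_indexes
WHERE  table_name = 'MY_TARGET_TABLE';

-- Rebuild any index reported as UNUSABLE before re-running the Copy activity
ALTER INDEX my_target_table_ix REBUILD;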
I need to set the query execution time limit for a specific schema on Autonomous Database on Shared Infrastructure.
There is a setting on the "Set Resource Management Rules" page under the Administration tab of the Service Console, but it applies to the whole database instance, which is not what we want.
I don't think that is currently possible.
That would require a modification to the existing (pre-supplied) resource manager plans, and access to DBMS_RESOURCE_MANAGER is blocked on Autonomous.
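For comparison, on a database where DBMS_RESOURCE_MANAGER is accessible, limiting run time for one schema would look roughly like the sketch below: map the schema's sessions to their own consumer group and add a directive that cancels SQL after a time limit. This is only an illustration of what is blocked on Autonomous; the names (LIMIT_PLAN, LIMITED_GRP, APP_SCHEMA) and the 60-second limit are made up.

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;

  -- Consumer group and plan for the schema we want to limit (hypothetical names)
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'LIMITED_GRP',
    comment        => 'Sessions with a query run-time limit');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'LIMIT_PLAN',
    comment => 'Plan that cancels long-running SQL for LIMITED_GRP');

  -- Cancel any statement in LIMITED_GRP that runs longer than 60 seconds
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'LIMIT_PLAN',
    group_or_subplan => 'LIMITED_GRP',
    comment          => 'Cancel SQL after 60 seconds',
    switch_group     => 'CANCEL_SQL',
    switch_time      => 60,
    switch_estimate  => FALSE);

  -- Every plan must also cover OTHER_GROUPS
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'LIMIT_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'No limit for everything else');

  -- Route sessions of the target schema into the limited group
  -- (the user also needs switch privilege via DBMS_RESOURCE_MANAGER_PRIVS)
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.ORACLE_USER,
    value          => 'APP_SCHEMA',
    consumer_group => 'LIMITED_GRP');

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
END;
/

Activating such a plan (ALTER SYSTEM SET RESOURCE_MANAGER_PLAN) is likewise not available on Autonomous, which is exactly why the per-schema limit cannot be set up there today.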
I have a parallel job that writes to an Oracle table. I want to manually write warnings to DataStage's log if a certain event occurs. For example, if a certain value is inserted into a certain column, I want to record that information in the log. Can this be achieved somehow?
To write custom messages into the logs for a particular job's data stream, you can use a combination of a Copy stage, a Transformer, and a Peek stage. The Peek stage is the one that writes to the log. I like to set the Peek stage to run in sequential mode so that your messages are kept together in single entries in the log instead of being spread across nodes.
Also, you can Peek the rejects of the Oracle stage; maybe combine this with the option above (using a Funnel stage and a standard column schema).
Lastly, if you'd actually like to query the logs themselves and write them out somewhere else or use them in a job (amongst all the other data kept about jobs in the repository), you can directly query the DSODB schema in the XMETA database, i.e. the DataStage repository (DB2 by default).
You would need to have the DataStage Operations Console up and running for that (not sure what version of DataStage you're running). If DataStage is running on a single tier and using the default DB2 database, you can simply catalog the DSODB database so that it's available as a connection in the DB2 Connector. Otherwise, you'd need to install a DB2 client on the DataStage engine tier and catalog the database there.
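For reference, once DSODB is cataloged, a log extract is just a SQL query. The sketch below is only a rough guess at the shape of such a query; the table and column names (JOBRUN, JOBRUNLOG, MESSAGETEXT, etc.) are assumptions, so check the Operations Console schema documentation for your DataStage version.

-- Hypothetical sketch: pull warning messages for recent runs from DSODB
-- (table and column names are assumptions, not verified against a real schema)
SELECT r.RUNSTARTTIMESTAMP,
       l.LOGTIMESTAMP,
       l.MESSAGETEXT
FROM   DSODB.JOBRUN    r
JOIN   DSODB.JOBRUNLOG l ON l.RUNID = r.RUNID
WHERE  l.LOGTYPE = 'WARNING'
ORDER BY l.LOGTIMESTAMP;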
All the best!
Twitter: #InforgeAcademy
DataStage tips and Tricks: https://www.inforgeacademy.com/blog/
Current Setup:
SQL Server OLTP database
AWS Redshift OLAP database, updated from the OLTP database via SSIS every 20 minutes
Our customers only have access to the OLAP Db
Requirement:
One customer requires some additional tables to be created and populated on a schedule, which can be done by aggregating the data already in AWS Redshift.
Challenge:
This is only for one customer, so I cannot leverage the core process for populating AWS; the process must be independent and is to be handed over to the customer, who does not use SSIS and doesn't wish to start. I was considering using Data Pipeline, but it is not yet available in the market in which the customer resides.
Question:
What is my alternative? I am aware of numerous partners who offer ETL-like solutions, but this seems over the top; ultimately, all I want to do is execute a series of SQL statements on a schedule with some form of error handling/alerting. The preference of both the customer and management is not to use a bespoke app for this, hence the intended use of Data Pipeline.
For exporting data from AWS Redshift to another data source using Data Pipeline, you can follow a template similar to https://github.com/awslabs/data-pipeline-samples/tree/master/samples/RedshiftToRDS, which transfers data from Redshift to RDS. But instead of using RDSDatabase as the sink, you could add a JdbcDatabase (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-jdbcdatabase.html). The template https://github.com/awslabs/data-pipeline-samples/blob/master/samples/oracle-backup/definition.json provides more details on how to use the JdbcDatabase.
There are many more such templates available at https://github.com/awslabs/data-pipeline-samples/tree/master/samples to use as a reference.
I do exactly the same thing as you, but I use the Lambda service to perform my ETL. One drawback of Lambda is that it can only run for a maximum of 5 minutes (initially 1 minute).
So for ETL jobs longer than 5 minutes, I am planning to set up a PHP server in AWS from which I can run my SQL queries, scheduled at any time with the help of cron.
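Whichever scheduler ends up running it (Data Pipeline's SqlActivity, a Lambda function, or a cron job), the work itself is just a handful of Redshift SQL statements. A minimal sketch of the kind of aggregation refresh the customer might schedule, with made-up schema, table, and column names:

-- Hypothetical nightly refresh of a customer-specific aggregate table
BEGIN;

DELETE FROM customer_x.daily_sales_summary;

INSERT INTO customer_x.daily_sales_summary (sale_date, product_id, total_qty, total_amount)
SELECT sale_date,
       product_id,
       SUM(quantity) AS total_qty,
       SUM(amount)   AS total_amount
FROM   public.sales
GROUP BY sale_date, product_id;

COMMIT;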
Today I was hit by a supposedly successful 2PC that wasn't materialized in Oracle. The other participant was MSMQ, which materialized fine.
The problem is that I did not get an exception in the application (using C# ODP.NET). Later I found the in-doubt transactions in sys.dba_2pc_pending.
Could I somehow have detected this in my application?
EDIT: This is not about getting 2PC to work. It did work, and for more than a year, until one day some rows were missing. Please read about in-doubt Oracle transactions (link1) and pending transactions (link2).
My first thought is to make sure that distributed transaction processing is enabled on the Oracle listener.
In my case no error was thrown. We use RAC, and the service did not have distributed transaction processing enabled. In a stand-alone system I'm not sure what this would do, but in the case of RAC it serves the purpose of identifying the primary node for handling the transaction. Without it, a second operation that was supposed to be part of the same transaction just ended up starting a new transaction and deadlocked with the first.
I have also had significant amounts of time go by without an issue. By luck (there's probably more to it), it just so happened that transactions were never split across the nodes. But then a year later the same symptoms crept up, and in all cases either the service didn't have the DTP flag checked or the wrong service name (one without DTP) was being used.
From: http://docs.oracle.com/cd/B19306_01/rac.102/b14197/hafeats.htm#BABBBCFG

Enabling Distributed Transaction Processing for Services

For services that you are going to use for distributed transaction processing, create the service using Enterprise Manager, DBCA, or SRVCTL and define only one instance as the preferred instance. You can have as many AVAILABLE instances as you want. For example, the following SRVCTL command creates a singleton service for database crm, xa_01.service.us.oracle.com, whose preferred instance is RAC01:

srvctl add service -d crm -s xa_01.service.us.oracle.com -r RAC01 -a RAC02, RAC03

Then mark the service for distributed transaction processing by setting the DTP parameter to TRUE; the default is FALSE. Enterprise Manager enables you to set this parameter on the Cluster Managed Database Services: Create Service or Modify Service page. You can also use the DBMS_SERVICE package to modify the DTP property of the singleton service as follows:

EXECUTE DBMS_SERVICE.MODIFY_SERVICE(service_name => 'xa_01.service.us.oracle.com', DTP => TRUE);
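For what it's worth, both the service property and the stuck transactions can be checked directly from SQL. A short sketch, assuming your version exposes the DTP column in DBA_SERVICES; the transaction ID shown is just an example value taken from DBA_2PC_PENDING:

-- Confirm that the service used by the application has DTP enabled
SELECT name, dtp FROM dba_services;

-- Inspect the in-doubt transactions
SELECT local_tran_id, state, fail_time, tran_comment
FROM   dba_2pc_pending;

-- After deciding the correct outcome, force-resolve a stuck transaction
-- ('1.21.17' is an example local_tran_id from the query above)
COMMIT FORCE '1.21.17';
-- or: ROLLBACK FORCE '1.21.17';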