Non-XA transactions for multiple schemas on the same instance - Oracle

Currently I am using WebLogic with Oracle.
I have one Oracle DB instance and two legacy schemas, so I use two datasources.
To keep transactionality I use XA, but from time to time HeuristicExceptions are thrown, causing some inconsistency at the data level.
Now, because it is the same instance, is it somehow possible to not use XA and instead define a single datasource that has access to both schemas?
That way I would not use XA anymore and would avoid data inconsistency.
Thanks

Do not use a dblink. It is overkill, and the problem might not even be related to XA. The best solution is to use tables from both schemas through a single datasource: either prefix the tables in your queries with the schema name, or create synonyms in one schema pointing to the tables in the other schema.
It is only a matter of database privileges; no need to deal with XA or dblinks.
One DB user needs grants to manipulate the tables in both schemas.
PS: you can use distributed transactions on connections pointing into the same database, if you insist on it. But in your case there is no need for that.
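A minimal sketch of that approach, with hypothetical schema, table, and user names (SCHEMA_A, SCHEMA_B, APP_USER): grant one application user the needed privileges, then update both schemas in one ordinary local transaction over a plain non-XA datasource.

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.sql.DataSource;

public class TwoSchemaUpdate {
    // One-time DDL, run as a DBA (names are placeholders):
    //   GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA_A.ORDERS   TO APP_USER;
    //   GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA_B.INVOICES TO APP_USER;

    public void updateBothSchemas(DataSource ds) throws Exception {
        try (Connection con = ds.getConnection()) {  // single non-XA datasource
            con.setAutoCommit(false);
            try (PreparedStatement a = con.prepareStatement(
                         "UPDATE SCHEMA_A.ORDERS SET STATUS = 'SHIPPED' WHERE ID = ?");
                 PreparedStatement b = con.prepareStatement(
                         "INSERT INTO SCHEMA_B.INVOICES (ORDER_ID) VALUES (?)")) {
                a.setLong(1, 42L);
                a.executeUpdate();
                b.setLong(1, 42L);
                b.executeUpdate();
                con.commit();    // one local transaction covers both schemas
            } catch (Exception e) {
                con.rollback();  // both changes are undone together
                throw e;
            }
        }
    }
}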

You can connect to one schema and create a DB link to the other to give access to the second. I think transactions will work across both schemas.
http://docs.oracle.com/cd/B28359_01/server.111/b28310/ds_concepts004.htm
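If you go this route, here is a minimal JDBC sketch (link name, credentials, and TNS alias are all placeholders); note that the answer above argues a link between two schemas on the same instance is usually unnecessary.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;

public class DbLinkExample {
    public void readViaLink(DataSource ds) throws Exception {
        try (Connection con = ds.getConnection();
             Statement st = con.createStatement()) {
            // One-time setup, as a user with the CREATE DATABASE LINK privilege
            st.execute("CREATE DATABASE LINK schema_b_link "
                    + "CONNECT TO schema_b IDENTIFIED BY secret USING 'ORCL'");
            // Tables in the other schema are then reachable via the @link suffix
            try (ResultSet rs = st.executeQuery(
                    "SELECT COUNT(*) FROM some_table@schema_b_link")) {
                rs.next();
                System.out.println("rows: " + rs.getLong(1));
            }
        }
    }
}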

Related

Syncing data between services using Kafka JDBC Connector

I have a system with a microservice architecture. It has two services, Service A and Service B, each with its own database, as in the following diagram.
As far as I understand, having a separate database for each service is the better approach. In this design each service is the owner of its data and is responsible for creating, updating, deleting, and enforcing constraints.
In order to have Service A's data in Database B I was thinking of using the Kafka JDBC Connector, but I am not sure whether Table1 and Table2 in Database B should enforce the constraints from Database A.
If a constraint, like the foreign key from Table2 to Table1, should exist in Database B, is there a way to have the connector know about this?
What are other common or better ways to sync data or solve this problem?
The easiest solution seems to be syncing per table without any constraints in Database B. That would make things easier, but it could also lead to a situation where Service A's data in Service B is inconsistent, for example entries in Table2 that point to a non-existing entry in Table1.
If a constraint, like the foreign key from Table2 to Table1, should exist in Database B, is there a way to have the connector know about this?
No, unfortunately the Kafka JDBC Connector does not know about constraints.
Based on your question I assume that Table1 and Table2 in Database B are duplicates of tables that exist in Database A, and that Database A has constraints which you are not sure you should also add in Database B.
If that is the case, then I am not sure the Kafka JDBC Connector is the best choice for syncing the data.
You have a couple of options:
1. Enforce constraints like foreign keys in Database B, but update it from your application level rather than through the Kafka JDBC Connector. For this option you cannot use the connector; you would need to write a small service/worker that reads the data from the Kafka topic and populates your database tables (see the sketch after this list). This way you control what is saved to the DB and you can validate the constraints before writing. The question here is: do you really need the constraints? They are important in micro-service-A, but do you really need them in micro-service-B, which only holds a copy of the data?
2. Do not use constraints and allow temporary inconsistency. This is common in the microservices world. When working with distributed systems you always have to think about the CAP theorem: you accept that some data may at some point be inconsistent, but you make sure it is eventually brought back to a consistent state. This means you need a cleanup/healing mechanism at the application level that recognizes such data and corrects it. DB constraints do not necessarily have to be enforced on data the microservice does not own, which is external data to that microservice's domain.
3. Rethink your design. Usually we duplicate data from micro-service-A into micro-service-B to avoid coupling between the services, so that micro-service-B can live and operate even when micro-service-A is down or not running for some reason, and to reduce the load from micro-service-B to micro-service-A for every operation that needs data from Table1 and Table2. Table1 and Table2 are owned by micro-service-A, which is the only source of truth for this data; micro-service-B works with a duplicate of it.
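A minimal sketch of the worker from option 1, using the plain Kafka consumer API in Java (the topic name, record format, and table layout are assumptions, not something the connector provides):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import javax.sql.DataSource;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class Table2SyncWorker {
    public void run(DataSource ds) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "service-b-sync");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             Connection con = ds.getConnection();
             PreparedStatement parentExists = con.prepareStatement(
                     "SELECT 1 FROM table1 WHERE id = ?");
             PreparedStatement insert = con.prepareStatement(
                     "INSERT INTO table2 (id, table1_id) VALUES (?, ?)")) {
            consumer.subscribe(List.of("table2-changes")); // assumed topic name
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> r : records) {
                    String[] parts = r.value().split(","); // assumed format: "<table2Id>,<table1Id>"
                    long table2Id = Long.parseLong(parts[0]);
                    long table1Id = Long.parseLong(parts[1]);
                    parentExists.setLong(1, table1Id);
                    try (ResultSet rs = parentExists.executeQuery()) {
                        if (rs.next()) { // validate the foreign key ourselves
                            insert.setLong(1, table2Id);
                            insert.setLong(2, table1Id);
                            insert.executeUpdate();
                        }
                        // else: park the record and retry once table1 has the parent row
                    }
                }
            }
        }
    }
}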
Looking at your database design, the following questions might help you figure out the best option for your system:
Is it necessary to duplicate the data in micro-service-B?
If I duplicate the data, do I need both tables, and do I need all their columns/data in micro-service-B? Usually you store/duplicate only the subset of the entity/table that you need.
Do I need the same table structure in micro-service-B as in micro-service-A? You have to decide this based on your domain, but very often you denormalize the tables and change them to fit the needs of micro-service-B's operations. As usual, all these design decisions depend on your application domain and use case.

Can we persist two different table entities in DynamoDB under one single transaction

I have two tables in Amazon DynamoDB where I have to persist data in a single transaction using Spring Boot. If the persistence fails in the second table, it should roll back for the first table as well.
I have tried looking into the AWSLabs Amazon DynamoDB transactions library, but it only helps for a single table.
Try using the built-in DynamoDB transactions capability. From the limited information you give, it should do what you are looking for across tables in the same account and region. Just keep in mind that there is no rollback per se: either all items in a transaction are written or none of them are. The internal transaction coordinator handles that for you.
Now that this feature is out, you most likely should not be looking at the AWSLabs tool anymore.
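A minimal sketch with the AWS SDK for Java v2 (the table names, key attribute, and items are made up; the all-or-nothing behavior is the built-in transactWriteItems call):

import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.Put;
import software.amazon.awssdk.services.dynamodb.model.TransactWriteItem;
import software.amazon.awssdk.services.dynamodb.model.TransactWriteItemsRequest;

public class TwoTableTransaction {
    public static void main(String[] args) {
        try (DynamoDbClient dynamo = DynamoDbClient.create()) {
            Put orderPut = Put.builder()
                    .tableName("Orders")     // hypothetical first table
                    .item(Map.of("pk", AttributeValue.builder().s("order-1").build()))
                    .build();
            Put auditPut = Put.builder()
                    .tableName("OrderAudit") // hypothetical second table
                    .item(Map.of("pk", AttributeValue.builder().s("order-1-created").build()))
                    .build();
            // Both puts are applied atomically: if one fails, neither is persisted
            dynamo.transactWriteItems(TransactWriteItemsRequest.builder()
                    .transactItems(
                            TransactWriteItem.builder().put(orderPut).build(),
                            TransactWriteItem.builder().put(auditPut).build())
                    .build());
        }
    }
}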

Create many transactions for one table in GeneXus

I'm having trouble with a GeneXus Transaction object.
I want to create two Transactions for the same Informix table.
Does anybody have any idea?
If you want to define a Data View over an existing database, you can see how to do it here: http://wiki.genexus.com/commwiki/servlet/hwikibypageid?6627
If you want to create two transactions over the same table, you can see the parallel transactions concept here:
http://wiki.genexus.com/commwiki/servlet/hwikibypageid?20209
If two transactions share the same key, they are considered "parallel transactions". To put it bluntly, there are 2 Transaction objects that refer to the same DB table.

Retrieving tables from "Other users" in nHibernate

First of all, I'm no expert in database handling, and even less so in Oracle. However, right now I need to get better at it :)
I'm using NHibernate as the ORM for my Oracle database. It works OK and is rather simple to use. However, now I have run into a problem that I don't know how to solve.
In the database there is a kind of tree with the tables, views, indexes and such. At the end there is also an entry called "Other Users", in which there are some users with access to what I'm guessing are other tables. Now I would like to get data from one of those tables (I can read them manually in SQL Developer, so it's not an access problem or anything). Does anyone have any idea how I can do that?
The account that you use in SQL Developer has at least read privileges on tables in another schema (owned by another user). You can access these tables by prefixing the table name with the schema name. In NHibernate you'll have to define the non-default schema in the mapping.
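For example, in an hbm.xml mapping the schema attribute does this (entity, table, and schema names are placeholders):

<class name="OtherTableEntity" table="SOME_TABLE" schema="OTHER_SCHEMA">
  <id name="Id" column="ID" />
  <property name="Name" column="NAME" />
</class>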

Hibernate bug using Oracle?

My problem is that I use a property in the persistence.xml which forces Hibernate to look for tables only in the given schema:
<property name="hibernate.default_schema" value="FOO"/>
Because we now use 4 different schemas, the current solution is to generate 4 WAR files, each with a modified persistence.xml.
That's not very elegant.
Does anybody know how I can configure the schema with a property or by manipulating the JDBC connection string?
I'm using Oracle 10g, patch 10_2_3.
Thanks a lot.
You could create four different users in the Oracle database for the four different applications; the JDBC connection would then include the user.
Then, for each user, you can create synonyms and permissions for the tables.
E.g.
create or replace synonym USER1.tablename FOR SCHEMA1.tablename;
create or replace synonym USER2.tablename FOR SCHEMA1.tablename;
create or replace synonym USER3.tablename FOR SCHEMA2.tablename;
When you access the tables from Hibernate, just leave the schema off: logged in as USER1, it will use SCHEMA1, and so on.
That way you don't have to create four different WAR files with four different persistence.xml files; just deploy the app four times with different JDBC connections.
Hope that helps.
If you don't want to generate four different WARs, put this property in a hibernate.properties file and put that file on the classpath (but outside the webapp) of each webapp.
See this - https://www.hibernate.org/429.html
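For example, a two-line hibernate.properties per deployment (FOO being whatever schema that deployment needs):

# hibernate.properties, on the classpath outside the webapp
hibernate.default_schema=FOO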
I created a method called deduceSchema that I run when setting up the SessionFactory. It opens a JDBC connection using the data source (because you don't have a Hibernate session yet) and queries "select user from dual" to get the logged-in user. This is accurate if the user you log in as also owns the tables; if not, I use a JNDI environment variable to override it.
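A minimal sketch of that deduceSchema method as described (the method name comes from the answer; everything else is assumed):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;

public class SchemaDeducer {
    // Runs before the SessionFactory exists, so it uses plain JDBC
    public String deduceSchema(DataSource dataSource) throws Exception {
        try (Connection con = dataSource.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("select user from dual")) {
            rs.next();
            return rs.getString(1); // the logged-in Oracle user, used as the schema
        }
    }
}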
Once I have the schema, I modify the Hibernate configuration to set it on each table, although this is only necessary if the logged-in user differs from the schema:
for (Iterator iter = configuration.getTableMappings(); iter.hasNext();) {
    // Point every mapped table at the schema deduced above
    Table table = (Table) iter.next();
    table.setSchema(schema);
}
