Can we persist two different table entities in DynamoDB under one single transaction - spring

I have two tables in Amazon DynamoDB where I have to persist data in a single transaction using Spring Boot. If the persistence fails for the second table, it should roll back the write to the first table as well.
I have tried looking into the awslabs amazon-dynamodb-transactions library, but it only helps for a single table.

Try using the built-in DynamoDB transactions capability. From the limited information you give, it should do what you are looking for across multiple tables within the same region and account. Just keep in mind that there is no rollback per se: either all items in the transaction are written or none of them are. The internal transaction coordinator handles that for you.
Now that this feature is out, you most likely should not be looking at the awslabs tool at all.
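As a minimal sketch of what that looks like with the AWS SDK for Java v2 (the table, key, and attribute names below are placeholders), a single TransactWriteItems call puts an item into each of two tables and DynamoDB applies both writes or neither:
```java
import java.util.Map;

import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.Put;
import software.amazon.awssdk.services.dynamodb.model.TransactWriteItem;
import software.amazon.awssdk.services.dynamodb.model.TransactWriteItemsRequest;
import software.amazon.awssdk.services.dynamodb.model.TransactionCanceledException;

public class TwoTableTransactionExample {

    public static void main(String[] args) {
        DynamoDbClient dynamoDb = DynamoDbClient.create();

        // Item for the first (hypothetical) table.
        Put orderPut = Put.builder()
                .tableName("Orders")
                .item(Map.of(
                        "orderId", AttributeValue.builder().s("order-123").build(),
                        "status", AttributeValue.builder().s("CREATED").build()))
                .build();

        // Item for the second (hypothetical) table.
        Put paymentPut = Put.builder()
                .tableName("Payments")
                .item(Map.of(
                        "paymentId", AttributeValue.builder().s("payment-456").build(),
                        "orderId", AttributeValue.builder().s("order-123").build()))
                .build();

        TransactWriteItemsRequest request = TransactWriteItemsRequest.builder()
                .transactItems(
                        TransactWriteItem.builder().put(orderPut).build(),
                        TransactWriteItem.builder().put(paymentPut).build())
                .build();

        try {
            // Either both puts are applied or neither is.
            dynamoDb.transactWriteItems(request);
        } catch (TransactionCanceledException e) {
            // The whole transaction was cancelled; nothing was written to either table.
            System.err.println("Transaction cancelled: " + e.cancellationReasons());
        }
    }
}
```
The same idea applies if you go through the DynamoDB Enhanced Client from a Spring Boot service; the key point is that both writes travel in one TransactWriteItems request.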

Related

Syncing data between services using Kafka JDBC Connector

I have a system with a microservice architecture. It has two services, Service A and Service B, each with its own database, as in the following diagram.
As far as I understand, having a separate database for each service is the better approach. In this design each service is the owner of its data and is responsible for creating, updating, deleting and enforcing constraints.
In order to have Service A data in Database B I was thinking of using JDBC Kafka Connector, but I am not sure if Table1 and Table2 in Database B should enforce constraints from Database A.
If the constraint, like the foreign key from Table2 to Table1 should exist in Database B then, is there a way to have the connector know about this?
What are other common or better ways to sync data or solve this problem?
The easiest solution seems to be syncing per table without any constraints in Database B. That would make things easier, but it could also lead to a situation where Service A's data in Service B is inconsistent, for example entries in Table2 that point to a non-existing entry in Table1.
If the constraint, like the foreign key from Table2 to Table1 should exist in Database B then, is there a way to have the connector know about this?
No, unfortunately the Kafka JDBC Connector does not know about constraints.
Based on your question I assume that Table1 and Table2 in Database B are duplicates of tables that exist in Database A, and that Database A has constraints which you are not sure you should also add in Database B.
If that is the case, then I am not sure that using the Kafka JDBC Connector to sync the data is the best choice.
You have a couple of options:
Enforce constraints like the foreign key in Database B, but update it from your application level rather than through the Kafka JDBC Connector. For this option you cannot use the connector; instead you would write a small service/worker that reads the data from the Kafka topic and populates your database tables (see the sketch after this list). This way you control what is saved to the database and you can validate the constraints before trying to save. But the question here is: do you really need the constraints? They are important in micro-service-A, but do you really need them in micro-service-B, which only holds a copy of the data?
Do not use constraints and allow temporary inconsistency. This is common in the microservices world. When working with distributed systems you always have to think about the CAP theorem, so you accept that some data might at some point be inconsistent, as long as you eventually bring it back to a consistent state. This means you need some cleanup/healing mechanism at the application level which recognizes such data and corrects it. Database constraints do not necessarily have to be enforced on data which the microservice does not own and which is external to that microservice's domain.
Rethink your design. Usually we duplicate data from micro-service-A into micro-service-B in order to avoid coupling between the services, so that micro-service-B can live and operate even when micro-service-A is down or not running for some reason. We also do it to reduce the load from micro-service-B to micro-service-A for every operation which needs data from Table1 and Table2. Table1 and Table2 are owned by micro-service-A, and micro-service-A is the only source of truth for this data; micro-service-B works with a duplicate of that data for its own operations.
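A minimal sketch of the worker mentioned in the first option, assuming Postgres-style SQL and hypothetical topic, table, and column names: a plain consumer reads change events for Table2 and only inserts rows whose foreign key already exists in the local copy of Table1.
```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class Table2SyncWorker {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "table2-sync");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             Connection db = DriverManager.getConnection("jdbc:postgresql://localhost/database_b")) {

            consumer.subscribe(List.of("service-a.table2"));

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // For brevity, assume key = Table2 id and value = Table1 foreign key.
                    String table2Id = record.key();
                    String table1Id = record.value();

                    if (parentExists(db, table1Id)) {
                        upsertTable2Row(db, table2Id, table1Id);
                    } else {
                        // Parent row not replicated yet: park the event (retry topic,
                        // dead-letter table, etc.) instead of violating the constraint.
                    }
                }
            }
        }
    }

    private static boolean parentExists(Connection db, String table1Id) throws Exception {
        try (PreparedStatement ps = db.prepareStatement("SELECT 1 FROM table1 WHERE id = ?")) {
            ps.setString(1, table1Id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }

    private static void upsertTable2Row(Connection db, String table2Id, String table1Id) throws Exception {
        try (PreparedStatement ps = db.prepareStatement(
                "INSERT INTO table2 (id, table1_id) VALUES (?, ?) ON CONFLICT (id) DO NOTHING")) {
            ps.setString(1, table2Id);
            ps.setString(2, table1Id);
            ps.executeUpdate();
        }
    }
}
```
Whether the parked events go to a retry topic or a dead-letter table is up to you; the point is that your own code, not the connector, decides what reaches Database B.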
Looking at your database design, the following questions might help you figure out the best option for your system:
Is it necessary to duplicate the data in micro-service-B?
If I duplicate the data, do I need both tables, and do I need all their columns/data in micro-service-B? Usually you store/duplicate only the subset of the Entity/Table that you need.
Do I need the same table structure in micro-service-B as in micro-service-A? You have to decide this based on your domain, but very often you denormalize the tables and change them to fit the needs of micro-service-B's operations. As usual, all these design decisions depend on your application domain and use case.

What's the performance penalty of long lived DB transactions interleaved with one another?

Could anyone provide an explanation, or point me to a good source, of the impact of long-lived database transactions when other transactions are involved?
I'm having difficulty understanding the real performance impact on an application of having transactions where most of the queries are reads and maybe two or three are writes, given the different isolation levels.
Mostly I would like to understand it in the situation where:
Neither the rows read nor the rows updated are involved in any other transaction.
The rows read are involved in another transaction but not the rows being updated, and this other transaction is read-only.
The rows read are involved in another transaction but not the rows being updated, and this other transaction is modifying some of the data being read. I understand that it also matters whether the data is read before or after it is modified.
Both the rows read and the rows updated are involved in another transaction also modifying the data.
These questions come up in the context of an application using microservices, where all application-layer services are annotated with @Transactional using JPA and PostgreSQL and, in order to transform the data, they need to make network calls to other microservices within the transaction to fetch additional values.
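For concreteness, here is a minimal sketch of the pattern being described (the question uses JPA; this uses JdbcTemplate for brevity, and the orders table, columns, and pricing-service URL are hypothetical): a Spring @Transactional method that reads, calls another microservice over the network while the transaction is still open, and then writes.
```java
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.web.client.RestTemplate;

@Service
public class OrderEnrichmentService {

    private final JdbcTemplate jdbcTemplate;
    private final RestTemplate restTemplate;

    public OrderEnrichmentService(JdbcTemplate jdbcTemplate, RestTemplate restTemplate) {
        this.jdbcTemplate = jdbcTemplate;
        this.restTemplate = restTemplate;
    }

    @Transactional
    public void enrichOrder(long orderId) {
        // Read inside the transaction: a connection is now bound to this thread
        // and any locks taken by the read are held from here on.
        Long customerId = jdbcTemplate.queryForObject(
                "SELECT customer_id FROM orders WHERE id = ?", Long.class, orderId);

        // Network call made while the transaction is still open. The longer this
        // call takes, the longer the connection (and any locks) stay held, which
        // is exactly the interleaving the question is asking about.
        String price = restTemplate.getForObject(
                "http://pricing-service/prices?customerId={c}&orderId={o}",
                String.class, customerId, orderId);

        // Write still inside the same long-lived transaction; it only becomes
        // visible to other transactions when the method returns and commits.
        jdbcTemplate.update("UPDATE orders SET price = ? WHERE id = ?", price, orderId);
    }
}
```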

Create many transactions for one table in GeneXus

I'm having trouble with a GeneXus Transaction object.
I want to create two Transactions over the same Informix table.
Anybody has any idea?
If you want to create a Data View over an existing database, you can see how to do it here: http://wiki.genexus.com/commwiki/servlet/hwikibypageid?6627
If you want to create two Transactions over the same table, you can read about the parallel transaction concept here:
http://wiki.genexus.com/commwiki/servlet/hwikibypageid?20209
If two Transactions share the same key, they are considered "parallel transactions". To put it simply, they are two Transaction objects that refer to the same database table.

Performance impact when creating Audit trail using trigger in MS SQL Server 2012

In a SQL Server 2012 database we want to create an audit trail for almost all major tables on Update and Delete operations. Normally we create the audit trail using a trigger on each table and store the rows in a shadow table. Is there any performance impact if a huge number of records is updated or deleted on a table? Is there any other way to implement an audit trail?
Typically, when I implement an audit trail for DB tables, I implement it via code, not in triggers. When implemented in code, you can provide additional context, such as who made the change and the reason behind it, which is a very common business requirement. In a typical multi-layer application design, we have DAOs for each table, and the business services which implement the updates are responsible for calling the separate DAOs for the core table update and the history entry insert. This approach is no good if you want a bunch of different sources making table updates directly in the DB, but it is a natural approach if you have a service-oriented architecture and your one set of services is the only way into and out of those tables.
If you implement an audit trail using this approach, you of course need to make sure the audit trail record is inserted in the same transaction as the modification to the core table.
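As a rough illustration of that point (the table and column names are hypothetical), the core update and the audit-trail insert can share one JDBC transaction, so either both are persisted or neither is:
```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class CustomerDao {

    public void updateEmail(String jdbcUrl, long customerId, String newEmail,
                            String changedBy, String changeReason) throws Exception {
        try (Connection con = DriverManager.getConnection(jdbcUrl)) {
            con.setAutoCommit(false);
            try {
                try (PreparedStatement update = con.prepareStatement(
                        "UPDATE Customer SET Email = ? WHERE CustomerId = ?")) {
                    update.setString(1, newEmail);
                    update.setLong(2, customerId);
                    update.executeUpdate();
                }

                // The audit row carries the extra business context a trigger cannot see.
                try (PreparedStatement audit = con.prepareStatement(
                        "INSERT INTO CustomerAudit (CustomerId, ChangedBy, ChangeReason, ChangedAt) "
                                + "VALUES (?, ?, ?, SYSUTCDATETIME())")) {
                    audit.setLong(1, customerId);
                    audit.setString(2, changedBy);
                    audit.setString(3, changeReason);
                    audit.executeUpdate();
                }

                con.commit();   // both statements become visible together
            } catch (Exception e) {
                con.rollback(); // neither the update nor the audit row is kept
                throw e;
            }
        }
    }
}
```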
Whether this would perform better than a trigger-based approach, I couldn't say. My guess would be that if you are using bulk insert operations it may run faster, but would probably be slower in the more common scenario where you are updating/deleting one record at a time via SQL. It's another option you could explore, though.

Rolling back multiple transactions with JDBC

Is it possible to roll back multiple already-committed transactions with JDBC?
According to this link, savepoints are only active for the current transaction: http://docs.oracle.com/javase/tutorial/jdbc/basics/transactions.html
Thanks.
Rolling back already-committed transactions, whether one or many (unlike rolling back to a savepoint!), is not possible on any database as far as I know, and definitely not on Oracle. Yes, savepoints are only relevant for the current transaction.
I'm not sure what your actual problem is, but if you want to look at the old values of a recently committed table on Oracle, you could use a flashback query (SELECT ... AS OF) or, similarly, flash back the whole table or even the database.
If you think about it for a while, there are a lot of constraints involved: rolling back individual committed transactions is sometimes logically impossible without violating a whole lot of data integrity rules.
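For contrast, here is a minimal sketch of what JDBC does support (the table name and the JDBC URL passed as the first program argument are placeholders): rolling back to a savepoint, which only works while the current transaction is still open.
```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Savepoint;
import java.sql.Statement;

public class SavepointExample {

    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(args[0]);
             Statement stmt = con.createStatement()) {

            con.setAutoCommit(false);

            stmt.executeUpdate("INSERT INTO demo_table (id) VALUES (1)");
            Savepoint afterFirstInsert = con.setSavepoint();

            stmt.executeUpdate("INSERT INTO demo_table (id) VALUES (2)");

            // Undo only the second insert; the first one is still pending.
            con.rollback(afterFirstInsert);

            con.commit();
            // From here on, neither rollback() nor the savepoint can undo anything:
            // once committed, the changes are permanent as far as JDBC is concerned.
        }
    }
}
```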
