I have two frontends consuming JSON from two different backends that use JSON Web Tokens. Both backends act on the same database.
In the database I have, for example, the Driver, Customer and Trip tables. The customer or the driver can cancel a trip only if it has not already been cancelled by the other party. Some transactions are recorded during a cancellation.
How can I prevent a double execution in the case where the customer and the driver send a cancellation request for the same trip simultaneously?
I am using Spring Boot (RESTful) and Spring Data JPA.
Any help will be greatly appreciated.
Edit:
Assuming these backends are A and B: the Customer requests cancellation through backend A, and the Driver through B.
Use optimistic locking. Your code would look as follows:
@Entity
public class Trip {

    @Version
    @NotNull
    private Long version;

    ...
}
It works as follows: every change increments the version. Suppose two users (or two services) loaded the same version of the trip and now both try to cancel it, i.e. both try to modify it. Along with their changes they both send the version they loaded. JPA checks whether the version in the update statement is still the same as the version in the database. The first request wins and is executed, and during that execution the version is incremented.
When the second request arrives and also tries to cancel the trip, JPA sees that the version attribute in its update statement is older (lower) than the version value in the database. The second request is therefore not executed, and an OptimisticLockException is thrown.
You can catch this exception, inform the user that the data was changed in the meantime, and suggest reloading it. The user reloads the data and sees that the trip has already been cancelled.
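For illustration, a minimal sketch of what catching it could look like in a Spring setup (TripService, TripRepository, TripStatus and the exception handler placement are assumed names, not taken from the question):

// Sketch only: the cancellation just changes state; the version column is
// checked automatically when the change is flushed.
@Service
public class TripService {

    private final TripRepository repository;

    public TripService(TripRepository repository) {
        this.repository = repository;
    }

    @Transactional
    public void cancelTrip(Long tripId) {
        Trip trip = repository.findById(tripId)
                .orElseThrow(() -> new IllegalArgumentException("Unknown trip " + tripId));
        trip.setStatus(TripStatus.CANCELLED);
    }
}

// In a @ControllerAdvice, translate the failure (Spring wraps the JPA
// OptimisticLockException in ObjectOptimisticLockingFailureException) into HTTP 409:
@ExceptionHandler(ObjectOptimisticLockingFailureException.class)
public ResponseEntity<String> handleConflict(ObjectOptimisticLockingFailureException ex) {
    return ResponseEntity.status(HttpStatus.CONFLICT)
            .body("The trip was already modified (probably cancelled). Please reload it.");
}

Returning 409 Conflict lets each frontend show the "already cancelled, please reload" message described above.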
After watching this awesome talk by Martin Kleppmann about how Kafka can be used to stream events so that we can get rid of 2-phase commits, I have a couple of questions related to updating a cache only when the database is updated properly.
Problem Statement
Let's say you have a Redis cache which stores the user's profile picture and a Postgres database which is used for all user-related operations (creation, update, deletion, etc.).
I want to update my Redis cache if and only if a new user has been successfully added to my database.
How can I do that using Kafka?
If I take the example given in the video, the workflow would go something like this:
User registers
Request is handled by the User Registration Microservice
User Registration Microservice inserts a new entry into the Users table.
It then generates a User Creation Event in the user_created topic.
Cache population microservice consumes the newly created User Creation Event.
Cache population microservice updates the Redis cache.
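For steps 5 and 6, a minimal consumer sketch with Spring Kafka and Spring Data Redis might look roughly like this (the topic name, event class, field names, Redis key format and deserialization setup are assumptions):

// Sketch of the cache-population microservice.
@Component
public class UserCreatedListener {

    private final StringRedisTemplate redis;

    public UserCreatedListener(StringRedisTemplate redis) {
        this.redis = redis;
    }

    @KafkaListener(topics = "user_created", groupId = "cache-population")
    public void onUserCreated(UserCreatedEvent event) {
        // Writing the same key twice is harmless, so redelivered events are safe.
        redis.opsForValue().set("user:" + event.getUserId() + ":profilePic",
                event.getProfilePicUrl());
    }
}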
The problem is: what would happen if the User Registration Microservice crashed just after writing to the database, but before it could send the event to Kafka?
What would be the correct way of handling this?
Does the User Registration Microservice keep track of the last event it published? How can it do that reliably? Does it write that to a DB? Then the problem starts all over again: what if it published the event to Kafka but failed before it could update its last known offset?
There are three broad approaches one can take for this:
There's the transactional outbox pattern, wherein, in the same transaction as inserting the new entry into the user table, a corresponding user creation event is inserted into an outbox table. Some process then eventually queries that outbox table, publishes the events in that table to Kafka, and deletes the events in the table. Since the inserts are in the same transaction, they either both occur or neither occurs; barring a bug in the process which publishes the outbox to Kafka, this guarantees that every user insert eventually has an associated event published (at least once) to Kafka.
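A minimal sketch of the write path of this outbox approach, assuming Spring Data repositories and an OutboxEvent entity (all names here are illustrative, not prescribed by the pattern):

// Sketch only: user row and outbox row are written in the same transaction.
@Service
public class UserRegistrationService {

    private final UserRepository users;
    private final OutboxRepository outbox;

    public UserRegistrationService(UserRepository users, OutboxRepository outbox) {
        this.users = users;
        this.outbox = outbox;
    }

    @Transactional
    public User register(User user) {
        User saved = users.save(user);
        // Same transaction as the user insert: both rows commit or neither does.
        // toJson() is an assumed serialization helper.
        outbox.save(new OutboxEvent("user_created", saved.getId(), toJson(saved)));
        return saved;
    }
}
// A separate relay (a scheduled job, or CDC on the outbox table) reads pending
// OutboxEvent rows, publishes them to Kafka and then marks or deletes them.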
There's a more event-sourcingish pattern, where you publish the user creation event to Kafka and then some consuming process inserts into the user table based on the event. Since this happens with a delay, this strongly suggests that the user registration service needs to keep state of which users it has published creation events for (with the combination of Kafka and Postgres being the source of truth for this). Since Kafka allows a message to be consumed by arbitrarily many consumers, a different consumer can then update Redis.
Change data capture (e.g. Debezium) can be used to tie into Postgres' write-ahead log (as Postgres actually event sources under the hood...) and publish an event that essentially says "this row was inserted into the user table" to Kafka. A consumer of that event can then translate that into a user created event.
CDC in some sense moves the transactional outbox into the infrastructure, at the cost of requiring that the context it inherently throws away be reconstructed later (which is not always possible).
That said, I'd strongly advise against having ____ creation be a microservice and I'd likewise strongly advise against a RInK store like Redis. Both of these smell like attempts to paper over architectural deficiencies by adding microservices and caches.
The one-foot-on-the-way-to-event-sourcing approach isn't one I'd recommend, but if one starts there, the requirement to make the registration service stateful suddenly opens up possibilities which may remove the need for Redis, limit the need for a Kafka-like thing, and allow you to treat the existence of a DB as an implementation detail.
I have created a simple CRUD API using Spring Data JPA in my Spring Boot application. My POST method in the controller looks like this:
@RequestMapping(value = "/article", method = RequestMethod.POST, produces = "application/json")
public Article createArticle(@RequestBody Article article) {
    return service.createArticle(article);
}
The service method is as follows:
@Override
public Article createArticle(Article articleModel) {
    return repository.save(articleModel);
}
My JSON payload looks like this:
{
"article_nm":"A1",
"article_identifier":"unique identifier"
}
Now I want to make my POST request idempotent, so that even if I get a JSON payload with the same article_identifier again, it will not create another record in the DB.
I can't make any schema/constraint changes in the database, and the article_identifier field is not the primary key of the table.
I understand that I can first check the database and return the saved record in the response if it already exists, but if multiple requests (the original and a duplicate) come in at the same time, both will check the database, find no record with that identifier, and create two records (one each). Also, as it's a distributed application, how can I maintain consistency across multiple database transactions?
How can I use some locking mechanism so that there can never be two records with the same article_identifier? Can somebody please suggest some references on how to implement this in Spring Boot?
Idempotency in this case is needed to handle the post-back (or duplicate POST request). The simplest way would be just to check at the service level whether a record with the given information already exists (as you pointed out). You can use repository.exists() variations for that.
I understand that I can first check the database and return the saved record in the response if it already exists
As for
if multiple requests (the original and a duplicate) come in at the same time, both will check the database, find no record with that identifier, and create two records (one each)
You need to isolate the transactions from each other if it is a single database (I know you said it is not, but I'm trying to explain my reasoning, so bear with me). For that Spring has the following annotation: @Transactional(isolation = Isolation.SERIALIZABLE). In this case @Transactional(isolation = Isolation.REPEATABLE_READ) may be enough, depending on the database.
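Putting the two together, a sketch of the service method could look like this (existsByArticleIdentifier and findByArticleIdentifier are assumed Spring Data derived queries on the non-key article_identifier column):

// Sketch only: check-then-save inside one strongly isolated transaction.
@Transactional(isolation = Isolation.SERIALIZABLE)
public Article createArticle(Article articleModel) {
    String identifier = articleModel.getArticleIdentifier();
    if (repository.existsByArticleIdentifier(identifier)) {
        // Return the already-stored record instead of inserting a duplicate.
        return repository.findByArticleIdentifier(identifier);
    }
    return repository.save(articleModel);
}

With SERIALIZABLE, one of two concurrent requests for the same identifier will typically fail with a serialization or lock error and must be retried, but you will not end up with two rows.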
Also, as it's a distributed application, how can I maintain consistency across multiple database transactions?
How is it distributed? You first need to think about the database. Is it a master-slave MySQL / Postgres / MongoDB setup? Is it some weird globally distributed system? Assuming it is the traditional master-slave setup, the write transaction will be handled by the master (to my knowledge all the selects belonging to the transaction will go there as well), so there should be no problem. However, the answer can only really be given if more details are provided.
We are working on a scenario of rate-limiting the total number of JSON records requested from an API to 10,000 per month.
We are storing the total count of records in a table against client_id and a timestamp (which is the primary key).
Per request we fetch the record from the table for that client with a timestamp within that month.
From this record we get the current count, increment it by the number of records in the request, and update the DB.
Using Spring transactions, the pseudocode is as follows:
@Transactional(propagation = Propagation.REQUIRES_NEW, isolation = Isolation.REPEATABLE_READ)
public void updateLimitData(String clientId, int currentRecordCount) {
    // step 1
    startOfMonthTimestamp = getStartOfMonth();
    endOfMonthTimestamp = getEndOfMonth();
    // step 2: read from DB
    latestLimitDetails = fetchFromDB(startOfMonthTimestamp, endOfMonthTimestamp, clientId);
    latestLimitDetails.count += currentRecordCount;
    // step 3: write back to DB
    saveToDB(latestLimitDetails);
}
We want to make sure that when multiple threads access the updateLimitData() method, each thread gets the up-to-date count for a clientId for the month and does not overwrite the count incorrectly.
In the above scenario, if multiple threads access updateLimitData() and reach step 3, the first thread will update "count" in the DB, and then the second thread will update "count" in the DB based on a value that may no longer be the latest count.
I understand that with Isolation.REPEATABLE_READ a write lock is placed on the rows only when the update is issued at step 3 (by which time the other thread already has stale data). How can I ensure that threads always get the latest count from the table in a multithreaded scenario?
One solution that came to my mind is synchronizing this block, but that will not work in a multi-server scenario.
Please provide a solution.
A transaction would not help you unless you lock the table/row whilst doing this operation (don't do that as it will affect performance).
You can migrate this to the database, doing the increment within the database using a stored procedure or function call. This ensures ACID and transactional safety, as it is built into the database.
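If a stored procedure is not an option, the same idea can be approximated from Spring Data JPA with a single atomic UPDATE; this is only a sketch, and the LimitData entity and its field names are assumptions:

// Sketch: push the read-modify-write into the database as one atomic UPDATE.
public interface LimitDataRepository extends JpaRepository<LimitData, Long> {

    @Modifying
    @Query("update LimitData l set l.count = l.count + :delta " +
           "where l.clientId = :clientId and l.timestamp between :start and :end")
    int addToCount(@Param("clientId") String clientId,
                   @Param("delta") long delta,
                   @Param("start") LocalDateTime start,
                   @Param("end") LocalDateTime end);
}

Because the database performs the read-modify-write itself while holding the row lock, two concurrent calls cannot lose an increment. (@Modifying queries still need to run inside a transaction, e.g. from a @Transactional service method.)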
I recommend doing this using the standard Spring Boot Actuator to produce a count of API calls; however, this will mean rewriting your service to use the Actuator endpoint and not the database. You can link this to your gateway/firewall/load balancer to deny access to the API once the quota is reached. This means that your API endpoint stays pure, and this logic is removed from your API call. All new APIs you develop will automatically get this functionality.
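If you go that route, the counting side could be as simple as a Micrometer counter (Micrometer is what backs Actuator's /actuator/metrics endpoint); the metric name, tag and client header below are assumptions:

// Sketch: count calls per client; quota enforcement would still live in the gateway.
@Component
public class ApiCallCountingFilter extends OncePerRequestFilter {

    private final MeterRegistry registry;

    public ApiCallCountingFilter(MeterRegistry registry) {
        this.registry = registry;
    }

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        String clientId = request.getHeader("X-Client-Id"); // assumed header
        registry.counter("api.calls", "client", String.valueOf(clientId)).increment();
        chain.doFilter(request, response);
    }
}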
I have the following problem: I am working on a Spring Boot application which offers REST services and uses a relational (SQL) database via spring-data-jpa.
I have two REST services:
- an entity-creation service, which creates the child entity and the parent entity and associates them in the same transaction. When this service ends, the data is committed to the database.
- an entity-consultation service, which fetches the parent entity with its children
These two services are annotated with the @Transactional annotation. In production it works well: I can create a parent entity with its children in one transaction (which is committed/ended), and fetch it in another transaction later.
The problem arises when I want to create integration tests. My idea was to annotate each test with the @Transactional annotation and roll back after each test. This way I keep my database clean between tests, and I don't have to generate the schema again or clean all the records in the database.
The integration test consists of creating a parent and its children and then reading them back, all in one transaction (as the test is annotated with @Transactional). When reading the entity previously created in the same transaction, I get the parent entity, but the children are not fetched (null value). I am not sure I understand the transaction mechanism very well: I was thinking that, with @Transactional on the test method, the services (also annotated with @Transactional) invoked by the test would detect and join the transaction opened by the test method (the propagation is configured to REQUIRED). Hence, as the transaction uses the same EntityManager, it should be able to return the relation between the parent entity and its children created earlier in the same transaction, even though the data has not been committed to the database. The strange thing is that it retrieves the parent entity (which has not yet been committed to the database), but not its children. Is my understanding of the transaction concept correct? If not, could someone explain what I am missing?
Also, if someone has done something similar, could they explain how they did it?
My code is quite complex. I first want to know whether I understand correctly how transactions are managed and whether someone has already done something similar. If it is really required, I can send more information about my implementation (how the transaction manager and the entity manager are initialized, the JPA entities, the services, etc.).
By injecting the EntityManager in my test and calling its flush method between the creation and the reading, the reading operation works well: I get the parent entity with its children. But then the data is written to the database during the creation so it can be read later during the read operation, and I don't want the transaction to be committed, as I need my test to work on an empty database. My misunderstanding is not so much about the transaction mechanism, but more about the entity manager: it does not keep the created entities and their relations as a cache...
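For reference, a sketch of that flush-but-never-commit setup (the service and entity names stand in for the real ones):

// Sketch of the approach described above.
@SpringBootTest
@Transactional   // each test's transaction is rolled back by default
public class ParentChildIntegrationTest {

    @Autowired
    private ParentCreationService creationService;

    @Autowired
    private ParentConsultationService consultationService;

    @PersistenceContext
    private EntityManager entityManager;

    @Test
    void createdChildrenAreVisibleInSameTransaction() {
        Long parentId = creationService.createParentWithChildren("parent", 2);

        // Send the pending INSERTs to the database and empty the persistence
        // context, so the next read is resolved against the (uncommitted) rows.
        entityManager.flush();
        entityManager.clear();

        Parent parent = consultationService.getParentWithChildren(parentId);
        assertEquals(2, parent.getChildren().size());
    }
}

Since nothing is committed, the rollback at the end of the test still leaves the database empty.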
This post helped me:
Issue with @Transactional annotations in Spring JPA
As a final word, I am thinking about calling an SQL script before each test to empty my database.
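Spring's test support can run such a script automatically, for example (script path assumed):

// Sketch: run a cleanup script before each test method.
@Sql(scripts = "/cleanup.sql", executionPhase = Sql.ExecutionPhase.BEFORE_TEST_METHOD)
@SpringBootTest
public class CleanDatabaseIntegrationTest {
    // ...
}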
I understand that data is always stale.
What is a way to handle a workflow task like Approve Invoice? The user is allowed to execute this task only once. When it is processed by an async service, it can take some seconds (or longer). In the meantime the user can approve the same invoice again, because the task has not yet been updated in the DB.
Any ideas about this are appreciated.
The domain model must enforce consistency. The model on the write side should not be considered stale, only the projections on the read side.
It doesn't matter if the approval event hasn't been projected into the read model. But if the user sends an invalid command based on stale data, the domain model needs to know that the approval had already happened.
Your domain's repository should always load the aggregate root in its latest state (no matter whether you use event sourcing or state-based persistence such as a SQL db).
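As a minimal sketch of what "the write side enforces consistency" can mean in code (names are illustrative):

// Sketch: the aggregate itself rejects a second approval, regardless of what
// the (possibly stale) read model currently shows.
public class Invoice {

    private final String id;
    private boolean approved;

    public Invoice(String id) {
        this.id = id;
    }

    public void approve() {
        if (approved) {
            throw new IllegalStateException("Invoice " + id + " has already been approved");
        }
        approved = true;
        // With event sourcing this would instead record an InvoiceApproved event.
    }
}

The command handler loads the invoice in its latest state through the repository, calls approve(), and persists the change; a duplicate approve command then fails (or can simply be ignored) even though the user's screen was stale.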