I have created a simple CRUD API using Spring Data JPA in my Spring Boot application. My POST method in the controller looks like this:
@RequestMapping(value = "/article", method = RequestMethod.POST, produces = "application/json")
public Article createArticle(@RequestBody Article article) {
    return service.createArticle(article);
}
The service method is as follows:
@Override
public Article createArticle(Article articleModel) {
    return repository.save(articleModel);
}
My JSON payload looks like this:
{
    "article_nm": "A1",
    "article_identifier": "unique identifier"
}
Now I want to make my POST request idempotent, so that even if I receive a payload with the same article_identifier again, no new record is created in the DB.
I can't make any schema/constraint change in the database, and the article_identifier field is not the primary key of the table.
I understand that I could first check the database and return the already-saved record in the response if it exists. But if multiple requests (the original and a duplicate) arrive at the same time, both will check the database, find no record with that identifier, and create two records (one each). Also, since it's a distributed application, how can I maintain consistency across multiple database transactions?
How can I use some locking mechanism so that there are never two records with the same article_identifier? Can somebody please suggest some references on how to implement this in Spring Boot?
Idempotency in this case is needed to solve the post-back (or double POST request) problem. The simplest way would be to just check at the service level whether an article with the given identifier already exists (as you pointed out). You can use the repository.exists() variations for that.
I understand that I could first check the database and return the already-saved record in the response if it exists
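For example, something along these lines with a derived query (the repository and method names here are assumptions based on your payload, not taken from your code):

import java.util.Optional;
import org.springframework.data.jpa.repository.JpaRepository;

// Hypothetical repository; assumes the Article entity maps article_identifier
// to a field named articleIdentifier.
public interface ArticleRepository extends JpaRepository<Article, Long> {
    boolean existsByArticleIdentifier(String articleIdentifier);
    Optional<Article> findByArticleIdentifier(String articleIdentifier);
}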
As for
if multiple requests (the original and a duplicate) arrive at the same time, both will check the database, find no record with that identifier, and create two records (one each)
You need to isolate the transactions from each other if it is a single database (I know you said it is not, but I'm trying to explain my reasoning, so bear with me). For that, Spring has the following annotation: @Transactional(isolation = Isolation.SERIALIZABLE). Although in this case @Transactional(isolation = Isolation.REPEATABLE_READ) would be enough.
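A rough sketch of how the check-then-save could look in your service under that isolation level (it builds on the hypothetical ArticleRepository above and assumes Article has a getArticleIdentifier() getter):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ArticleService {

    private final ArticleRepository repository;

    public ArticleService(ArticleRepository repository) {
        this.repository = repository;
    }

    // Check and save inside one serializable transaction so that two concurrent
    // duplicates cannot both pass the existence check and insert a row.
    @Transactional(isolation = Isolation.SERIALIZABLE)
    public Article createArticle(Article article) {
        return repository.findByArticleIdentifier(article.getArticleIdentifier())
                .orElseGet(() -> repository.save(article));
    }
}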
Also, since it's a distributed application, how can I maintain consistency across multiple database transactions?
How is it distributed? You first need to think about the database. Is it master-slave MySQL / Postgres / MongoDB? Is it some weird globally distributed system? Assuming it is the traditional master-slave setup, the write transaction will be handled by the master (to my knowledge, all the selects belonging to the transaction will go there as well), so there should be no problem. However, the answer can only really be given if more details are provided.
Related
I have two frontends consuming JSON from two different backends using JSON Web Tokens. These backends act on the same database.
In the DB, for example, I have the Driver, Customer and Trip tables. The customer or the driver can cancel a trip only if it has not been cancelled beforehand by one of them. Some transactions are recorded during a cancellation.
How can I prevent a double execution in this case, when the customer and the driver simultaneously send a request for trip cancellation?
I am using Spring Boot (RESTful) and Spring JPA.
Any help will be greatly appreciated.
Edit:
Assuming these backends are A and B: the Customer requests cancellation from backend A, and the Driver from backend B.
Use optimistic locking. Your code would look as follows:
@Entity
public class Trip {

    // JPA bumps this version column on every update and uses it to detect
    // concurrent modifications.
    @Version
    @NotNull
    private Long version;
    ...
}
It works as follows: each change modifies the version. Suppose two users (or two services) loaded the same version of the trip. Now they both try to cancel it, i.e. they both try to modify it. Along with their changes they both send the version they read. JPA checks whether the version in the update statement is still the same as the version in the database. So the first request wins and is executed; during the execution the version is incremented.
Now the 2nd request arrives and also wants to cancel the trip. JPA will see that the version attribute in its update statement is older (less) than the version value in the database. Thus the 2nd request will not be executed and an OptimisticLockException will be thrown.
You can catch this exception, inform the user that the data was changed in the meantime, and suggest that they reload it. The user reloads the data and sees that the trip has already been cancelled.
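A minimal sketch of catching that in a service (TripRepository, the cancelled flag and the return-value strategy are assumptions for illustration; Spring translates the JPA exception into an OptimisticLockingFailureException):

import org.springframework.dao.OptimisticLockingFailureException;
import org.springframework.stereotype.Service;

@Service
public class TripCancellationService {

    private final TripRepository repository; // hypothetical Spring Data repository for Trip

    public TripCancellationService(TripRepository repository) {
        this.repository = repository;
    }

    // The trip passed in is the detached copy the client loaded, including its version.
    public boolean cancelTrip(Trip trip) {
        try {
            trip.setCancelled(true);   // hypothetical status flag
            repository.save(trip);     // the update checks the @Version column
            return true;
        } catch (OptimisticLockingFailureException e) {
            // The other party modified (e.g. already cancelled) the trip first;
            // tell the caller to reload and show the current state.
            return false;
        }
    }
}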
I'm using Spring Boot 2.3, Spring Data REST, Spring HATEOAS and Hibernate.
Let's consider a simple use case, such as a user creating an invoice in a web client, or an inventory list for a warehouse. When the user submits the form, hundreds of rows could be sent, and these rows can have links to other entities.
In the case of the invoice, for example, each row can have a product reference that is passed to the server as a link.
That link is translated by Spring into an entity using the repository. My point is that for every row, a query to load the product is run.
This means that everything will be really slow during the insert (the n+1 select problem).
Probably I missed something in the logic, but I didn't find concrete examples that focus on how to handle a big number of link -> entity translations.
Do you have any hint about it?
Is your point that many entities will be loaded when the linked entities are resolved on the server? Hibernate (as well as Spring) has a lazy loading mechanism - https://blog.ippon.tech/boost-the-performance-of-your-spring-data-jpa-application/ - so only the necessary entities will be populated. Please correct me if I misunderstood your question.
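To illustrate what that lazy mechanism looks like on the mapping side, a small sketch (InvoiceRow and Product are made-up names, not from the question):

import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;

@Entity
public class InvoiceRow {

    @Id
    @GeneratedValue
    private Long id;

    // With LAZY fetching the product is only selected from the database
    // when one of its fields is actually accessed.
    @ManyToOne(fetch = FetchType.LAZY)
    private Product product;
}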
I have a database, and in that database there are many tables. I want to fetch the data from any one of those tables by entering a query from the front-end application. I'm not doing any manipulation of the data, just retrieving it from the database.
Also, mapping the data would require writing many entity or POJO classes, so I don't want to map the data to any objects. How can I achieve this?
In this case, assuming the mapping of tables is not relevant, you don't need to use JPA/Hibernate at all.
You can use the old, battle-tested JdbcTemplate, which can execute a query of your choice (that you'll pass from the client), serialize the response to a JSONObject and return it as the response from your controller.
The client side will be responsible for rendering the result.
You might also query the database metadata to obtain information about column names, types, etc., so that the client side gets this information as well and can show the results in a more convenient / "advanced" way.
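A rough sketch of the idea (the endpoint and request shape are made up; queryForList already returns each row as a column-name -> value map, which Jackson serializes to JSON for you):

import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class AdhocQueryController {

    private final JdbcTemplate jdbcTemplate;

    public AdhocQueryController(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Executes whatever SQL the client sends and returns the rows as JSON.
    // As noted below, this is wide open from a security point of view.
    @PostMapping("/query")
    public List<Map<String, Object>> runQuery(@RequestBody String sql) {
        return jdbcTemplate.queryForList(sql);
    }
}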
Beware of the security implications, though. Basically, it means the client will be able to delete all the records in the database with a simple query, and you won't be able to prevent it :)
One advantage of document DBs like Couchbase is schemaless entities. It gives me the freedom to add new attributes to a document without any schema change.
Using Couchbase's JsonObject and JsonDocument, my code stays generic and performs CRUD operations without any modification whenever a new attribute is added to the document. Refer to this example where no entities are created.
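Roughly, what I mean is something like this (SDK 2.x style; the bucket name, document id and attributes are made up):

import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.document.JsonDocument;
import com.couchbase.client.java.document.json.JsonObject;

public class GenericCrudExample {

    public static void main(String[] args) {
        Bucket bucket = CouchbaseCluster.create("localhost").openBucket("articles");

        // Any attribute can be added here without touching an entity class.
        JsonObject content = JsonObject.create()
                .put("article_nm", "A1")
                .put("new_attribute", "added later without any schema change");

        bucket.upsert(JsonDocument.create("article::1", content));
        System.out.println(bucket.get("article::1").content());
    }
}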
However, if I follow the usual Spring Data approach of creating entity classes, I don't get the full benefit of this flexibility. I will end up changing code whenever I add a new attribute to my document.
Is there an approach to have a generic entity using Spring Data? Or is Spring Data simply not suitable for schemaless DBs? Or is my understanding incorrect?
I would argue the opposite is true.
One way or another, if you introduce a new field you have to handle the existing data that doesn't have that field.
Either you update all your documents to include that field - that is what schema-based stores basically force you to do.
Or you leave your store as it is and let your application handle the issue. With Spring Data you have some nice and obvious ways to handle that in a consistent fashion, e.g. by having a default value in the entity or handling it in a listener.
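For instance, a small sketch of the default-value option with Spring Data Couchbase (the entity and the new category field are just examples; documents stored before the field existed simply come back with the default):

import org.springframework.data.annotation.Id;
import org.springframework.data.couchbase.core.mapping.Document;

@Document
public class Article {

    @Id
    private String id;

    private String articleNm;

    // Newly introduced attribute: old documents without it keep this default,
    // so no migration of the existing data is needed.
    private String category = "uncategorized";

    // getters and setters omitted
}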
I'm looking for a solution with Spring / Camel to consume multiple REST services at runtime, create tables to store the data coming from the REST APIs, and compare the data dynamically. I don't know the schema of the JSON APIs in advance, so I can't generate the Java client classes or the JPA persistent entity classes.
You'll need to think about this differently. I'd forget about Java POJO classes that you don't have and can't create, since the class structure isn't known in advance. So anything with POJO -> entity binding would be pretty useless.
One solution is to simply parse the XML or JSON body manually with an event-based parser (like SAX for XML) and build an SQL CREATE string as you go through the document. Your field and table names would correspond to the tags in the document. Without access to an XSD or other structure description, no metadata is available for field lengths or types - make everything a really long VARCHAR? Also, perhaps an XML or another kind of database might suit your problem domain better.
In any case, you could include such a thing right in your Camel route as a Processor that processes the body and creates the necessary tables if they don't already exist. You could even alter a table's column lengths in the process when you encounter a field value that is longer than what's currently defined.
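A very rough sketch of such a Processor for a flat JSON body (the table name, VARCHAR sizing and the use of Jackson are assumptions; a real version would need escaping, type handling and the "already exists"/ALTER logic):

import java.util.Iterator;
import java.util.StringJoiner;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class CreateTableProcessor implements Processor {

    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    public void process(Exchange exchange) throws Exception {
        JsonNode root = mapper.readTree(exchange.getIn().getBody(String.class));

        // One column per top-level field; everything is a long VARCHAR because
        // no metadata about types or lengths is available.
        StringJoiner columns = new StringJoiner(", ");
        Iterator<String> fieldNames = root.fieldNames();
        while (fieldNames.hasNext()) {
            columns.add(fieldNames.next() + " VARCHAR(4000)");
        }

        // Hand the generated DDL to the next step in the route, e.g. a jdbc endpoint.
        exchange.getIn().setBody("CREATE TABLE IF NOT EXISTS api_data (" + columns + ")");
    }
}

In the route it could sit right before the database step, e.g. .process(new CreateTableProcessor()).to("jdbc:dataSource").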