Do I need to deal with concurrency? - Spring

I have an application where I manage documents, and I would like to ask whether I need to deal with concurrency.
Let's say I have the method below (in a class annotated with @Service and @Transactional), and multiple requests come in that need to use this method.
Will Spring and the database handle the concurrency without synchronization? (My DB is MySQL, accessed via JPA.) In other words, will the first request to use this method be executed while another request waits until the previous one is done, so that nothing gets overwritten in the database?
Thanks for the help.
public void updateSharing(long userId, long documentId, int approval) {
    Optional<Document> optionalDocument = documentRepository.findById(documentId);
    User user = userService.findUserById(userId);
    if (optionalDocument.isPresent()) {
        Document document = optionalDocument.get();
        if (document.getDocumentState().getId() == 2) {
            documentRepository.updateSharing(userId, documentId, approval);
            if (approval == 0) {
                List<User> users = userService.getUsersForApprovingDocument(documentId);
                Map<String, String> map = emailService.createMessage(2, user, document);
                if (document.getUser().isActive()) {
                    users.add(document.getUser());
                }
                setDocumentType(documentId, 3);
                sendEmail(users, map.get("subject"), map.get("message"));
            } else if (isDocumentApproved(documentId)) {
                setDocumentType(documentId, 1);
                List<User> users = userService.getUsersForApprovingDocument(documentId);
                if (document.getUser().isActive()) {
                    users.add(document.getUser());
                }
                Map<String, String> map = emailService.createMessage(5, user, document);
                sendEmail(users, map.get("subject"), map.get("message"));
            }
        } else if (document.getDocumentState().getId() == 1) {
            documentRepository.updateSharing(userId, documentId, approval);
        } else {
            return;
        }
    }
}

You don't need to deal with concurrency in this situation.
For every request, the container creates a new thread, and each thread has its own stack.
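That said, separate threads do not by themselves prevent lost updates at the database level: two transactions can still read the same document state and overwrite each other's changes. If that ever becomes a concern, a common safeguard is JPA optimistic locking. A minimal sketch, assuming you can add a version column to the Document entity (the field is illustrative, not from the original code):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Document {

    @Id
    private Long id;

    // JPA increments this column on every update; a concurrent transaction
    // committing a stale copy then fails with an OptimisticLockException
    // instead of silently overwriting the row.
    @Version
    private Long version;

    // ... other fields ...
}

With this in place, the losing request can be retried or rejected rather than clobbering the winning request's write.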

Related

Concurrent transaction issue with a Keycloak user attribute (Java Spring Boot)

I manage our customers' points as a Keycloak user attribute.
I set 'point' as a user attribute, and I handle it with the Keycloak API in Java Spring Boot.
The flow for updating a point is:
point = getPointByUserEmail(userEmail); // get the point to update
point -= 10; // subtract points
updatePointByUserEmail(userEmail, point); // update the point
public Long getPointByUserEmail(String userEmail) {
    UserRepresentation userRepresentation = usersResource.search(userEmail, true).get(0);
    Map<String, List<String>> attributes = userRepresentation.getAttributes();
    if (attributes == null || attributes.get("point") == null)
        return null;
    return Long.parseLong(attributes.get("point").get(0));
}

public void updatePointByUserEmail(String userEmail, Long point) {
    UserRepresentation userRepresentation = usersResource.search(userEmail, true).get(0);
    UserResource userResource = usersResource.get(userRepresentation.getId());
    Map<String, List<String>> attributes = userRepresentation.getAttributes();
    attributes.put("point", Arrays.asList(point.toString()));
    userRepresentation.setAttributes(attributes);
    userResource.update(userRepresentation);
}
It works well.
But my problem is that when a user sends simultaneous requests to update the point at almost the same time, it doesn't work correctly.
For example, say there are 2 requests at once (initial point = 100, minus 10 points per request).
I expected the result to be 80 points, because 100 - (10 * 2) = 80, but it was 90 points.
So I think I need to set an isolation level on the transaction around the point.
In JPA there is the @Lock annotation... but how can I do that in Keycloak?
Is there any way to set an isolation level in the Keycloak API so that my function will work correctly?
This is the code where I handle the point:
public class someController {

    public ResponseEntity<String> methodToHandleRequest(@RequestBody Dto param, HttpServletRequest request) {
        ...
        Long point = null;
        try {
            point = userAttributesService.getPoint();
            if (point == null)
                throw new NullPointerException();
        } catch (Exception e) {
            e.printStackTrace();
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("error");
        }
        if (point < 10)
            return ResponseEntity.status(HttpStatus.PAYMENT_REQUIRED)
                    .body("you need at least 10 points (current: " + point + ")");
        userAttributesService.updatePoint(point - 10);
        ...
    }
}
I also tried managing the point with JPA, connecting it directly to the Keycloak database so the user attribute would be handled through the DB.
I found the DB table for user attributes, and the point value is there!
But when I update the point in the DB, the change doesn't show up in Keycloak... :'(

How to repeat a Job with a Partitioner when data is dynamic with Spring Batch?

I am trying to develop a batch process using Spring Batch + Spring Boot (Java config), but I have a problem doing so. I have a piece of software that has a database and a Java API, and I read records from there. The batch process should retrieve all the documents whose expiration date is earlier than a certain date, update the date, and save them again in the same database.
My first approach was reading the records 100 at a time: the ItemReader retrieves 100 records, I process them one by one, and finally I write them again. In the reader, I put this code:
public class DocumentItemReader implements ItemReader<Document> {

    public List<Document> documents = new ArrayList<>();

    @Override
    public Document read() throws Exception, UnexpectedInputException, ParseException, NonTransientResourceException {
        if (documents.isEmpty()) {
            getDocuments(); // This method retrieves 100 documents and stores them in the "documents" list.
            if (documents.isEmpty()) return null;
        }
        Document doc = documents.get(0);
        documents.remove(0);
        return doc;
    }
}
So, with this code, the reader reads from the database until no records are found. When the getDocuments() method doesn't retrieve any documents, the list is empty, the reader returns null, and the job finishes. Everything worked fine here.
However, the problem appears when I want to use several threads. In this case, I started using the Partitioner approach instead of multi-threading. The reason is that I read from the same database, so if I repeat the full step with several threads, all of them will find the same records, and I cannot use pagination (see below).
Another problem is that the database records are updated dynamically, so I cannot use pagination. For example, suppose I have 200 records, all of them about to expire, so the process is going to retrieve them. Now imagine I retrieve 10 with one thread, and before anything else, that thread processes one and updates it in the same database. The next thread cannot retrieve records 11 to 20, because the first record no longer appears in the search (its date has been updated, so it no longer matches the query).
It is a little difficult to understand, and some things may sound strange, but in my project:
I am forced to use the same database to read and write.
I can have millions of documents, so I cannot read all the records at once. I need to read them 100 by 100, or 500 by 500.
I need to use several threads.
I cannot use pagination, as the query to the database will retrieve different documents each time it is executed.
So, after hours of thinking, I believe the only possible solution is to repeat the job until the query retrieves no documents. Is this possible? I want to do something like the step does (do something until null is returned): repeat the job until the query returns zero records.
If this is not a good approach, I would appreciate other possible solutions.
Thank you.
Maybe you can add a partitioner to your step that will:
Select all the IDs of the data that need to be updated (and other columns if needed).
Split them into x partitions (x = gridSize parameter) and write them to temporary files (one per partition).
Register the filename to read in the executionContext.
Then your reader no longer reads from the database but from the partitioned file.
It seems complicated, but it's not that bad. Here is an example that handles millions of records using a JDBC query, but it can easily be transposed to your use case:
public class JdbcToFilePartitioner implements Partitioner {

    /** Number of records per database fetch. */
    private int fetchSize = 100;

    /** Working directory. */
    private File tmpDir;

    /** Limit on the number of items to select. */
    private Long nbItemMax;

    /** Extra parameters to copy into each partition's context (declared here so the snippet compiles). */
    private Map<String, Object> contextParameters;

    /** Hibernate session used to run the query (declared here so the snippet compiles). */
    private Session session;

    @Override
    public Map<String, ExecutionContext> partition(final int gridSize) {
        // Create a context for each partition
        Map<String, ExecutionContext> executionsContexte = createExecutionsContext(gridSize);
        // Fill each partition file with the ids to handle
        getIdsAndFillPartitionFiles(executionsContexte);
        return executionsContexte;
    }

    /**
     * @param gridSize number of partitions
     * @return map of execution contexts, one for each partition
     */
    private Map<String, ExecutionContext> createExecutionsContext(final int gridSize) {
        final Map<String, ExecutionContext> map = new HashMap<>();
        for (int partitionId = 0; partitionId < gridSize; partitionId++) {
            map.put(String.valueOf(partitionId), createContext(partitionId));
        }
        return map;
    }

    /**
     * @param partitionId id of the partition to create a context for
     * @return the created executionContext
     */
    private ExecutionContext createContext(final int partitionId) {
        final ExecutionContext context = new ExecutionContext();
        String fileName = tmpDir + File.separator + "partition_" + partitionId + ".txt";
        context.put(PartitionerConstantes.ID_GRID.getCode(), partitionId);
        context.put(PartitionerConstantes.FILE_NAME.getCode(), fileName);
        if (contextParameters != null) {
            for (Entry<String, Object> entry : contextParameters.entrySet()) {
                context.put(entry.getKey(), entry.getValue());
            }
        }
        return context;
    }

    private void getIdsAndFillPartitionFiles(final Map<String, ExecutionContext> executionsContexte) {
        List<BufferedWriter> fileWriters = new ArrayList<>();
        try {
            // One BufferedWriter per partition
            for (int i = 0; i < executionsContexte.size(); i++) {
                BufferedWriter bufferedWriter = new BufferedWriter(new FileWriter(executionsContexte.get(String.valueOf(i)).getString(
                        PartitionerConstantes.FILE_NAME.getCode())));
                fileWriters.add(bufferedWriter);
            }
            // Fetch the data
            ScrollableResults results = runQuery();
            // Iterate over the results and fill the files, round-robin across partitions
            int currentPartition = 0;
            int nbWriting = 0;
            while (results.next()) {
                fileWriters.get(currentPartition).write(results.get(0).toString());
                fileWriters.get(currentPartition).newLine();
                currentPartition++;
                nbWriting++;
                // Once we have written to every partition, start over at the first one
                if (currentPartition >= executionsContexte.size()) {
                    currentPartition = 0;
                }
                // Stop once we reach the maximum number of items to read
                if (nbItemMax != null && nbItemMax != 0 && nbWriting >= nbItemMax) {
                    break;
                }
            }
            // Close everything
            results.close();
            session.close();
            for (BufferedWriter bufferedWriter : fileWriters) {
                bufferedWriter.close();
            }
        } catch (IOException | SQLException e) {
            throw new UnexpectedJobExecutionException("Error writing partition file", e);
        }
    }

    private ScrollableResults runQuery() {
        ...
    }
}
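To complete the picture, the reader for each partition can then be a step-scoped FlatFileItemReader that picks its filename up from the step execution context. A minimal sketch, assuming PartitionerConstantes.FILE_NAME.getCode() resolves to "FILE_NAME" and that the partition files hold one numeric id per line:

import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.item.file.FlatFileItemReader;
import org.springframework.batch.item.file.builder.FlatFileItemReaderBuilder;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.core.io.FileSystemResource;

@Bean
@StepScope
public FlatFileItemReader<Long> partitionedIdReader(
        @Value("#{stepExecutionContext['FILE_NAME']}") String fileName) {
    return new FlatFileItemReaderBuilder<Long>()
            .name("partitionedIdReader")
            .resource(new FileSystemResource(fileName))
            // Each line of the partition file holds a single document id
            .lineMapper((line, lineNumber) -> Long.valueOf(line))
            .build();
}

Each partition's step then reads only its own file, so the threads never compete for the same database rows.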

Hibernate queries getting slower and slower

I'm working on a process that checks and updates data from an Oracle database. I'm using Hibernate and the Spring framework in my application.
The application reads a CSV file, processes the content, and then persists the entities:
public class Main {
    public static void main(String[] args) {
        Input input = ReadCSV(path);
        EntityList resultList = Process.process(input);
        WriteResult.write(resultList);
        ...
    }
}

// Process class that loops over the input
public class Process {
    public EntityList process(Input input) {
        EntityList results = ...;
        ...
        for (Line line : input.readLine()) {
            results.add(ProcessLine.process(line));
            ...
        }
        return results;
    }
}
// Retrieving and updating entities
class ProcessLine {

    @Autowired
    DomaineRepository domaineRepository;

    @Autowired
    CompanyDomaineService companydomaineService;

    @Transactional
    public MyEntity process(Line line) {
        // getCompanyByXX is a CrudRepository method with @Query that returns an entity object
        MyEntity companyToAttach = domaineRepository.getCompanyByCode(line.getCode());
        MyEntity companyToDetach = domaineRepository.getCompanyBySiret(line.getSiret());
        if (companyToDetach == null || companyToAttach == null) {
            throw new CustomException("Custom Exception");
        }
        // attachCompany retrieves some relationEntity, then removes companyToDetach
        // and adds companyToAttach; this updates the relationEntity.company attribute.
        companydomaineService.attachCompany(companyToAttach, companyToDetach);
        return companyToAttach;
    }
}
public class WriteResult {

    @Autowired
    DomaineRepository domaineRepository;

    @Transactional
    public void write(EntityList results) {
        for (MyEntity result : results) {
            domaineRepository.save(result);
        }
    }
}
The application works well on files with few lines, but when I try to process large files (200,000 lines), the performance slows drastically and I get an SQL timeout.
I suspect cache issues, but I'm wondering whether saving all the entities at the end of the processing is a bad practice?
The problem is your for loop, which saves the results one by one and thus issues single inserts, slowing everything down. Hibernate and Spring support batch inserts, which should be used whenever possible:
something like domaineRepository.saveAll(results)
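Note that saveAll(...) alone only helps if Hibernate's JDBC batching is actually enabled; by default Hibernate still issues one statement per entity. A minimal sketch of the relevant settings, assuming a Spring Boot application.properties:

# Group inserts/updates into JDBC batches of 50 statements
spring.jpa.properties.hibernate.jdbc.batch_size=50
# Order statements so rows for the same table batch together
spring.jpa.properties.hibernate.order_inserts=true
spring.jpa.properties.hibernate.order_updates=true

Also be aware that entities using IDENTITY id generation cannot have their inserts batched by Hibernate, so a sequence-based generator is preferable here.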
Since you are processing a lot of data, it might be better to do things in batches: instead of getting one company to attach at a time, get a list of companies to attach and process those, then get a list of companies to detach and process those.
public EntityList process(Input input) {
    EntityList results;
    List<Code> companiesToAdd = new ArrayList<>();
    List<Siret> companiesToRemove = new ArrayList<>();
    for (Line line : input.readLine()) {
        companiesToAdd.add(line.getCode());
        companiesToRemove.add(line.getSiret());
        ...
    }
    results = process(companiesToAdd, companiesToRemove);
    return results;
}

public List<MyEntity> process(List<Code> companiesToAdd, List<Siret> companiesToRemove) {
    List<MyEntity> attachList = domaineRepository.getCompanyByCodeIn(companiesToAdd);
    List<MyEntity> detachList = domaineRepository.getCompanyBySiretIn(companiesToRemove);
    if (attachList.isEmpty() || detachList.isEmpty()) {
        throw new CustomException("Custom Exception");
    }
    companydomaineService.attachCompany(attachList, detachList);
    return attachList;
}
The above is just pseudocode to point you in the right direction; you will need to work out what works for you.
For every line you read, you are doing 2 read operations here:
MyEntity companyToAttach = domaineRepository.getCompanyByCode(line.getCode());
MyEntity companyToDetach = domaineRepository.getCompanyBySiret(line.getSiret());
You can read more than one line, use the in query, and then process that list of companies.
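With Spring Data JPA, the In suffix on a derived query method is enough to generate the IN (...) clause, so no explicit @Query is required. A minimal sketch of the repository methods used above, assuming MyEntity has code and siret properties of the matching types (names taken from the answer's own pseudocode):

import java.util.List;
import org.springframework.data.repository.CrudRepository;

public interface DomaineRepository extends CrudRepository<MyEntity, Long> {
    // SELECT ... WHERE code IN (:companiesToAdd)
    List<MyEntity> getCompanyByCodeIn(List<Code> companiesToAdd);

    // SELECT ... WHERE siret IN (:companiesToRemove)
    List<MyEntity> getCompanyBySiretIn(List<Siret> companiesToRemove);
}

Two IN queries then replace the 400,000 single-row lookups the original loop performs on a 200,000-line file.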

Creating webhook-notifications in testing environment

I'm currently trying to create a test webhook notification as shown in the documentation:
HashMap<String, String> sampleNotification = gateway.webhookTesting().sampleNotification(
    WebhookNotification.Kind.SUBSCRIPTION_WENT_PAST_DUE, "my_id"
);

WebhookNotification webhookNotification = gateway.webhookNotification().parse(
    sampleNotification.get("bt_signature"),
    sampleNotification.get("bt_payload")
);

webhookNotification.getSubscription().getId();
// "my_id"
First off, I don't know what my_id is actually supposed to be. Is it supposed to be a plan ID? Or a subscription ID?
I've tested all of it: I set it to an existing billing plan in my vault, and I also tried creating a Customer down to an actual Subscription like this:
public class WebhookChargedSuccessfullyLocal {

    private final static BraintreeGateway BT;

    static {
        String btConfig = "C:\\workspaces\\mz\\mz-server\\mz-web-server\\src\\main\\assembly\\dev\\braintree.properties";
        Braintree.initialize(btConfig);
        BT = Braintree.instance();
    }

    public static void main(String[] args) {
        WebhookChargedSuccessfullyLocal webhookChargedSuccessfullyLocal = new WebhookChargedSuccessfullyLocal();
        webhookChargedSuccessfullyLocal.post();
    }

    public void post() {
        CustomerRequest customerRequest = new CustomerRequest()
                .firstName("Testuser")
                .lastName("Tester");
        Result<Customer> createUserResult = BT.customer().create(customerRequest);
        if (createUserResult.isSuccess() == false) {
            System.err.println("Could not create customer");
            System.exit(1);
        }

        Customer customer = createUserResult.getTarget();

        PaymentMethodRequest paymentMethodRequest = new PaymentMethodRequest()
                .customerId(customer.getId())
                .paymentMethodNonce("fake-valid-visa-nonce");
        Result<? extends PaymentMethod> createPaymentMethodResult = BT.paymentMethod().create(paymentMethodRequest);
        if (createPaymentMethodResult.isSuccess() == false) {
            System.err.println("Could not create payment method");
            System.exit(1);
        }
        if (!(createPaymentMethodResult.getTarget() instanceof CreditCard)) {
            System.err.println("Unexpected error. Result is not a credit card.");
            System.exit(1);
        }

        CreditCard creditCard = (CreditCard) createPaymentMethodResult.getTarget();

        SubscriptionRequest subscriptionRequest = new SubscriptionRequest()
                .paymentMethodToken(creditCard.getToken())
                .planId("mmb2");
        Result<Subscription> createSubscriptionResult = BT.subscription().create(subscriptionRequest);
        if (createSubscriptionResult.isSuccess() == false) {
            System.err.println("Could not create subscription");
            System.exit(1);
        }

        Subscription subscription = createSubscriptionResult.getTarget();

        HashMap<String, String> sampleNotification = BT.webhookTesting()
                .sampleNotification(WebhookNotification.Kind.SUBSCRIPTION_CHARGED_SUCCESSFULLY, subscription.getId());
        WebhookNotification webhookNotification = BT.webhookNotification()
                .parse(
                        sampleNotification.get("bt_signature"),
                        sampleNotification.get("bt_payload")
                );

        System.out.println(webhookNotification.getSubscription().getId());
    }
}
but all I'm getting is a WebhookNotification instance that has almost nothing set. Only its ID and the timestamp appear to be set, but that's it.
What I expected:
I expected to receive a Subscription object that tells me which customer has subscribed to it, as well as e.g. all add-ons included in the billing plan.
Is there a way to get such test notifications in sandbox mode?
Full disclosure: I work at Braintree. If you have any further questions, feel free to contact support.
webhookNotification.getSubscription().getId(); will return the ID of the subscription associated with sampleNotification, which can be anything for testing purposes, but will be a real subscription ID in a production environment.
Receiving a dummy object from webhookTesting().sampleNotification() is the expected behavior; it is in place to help you ensure that all kinds of webhooks can be caught correctly. Once that logic is in place, you can specify your endpoint to receive real webhook notifications in the Sandbox Gateway under Settings > Webhooks.
In the case of SUBSCRIPTION_CHARGED_SUCCESSFULLY you will indeed receive a Subscription object containing add-on information as well as an array of Transaction objects containing customer information.
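For reference, parsing a real notification uses the same parse(signature, payload) call shown above, fed from the two POST parameters Braintree sends to your endpoint. A minimal sketch, assuming Spring MVC and a BraintreeGateway bean configured elsewhere (the /webhooks path is illustrative):

import com.braintreegateway.BraintreeGateway;
import com.braintreegateway.WebhookNotification;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class WebhookController {

    private final BraintreeGateway gateway;

    public WebhookController(BraintreeGateway gateway) {
        this.gateway = gateway;
    }

    @PostMapping("/webhooks")
    public ResponseEntity<String> handle(@RequestParam("bt_signature") String signature,
                                         @RequestParam("bt_payload") String payload) {
        // Verifies the signature and deserializes the payload
        WebhookNotification notification = gateway.webhookNotification().parse(signature, payload);
        if (notification.getKind() == WebhookNotification.Kind.SUBSCRIPTION_CHARGED_SUCCESSFULLY) {
            // Real notifications carry the full Subscription, unlike the sandbox samples
            System.out.println(notification.getSubscription().getId());
        }
        return ResponseEntity.ok("");
    }
}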

PrepareResponse().AsActionResult() throws unsupported exception DotNetOpenAuth CTP

Currently I'm developing an OAuth2 authorization server using the DotNetOpenAuth CTP version. My authorization server is an ASP.NET MVC 3 app based on the sample provided by the library. Everything works fine until the app reaches the point where the user authorizes the consumer client.
There's an action inside my OAuth controller which takes care of the authorization process, and is very similar to the equivalent action in the sample:
[Authorize, HttpPost, ValidateAntiForgeryToken]
public ActionResult AuthorizeResponse(bool isApproved)
{
    var pendingRequest = this.authorizationServer.ReadAuthorizationRequest();
    if (pendingRequest == null)
    {
        throw new HttpException((int)HttpStatusCode.BadRequest, "Missing authorization request.");
    }

    IDirectedProtocolMessage response;
    if (isApproved)
    {
        var client = MvcApplication.DataContext.Clients.First(c => c.ClientIdentifier == pendingRequest.ClientIdentifier);
        client.ClientAuthorizations.Add(
            new ClientAuthorization
            {
                Scope = OAuthUtilities.JoinScopes(pendingRequest.Scope),
                User = MvcApplication.LoggedInUser,
                CreatedOn = DateTime.UtcNow,
            });
        MvcApplication.DataContext.SaveChanges();
        response = this.authorizationServer.PrepareApproveAuthorizationRequest(pendingRequest, User.Identity.Name);
    }
    else
    {
        response = this.authorizationServer.PrepareRejectAuthorizationRequest(pendingRequest);
    }

    return this.authorizationServer.Channel.PrepareResponse(response).AsActionResult();
}
Every time the program reaches this line:
this.authorizationServer.Channel.PrepareResponse(response).AsActionResult();
the system throws an exception which I have researched without success. The exception is the following:
Only parameterless constructors and initializers are supported in LINQ to Entities.
The stack trace: http://pastebin.com/TibCax2t
The only thing I've done differently from the sample is that I used Entity Framework's code-first approach, and I think the sample was done using a designer which auto-generated the entities.
Thank you in advance.
If you started from the example, the problem Andrew is talking about lies in DatabaseKeyNonceStore.cs. The exception is raised by one of these two methods:
public CryptoKey GetKey(string bucket, string handle) {
    // It is critical that this lookup be case-sensitive, which can only be configured at the database.
    var matches = from key in MvcApplication.DataContext.SymmetricCryptoKeys
                  where key.Bucket == bucket && key.Handle == handle
                  select new CryptoKey(key.Secret, key.ExpiresUtc.AsUtc());
    return matches.FirstOrDefault();
}

public IEnumerable<KeyValuePair<string, CryptoKey>> GetKeys(string bucket) {
    return from key in MvcApplication.DataContext.SymmetricCryptoKeys
           where key.Bucket == bucket
           orderby key.ExpiresUtc descending
           select new KeyValuePair<string, CryptoKey>(key.Handle, new CryptoKey(key.Secret, key.ExpiresUtc.AsUtc()));
}
I resolved it by moving the initialization outside of the query:
public CryptoKey GetKey(string bucket, string handle) {
    // It is critical that this lookup be case-sensitive, which can only be configured at the database.
    var matches = from key in db.SymmetricCryptoKeys
                  where key.Bucket == bucket && key.Handle == handle
                  select key;
    var match = matches.FirstOrDefault();
    // Guard against a missing key; FirstOrDefault() returns null when there is no match.
    if (match == null)
        return null;
    return new CryptoKey(match.Secret, match.ExpiresUtc.AsUtc());
}

public IEnumerable<KeyValuePair<string, CryptoKey>> GetKeys(string bucket) {
    var matches = from key in db.SymmetricCryptoKeys
                  where key.Bucket == bucket
                  orderby key.ExpiresUtc descending
                  select key;

    var en = new List<KeyValuePair<string, CryptoKey>>();
    foreach (var key in matches)
        en.Add(new KeyValuePair<string, CryptoKey>(key.Handle, new CryptoKey(key.Secret, key.ExpiresUtc.AsUtc())));
    return en;
}
I'm not sure that this is the best way, but it works!
It looks like your ICryptoKeyStore implementation may be attempting to store CryptoKey directly, but that class is not compatible with Entity Framework (because it doesn't have a public default constructor). Instead, define your own entity class for storing the data in CryptoKey; your ICryptoKeyStore is then responsible for translating between the two data types for persistence and retrieval.
