I manage our customers' points as a Keycloak user attribute.
I store 'point' as a user attribute and manipulate it through the Keycloak Admin API in a Java Spring Boot application.
The flow for updating a point is:
point = getPointByUserEmail(userEmail);   // read the current point
point -= 10;                              // subtract points
updatePointByUserEmail(userEmail, point); // write the new point
public Long getPointByUserEmail(String userEmail) {
    UserRepresentation userRepresentation = usersResource.search(userEmail, true).get(0);
    Map<String, List<String>> attributes = userRepresentation.getAttributes();
    if (attributes == null || attributes.get("point") == null)
        return null;
    return Long.parseLong(attributes.get("point").get(0));
}
public void updatePointByUserEmail(String userEmail, Long point) {
    UserRepresentation userRepresentation = usersResource.search(userEmail, true).get(0);
    UserResource userResource = usersResource.get(userRepresentation.getId());
    Map<String, List<String>> attributes = userRepresentation.getAttributes();
    if (attributes == null)
        attributes = new HashMap<>(); // getAttributes() can return null; guard against NPE
    attributes.put("point", Arrays.asList(point.toString()));
    userRepresentation.setAttributes(attributes);
    userResource.update(userRepresentation);
}
This works well for a single request.
But my problem is that when a user sends two update requests at almost the same time, it doesn't work correctly.
For example, with two simultaneous requests (initial point = 100, minus 10 per request), I expected the result to be 80, because 100 - (10 * 2) = 80.
But it was 90: both requests read 100 before either one wrote, so both wrote back 90 (a classic lost update).
So I think I need some kind of transaction isolation around the point update.
In JPA there is the @Lock annotation... but how can I do that with Keycloak?
Is there any way to set an isolation level (or a lock) through the Keycloak API so that my function works correctly?
This is the code where I handle the point:
public class SomeController {
    public ResponseEntity<String> methodToHandleRequest(@RequestBody Dto param, HttpServletRequest request) {
        ...
        Long point = null;
        try {
            point = userAttributesService.getPoint();
            if (point == null)
                throw new NullPointerException();
        } catch (Exception e) {
            e.printStackTrace();
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("error");
        }
        if (point < 10)
            return ResponseEntity.status(HttpStatus.PAYMENT_REQUIRED)
                    .body("you need at least 10 points (current: " + point + ")");
        userAttributesService.updatePoint(point - 10);
        ...
    }
}
I also tried managing the point with JPA, handling the user attribute through the database directly.
I connected JPA to the Keycloak DB, found the table for user attributes, and the point value is there!
But when I update the point in that DB table, the change doesn't show up in Keycloak... :'(
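For reference, the Keycloak Admin REST API does not expose transactions, isolation levels, or row locks, so one possible workaround (a minimal sketch, assuming a single application instance and the getPointByUserEmail/updatePointByUserEmail methods above) is to serialize the read-modify-write per user inside the application:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class PointService {

    // One lock per user email, so concurrent requests for the same user
    // execute the read-modify-write sequentially.
    private final ConcurrentHashMap<String, ReentrantLock> userLocks = new ConcurrentHashMap<>();

    public boolean deductPoints(String userEmail, long amount) {
        ReentrantLock lock = userLocks.computeIfAbsent(userEmail, k -> new ReentrantLock());
        lock.lock();
        try {
            Long point = getPointByUserEmail(userEmail);        // read
            if (point == null || point < amount) {
                return false;                                   // not enough points
            }
            updatePointByUserEmail(userEmail, point - amount);  // write
            return true;
        } finally {
            lock.unlock();
        }
    }

    // getPointByUserEmail / updatePointByUserEmail as defined in the question ...
    private Long getPointByUserEmail(String userEmail) { /* see question */ return null; }
    private void updatePointByUserEmail(String userEmail, Long point) { /* see question */ }
}

Note that this only serializes requests within a single JVM; with several application instances, a shared lock (for example a database row lock or a distributed lock) would be needed instead.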
The goal of my Spring Boot WebFlux R2DBC application: the controller accepts a request including a list of DB UPDATE or INSERT details, and responds with a result summary.
I can write a ReactiveCrudRepository-based repository to implement each DB operation, but I don't know how to write the service that groups the execution of the list of DB operations and composes a result summary response.
I am new to Java reactive programming. Thanks for any suggestions and help.
Chen
I got the hint from here: https://www.vinsguru.com/spring-webflux-aggregation/. The idea is:
From the request, create 3 Monos:
Mono<List<Long>> monoEndDateSet -- DB row ids of the update operations;
Mono<List<Long>> monoCreateList -- DB row ids of the newly inserted rows;
Mono<ChangeSupplyResponse> monoRespFilled -- the response with some known fields partly filled;
then use Mono.zip to aggregate the 3 Monos, and map the resulting Tuple3 into a single Mono to return.
Below is the key part of the code:
public Mono<ChangeSupplyResponse> changeSupplies(ChangeSupplyRequest csr) {
    ChangeSupplyResponse resp = ChangeSupplyResponse.builder().build();
    resp.setEventType(csr.getEventType());
    resp.setSupplyOperationId(csr.getSupplyOperationId());
    resp.setTeamMemberId(csr.getTeamMemberId());
    resp.setRequestTimeStamp(csr.getTimestamp());
    resp.setProcessStart(OffsetDateTime.now());
    resp.setUserId(csr.getUserId());
    Mono<List<Long>> monoEndDateSet = getEndDateIdList(csr);
    Mono<List<Long>> monoCreateList = getNewSupplyEntityList(csr);
    Mono<ChangeSupplyResponse> monoRespFilled = Mono.just(resp);
    return Mono.zip(monoRespFilled, monoEndDateSet, monoCreateList).map(this::combine).as(operator::transactional);
}
private ChangeSupplyResponse combine(Tuple3<ChangeSupplyResponse, List<Long>, List<Long>> tuple) {
    ChangeSupplyResponse resp = tuple.getT1().toBuilder().build();
    List<Long> endDateIds = tuple.getT2();
    resp.setEndDatedDemandStreamSupplyIds(endDateIds);
    List<Long> newIds = tuple.getT3();
    resp.setNewCreatedDemandStreamSupplyIds(newIds);
    resp.setSuccess(true);
    Duration span = Duration.between(resp.getProcessStart(), OffsetDateTime.now());
    resp.setProcessDurationMillis(span.toMillis());
    return resp;
}
private Mono<List<Long>> getNewSupplyEntityList(ChangeSupplyRequest csr) {
    Flux<DemandStreamSupplyEntity> fluxNewCreated = Flux.empty();
    for (SrmOperation so : csr.getOperations()) {
        if (so.getType() == SrmOperationType.createSupply) {
            DemandStreamSupplyEntity e = buildEntity(so, csr);
            fluxNewCreated = fluxNewCreated.mergeWith(this.demandStreamSupplyRepository.save(e));
        }
    }
    return fluxNewCreated.map(e -> e.getDemandStreamSupplyId()).collectList();
}
...
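As a side note, the imperative loop in getNewSupplyEntityList can also be written fully declaratively; a minimal sketch using the same repository, entity, and helper names as above:

private Mono<List<Long>> getNewSupplyEntityList(ChangeSupplyRequest csr) {
    return Flux.fromIterable(csr.getOperations())
            .filter(so -> so.getType() == SrmOperationType.createSupply)
            .map(so -> buildEntity(so, csr))
            .flatMap(demandStreamSupplyRepository::save)          // save each new entity
            .map(DemandStreamSupplyEntity::getDemandStreamSupplyId)
            .collectList();
}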
I have an application where I am managing documents. I would like to ask whether I need to deal with concurrency.
Let's say I have the method below (in a class annotated with @Service and @Transactional) and multiple requests come in that need to use this method.
Will Spring and the database handle the concurrency without explicit synchronization (my DB is MySQL, with JPA)? That is, will the first request to use this method be executed while other requests wait until the previous one is done, so that nothing gets overwritten in the database?
Thanks for the help.
public void updateSharing(long userId, long documentId, int approval) {
    Optional<Document> optionalDocument = documentRepository.findById(documentId);
    User user = userService.findUserById(userId);
    if (optionalDocument.isPresent()) {
        Document document = optionalDocument.get();
        if (document.getDocumentState().getId() == 2) {
            documentRepository.updateSharing(userId, documentId, approval);
            if (approval == 0) {
                List<User> users = userService.getUsersForApprovingDocument(documentId);
                Map<String, String> map = emailService.createMessage(2, user, document);
                if (document.getUser().isActive()) {
                    users.add(document.getUser());
                }
                setDocumentType(documentId, 3);
                sendEmail(users, map.get("subject"), map.get("message"));
            } else if (isDocumentApproved(documentId)) {
                setDocumentType(documentId, 1);
                List<User> users = userService.getUsersForApprovingDocument(documentId);
                if (document.getUser().isActive()) {
                    users.add(document.getUser());
                }
                Map<String, String> map = emailService.createMessage(5, user, document);
                sendEmail(users, map.get("subject"), map.get("message"));
            }
        } else if (document.getDocumentState().getId() == 1) {
            documentRepository.updateSharing(userId, documentId, approval);
        } else {
            return;
        }
    }
}
You don't need to deal with concurrency in this situation.
For every request, the container creates a new thread, and each thread has its own stack.
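As a quick illustration of that point (a hypothetical endpoint, not part of the original answer), logging the current thread name shows each request being served by its own worker thread:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ThreadDemoController {

    @GetMapping("/whoami")
    public String whoAmI() {
        // Concurrent requests print different worker threads,
        // e.g. "http-nio-8080-exec-1", "http-nio-8080-exec-2", ...
        return Thread.currentThread().getName();
    }
}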
I'm facing a singular problem...
I need to update an entity, but I don't know when it is really updated.
My method is:
@Override
@Transactional(isolation = Isolation.SERIALIZABLE)
public void lightOn(int idInterruttore) {
    Interruttore interruttore = dao.findById(idInterruttore);
    String inputPin = interruttore.getInputPin();
    String pinName = interruttore.getRelePin();
    GpioController gpio = interruttore.getGpio();
    GpioPinDigitalOutput rele = gpio.provisionDigitalOutputPin(RaspiPin.getPinByName(pinName));
    try {
        DateTime date = new DateTime();
        Date now = date.toDate();
        int i = 1;
        while (getInput(inputPin, gpio) != 1) {
            if (i > 1) {
                logger.debug(String.format("Try n %s", i));
            }
            pushButton(rele);
            Thread.sleep(1000);
            i++;
        }
        dao.updateInterruttore(idInterruttore, now, true);
    } catch (GpioPinExistsException | InterruptedException gpe) {
        logger.error("GPIO already exists", gpe); // original message: "GPIO già esistente"
    } finally {
        gpio.unprovisionPin(rele);
    }
    logger.debug(String.format("After the update status should be true and it's %s",
            interruttore.isStato()));
}
updateInterruttore is shown below (I used this form to be sure the commit is called right after the update; I use the lock option because this method can be called multiple times concurrently, but only the first call must perform the update):
@Override
public void updateInterruttore(int idInterruttore, Date dateTime, boolean stato) {
    Session session = getSession();
    Transaction tx = session.beginTransaction();
    String update = "update Interruttore i set i.dateTime = :dateTime, i.stato = :stato where idInterruttore = :idInterruttore";
    session.createQuery(update).setTimestamp("dateTime", dateTime).setBoolean("stato", stato)
            .setInteger("idInterruttore", idInterruttore).setLockOptions(LockOptions.UPGRADE).executeUpdate();
    tx.commit();
}
Well... when the update runs, the log says:
After the update status should be true and it's false
This happens only the first time I call the method; the second time, interruttore.isStato() is correctly true.
Why does this happen?
This happens because you're updating the database directly with an update statement. Hibernate does not automatically refresh an already loaded entity in this case. If you reload the entity after the call to dao.updateInterruttore, you should see the updated data.
Two notes:
1) You are using a query to apply the update. In that case, Hibernate will not update the entity that is already in the session: unless you modify the entity itself and call session.save(interruttore), the in-memory entity stays stale (even though the update shows up in the DB). Furthermore, I don't understand why you don't just modify the entity and save it via session.save().
2) You are annotating the service method with @Transactional (assuming that's the Spring annotation). If you use JTA, your tx.commit() will have no effect; the transaction is committed once the method completes (or rolled back if the method throws an exception). If you are not using JTA, get rid of @Transactional and manage the transaction in your DAO method, as you are doing now, although that's considered bad practice.
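A minimal sketch of both options, reusing the dao, getSession(), and interruttore names from the question (setStato/setDateTime are assumed setters on Interruttore, not shown in the original code):

// Option 1: keep the bulk update, then refresh the already-loaded entity
// so its in-memory state matches the database.
dao.updateInterruttore(idInterruttore, now, true);
getSession().refresh(interruttore);
logger.debug(String.format("status is now %s", interruttore.isStato())); // true

// Option 2: modify the entity itself and save it; Hibernate tracks the
// change, so the loaded entity is never stale.
interruttore.setDateTime(now);
interruttore.setStato(true);
getSession().saveOrUpdate(interruttore);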
In my current project I have an entity which can be published to other systems. To keep track of the publications, the entity has a relation called "publications". I am using EclipseLink.
This entity bean also has a @PreUpdate-annotated method.
In order to keep the other systems' data up to date, I created an aspect that is executed around the call to the @PreUpdate method. Depending on which properties have changed, I need to remove some of the publications. Everything is working absolutely fine so far.
The problem I am having is that the portal-publishing component correctly sends delete commands and removes the publication from the entity's "publications" list. I can even see in the change set that JPA has noticed the "publications" property change. After the transaction is flushed, the cached entity correctly no longer has the deleted publications. Unfortunately, the database still does, and when the system is restarted or the entity is loaded from the DB again, the publication metadata is back.
I tried almost everything. I even managed to get the deleted instances from the JPA change set in the aspect and tried to use the entityManager to delete them manually, but nothing actually worked: I seem to be unable to delete these relational entities. Currently I am thinking about using JDBC to delete them, but that would only be my last resort.
@Transactional
@Around("execution(* de.cware.services.truck.model.Truck.jpaPreUpdate(..))")
public Object truckPreUpdate(final ProceedingJoinPoint pjp) throws Throwable {
    if (alreadyExecutingMarker.get() != Boolean.TRUE) {
        alreadyExecutingMarker.set(Boolean.TRUE);
        final Truck truck = (Truck) pjp.getTarget();
        final JpaEntityManager jpaEntityManager = (JpaEntityManager) entityManager.getDelegate();
        UnitOfWorkChangeSet changeSet = jpaEntityManager.getUnitOfWork().getCurrentChanges();
        ObjectChangeSet objectChangeSet = changeSet.getObjectChangeSetForClone(truck);
        if (log.isDebugEnabled()) {
            log.debug("--------------------- Truck pre update check (" + truck.getId() + ") ---------------------");
        }
        ////////////////////////////////////////////////////////////////////////////////////////////////////////////
        // If the truck date has changed, revoke all publication copies.
        ////////////////////////////////////////////////////////////////////////////////////////////////////////////
        final ChangeRecord truckFreeDate = objectChangeSet.getChangesForAttributeNamed("lkwFreiDatum");
        if (truckFreeDate != null) {
            if (log.isDebugEnabled()) {
                log.debug("The date 'truckFreeDate' of truck with id '" + truck.getId() + "' has changed. " +
                        "Revoking all publications that are not marked as main applications");
            }
            for (final String portal : truck.getPublishedPortals()) {
                if (log.isDebugEnabled()) {
                    log.debug("- Revoking publications of copies to portal: " + portal);
                }
                portalService.deleteCopies(truck, portal);
                // Get any deleted portal references and use the entityManager to finally delete them.
                changeSet = jpaEntityManager.getUnitOfWork().getCurrentChanges();
                objectChangeSet = changeSet.getObjectChangeSetForClone(truck);
                final ChangeRecord publicationChanges = objectChangeSet.getChangesForAttributeNamed("publications");
                if (publicationChanges != null) {
                    if (publicationChanges instanceof CollectionChangeRecord) {
                        final CollectionChangeRecord collectionChanges =
                                (CollectionChangeRecord) publicationChanges;
                        @SuppressWarnings("unchecked")
                        final Collection<ObjectChangeSet> removedPublications =
                                (Collection<ObjectChangeSet>)
                                        collectionChanges.getRemoveObjectList().values();
                        for (final ObjectChangeSet removedPublication : removedPublications) {
                            final TruckPublication publication = (TruckPublication) ((org.eclipse.persistence.internal.sessions.ObjectChangeSet) removedPublication).getUnitOfWorkClone();
                            entityManager.remove(publication);
                        }
                    }
                }
            }
        }
    }
    return pjp.proceed(); // an @Around advice must call proceed() to run the intercepted jpaPreUpdate()
}
Chris
The issue is that PreUpdate is raised during the commit process, when the set of changes has already been computed, and the set of objects to delete has already been computed.
Ideally you would perform something like this in your application logic, not through a persistence event.
You could try executing a DeleteObjectQuery directly from your event (instead of using em.remove()); this may work, but in general it would be better to perform this logic in your application:
jpaEntityManager.getUnitOfWork().deleteObject(object);
Also note that getCurrentChanges() computes the changes; in a PreUpdate event the changes are already computed, so you should be able to use getUnitOfWorkChangeSet().
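Sketched against the aspect above (variable names are taken from the question's code; the exact cast may need adjusting for your EclipseLink version):

// 1) Reuse the change set that PreUpdate has already computed,
//    instead of recomputing it with getCurrentChanges():
UnitOfWorkChangeSet changeSet =
        (UnitOfWorkChangeSet) jpaEntityManager.getUnitOfWork().getUnitOfWorkChangeSet();
ObjectChangeSet objectChangeSet = changeSet.getObjectChangeSetForClone(truck);

// 2) Delete each removed publication through the unit of work
//    rather than through entityManager.remove():
jpaEntityManager.getUnitOfWork().deleteObject(publication);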
The only solution I found was to create a new method for performing the delete and forcing JPA to create a new transaction. Since I lose the change set that way, I had to find out manually which publications were deleted. I then simply call that helper method and the publications are correctly deleted, but I find this solution extremely ugly.
@Transactional
@Around("execution(* de.cware.services.truck.model.Truck.jpaPreUpdate(..))")
public Object truckPreUpdate(final ProceedingJoinPoint pjp) throws Throwable {
    if (alreadyExecutingMarker.get() != Boolean.TRUE) {
        alreadyExecutingMarker.set(Boolean.TRUE);
        final Truck truck = (Truck) pjp.getTarget();
        final JpaEntityManager jpaEntityManager = (JpaEntityManager) entityManager.getDelegate();
        final UnitOfWorkChangeSet changeSet = jpaEntityManager.getUnitOfWork().getCurrentChanges();
        final ObjectChangeSet objectChangeSet = changeSet.getObjectChangeSetForClone(truck);
        if (log.isDebugEnabled()) {
            log.debug("--------------------- Truck pre update check (" + truck.getId() + ") ---------------------");
        }
        ////////////////////////////////////////////////////////////////////////////////////////////////////////////
        // If the truck date has changed, revoke all publication copies.
        ////////////////////////////////////////////////////////////////////////////////////////////////////////////
        final ChangeRecord truckFreeDate = objectChangeSet.getChangesForAttributeNamed("lkwFreiDatum");
        if (truckFreeDate != null) {
            if (log.isDebugEnabled()) {
                log.debug("The date 'truckFreeDate' of truck with id '" + truck.getId() + "' has changed. " +
                        "Revoking all publications that are not marked as main applications");
            }
            removeCopyPublications(truck);
        }
    }
    return pjp.proceed(); // proceed with the intercepted jpaPreUpdate()
}
@Transactional(propagation = Propagation.REQUIRES_NEW)
protected void removeCopyPublications(Truck truck) {
    // Delete all not-main-publications.
    for (final String portal : truck.getPublishedPortals()) {
        if (log.isDebugEnabled()) {
            log.debug("- Revoking publications of copies to portal: " + portal);
        }
        final Map<Integer, TruckPublication> oldPublications = new HashMap<Integer, TruckPublication>();
        for (final TruckPublication publication : truck.getPublications(portal)) {
            oldPublications.put(publication.getId(), publication);
        }
        portalService.deleteCopies(truck, portal);
        for (final TruckPublication publication : truck.getPublications(portal)) {
            oldPublications.remove(publication.getId());
        }
        for (TruckPublication removedPublication : oldPublications.values()) {
            if (!entityManager.contains(removedPublication)) {
                removedPublication = entityManager.merge(removedPublication);
            }
            entityManager.remove(removedPublication);
            entityManager.flush();
        }
    }
}
Why doesn't my first version work?
I had a similar problem: I have a class and its children, and when I removed the children from the parent they were deleted from the DB. Then I attached new children using merge on the parent class (CascadeType.ALL) with JPA/EclipseLink, but the children were not created in the DB, only in the persistence context (JPA). I fixed it by doing the following:
1- I set shared-cache-mode to NONE in the persistence.xml file (see the snippet after the code below).
2- When I remove the children, I immediately execute this:
public void remove(T entity) {
    getEntityManager().remove(getEntityManager().merge(entity));
    getEntityManager().getEntityManagerFactory().getCache().evictAll();
}
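For reference, the shared-cache-mode setting from step 1 goes in persistence.xml roughly like this (the persistence-unit name is a placeholder):

<persistence-unit name="myPU">
    <!-- Disable the shared (L2) entity cache -->
    <shared-cache-mode>NONE</shared-cache-mode>
</persistence-unit>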
And that's all. I hope this helps someone else.
For reference, check:
http://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching
Currently I'm developing an OAuth2 authorization server using the DotNetOpenAuth CTP version. My authorization server is an ASP.NET MVC3 application based on the sample provided by the library. Everything works fine until the app reaches the point where the user authorizes the consumer client.
There's an action inside my OAuth controller which takes care of the authorization process, and it is very similar to the equivalent action in the sample:
[Authorize, HttpPost, ValidateAntiForgeryToken]
public ActionResult AuthorizeResponse(bool isApproved)
{
    var pendingRequest = this.authorizationServer.ReadAuthorizationRequest();
    if (pendingRequest == null)
    {
        throw new HttpException((int)HttpStatusCode.BadRequest, "Missing authorization request.");
    }
    IDirectedProtocolMessage response;
    if (isApproved)
    {
        var client = MvcApplication.DataContext.Clients.First(c => c.ClientIdentifier == pendingRequest.ClientIdentifier);
        client.ClientAuthorizations.Add(
            new ClientAuthorization
            {
                Scope = OAuthUtilities.JoinScopes(pendingRequest.Scope),
                User = MvcApplication.LoggedInUser,
                CreatedOn = DateTime.UtcNow,
            });
        MvcApplication.DataContext.SaveChanges();
        response = this.authorizationServer.PrepareApproveAuthorizationRequest(pendingRequest, User.Identity.Name);
    }
    else
    {
        response = this.authorizationServer.PrepareRejectAuthorizationRequest(pendingRequest);
    }
    return this.authorizationServer.Channel.PrepareResponse(response).AsActionResult();
}
Every time the program reaches this line:
this.authorizationServer.Channel.PrepareResponse(response).AsActionResult();
the system throws an exception which I have researched with no success. The exception is the following:
Only parameterless constructors and initializers are supported in LINQ to Entities.
The stack trace: http://pastebin.com/TibCax2t
The only thing I've done differently from the sample is that I used Entity Framework's code-first approach, and I think the sample was built with a designer which auto-generated the entities.
Thank you in advance.
If you started from the example, the problem Andrew is talking about is in DatabaseKeyNonceStore.cs. The exception is raised by one of these two methods:
public CryptoKey GetKey(string bucket, string handle) {
    // It is critical that this lookup be case-sensitive, which can only be configured at the database.
    var matches = from key in MvcApplication.DataContext.SymmetricCryptoKeys
                  where key.Bucket == bucket && key.Handle == handle
                  select new CryptoKey(key.Secret, key.ExpiresUtc.AsUtc());
    return matches.FirstOrDefault();
}

public IEnumerable<KeyValuePair<string, CryptoKey>> GetKeys(string bucket) {
    return from key in MvcApplication.DataContext.SymmetricCryptoKeys
           where key.Bucket == bucket
           orderby key.ExpiresUtc descending
           select new KeyValuePair<string, CryptoKey>(key.Handle, new CryptoKey(key.Secret, key.ExpiresUtc.AsUtc()));
}
I resolved it by moving the object initializations outside of the queries:
public CryptoKey GetKey(string bucket, string handle) {
    // It is critical that this lookup be case-sensitive, which can only be configured at the database.
    var matches = from key in db.SymmetricCryptoKeys
                  where key.Bucket == bucket && key.Handle == handle
                  select key;
    var match = matches.FirstOrDefault();
    CryptoKey ck = new CryptoKey(match.Secret, match.ExpiresUtc.AsUtc());
    return ck;
}

public IEnumerable<KeyValuePair<string, CryptoKey>> GetKeys(string bucket) {
    var matches = from key in db.SymmetricCryptoKeys
                  where key.Bucket == bucket
                  orderby key.ExpiresUtc descending
                  select key;
    List<KeyValuePair<string, CryptoKey>> en = new List<KeyValuePair<string, CryptoKey>>();
    foreach (var key in matches)
        en.Add(new KeyValuePair<string, CryptoKey>(key.Handle, new CryptoKey(key.Secret, key.ExpiresUtc.AsUtc())));
    return en.AsEnumerable<KeyValuePair<string, CryptoKey>>();
}
I'm not sure that this is the best way, but it works!
It looks like your ICryptoKeyStore implementation may be attempting to store CryptoKey directly, but that is not a class compatible with Entity Framework (it has no public default constructor). Instead, define your own entity class for storing the data in CryptoKey; your ICryptoKeyStore is then responsible for translating between the two data types for persistence and retrieval.
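A minimal sketch of that idea, consistent with the SymmetricCryptoKeys queries above (the entity's property names are assumptions, not DotNetOpenAuth API):

// Plain EF code-first entity with a parameterless constructor,
// holding the same data as DotNetOpenAuth's CryptoKey.
public class SymmetricCryptoKey
{
    public int Id { get; set; }
    public string Bucket { get; set; }
    public string Handle { get; set; }
    public byte[] Secret { get; set; }
    public DateTime ExpiresUtc { get; set; }
}

// The store queries the entity and converts it to CryptoKey afterwards,
// so LINQ to Entities never has to construct a CryptoKey itself.
public CryptoKey GetKey(string bucket, string handle)
{
    var match = db.SymmetricCryptoKeys
        .FirstOrDefault(k => k.Bucket == bucket && k.Handle == handle);
    return match == null ? null : new CryptoKey(match.Secret, match.ExpiresUtc.AsUtc());
}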